If you are a C# developer and have ever mistakenly created a VB.NET project, you might have noticed the massive number of code snippets that VB folks enjoy.
My good buddy Tim Haines gives us the word that the same Code Snippets available for VB.NET that come with VS2005 out-of-the-box are available for C# on this MSDN download page.
[via Christopher Steen]
I have tinkered with the idea of writing those snippets for C# for a while; glad to see someone beat me to it! I liked showing off some of the XML snippets to the C# crowd. It really is a great demonstration of how Visual Studio 2005 is focused on productivity.
It looks like Dare’s URL is no longer used as the namespace in the XML snippets (I saw this in the betas and thought it was a great touch). I did notice, though, that Dare left his mark in the 2.0 QuickStart sample for Validating an XML Document.
Attempting to perform a basic log in using the services authentication page, but I keep receiving a 404 response. I've used custom code and a slightly modified version of the basic Python code provided on Splunk's page, both to no avail. Here's the basic code.
import urllib
import httplib2
from xml.dom import minidom

baseurl = ''
username = 'admin'
password = 'password'

serverContent = httplib2.Http().request(
    baseurl + '/services/auth/login', 'POST',
    headers={},
    body=urllib.urlencode({'username': username, 'password': password}))[1]

print serverContent
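Note that the snippet imports minidom but never actually uses it; presumably the intent was to parse the session key out of the login response. A sketch of that step (assuming the response XML carries a <sessionKey> element, which is what Splunk's login endpoint returns on success; Python 3 here, unlike the Python 2 snippet above):

```python
from xml.dom import minidom

def extract_session_key(server_content):
    """Pull the session key out of a /services/auth/login response body."""
    doc = minidom.parseString(server_content)
    nodes = doc.getElementsByTagName('sessionKey')
    if not nodes:
        raise ValueError('no sessionKey in response')
    return nodes[0].firstChild.data

# Example with a canned response body:
sample = '<response><sessionKey>abc123deadbeef</sessionKey></response>'
print(extract_session_key(sample))  # abc123deadbeef
```

The returned key is then sent on subsequent requests in an `Authorization: Splunk <key>` header.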
The base url for the REST endpoint should probably be.
You're right but I don't have ssl enabled on my test machine. Now I can connect but I am left with a completely separate SSL error. Meh.
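The follow-up SSL error is typically certificate verification failing against Splunk's default self-signed certificate. On a throwaway test box, one way around it (a Python 3 sketch, separate from the httplib2 code above) is an SSL context with verification disabled:

```python
import ssl

def insecure_ssl_context():
    """SSL context that skips certificate verification.

    Only for a test box running a self-signed certificate --
    never use this against a production instance."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # must be disabled before verify_mode
    ctx.verify_mode = ssl.CERT_NONE     # accept any certificate
    return ctx

# e.g. urllib.request.urlopen(request, context=insecure_ssl_context())
```

The right long-term fix is to install a real certificate, or add the self-signed one to your trust store.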
Text::Shellwords::Cursor - Parse a string into tokens
use Text::Shellwords::Cursor;
my $parser = Text::Shellwords::Cursor->new();
my $str = 'ab cdef "ghi" j"k\"l "';
my ($tok1) = $parser->parse_line($str);
    # $tok1 = ['ab', 'cdef', 'ghi', 'j', 'k"l ']
my ($tok2, $tokno, $tokoff) = $parser->parse_line($str, cursorpos => 6);
    # as above, but $tokno=1, $tokoff=3 (under the 'f')
DESCRIPTION
This module is very similar to Text::Shellwords and Text::ParseWords. However, it has one very significant difference: it keeps track of a character position in the line it's parsing. For instance, if you pass it ("zq fmgb", cursorpos=>6), it would return (['zq', 'fmgb'], 1, 3). The cursorpos parameter tells where in the input string the cursor resides (just before the 'b'), and the result tells you that the cursor was on token 1 ('fmgb'), character 3 ('b'). This is very useful when computing command-line completions involving quoting, escaping, and tokenizing characters (like '(' or '=').
A few helper utilities are included as well. You can escape a string to ensure that parsing it will produce the original string (parse_escape). You can also reassemble the tokens with a visually pleasing amount of whitespace between them (join_line).
This module started out as an integral part of Term::GDBUI using code loosely based on Text::ParseWords. However, it is now basically a ground-up reimplementation. It was split out of Term::GDBUI for version 0.8.
Creates a new parser. Takes named arguments on the command line.
Normally all unescaped, unnecessary quote marks are stripped. If you specify
keep_quotes=>1, however, they are preserved. This is useful if you need to know whether the string was quoted or not (string constants) or what type of quotes was around it (affecting variable interpolation, for instance). Also, until the Gnu Readline library can accept "=[]," without diving into an endless loop, we will not tell history expansion to use token_chars (it uses " \t\n()<>;&|" by default).
Turns on rather copious debugging to try to show what the parser is thinking at every step.
These variables affect how whitespace in the line is normalized when it is reassembled into a string. See the join_line routine.
This is a reference to a routine that should be called to display a parse error. The routine takes two arguments: a reference to the parser, and the error message to display as a string.
If the parsel routine or any of its subroutines runs into a fatal error, they call parsebail to present a very descriptive diagnostic.
This is the heinous routine that actually does the parsing. You should never need to call it directly. Call parse_line instead.
This is the entrypoint to this module's parsing functionality. It converts a line into tokens, respecting quoted text, escaped characters, etc. It also keeps track of a cursor position on the input text, returning the token number and offset within the token where that position can be found in the output.
This routine originally bore some resemblance to Text::ParseWords. It has changed almost completely, however, to support keeping track of the cursor position. It also has nicer failure modes, modular quoting, token characters (see token_chars in "new"), etc. This routine now does much more.
Arguments:
This is a string containing the command-line to parse.
This routine also accepts the following named parameters:
This is the character position in the line to keep track of. Pass undef (by not specifying it) or the empty string to have the line processed with cursorpos ignored.
Note that passing undef is not the same as passing some random number and ignoring the result! For instance, if you pass 0 and the line begins with whitespace, you'll get a 0-length token at the beginning of the line to represent the cursor in the middle of the whitespace. This allows command completion to work even when the cursor is not near any tokens. If you pass undef, all whitespace at the beginning and end of the line will be trimmed as you would expect.
If it is ambiguous whether the cursor should belong to the previous token or to the following one (i.e. if it's between two quoted strings, say "a""b" or a token_char), it always gravitates to the previous token. This makes more sense when completing.
Sometimes you want to try to recover from a missing close quote (for instance, when calculating completions), but usually you want a missing close quote to be a fatal error. fixclosequote=>1 will implicitly insert the correct quote if it's missing. fixclosequote=>0 is the default.
parse_line is capable of printing very informative error messages. However, sometimes you don't care enough to print a message (like when calculating completions). Messages are printed by default, so pass messages=>0 to turn them off.
This function returns a reference to an array containing three items:
The tokens that the line was separated into (a ref to an array of strings).
The number of the token (index into the previous array) that contains cursorpos.
The character offset into tokno of cursorpos.

If the cursor is at the end of the token, tokoff will point to 1 character past the last character in tokno, a non-existent character. If the cursor is between tokens (surrounded by whitespace), a zero-length token will be created for it.
Escapes characters that would be otherwise interpreted by the parser. Will accept either a single string or an arrayref of strings (which will be modified in-place).
This routine does a somewhat intelligent job of joining tokens back into a command line. If token_chars (see "new") is empty (the default), then it just escapes backslashes and quotes, and joins the tokens with spaces.
However, if token_chars is nonempty, it tries to insert a visually pleasing amount of space between the tokens. For instance, rather than 'a ( b , c )', it tries to produce 'a (b, c)'. It won't reformat any tokens that aren't found in $self->{token_chars}, of course.
To change the formatting, you can redefine the variables $self->{space_none}, $self->{space_before}, and $self->{space_after}. Each variable is a string containing all characters that should not be surrounded by whitespace, should have whitespace before, and should have whitespace after, respectively. Any character found in token_chars, but non in any of these space_ variables, will have space placed both before and after.
None known.
Copyright (c) 2003 Scott Bronson, all rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Scott Bronson <bronson@rinspin.com>
Attachment 'manual.html'
This tool can be used to compile Qt projects, designed for versions 4.x.y and higher. It is not usable for Qt3 and older versions, since some of the helper tools (moc, uic) behave differently.
To activate the tool "qt4", you have to add its name to the Environment constructor, like this
env = Environment(tools=['default','qt4'])
On its startup, the Qt4 tool tries to read the variable QT4DIR from the current Environment and os.environ. If it is not set, the value of QTDIR (in Environment/os.environ) is used as a fallback.
So, you either have to explicitly give the path of your Qt4 installation to the Environment with
env['QT4DIR'] = '/usr/local/Trolltech/Qt-4.2.3'
or set QT4DIR as an environment variable in your shell.
Under Linux, "qt4" uses the system tool
pkg-config for automatically setting the required
compile and link flags of the single Qt4 modules (like QtCore, QtGui,...).
This means that
you should have
pkg-configinstalled, and
you additionally have to set
PKG_CONFIG_PATHin your shell environment, such that it points to $
QT4DIR/lib/pkgconfig(or $
QT4DIR/libfor some older versions).
Based on these two environment variables (QT4DIR and PKG_CONFIG_PATH), the "qt4" tool initializes all QT4_* construction variables listed in the Reference manual. This happens when the tool is "detected" during Environment construction. As a consequence, the setup of the tool becomes a two-stage process if you want to override the values provided by your current shell settings:
# Stage 1: create plain environment
qtEnv = Environment()

# Set new vars
qtEnv['QT4DIR'] = '/usr/local/Trolltech/Qt-4.2.3'
qtEnv['ENV']['PKG_CONFIG_PATH'] = '/usr/local/Trolltech/Qt-4.2.3/lib/pkgconfig'

# Stage 2: add qt4 tool
qtEnv.Tool('qt4')
Based on the requirements above, we suggest a simple ready-to-go setup as follows:
SConstruct
# Detect Qt version
qtdir = detectLatestQtDir()

# Create base environment
baseEnv = Environment()
#...further customization of base env

# Clone Qt environment
qtEnv = baseEnv.Clone()

# Set QT4DIR and PKG_CONFIG_PATH
qtEnv['ENV']['PKG_CONFIG_PATH'] = os.path.join(qtdir, 'lib/pkgconfig')
qtEnv['QT4DIR'] = qtdir

# Add qt4 tool
qtEnv.Tool('qt4')
#...further customization of qt env

# Export environments
Export('baseEnv qtEnv')

# Your other stuff...
# ...including the call to your SConscripts
In a SConscript
# Get the Qt4 environment
Import('qtEnv')

# Clone it
env = qtEnv.Clone()

# Patch it
env.Append(CCFLAGS=['-m32'])  # or whatever

# Use it
env.StaticLibrary('foo', Glob('*.cpp'))
The detection of the Qt directory could be as simple as directly assigning a fixed path
def detectLatestQtDir():
    return "/usr/local/qt4.3.2"
or a little more sophisticated
# Tries to detect the path to the installation of Qt with
# the highest version number
def detectLatestQtDir():
    if sys.platform.startswith("linux"):
        # Simple check: inspect only '/usr/local/Trolltech'
        paths = glob.glob('/usr/local/Trolltech/*')
        if len(paths):
            paths.sort()
            return paths[-1]
        else:
            return ""
    else:
        # Simple check: inspect only 'C:\Qt'
        paths = glob.glob('C:\\Qt\\*')
        if len(paths):
            paths.sort()
            return paths[-1]
        else:
            return os.environ.get("QTDIR","")
The following SConscript is for a simple project with some cxx files, using the QtCore, QtGui and QtNetwork modules:
Import('qtEnv')
env = qtEnv.Clone()

env.EnableQt4Modules([
    'QtGui',
    'QtCore',
    'QtNetwork'
])

# Add your CCFLAGS and CPPPATHs to env here...
env.Program('foo', Glob('*.cpp'))
For the basic support of automocing, nothing needs to be done by the user. The tool usually detects the Q_OBJECT macro and calls the “moc” executable accordingly.
If you don't want this, you can switch off the automocing by a
env['QT4_AUTOSCAN'] = 0
in your SConscript file. Then, you have to moc your files explicitly, using the Moc4 builder.
You can also switch to an extended automoc strategy with
env['QT4_AUTOSCAN_STRATEGY'] = 1
Please read the description of the QT4_AUTOSCAN_STRATEGY variable in the Reference manual for details.
For debugging purposes, you can set the variable QT4_DEBUG with
env['QT4_DEBUG'] = 1
which outputs a lot of messages during automocing.
The header files with setup code for your GUI classes are not compiled automatically from your .ui files. You always have to call the Uic4 builder explicitly, like
env.Uic4(Glob('*.ui'))
env.Program('foo', Glob('*.cpp'))
Resource files are not built automatically; you always have to add the names of the .qrc files to the source list for your program or library:
env.Program('foo', Glob('*.cpp')+Glob('*.qrc'))
For each of the Resource input files, its prefix defines the name of the resulting resource. An appropriate “-name” option is added to the call of the rcc executable by default.
You can also call the Qrc4 builder explicitly as
qrccc = env.Qrc4('foo') # ['foo.qrc'] -> ['qrc_foo.cc']
or (overriding the default suffix)
qrccc = env.Qrc4('myprefix_foo.cxx','foo.qrc') # -> ['qrc_myprefix_foo.cxx']
and then add the resulting cxx file to the sources of your Program/Library:
env.Program('foo', Glob('*.cpp') + qrccc)
The update of the .ts files and the conversion to binary .qm files is not done automatically. You have to call the corresponding builders on your own.
Example for updating a translation file:
env.Ts4('foo.ts','.') # -> ['foo.ts']
By default, the .ts files are treated as precious targets. This means that they are not removed prior to a rebuild, but simply get updated. Additionally, they do not get cleaned on a “scons -c”. If you want to delete the translation files on the “-c” SCons command, you can set the variable “QT4_CLEAN_TS” like this
env['QT4_CLEAN_TS']=1
Example for releasing a translation file, i.e. compiling it to a .qm binary file:
env.Qm4('foo') # ['foo.ts'] -> ['foo.qm']
or (overriding the output prefix)
env.Qm4('myprefix','foo') # ['foo.ts'] -> ['myprefix.qm']
As an extension both, the Ts4() and Qm4 builder, support the definition of multiple targets. So, calling
env.Ts4(['app_en','app_de'], Glob('*.cpp'))
and
env.Qm4(['app','copy'], Glob('*.ts'))
should work fine.
Finally, two short notes about the support of directories for the Ts4() builder. You can pass an arbitrary mix of cxx files and subdirs to it, as in
env.Ts4('app_en',['sub1','appwindow.cpp','main.cpp']))
where
sub1 is a folder that gets scanned
recursively for cxx files by
lupdate. But like this,
you lose all dependency information for the subdir, i.e. if a file inside
the folder changes, the .ts file is not updated automatically! In this
case you should tell SCons to always update the target:
ts = env.Ts4('app_en',['sub1','appwindow.cpp','main.cpp'])
env.AlwaysBuild(ts)
Last note: specifying the current folder “.” as input to Ts4() and storing the resulting .ts file in the same directory leads to a dependency cycle! You then have to store the .ts and .qm files outside of the current folder, or use Glob('*.cpp') instead.
The problem is the following: given a set S = {x_1, ..., x_n} and a function f: set -> number, which takes a set as input and returns a number, what is the best possible coalition structure? (A coalition structure is a set of subsets of S. That is, find the subsets s_i such that the sum of f(s_i) over every subset s_i is maximal.) The sets in the coalition structure should not overlap, and their union should be S.
A template is this:
def optimal_coalition(coalitions):
    """
    :param coalitions: a dictionary of the form {coalition: value}, where
        coalition is a set, and value is a number
    :return:
    """

optimal_coalition({set(1): 30, set(2): 40, set(1, 2): 71})  # Should return set(set(1, 2))
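For intuition, the optimum can also be found by brute force on tiny sets: enumerate every partition of S and keep the one with the largest total value. This is exponential, so it is only useful as a reference implementation to check a faster answer against (all names here are mine, not from the question):

```python
def partitions(s):
    """Yield all partitions of the elements of s (brute force; tiny sets only)."""
    s = list(s)
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for part in partitions(rest):
        # Put `first` in its own block...
        yield [[first]] + part
        # ...or add it to each existing block in turn.
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

def best_structure(f):
    """Return the partition of the universe maximizing the sum of f over blocks.

    Subsets missing from f are valued at 0."""
    universe = frozenset(x for c in f for x in c)
    def value(block):
        return f.get(frozenset(block), 0)
    return max(partitions(universe), key=lambda p: sum(value(b) for b in p))

best = best_structure({frozenset({1}): 30, frozenset({2}): 40, frozenset({1, 2}): 71})
print(best)  # the single coalition {1, 2}, since 71 > 30 + 40
```

With the value of {1, 2} lowered to 69, the same function returns the split {{1}, {2}} instead.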
This is from a paper I found:
Answer
I transliterated the pseudocode. No doubt you can make it better — I was hewing very closely to show the connection.
I did fix a bug (Val(C') + Val(C \ C') > v(C) should be Val(C') + Val(C \ C') > Val(C), or else we may overwrite the best partition with one merely better than all of C) and two typos (C / C' should be C \ C'; and CS* is a set, not a tree).
import itertools


def every_possible_split(c):
    for i in range(1, len(c) // 2 + 1):
        yield from map(frozenset, itertools.combinations(tuple(c), i))


def optimal_coalition(v):
    a = frozenset(x for c in v for x in c)
    val = {}
    part = {}
    for i in range(1, len(a) + 1):
        for c in map(frozenset, itertools.combinations(tuple(a), i)):
            val[c] = v.get(c, 0)
            part[c] = {c}
            for c_prime in every_possible_split(c):
                if val[c_prime] + val[c - c_prime] > val[c]:
                    val[c] = val[c_prime] + val[c - c_prime]
                    part[c] = {c_prime, c - c_prime}
    cs_star = {a}
    while True:
        for c in cs_star:
            if part[c] != {c}:
                cs_star.remove(c)
                cs_star.update(part[c])
                break
        else:
            break
    return cs_star


print(
    optimal_coalition({frozenset({1}): 30, frozenset({2}): 40, frozenset({1, 2}): 69})
)
Update: Slightly more complete examples.
I found a nice little technique for debugging Ruby code today.
Ever had a situation where you wanted to insert some debugging code in the middle of an expression? The usual way is to break up the expression and use intermediate variables to get at the value, but it turns out that’s really not necessary in Ruby.
Check this out:
class Object
def tap
yield self
self
end
end
Then, you can insert your debugging tap just about anywhere without disturbing the flow of data. Let’s look at some common cases.
First, let’s look at a “pipeline” of sorts.
blah.sort.grep( /foo/ ).map { |x| x.blah }
Let’s imagine that there’s a bug here somewhere — we’ve verified that x.blah does the right thing, but the values coming from upstream are suspect. Here’s a “traditional” way of modifying this to add a debugging print:
xs = blah.sort.grep( /foo/ )
p xs
# do whatever we had been doing with the original expression
xs.map { |x| x.blah }
With Object#tap, this becomes much easier — you can just slip it in without radically modifying the code:
blah.sort.grep( /foo/ ).tap { |xs| p xs }.map { |x| x.blah }
Similarly, let’s say we’re suspicious of a component, ( q - t ), in an arithmetic expression:
( k + 1 ) / ( ( q - t ) / 2 )
The traditional approach:
i = ( q - t )
p i
( k + 1 ) / ( i / 2 )
Admittedly, it may be wise to break long arithmetic expressions up like this for comprehensibility anyway.
Regardless, here’s how you could do the same thing using Object#tap:
( k + 1 ) / ( ( q - t ).tap { |i| p i } / 2 )
Object#tap is also useful when you’re directly using the result of an expression as the result of a method. For example:
def blah
@things.map { |x|
x.length
}.inject( 0 ) { |a, b|
a + b
}
end
The traditional way:
def blah
sum = @things.map { |x|
x.length
}.inject( 0 ) { |a, b|
a + b
}
p sum
sum
end
The Object#tap way:
def blah
@things.map { |x|
x.length
}.inject( 0 ) { |a, b|
a + b
}.tap { |sum| p sum }
end
If you think about it, Object#tap is a bit like tee in Unix, really. Ever used tee while debugging a shell pipeline, to make sure that the intermediate results were sane? Same thing.
Anyone else discovered this trick?
Iteration is a fundamental concept of computer science, and Python’s iterator generators make it a snap to organize and re-use iteration. I find that I am frequently iterating through large hierarchical structures, both ‘up’ and ‘down’ them looking for nodes with specific qualities.
In Maya, this is often iterating through the DAG, which is Maya’s version of a scenegraph. Each level of the DAG can have one parent (for transforms, not shapes) and multiple children. If you are traversing the DAG looking for items, say in a character’s skeleton, Python iteration could be what you’re looking for.
In PyMel, you can grab a PyNode and ask for its parent with getParent(). I find, however, that I often need to iterate farther up the DAG looking for an ancestor that has a particular name or attribute. This could be accomplished by multiple calls to the getParent method of each node, but that can be cumbersome to do in code, and is harder to do when using list comprehensions. Fortunately, a simple iterator generator can be whipped up in no time:
def parentIter(pnode):
    p = pnode.getParent()
    while p:
        yield p
        p = p.getParent()
Now we can hand this a PyNode (or anything with a getParent method) and it’ll run all the way up the DAG. This is handy if you have a schema where you are parenting a character’s skeleton under a root transform.
for t in parentIter(PyNode('someJoint')):
    if t.type() == 'transform':
        pass  # do something here
This is pretty useful – I often will add a bit of sugar to make life easier. Consider a function that expects to operate on a character – it takes any node that’s part of the character, but needs to get some info off of the root node. Often, you’d need to check to see if the caller passed in the root node itself, or any of the joints. You would need something like this:
def getRoot(node):
    if node.type() == 'transform' and node.hasAttr('info'):
        return node
    for nd in parentIter(node):
        if nd.type() == 'transform' and nd.hasAttr('info'):
            return nd
In order to prevent duplicating the testing part (seeing if a node is a transform and has an attribute called info), the parentIter function can be modified as follows:
def parentIter(pnode, inclusive=False):
    if inclusive:
        yield pnode
    p = pnode.getParent()
    while p:
        yield p
        p = p.getParent()
Now, the iteration can be a bit simpler:
def getRoot(node):
    for nd in parentIter(node, inclusive=True):
        if nd.type() == 'transform' and nd.hasAttr('info'):
            return nd
Eliminating those extra lines can reduce visual clutter and potential error. Also, if you need to update the line that does the filtering, you only have to change it in one place.
Child iteration can be just as simple if you don’t care about the order and you don’t need to skip branches of a hierarchy:
def childIter(pnode, inclusive=False):
    if inclusive:
        yield pnode
    for ch in pnode.getChildren():
        yield ch
        for gch in childIter(ch):
            yield gch
Again, the inclusive keyword is used to simplify the iteration. I usually keep the keyword optional as it seems to blur the line between what you might expect ‘Give me all the nodes under x‘ as opposed to ‘Give me x and all the nodes underneath it‘. I find in my work plenty of use cases for either, so keeping the arg as a convenience seems efficient and clear.
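These generators depend only on getParent/getChildren, so they can be exercised outside Maya with a minimal stand-in class (the Node class below is a hypothetical mock for testing, not part of PyMel):

```python
class Node:
    """Minimal stand-in for a PyNode-like object (mock; not PyMel)."""
    def __init__(self, name, parent=None):
        self.name = name
        self._parent = parent
        self._children = []
        if parent:
            parent._children.append(self)

    def getParent(self):
        return self._parent

    def getChildren(self):
        return list(self._children)


def parentIter(pnode, inclusive=False):
    if inclusive:
        yield pnode
    p = pnode.getParent()
    while p:
        yield p
        p = p.getParent()


def childIter(pnode, inclusive=False):
    if inclusive:
        yield pnode
    for ch in pnode.getChildren():
        yield ch
        for gch in childIter(ch):
            yield gch


# Build a tiny skeleton-like hierarchy and walk it both ways.
root = Node('root')
spine = Node('spine', root)
head = Node('head', spine)
arm = Node('arm', spine)

print([n.name for n in parentIter(head)])   # ['spine', 'root']
print([n.name for n in childIter(root)])    # ['spine', 'head', 'arm']
```

Swapping Node for a real PyNode changes nothing in the generator bodies, which is the point of keeping them interface-driven.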
One thing to note about these generators – if you need a list of everything the generator would return, you need to use a list constructor to capture it, i.e.:
# get all the nodes that are ancestors in a list
list(parentIter(node))
Failure to do so will often result in the calling code erroring out with the generator object:
# Error: AttributeError: file line 1: 'generator' object has no attribute 'append' #
Summary: In this installment of "Some Assembly Required" column, Scott Hanselman, a Type 1 Diabetic, uses .NET to download blood sugar numbers from a Glucose Meter. Along the way he lays down the rough scaffolding for a plugin system so other meters could be added easily. The application exports to XML or CSV for analysis within Microsoft Excel. Thanks to for their support and the initial idea. Visit SweetSpot to see a greatly expanded version of this Meter Downloader and take a look at to make a donation to the ADA and Team Hanselman and this year's Walk for Diabetes.
I've been diabetic since I was twenty-one. Actually, it happened a just few months before my 21st birthday. Needless to say, it's no fun. Most diabetes prick their fingers at least four times a day, often more. I test my blood sugar at least 10 times a day. This adds up to a lot of data points in a Glucose Meter. Checking your numbers gives you a snapshot of how you're doing, but the really valuable information lies in the analysis - but the data is trapped inside the blood sugar meter.
Most meters have some kind of cable that can be hooked up to your computer, allowing downloads of these hundreds of data points. My friend Adam from SweetSpot and I wanted to get at that raw data, geeks that we are, so we decided to write a program to do it.
I'm using the FreeStyle Flash meter, but there's lots of different meters out there. We wanted a simple plugin system so that folks could write their own plugins and expand the application. First, I created this simple interface:
namespace C4FGlucoseMeterDownloader
{
public interface IMeter
{
MeterResponse GetMeterData(string configuration);
Image GetMeterPicture();
string GetDisplayName();
}
}
It could be made more complex, but this one does the job nicely. The program will call GetMeterData passing in any saved configuration (like USB or Serial details...this was included for future use) when it's time for the plugin to fetch the data from the meter. We'll talk about the "MeterResponse" type later in this article.
We'll also call GetMeterPicture and GetDisplayName while we spin through all the meter types to fill out the details of our main interface as seen at right.
For this sample, we'll be keeping a Dictionary of IMeter instances at the ready. We spin through the currently executing assembly looking for types that implement IMeter. A future version might look for assemblies in a \plugins directory.
Type[] types = Assembly.GetExecutingAssembly().GetTypes();
foreach (Type t in types)
{
if (typeof(IMeter).IsAssignableFrom(t) && t.IsInterface == false)
{
IMeter i = Activator.CreateInstance(t) as IMeter;
string displayName = i.GetDisplayName();
this.comboBox1.Items.Add(displayName);
meterTypes.Add(displayName, i);
}
}
When using reflection to find types that implement a certain interface, you use the very-not-intuitive method "IsAssignableFrom." It makes sense once you think about it, but it didn't jump out at me. I expected a method like "ImplementsInterface," but perhaps that's just me.
The image of the meter would be stored in the same assembly as the plugins, so we can put it out easily without external dependencies.
public Image GetMeterPicture()
{
using (Stream sr = Assembly.GetExecutingAssembly().GetManifestResourceStream("C4FGlucoseMeterDownloader.Resources.freestyle_flash_gm.jpg"))
{
Image i = Image.FromStream(sr);
return i;
}
}
Note the use of the "using" statement. I'm a huge fan of this statement. Be sure to use it anytime you're creating something that is IDisposable and you'll be sure to avoid object leaks.
The FreeStyle meter uses a 1/8" headphone jack connecting to a 9-PIN RS-232 serial port. I use a standard USB to 9-PIN adapter to hook it all together. We could ask the user for the Serial Port that the device is connected to, but how would they know? Things like COM Port names are hidden better than ever before. Do I really want my Grandma going into the Device Manager to make an educated guess? Instead, since the FreeStyle has a fantastically simple protocol with easy to spot results, we'll just spin through every COM Port and send the command until one of them works.
There's many ways to tackle Serial Communication with .NET. There's the standard port.Read() way of doing things, but for simple "dump-style" communication, I like the SerialDataReceivedEventHandler. It's an event that's raised by the SerialPort when data shows up. You just have to call port.ReadExisting. It's nice and simple.
string[] portNames = System.IO.Ports.SerialPort.GetPortNames();
foreach (string portName in portNames)
{
if (found == true) { break; }
try
{
Debug.WriteLine("Trying " + portName);
port = new SerialPort(portName);
port.BaudRate = 19200;
port.StopBits = StopBits.One;
port.Parity = Parity.None;
port.DataBits = 8;
port.DataReceived += new SerialDataReceivedEventHandler(PortDataReceived);
port.ReadTimeout = 500; port.WriteTimeout = 500;
port.Open();
port.Write("mem");
long timeout = 0;
while (port.IsOpen)
{
System.Threading.Thread.Sleep(500);
if (++timeout > 10 && found == false)
{
break; //give up after 5 seconds..
}
}
}
//ERROR HANDLING REMOVED FOR BREVITY...SEE THE CODE
finally
{
if (port.IsOpen)
{
port.DataReceived -= PortDataReceived;
port.Close();
}
port = null;
}
}
Here we just keep sending "mem" - the FreeStyle's dump command - to each port we have until one gives us something. The eventHandler sets the "found" flag.
void PortDataReceived(object sender, SerialDataReceivedEventArgs e)
{
string next = port.ReadExisting();
found = true;
System.Diagnostics.Debug.WriteLine("RECEIVED:" + next);
result += next; //Not using a StringBuilder is inefficient, truly, but this is easier and allows us to watch the the string for "END" without worrying about it being truncated like "EN" and "D"
if (result.Contains("END"))
{
Properties.Settings.Default.LastPortFound = port.PortName;
port.Close();
}
}
We keep going until we find the "END" string returned by the Glucose Meter. Again, there's literally a half-dozen ways to have done this. Simply spinning in a while(port.ReadBytes()) loop would do the job as well, but this is a good way to introduce the SerialDataReceivedEventHandler. We just keep appending the received data and the SerialPort class keeps things in order and ensures nothing is dropped.
NOTE: You can run this sample even without a Glucose Meter. Simply uncomment out the line marked "//HACK:" in the MeterFreestyle.cs file and the Meter plugin will return sample data.
NOTE: You can run this sample even without a Glucose Meter. Simply uncomment out the line marked "//HACK:" in the MeterFreestyle.cs file and the Meter plugin will return sample data.
128 Oct 09 2006 23:44 26 0x00
122 Oct 09 2006 22:18 26 0x00
261 Oct 09 2006 21:14 26 0x00
070 Oct 09 2006 21:14 26 0x00
0x3B2C END
The first number is the blood sugar value. Ideal numbers are between 80 and 120mg/dl. Diabetics aim for these kinds of values, but often their blood sugar varies. Checking blood sugar often allows us to get back to normal as soon as possible, thereby avoiding side effects of chronically high sugar. Take a look at "Scott's Diabetes Explanation: The Airplane Analogy" for more details on Diabetes, or the Diabetes Section of my blog.
We want to parse this data and turn it into both XML and CSV. I created a simple data structure to get us started, and marked it up for XML Serialization:
namespace C4FGlucoseMeterDownloader
{
public class MeterResponse
{
public MeterResponse() { GlucoseReadings = new List<GlucoseReading>(); }
[XmlIgnore]
public List<GlucoseReading> GlucoseReadings;
[XmlArray("GlucoseReadings")]
public GlucoseReading[] ArrayGlucoseReadings
{
get { return GlucoseReadings.ToArray(); }
set { GlucoseReadings = new List<GlucoseReading>(value); }
}
}
public class GlucoseReading
{
public GlucoseReading() { }
public GlucoseReading(DateTime date, decimal BG)
{
this.BG = BG;
this.Date = date;
}
public decimal BG;
public DateTime Date;
}
}
As we parse the raw data, we return GlucoseReadings to build up a MeterResponse.
protected GlucoseReading ParseResultRowToGlucoseReading(string data)
{
//TODO: Error Handling
//227 Oct 11 2006 01:38 17 0x00
string BGString = data.Substring(0,5);
decimal BGValue = decimal.Parse(BGString.Trim(), System.Globalization.CultureInfo.InvariantCulture);
string timeString = data.Substring(5,18);
DateTime recordDateTime = DateTime.Parse(timeString, System.Globalization.CultureInfo.InvariantCulture);
return new GlucoseReading(recordDateTime, BGValue);
}
And the resulting data is then exported automatically into CSV and XML:
public void SerializeAsXml(MeterResponse r, string fileNameSeed)
{
XmlSerializer x = new XmlSerializer(typeof(MeterResponse));
using (Stream s = File.OpenWrite(fileNameSeed + ".xml"))
{
        x.Serialize(s, r);
}
}
public void SerializeAsCsv(MeterResponse r, string fileNameSeed)
{
using (StreamWriter sw = new StreamWriter(fileNameSeed + ".csv"))
{
sw.WriteLine("Date,Glucose");
foreach(GlucoseReading gr in r.GlucoseReadings)
{
sw.WriteLine(String.Format("{0},{1}",gr.Date,gr.BG.ToString()));
}
}
}
The XML could be styled with XSLT, manipulated by another program, or you could store it in a database. The CSV file is easily consumed by Excel as seen in the screenshot at right.
Make sure you use a "scatter plot" in Excel if you want to see all your data. In Excel line charts, axes that use dates as their data type show every value as occurring at midnight. Since glucose data is time- (even minute-) sensitive, use a scatter chart to get accuracy.
Don't worry, those aren't my actual blood sugar numbers, it's just generated sample data.
Take a look at to read my personal story and make a donation to the ADA and Team Hanselman and this year's Walk for Diabetes.
There are things that could be extended, added, and improved on with this project. Here are some ideas to get you started:
Be sure to check out and get in on their beta. They've already extended this application significantly with multiple meter support, ClickOnce Deployment, lots of special sauce, and an extensive online analysis system. Thanks again, son Zenzo for indulging him in these hobbies!
We did this for our Microsoft Imagine Cup project for 2006. We used the infrared capabilities of one of the Accu-Chek glucometers to pull data to a Pocket PC, correlate it with diet and exercise information logged on the Pocket PC, and charted it. The Pocket PC would sync with our website, allowing doctors to log in and view their patients' data daily. The goal was to cut log books out altogether. Our application was called TypeZero.
Hmmmm. I have a Freestyle Freedom and can't get it to return any data from the unit. It comes back NULL. Anyone know if the "mem" command is the same on this unit?
Nice job Scott
I'd love it if the XML was a microformat, this might be the start of developing and promoting a standard format for blood glucose readings.
If we did this for independent software packages (microformat import/export) then maybe meter manufacturers would think about supporting it in their meter download software going forward.
More about this is in my paper in the March 2007 edition of the Journal of Diabetes Science and Technology.
I own a LifeScan OneTouch Ultra. Is it possible to read out the memory of this meter? Does anyone have experience with this glucose meter?
Thanks in advance,
Do you happen to know where I can find the pin configurations to make a data cable for the FreeStyle Flash Diabetic Meter? All I need is the wiring diagram showing which wires connect from the DB plug to the 1/8 inch mini plug, and I can make my own and save myself some money, since I cannot work due to medical problems.
cbs12@waltonville.net
This is awesome! I always wondered if I could hook up to one of my blood glucose meters.
thanks!
pinout diagram for Freestyle data cable:
Tip ---> pin 2 of DB9
Ring ---> pin 3 of DB9
Base ---> pin 5 of DB9
Hi,
I'm using the same glucometer, but I'm trying to read the data through the serial port using PHP. Does anyone have any ideas, or has anybody done this before?
Thanks,
Does anyone know how to reset the user data points to zero on the FreeStyle Flash?? Not just the calendar, but the actual readings?
Thanks.
Hi!! I just would like to know if you have a DLL or OCX for the FreeStyle Flash glucose meter. Can this code of yours be compiled in Visual Studio .NET Professional Edition? Please send me some feedback... Thanks and more power...
A class should implement both the equals(Object) and hashCode() methods. EHC warnings appear if an equals() method was specified without a hashCode() method, or vice versa. This may cause a problem with some collections that expect equal objects to have equal hash codes.
8  public class EHC_HASH_Sample_1 {
9      private int seed;
10     public EHC_HASH_Sample_1(int seed) {
11         this.seed = seed;
12     }
13     public boolean equals(Object o) {
14         return (o instanceof EHC_HASH_Sample_1)
15             && ((EHC_HASH_Sample_1) o).seed == seed;
16     }
17     // no hashCode method defined
18 }
EHC.HASH is reported for class declaration on line 8: Class defines equals() but does not define hashCode(). | https://docs.roguewave.com/en/klocwork/current/ehc.hash | CC-MAIN-2019-18 | refinedweb | 102 | 66.03 |
This article was created in partnership with IP2Location. Thank you for supporting the partners who make SitePoint possible.
In a world where online commerce has become the norm, we need to build websites that are faster, user friendly and more secure than ever. In this article, you’ll learn how to set up a Node.js powered website that’s capable of directing traffic to relevant landing pages based on a visitor’s country. You’ll also learn how to block anonymous traffic (e.g. Tor) in order to eliminate risks coming from such networks.
In order to implement these features, we’ll be using the IP2Proxy web service provided by IP2Location, a Geo IP solutions provider. The web service is a REST API that accepts an IP address and responds with geolocation data in JSON format.
Here are some of the fields that we’ll receive:
- countryName
- cityName
- isProxy
- proxyType
- etc.
We’ll use Next.js to build a website containing the following landing pages:
- Home Page: API fetching and redirection will trigger from this page
- Landing Page: supported countries will see the product page in their local currency
- Unavailable Page: other countries will see this page with an option to join a waiting list
- Abuse Page: visitors using Tor networks will be taken to this page
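The routing plan above can be sketched as a small helper function. This is an illustrative sketch only: the supported-country list and the `'TOR'` proxy-type value are assumptions, so check the `proxyType` values documented by IP2Proxy before relying on them.

```javascript
// Hypothetical helper: pick a landing page from the IP2Proxy response.
// The country list and the 'TOR' proxyType value are assumptions.
const SUPPORTED_COUNTRIES = ['Kenya', 'United Kingdom'];

function destinationFor({ countryName, proxyType }) {
  if (proxyType === 'TOR') return '/abuse';                 // block anonymous (Tor) traffic
  if (SUPPORTED_COUNTRIES.includes(countryName)) return '/landing';
  return '/unavailable';                                    // offer the waiting list
}

console.log(destinationFor({ countryName: 'Kenya', proxyType: '-' })); // '/landing'
```

On the home page, the result could then be handed to Next.js's router (for example via `Router.push`) once the API response arrives.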
Now that you’re aware of the project plan, let’s see what you need to get started.
Prerequisites
On your machine, I would highly recommend the following:
- Latest LTS version of Node.js (v12)
- Yarn
An older version of Node.js will do, but the most recent LTS (long-term support) version contains performance and debugging improvements in the area of async code, which we’ll be dealing with. Yarn isn’t necessary, but you’ll benefit from its faster performance if you use it.
I’m also going to assume you have a good foundation in:
As mentioned earlier, we’ll be using Next.js to build our website. If you’re new to it, you can follow their official interactive tutorial to quickly get up to speed.
IP2Location + Next.js Project Walkthrough
Project Setup
To set up the project, simply launch the terminal and navigate to your workspace. Execute the following command:
npx create-next-app
Feel free to give your app any name. I’ve called mine
next-ip2location-example. After installation is complete, navigate to the project’s root and execute
yarn dev. This will launch the Node.js dev server. If you open your browser and navigate to
localhost:3000, you should see a page with the header “Welcome to Next.js”. This should confirm that we have a working app that runs without errors. Stop the app and install the following dependencies:
yarn add next-compose-plugins dotenv-load next-env @zeit/next-css bulma isomorphic-unfetch
We’ll be using Bulma CSS framework to add out-of-the-box styling for our site. Since we’ll be connecting to an API service, we’ll set up an
.env file to store our API key. Do note that this file should not be stored in a repository. Next, create the file
next.config.js at the root of the project and add the following code:
const withPlugins = require('next-compose-plugins')
const css = require('@zeit/next-css')
const nextEnv = require('next-env')
const dotenvLoad = require('dotenv-load')

dotenvLoad()

module.exports = withPlugins([
  nextEnv(),
  [css]
])
The above configuration allows our application to read the
.env file and load values. Do note that the keys will need to have the prefix
NEXT_SERVER_ in order to be loaded in the server environment. Visit the next-env package page for more information. We’ll set the API key in the next section. The above configuration also gives our Next.js app the capability to pre-process CSS code via the
zeit/next-css plugin.
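For reference, the .env file mentioned earlier might look like the following. The variable name itself is an invented placeholder — only the NEXT_SERVER_ prefix is significant to next-env:

```shell
# .env -- keep this file out of the repository.
# The NEXT_SERVER_ prefix is what exposes the value to server-side code.
NEXT_SERVER_IP2PROXY_API_KEY=your-api-key-here
```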
The free API key will give you 1,000 free credits. At a minimum, we’ll need the following fields for our application to function:
- countryName
- proxyType
If you look at the pricing section on the IP2Proxy page, you’ll note that the
PX2 package will give us the required response. This means each query will cost us two credits. Below is a sample of how the URL should be constructed:
You can also submit the URL query without the IP. The service will use the IP address of the machine that sent the request. We can also use the
PX8 package to get all the available fields such as
isp and
domain in the top-most package of the IP2Proxy Detection Web Service.
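As a rough sketch, the query described above could be assembled like this. The api.ip2proxy.com endpoint and the ip/key/package parameter names are assumptions based on IP2Location's public documentation, so verify them against your plan:

```javascript
// Sketch: build the IP2Proxy web service query URL.
// Endpoint and parameter names are assumptions -- verify with IP2Location.
function buildProxyQuery(ip, apiKey, pkg = 'PX2') {
  const params = new URLSearchParams({ ip, key: apiKey, package: pkg });
  return `https://api.ip2proxy.com/?${params.toString()}`;
}

console.log(buildProxyQuery('8.8.8.8', 'demo'));
// https://api.ip2proxy.com/?ip=8.8.8.8&key=demo&package=PX2
```

Passing the returned URL to isomorphic-unfetch (installed earlier) and calling res.json() would then yield the fields listed above.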
In the next section, we’ll build a simple state management system for storing the proxy data which will be shared among all site pages.
Building Context API in Next.js
Create the file
context/proxy-context and insert the following code:
import React, { useState, useEffect, useRef, createContext } from 'react'

export const ProxyContext = createContext()

export const ProxyContextProvider = (props) => {
  // Shareable proxy state; the fields mirror what ProxyView displays
  const initialState = {
    ipAddress: '0.0.0.0',
    countryName: 'Nowhere',
    isProxy: false,
    proxyType: '-'
  }
  const [proxy, setProxy] = useState(initialState)
  const isInitialMount = useRef(true)

  useEffect(() => {
    if (isInitialMount.current) {
      // On first render, restore any state saved by a previous visit
      isInitialMount.current = false
      const localState = JSON.parse(localStorage.getItem('ip2proxy'))
      if (localState) {
        setProxy(localState)
      }
    } else {
      // Persist state so a page refresh doesn't trigger a new API call
      localStorage.setItem('ip2proxy', JSON.stringify(proxy))
    }
  }, [proxy])

  return (
    <ProxyContext.Provider value={[proxy, setProxy]}>
      {props.children}
    </ProxyContext.Provider>
  )
}
Basically, we’re declaring a sharable state called
proxy that will store data retrieved from the IP2Proxy web service. The API fetch query will be implemented in
pages/index.js. The information will be used to redirect visitors to the relevant pages. If the visitor tries to refresh the page, the saved state will be lost. To prevent this from happening, we’re going to use the
useEffect() hook to persist state in the browser’s local storage. When a user refreshes a particular landing page, the proxy state will be retrieved from the local storage, so there’s no need to perform the query again. Here’s a quick sneak peek of Chrome’s local storage in action:
Tip: In case you run into problems further down this tutorial, clearing local storage can help resolve some issues.
Displaying Proxy Information
Create the file
components/proxy-view.js and add the following code:
import React, { useContext } from 'react'
import { ProxyContext } from '../context/proxy-context'

const style = {
  padding: 12
}

const ProxyView = () => {
  const [proxy] = useContext(ProxyContext)
  const { ipAddress, countryName, isProxy, proxyType } = proxy

  return (
    <div className="box center" style={style}>
      <div className="content">
        <ul>
          <li>IP Address : {ipAddress} </li>
          <li>Country : {countryName} </li>
          <li>Proxy : {isProxy} </li>
          <li>Proxy Type: {proxyType} </li>
        </ul>
      </div>
    </div>
  )
}

export default ProxyView
This is simply a display component that we’ll place at the end of each page. We’re only creating this to confirm that our fetch logic and application’s state is working as expected. You should note that the line
const [proxy] = useContext(ProxyContext) won’t run until we’ve declared our
Context Provider at the root of our application. Let’s do that now in the next section.
Implementing Context API Provider in Next.js App
Create the file
pages/_app.js and add the following code:
import React from 'react'
import App from 'next/app'
import 'bulma/css/bulma.css'
import { ProxyContextProvider } from '../context/proxy-context'

export default class MyApp extends App {
  render() {
    const { Component, pageProps } = this.props
    return (
      <ProxyContextProvider>
        <Component {...pageProps} />
      </ProxyContextProvider>
    )
  }
}
The
_app.js file is the root component of our Next.js application where we can share global state with the rest of the site pages and child components. Note that this is also where we’re importing CSS for the Bulma framework we installed earlier. With that set up, let’s now build a layout that we’ll use for all our site pages. On the landing page, the visitor’s country (read from the shared proxy state) determines the exchange rate and currency symbol used to display the product price:

let exchangeRate = 0
let currencySymbol = ''
const [proxy] = useContext(ProxyContext)
const { countryName } = proxy

switch (countryName) {
  case 'Kenya':
    exchangeRate = 1
    currencySymbol = 'KShs.'
    break
  case 'United Kingdom':
    currencySymbol = '£'
    exchangeRate = 0.0076
    break
  default:
    break
}

// Format localPrice to currency format
localPrice = (localPrice * exchangeRate).toFixed(2).replace(/\d(?=(\d{3})+\.)/g, '$&,')

return (
  <Layout>
    <Head>
      <title>Landing</title>
    </Head>
    <section className="hero is-warning">
      ...
    </section>
  </Layout>
)

Once you’ve signed up for a Zeit account, you can manage your projects from the Zeit dashboard. With that defined, deploying our application is as simple as executing the following command:
now --prod
The command will automatically run the build process, then deploy it to Zeit.
react-copy-mailto
Node module to add a copy popover on mailto links.
Motivation
The one thing we can all agree on is that we hate it when the default mail app pops up after clicking on mailto links. Most of the time we just want to copy the email address, and that's where this module comes into play. Big shout out to Kuldar, whose tweet thread inspired us to build this.
Installation and Usage
The easiest way to use this library is to install it via yarn or npm
yarn add react-copy-mailto
or
npm install react-copy-mailto
Then just use it in your app:
import React from "react";
import CopyMailTo from "react-copy-mailto";

const YourComponent = () => (
  <div>
    <CopyMailTo email="[email protected]" />
  </div>
);
Props
You can customize almost every aspect of this component using the below props, out of which email is the only required prop.
Development
- Install the dependencies
yarn
- Run the example on the development server
yarn demo:dev | https://reactjsexample.com/a-fully-customizable-react-component-for-copying-email-from-mailto-links/ | CC-MAIN-2021-21 | refinedweb | 165 | 53.85 |
Simple tutorial to understand props vs state in React.js
Use props to send information to a component. Use state to manage information created and updated by a component. #code, #nodejs, #tsc, #react
I have been learning React.js (and blogging about it).
Components are essential building blocks in React.js. As I learned about them, I couldn’t understand the difference between
props and
state of a react.js component.
This guide explains my understanding of
props and
state.
Props
props is short for properties. Like parameters in functions, you can pass properties into a React.js component.
Consider this simple example:
// define a component with a props
interface Props {
  name: string
}

class Hello extends React.Component<Props, any> {
  render() {
    return <h1>Hello from {this.props.name}!</h1>;
  }
}

// invoke the component by passing props
ReactDOM.render(
  <Hello name="Joseph" />,
  document.getElementById("main")
);
Here the class
Hello is a component that accepts a property. Later in the code, we invoke the component by passing the
name property.
Because this is Typescript, we can define the structure of this property, as an interface.
Components can read the property but can’t modify it.
State
A state object is owned by the component. It is used when a component has to create and update information.
State is created by the component in the constructor and changed with
setState method.
Let us see both property and state in action in the following example.
In this example, we are going to display a timer on the browser. We will start with a certain value and at the tick of every second, the app will decrement that value and display it in the browser.
We will use props to send the initial value, and state to track the changes in that value and also to display that current value.
We will create a Timer component first.
If you want to understand the basics of React.js component, read Create React.js component with Typescript.
import * as React from "react";

interface Props {
  startWith: number
}

interface State {
  currentValue: number
}

export class Timer extends React.Component<Props, State> {
  constructor(props: Props) {
    super(props);
    this.state = {
      currentValue: this.props.startWith
    }
    setInterval(() => {
      this.setState({
        currentValue: this.state.currentValue - 1
      })
    }, 1000);
  }

  render() {
    return (
      <div className="Timer">
        {this.state.currentValue}
      </div>
    )
  }
}
The
Timer component takes
startWith as a property and holds
currentValue as a state. The
startWith property will be an input from the invoking component with the initial value for the Timer. As the timer ticks, the
currentValue stores the changes in the timer and renders the modified value.
A state is created in the constructor and initialized. Its value is changed using
setState.
A React.js component has to render itself. This timer component renders its
currentValue. Whenever a
setState is invoked, React.js automatically calls its
render function.
Let us see how to invoke this Timer component with an initial value.
import * as React from "react";
import * as ReactDOM from "react-dom";
import { Timer } from "./components/Timer";

ReactDOM.render(
  <Timer startWith={500} />,
  document.getElementById("main")
);
Now that we understand
props and
state, I’m going to change the Timer component a little bit. The
setState is asynchronous. The above code may not work in some cases because React.js may batch updates. We are going to take this async nature into account and change the Timer component.
setInterval(() => {
  this.setState((prevState: State, props: Props) => ({
    currentValue: prevState.currentValue - 1
  }))
}, 1000);
setState is a function with two parameters—
prevState, and
props. We get the previous
currentValue from
prevState and decrement it.
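To see why the updater-function form survives batching, here is a toy model in plain JavaScript — this is not React's actual implementation, just an illustration that each queued updater receives the state produced by the previous one:

```javascript
// Toy model of batched state updates (NOT React's real implementation).
// Each queued updater function receives the latest intermediate state,
// which is why functional setState is safe under batching.
function applyBatchedUpdates(initialState, updaters) {
  return updaters.reduce(
    (state, updater) => ({ ...state, ...updater(state) }),
    initialState
  );
}

const decrement = (prevState) => ({ currentValue: prevState.currentValue - 1 });

// Three batched decrements starting from 500 really subtract three.
console.log(applyBatchedUpdates({ currentValue: 500 }, [decrement, decrement, decrement]));
// { currentValue: 497 }
```

Had each update been computed from the same stale this.state instead, all three would have produced 499.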
Let us recap.
props is used to send information to the component. A component can not modify
props. A component can create and modify
state to track information. To modify
state, use
setState. Because of the async nature, use
prevState to access its previous value. Whenever a component calls
setState, React.js calls its corresponding
render function.
This is part of the Learning React.js with Typescript series.
If you liked this article, subscribe with the below form to get new articles in this series. | https://prudentdevs.club/props-vs-state-reactjs | CC-MAIN-2019-18 | refinedweb | 673 | 53.37 |
With the upcoming release of .NET Standard 2.1, Microsoft is going to introduce a new Range structure in the System namespace.
All well and good, but since our Spreadsheet API already contains the DevExpress.Spreadsheet.Range interface, you will get the following error when compiling your spreadsheet application under .NET Standard 2.1:
error CS0104: 'Range' is an ambiguous reference between 'DevExpress.Spreadsheet.Range' and 'System.Range'
Now the obvious workaround is to use the following code
using Range = DevExpress.Spreadsheet.Range;
but that's not going to help new customers or even you when writing new applications. Hence, in order to fix this issue properly, we've decided to rename our DevExpress.Spreadsheet.Range interface to DevExpress.Spreadsheet.CellRange in the next major release, v19.2. This change will affect all DevExpress Spreadsheet products:
Once v19.2 has been released and you have upgraded to it, you will need to update your projects to use the CellRange name.
Why not do it now?
@Hedi: Because we hate having breaking changes in minor releases. They are universally bad news: customers assume (as would I, to be honest) that they can update a minor release and recompile with no after effects.
Cheers, Julian
Thanks for taking care of this, I've run into similar problems with other libraries where they have name conflicts with classes that are native to the framework. It's not fun. :)
Thanks a lot for bringing this to our attention Julian
Hi
We're researching upgrading our .NET Windows controls/forms projects to the latest version of the .NET Framework (let's say .NET 4.7), and as a result of that, we need to upgrade the referenced DevExpress components to a newer version which is compatible with that .NET version.
Our projects currently use DevExpress components version 10.1.6.0 with .NET 3.5. Our usage of DevExpress includes XtraGrid, XtraEditors, XtraLayout, Microsoft Office interface components and several more.
The question is: what would it take to achieve that? We understand that's a huge breaking change.
We are looking for advice, guidelines and steps to proceed.
Thanks for your help.
Tuong Nguyen
NewsForge has published what it bills as the "first-ever comprehensive English-language review of Red Flag Linux". Most of you probably know that Red Flag Linux is the "official" Chinese Linux distribution, and receives support - as well as contracts - from the Chinese government. What you may not have known is that, despite being based on Red Hat Linux, Red Flag Linux has opted for KDE as its default desktop. Even more interesting, the description of their "Redflag Linux Desktop" product lists none other than KOffice as the "desktop office solution". Hats off to Red Flag Linux for choosing the right product for the job. I'm not sure if the KDE mailing lists are prepared for a billion more users, but it sure will be nice to see how much KDE development is borne from China's burgeoning info-tech industry!
I doubt that KOffice is really far enough along as a "desktop office solution". Obviously it was not advanced enough for the Korean government. At least they will not be able to send spam letters, because of the missing mail merge functionality.
It's ready for those who have not foolishly locked themselves into arcane, proprietary and freedom-depriving file formats. KOffice does everything the vast majority of people need, is stable, fast, free, doesn't invade your privacy, does a fair job at importing other file formats and doesn't require you to sign a contract with a greedy, arrogant and law-breaking company.
As for Korea, you have no idea why they selected HancomOffice over KOffice or OpenOffice. Maybe it has something to do with the fact that Hancom Linux is a Korean company and they wanted to support it; maybe it has to do with the fact that they used Hancom products before and just decided to stay with it; or maybe you are right that KOffice and OpenOffice doesn't meet their particular needs. But since you don't know, your point is pure speculation, and there is nothing "obvious" about it.
I think the main reason Korea decided to go with Hancom is politics. Mayby you've missed China and Korea isn't best friends..
"aren't" not "isn't".
maybe some grammar help would be useful in KOffice...
Please don't forget there are a lot of us speaking languages other than English. What we need is a lot more than grammar help from an office solution.
Men du kan jo prøve å korrigere dette. ( In norwegian )
> Men du kan jo prøve å korrigere dette. ( In norwegian )
----------------------------------------------------------^
'norwegian' skrives med stor 'N' på engelsk -> Norwegian
.. korreksjon utført:-)
--
Andreas Joseph Krogh
> (In norwegian)
Yeah, and since the material inside the parentheses doesn't end with a period (full stop), "In" probably shouldn't be capitalized, either. This looks like one of those "with many eyes, all bugs are shallow" thangs. If enough of us pitch in and submit patches, we just might be able to develop a completely error-free comment. Oh joy!
Mi tre sxatas tiel signifo-plenan, interkulturan, plurlingvan dialogon! (Cxu ankau en cxi tiu frazo oni trovos erarojn? Versxajne jes.)
> (Cxu ankau en cxi tiu frazo oni trovos erarojn? Versxajne jes.)
Hmm, cxu ni bezonas gramatikokontrolilon por Esperanto en KWord? Tiu estus ankaux utila en la posxtilo ktp... Kaj certe en la novajxgrupilo!
Non capisco perché vi ostiniate a non usare l'italiano... :-)
pareil
Also, ich finde es ja dreist, dass die Norweger jetzt Überhand gewinnen ;)
Wenigstens nicht in Olympia *g*
I think KOffice is a nice product and it should be *enough* for most people...
huy (in Russian).
> Non capisco perché vi ostiniate a non usare l'italiano... :-)
Perché no siamo italiani, e no parliamolo? ;)
> Perché no siamo italiani, e no parliamolo? ;)
Mia scipovo de la itala lingvo ne estas perfekta, sed mi kredas, ke oni devus diri:
Perché non siamo italiani e non lo parliamo.
Cxu ne?
> Mia scipovo de la itala lingvo ne estas perfekta, sed mi kredas, ke oni devus
> diri: Perché non siamo italiani e non lo parliamo.
Nu, pri tio mi ne certas cxar mi ne parolas gxin! Jen la dilemo: kiel oni diru "mi ne parolas vian lingvon" por ke la fremdlingvulo komprenu vin, se vi ne parolas la lingvon...
горячие норвежские парни :) (in Russian)
A :-)) would have been nice.
Wolfgang
I think we *do* know why the Korean government chose Hancom Office:
1. It is a Korean product, and it is in the government's
interest to support a local company.
2. As a Korean product, its Korean language support
must be excellent.
3. It is my understanding that Hancom Office is
the de facto standard in Korea. So no problems
reading other people's documents and older documents.
Most existing government documents are probably in
Hancom Office format. (Even if KOffice had Hancom
import and export filters, they would not be perfect,
and they're always a hassle.)
4. The product is cross-platform. It runs natively on
Windows and Linux. There will be no problems stemming
from the fact that some employees will be using Linux
while others are using Windows. No need for
import/export filters.
5. From what I have read, it is a very good product
(at least the word processor component).
This makes Hancom Office sound like a no-brainer to me, considering the environment in which it will be used. I think that the only other product to come close to satisfying these criteria would be StarSuite (the Asian version of StarOffice), but with StarSuite users would need to use import/export filters to read older Hancom documents and to exchange files with Hancom users.
Must say I agree.
I can't imagine anyone using KOffice seriously *yet*.
I even find it too unstable. I've had KSpread crash on me way too many times.
Whenever it crashes, I just open up another one. It even saves
documents for me. Just hope the Konqueror with the icon in it
doesn't crash.
Whereas, before I came to GNU/Linux, I would have the entire OS crash
and the disks would have to be repaired... Finally I'd have to
go find the document's icon all over again.
"Hats off to Red Flag Linux for choosing the right product for the job."
Well said.
The details of this interest me. They mention "decades of games," so I wonder how many beyond kdegames they ship.
They mention a VCD player, so I wonder if that's xine or what. If they can play VCDs with Kaboodle I wish they'd tell me how they did it!
"Equipped with Netscape browser, pre-configured with 263/163/169/2911 dialup supporting environment." That's disappointing, though vague. They may center on Konq yet. Must get screenshots.
"Support up to 6.4 billion users and groups" heh.
I don't know how they do it, but I read on apps.kde.com about kio_vcd:
Yeah, there's a Xine icon by default on their KDE desktop.
I like Mplayer too.
Greetz from Germany
The Chinese government might have switched to Linux/KDE/KOffice, but I bet that after a couple of months they'll be back with their previous systems. This information gives the impression that entire Chinese government sectors have adopted Linux, which is untrue. And among a population of 10 billion people, it is unlikely that the switch will hit even 2% of it.
ip == Reality Check
why will they be back with their previous solutions after a couple of months?
and no, this article doesn't give the impression that all Chinese governmental concerns run Linux; although it should be alarming to those hoping Linux won't make inroads in the desktop operating system market.
p.s. china had a population of 1.273 billion in 2001, not 10 billion.
Having billions of users in open source systems is more significant than in other commercial projects, don't you think?
Imagine the speeds of future releases. Or just stepping by versions!
Would somebody imagine such a number of developers in any OS project?
Radical changes could appear in a very short time.
I think having that many users would actually present new challenges to a Free Software project. Since they tend to encompass and encourage involvement by users, and since the development process is very open and accessible to the public, one would have to wonder what sort of pressures would occur if the population of the "public" swelled to encompass hundreds of millions of people. It already gets fairly noisy as it is.
Hmm, I always thought that China has 1 billion citizens :)
Furthermore, I don't really care if they all use Linux or whatever; that the government is looking at Linux as a serious prospect is a big win for Linux.
Rinse
Guess how many computers are around there? The government has power. Education is very hierarchical. So if the standard computer-science textbooks mention KDE, every Chinese IT student will learn it. That's it.
It's the Chinese government that is adopting Red Flag Linux, not the entire population. For the short term at least, people that run Windows will continue to do so.
It should also be noted that much less than 1.3 billion people have computers in China.
That would be very good news. Does anyone know if there are any contacts between the Red Flag Linux developers and the KDE developers?
Note that the American government puts much pressure on China to respect software licenses. Most computers run illegal copies of Windows, so China is just treated as a future market by Microsoft. China's support would be great. They will become more independent from US software and avoid software-based espionage. Remember the Boeing airplane?
Correct me if I'm wrong, but the source code for Red Flag Linux is not available.
You're wrong. The source code for Red Flag Linux 2.4 is available here[1].
1.
That directory doesn't exist. Not now it doesn't, but there are directories where you can download source RPMs.
I downloaded their Desktop 3 ISO, and when I booted it up for the first time there was this screen with just a box which looked like it wanted a serial number typed in. Unfortunately, the Chinese at the top of the box was broken, as it sometimes is in Linux, so I couldn't be certain what it was about. A serial number for Linux? I looked through the user forums on their homepage and there was this one Chinese guy asking for the serial number for Desktop 3, but the message was gone a couple of hours later. I mean, what's all that about, and why do they let you download a disc image if you need a serial number of all things to install it?
Yes. As far as I know, ALL the desktop linux distributions in China choose KDE as their default desktop. And, they all highlight KOffice as their "desktop office suit sollution". And so far, the KDE they shipped is better translated than gnome.
Check these sites(if you can read Chinese :):
XTeam:
Red Flag:
Happy:
Yangchunbaixue: (a Chinese KDE, not a linux distro, with small screen shots :))
Hope Chinese young guys can Konquer their language barrier as soon as possible and contribute to KDE and/or Gnome (they already do so, but limited, yet)
Team KDE deserves the Gold. Even if only 10% of the Chinese
population use and contribute to Linux/KDE, that is a
potential of 100 million users and developers.
I better sell my M$ stock...
Ed
> Even if only 10% of the Chinese population use and contribute to Linux/KDE
50 indian people paid by Sun will contribute to Gnome according to a Slashdot.
and they will make a difference?
look how much they have pulled ahead with the current help. basicly what it translates to is 50 more developers who will be guideing GNOME in SUN's direction...
silly mokeys will be so pissed when they discover SUN will base GNOME 3 (or 4 or what ever it is today)on Java instead of .NET... if they last that long...
really the fact that IBM is using KHTML and dcop means more than SUN throwing 50 developers at something that is still very broken...
-ian reinhart geiser
I do (and you should too) believe they´ll make a difference. In free software, any help is appreciated. Be it from 50 people working for Sun, or you and me.
GNOME has gotten pretty far actually. I have installed GNOME 2.0 post beta (from CVS). It´s very nice, and has what Sun wanted in the first place: accessibility (keynav, etc) and a _lot_ of usability improvements. With regarding GNOME, what´s good for Sun is good for all the GNOME users. (I also have KDE 3 from CVS installed).
As far as adopting Java, get real. It ain´t gonna happen. Ever. GNOME is written in C with the goal of supporting as many languages as possible. This means the core of GNOME will always be in C. And that hopefully GNOME supports/will support Java, Mono, C++, Python, Perl, Ada, etc. Sun has no power or desire to make GNOME become a Java project (rewriting GNOME would be too expensive for Sun: it would take a long time and would break source code compatibility big time. That costs more than people realize for a company like Sun). See the comments by de Icaza about moving GNOME to Mono: the same arguments apply.
And your comment about IBM is, IMHO, naive. They use DCOP, KHTML, and they use Windows for some of their products. Go to news.gnome.org/gnome-news and search for ¨IBM¨. It´s a very big company.
btw, you saying GNOME is very broken is plain wrong. There are many people using GNOME and GNOME-based apps out there, they wouldn´t if you were right.
But my point is: GNOME doesn´t hurt you. KDE will not die because of GNOME. GNOME will never affect you in a negative way. And if having a lot of people running KDE is important to you, know this: it´s much easier to switch to KDE from GNOME than, say, Windows. So having a lot of GNOME users is good for you.
What I´d really like people to understand is that the people working on GNOME are ¨cool¨. They´re not losers. They believe in a lot of the same things that you do, and they´re not doing anything against anyone. They diserve respect, and having you saying something like ¨[GNOME is] something that is still very broken...¨, which is plain FUD, is not good for them, for KDE, for free software, for me, and, believe it or not, for you too.
You´re going to be a happier person if you love more than you hate. Give it a try.
Great Perspective when China moves towards Linux.
Asian (Japanese and Chinese) support has been a major headache for me as private Linux user. It is somewhat working nowadays, but support in applications is still something to be thankful for on individual basis rather than to be taken for granted. KDE supports it. Great improvement. Finding a font setting which looks good with both Asian and Roman characters is a challenge. Mozilla's Japanese version has 50% English menues. Why? Activation in applications differs between kde, gnome, and other x applications. Try copy and past between kde apps and Emacs. You won't enjoy it. The only way is: SAVE in emacs, reload with Kedit, then copy to KDE -- or the other way round. Koffice (I checked 6 months ago, maybe it is better now) messed totally up when I typed Chinese. Japanese was slightly better. I have high respect for the great work being done. But please do not underestimate the challenge of making a well working Japanese or Chinese environment, and -- printing. Asian truetypes must go all the way. Typing -- Preview -- Postscript -- Ghostscript -- Printer!
Staroffice 6.0 is still beta, so that is no option for the Koreans. I expect the official one to get some annoying bugs fixed. Koffice is for the future, but not yet as default. I downloaded the Japanese demo-version of Hancom. My first impression was *very* positive. I would seriously consider buying it, if I needed a good and reliable office SW on my home PC. Actually, it does not hurt the spreading of Linux, if there are a few good quality reasonably proced commercial packages available in addition to open SW.
China is the opportunity for Linux, including the desktop. But it is too early to boast about already achieved victories. It will be a loooong march!
Thank you! Finally a wise person. I really hate to see the KDE crowd an the gnome crowd trying to rip each others guts out.
The developers of both camps (if that is the word even) respect each other.
And they deserve respect of both crowds too.
We're not animals are we? (uhm VietNam, W.W.II, Iraque, ...)
Just code, people, instead of fighting over this :)
> The developers of both camps (if that is the word even) respect each other.
Except for one, Geiseri. :-(
I'd even go further and suggest that the continuation of gnome is in kde's interest, especially if it is used on other platforms (like solaris). Why, because it draws attention to linux on the desktop (if gnome's good enough for sun...) , so rather than dismissing linux on the desktop people might have a look and find themselves using gnome _or_ kde (or both).
It's a good thing that a lot of users (and a few developers as well, not many developers per capita in most countries after all) will be using KDE. It's (according to me) the only desktop that is any good at all that is available to the different Linux based OSes out there. So as far as that goes, it's definitly good.
I am on the other hand no beliver in Red Hat, and Red Flat apparently is based on it. Not so terrific as such maybe, and it's one more distro, which isn't that terribly good either. There is a limited amount of development happening in the world, duplication can both be good and counter productive. This falls in the latter category, again according to me.
China has a reputation of not caring about copyright (and as much as we hate the digital copyright developments, copyright is very good for us as citizens if you start to look at the whole picture. It's just that it's getting worse right now) and for wanting to make money. How will this hold up to the licenses and what will happen with code produced in China? It's a big subject, and I have no whatsoever answer up my sleave, I will let others debate it. So this could be horrible, or really great.
The Chinese state censors, it censors a lot, even more than the companies in the west does(which is yet another chapter to be dealt with). Is this a good thing? Well, no it's not of course. But what kind of implications will this have on the software? Again, touchy subject and no answers and a potential very big "not good".
About KOffice and grammatical checks (yes yes, there is another thread, but I am lazy, we all are;)): It's not easy to do, Chinese, Swedish, and Finnish (for example) are radically different and I would suggest that the KOffice/KDE people finds some computer interested linguists to help them and produce a spelling/gram check Komponent to be used within KDE. | https://dot.kde.org/comment/40458 | CC-MAIN-2018-13 | refinedweb | 3,314 | 74.08 |
The QDomAttr class represents one attribute of a QDomElement. More...
#include <qdom.h>
Inherits QDomNode.
List of all member functions.
For example, the following piece of XML gives an element with no children, but two attributes:
<link href="" color="red" />
One can use the attributes of an element with code similar to:
QDomElement e = ....; QDomAttr a = e.attributeNode( "href" ); cout << a.value() << endl // gives "" a.setValue( "" ); QDomAttr a2 = e.attributeNode( "href" ); cout << a2.value() << endl // gives ""
This example also shows that changing an attribute received from an element changes the attribute of the element. If you do not want to change the value of the element's attribute you have to use cloneNode() to get an independent copy of the attribute.Node.
The data of the copy is shared: modifying one will also change the other. If you want to make a real copy, use cloneNode() instead.
See also value().
See also setValue().
See also specified() and setValue().
This file is part of the Qtopia platform, copyright © 1995-2005 Trolltech, all rights reserved. | http://doc.trolltech.com/qtopia2.2/html/qdomattr.html | crawl-001 | refinedweb | 173 | 70.19 |
Off-Grid Comm Hub With Geiger Counter, Flashlights, & P-AC
Introduction: Off-Grid Comm Hub With Geiger Counter, Flashlights, & P-AC
Just the same as the popular expression “it’s not the fall that kills you, it’s the sudden stop at the end!” the majority of people in an Apocalypse scenario won’t falter because of the events that caused the downfall of society, rather they will falter because society has fallen.
With no power, water, transportation, or other critical utilities/services in operation even those with an emergency supply of food and water will evenly succumb to lack of replenishment. Most notably a fall of society will lead to a lack of effective communications which will leave thousands to suffer from starvation while farmers just a few days travel away have fields of rotting crops because they can't find enough help to harvest them. Thus what is essential in preparing for the worst, is not necessarily a closet full of food, water filters, and shotgun shells but an ability to effectively communicate without grid infrastructure.
Intended to be the ultimate booster for the innovative goTenna, this wearable hub comes packed with features usefully for navigating natural disaster zones or a myriad of apocalypses, both providing utility and convenience.
These features being:
Up to 27mile off-grid communications range*
10,000mah Battery Capacity
Geiger Counter (Requires Smart Phone)
Magnetic Phone Mount and Charging Port
Integrated Bluetooth Speaker & Microphone
Standard LED flashlight
Red Light Flashlight/Reading light
Detachable Laser Pointer/Flashlight/Pen
Built in 1W Solar Panel
4ways to Fast Charge**
Bottle Opener
& Last but not Least a Personal AC (P-AC)
.
.
*3-4mile typical reliable range on level terrain
**Recommend to only use one fast charge method at a time
Step 1: Parts List & Solid Model
If you have Solidworks 2013+ or other compatible 3D software the 3D models used to design this concept are included in the attached zip file, the main assembly is named "Personal Comm Hub MK1". They are included as this design is still experimental and I want to make those who want to try a different solution have access to the source materials.
The specific parts I used to build this comm hub are as follows
Core Items:
SoundBot® SB517 Bluetooth Wireless Speaker
Laser Pointer & Flashlight Pen
Hardware Comps:
40mm x 40mm x 11mm Black Aluminum Heatsink
FrogLeggs LED Bottle Opener Flashlight
12" Micro USB Cable (comes with Soundbot)
8" USB Recharge Cable (pick per your phone)
Electronic Comps:
LED Toggle Switch
DPDT Switch
Momentary Push Button Switch
3A Peltier Cooler (2 needed)
1K Potentiometer with Switch & Knob
Arduino Mini Pro & Programmer
Blue LED & Red LED
2X Fuse Blocks
10Amp Fuse
3Amp Fuse
Dual USB 5V 1A 2A Mobile Power Board Module
5dBi Folding Wifi Antenna, mine was salvaged from an old router
5X 18650 Li-Ion Batteries, mine were salvaged from one of these power units
6V 200ma Solar Panel, mine was salvaged from a solar pack
Step 2: 3D Printing the Frame
To make this comm hub as is, you will need a 3D printer, if you don't have one, see my previous build for a few ideas on how to use alternative frames to make one of these.
Per printing also included in the zip file are the STLs I used to make this prototype, of which the main frame was printed out of ABS in two sections to help prevent cascade warping. The two halves were then glued together with acetone applied by syringe. Once glued together, I used a small brush to apply acetone over all exterior surfaces to both help make all the layers stick together and smooth out the layer lines. When brushing on acetone its normal for parts to have a white residue build up in certain areas, the best way to make them go away I have found is to wait 48-72 hours for the acetone finish to fully dry and then to wipe the part down with an acetone wipe.
Also worth noting is that one trick I have for successful prints is to use a slurry bed and as the first layers is printing use the syringe to drip acetone on all the corners of the part so that they stick extra strong during the printing process thus greatly helping to prevent warping.
Step 3: Wiring Overview & Component Prepping
Though shown here, one last item included in the zip file, and as a separate download, is a high res PDF version of the wiring diagram that above all else is the core design aspect that must always be observed when assembling this comm hub.
To ready components for assembly two tasks need to be done, the first is to use a pipe cutter or hacksaw to cut the Frog light down to size, and then drill a small hole through it so that a ground wire can be secured with solder. The second is to disassemble the Soundbot speaker into its requisite components so it can then be reassembled within the framework of the comm hub.
An optional task to take the male ends of the USB cables, strip off all the insulation, and turn them into right angle power only connectors. This is a trick I often use to make my power draws from USB ports fit into a much tighter space then the USB cable would normally allow.
Step 4: Adding Frame Components Step 1
All components prepped and printed the main frame assembly can begin, following the wring diagram add the battery pack, Soundbot, Solar charger, Frog Light, NeoPixel and USB Power Core. Components were held in place with a combination of GO2 Glue, except for the solar module which was screwed on, applied first to parts and then hot glue hold everything together for convenience while the GO2 sets. Of note is that the speaker does not get glued in place directly because the vibrations would eventually undo all glues, rather the outer rubber rim gets sandwiched between the base and a retaining ring that is glued in place (see solid model). Of note is that the status LEDs fit in the small hole opposite where the red momentary button to turn on the red-light goes, with a small dab of hot glue over the top of the hole used to make an opaque lenses.
On the hardware side add the cover for the speaker after the speaker has been installed, of which the cover shown was not 3D printed, even that file is included in the solid model if needed, rather it salvaged from Gear Head Portable Bluetooth Speaker, as it looks much nicer than a 3D print. Then glue on the magnetic phone mount, per the boss that it fits into, making sure that when the phone is mounted it can swivel without hitting the speaker cover.
Worth noting is that the purpose in the design for the laser pointer pen, that fits into the hole on the underside of the frame just next to where the frog light goes, is both to have a convenient writing utensil available at all times, and also that I have a theory that Zomibes can be lead into traps, just the same as cats, as the majority of mindless critters have a documented tendency to be easily distracted and enticed by them.
Step 5: Adding Frame Components Step 2
Finish the frame assembly by putting together the components for the Geiger Counter and then gluing them to the frame such that the Geiger can be folded outward to better probe potentially irradiated areas.
Worth noting is that while many solutions exist to mount the Geiger Counter on some kind of pivot, a high gain Wifi antenna was used as planned future hardware/software updates will be to add off-grid Wifi device to device connectivity, much like the goTenna, however at much higher bandwidth and shorter range than the goTenna, which will be useful for transferring large files such as apps and media content from one device to another.
As the high gain antenna has a tendency to flip open on its own, a small high strength magnet, was glued into the small hole where the Geiger sleeve fits with super glue, a #6 washer was placed over that and subsequently super glued to the Geiger sleeve so that when the antenna was folded in the metal washer sticking to the magnet would keep the antenna folded in.
Step 6: Finish Wiring & Programming
As everything, aside from the fan, should be in place the finial wiring task is to methodically double check all connections, with the wiring diagram before you start flipping switches, even if you have been checking them as you go. More so given that the utility density of this design creates a high circuit density, which can make mistakes hard spot; as I rather found out the hard way when I smoked a power board and had to get another one because of a reverse polarity issue I missed before powering up!
Once verified, you can upload the program shown below to the Arduino. Of note is that the code was written with the shown three fundamental PCBs in mind and any changes to those might necessitate changes to the program as well. When using the Arduino IDE to upload the program, make sure to download the Adafruit NeoPixle library into the correct folder before trying to upload.
Speaking of the Arduino and its code, the core components of the design, notably all those that run off of the Arduino, can be found in the following open source Autodesk Circuits Simulator project, if you want to experiment with the code before uploading. Of note is that as the simulator does not exactly have the all components I used, a buzzer is used in place of the peltiers, the diodes are fuses, the 5V power supply is the Dual USB 5V Board, a 12 NeoPixel ring is the 8 NeoPixel strip so only the first 8 NeoPixels actually light up, and lastly the DPDT switch is actually two SPDT switches on top of each and thus both have to be switched at the same time properly to simulate changing the DPDT switch from the cold to the hot setting.
At this point if the batteries are too low to test with, and you don't want to leave the unit in sunlight+ for 1.5days, you can fast charge by either use a 2.1mm 5-6V DC jack or USB mini cable with the solar charger, or the 3.5mm USB audio jack with the SoundBot charging port, or a USB micro cable with the Dual USB 5V Board (fastest). However, as previously stated don't use more than one fast charging method at once.
Worth noting about the solar charging unit, should you be planning extensive off-grid use, is that the 2.1mm jack is positioned such that a cable running down the arm from a solar backpack, with a large 5Watt+ cell should be sufficient to keep the unit fully powered through the day, so long as all systems are not used constantly, leaving plenty of juice to last the night.
#include <Adafruit_NeoPixel.h<adafruit_neopixel.h></adafruit_neopixel.h><adafruit_neopixel.h></adafruit_neopixel.h><adafruit_neopixel.h></adafruit_neopixel.h><adafruit_neopixel.h>><br> int pinPot = A0; //Potentiometer int pinLightBTN = 10; //Momentary Button int pinRunPels = 11; //Activate Peltier Coolers int pinStatusLED = 13;<br><p>int iLightMode = 0;<br>int iSensorValue = 0; int iRunTime = 5000; int iOffTime = 10000; int iCountDelay = 0; int iLastTemp = 0;</p><p>#define PIN 12 //Activate NeoPixles //800);</p><p>void setup() { // put your setup code here, to run once: Serial.begin(9600); // set up Serial library at 9600 bps Serial.println("Debug Mode"); strip.begin(); strip.show(); // Initialize all pixels to 'off' pinMode(pinLightBTN, INPUT); pinMode(pinRunPels, OUTPUT); pinMode(pinStatusLED, OUTPUT); }</p><p>void loop() { // put your main code here, to run repeatedly:</p><p> iSensorValue = analogRead(pinPot); // reads the value of the potentiometer (value between 0 and 1023) Serial.println(iSensorValue); iSensorValue = map(iSensorValue, 0, 1023, 8, 0); // scale it between 0 and 8 Serial.println(iSensorValue); Serial.println("--");</p><p> if (iSensorValue == 8) { iRunTime = 15000; iOffTime = 5000; } else if (iSensorValue == 7) { iRunTime = 10000; iOffTime = 5000; } else if (iSensorValue == 6) { iRunTime = 5000; iOffTime = 5000; } else if (iSensorValue == 5) { iRunTime = 5000; iOffTime = 10000; } else if (iSensorValue == 4) { iRunTime = 4000; iOffTime = 10000; } else if (iSensorValue == 3) { iRunTime = 3000; iOffTime = 10000; } else if (iSensorValue == 2) { iRunTime = 20000; iOffTime = 10000; } else if (iSensorValue == 1) { iRunTime = 1000; iOffTime = 10000; } else { iRunTime = 0; iOffTime = 1000; } if (iSensorValue != iLastTemp) { iLastTemp = iSensorValue; DispalyTempCount(); } digitalWrite(pinStatusLED, HIGH); RunPeltiers(); digitalWrite(pinStatusLED, LOW); if (digitalRead(pinLightBTN) == HIGH) { iLightMode += 1; delay(600); if (iLightMode >= 4) { iLightMode = 
0; } } if (iLightMode == 1) { //Red Half TurnLightsRedHalf(); } else if (iLightMode == 2) { //Red Full TurnLightsRedFull(); } else if (iLightMode == 3) { //Red Full TurnLightsWhiteQuarter(); } else { //Else Off TurnLightsOff(); }</p><p>}</p><p>///////////////////////////// P-AC Subroutines ///////////////////////////// int RunPeltiers(){ //On Cycle iCountDelay = 0; digitalWrite(pinRunPels, HIGH); do { if (digitalRead(pinLightBTN) == HIGH) { iCountDelay = iRunTime + 1000; Serial.println("Skip 1"); } delay(100); iCountDelay += 100; } while (iCountDelay <= iRunTime); //Off Cycle iCountDelay = 0; digitalWrite(pinRunPels, LOW); do { if (digitalRead(pinLightBTN) == HIGH) { iCountDelay = iOffTime + 1000; Serial.println("Skip 2"); } delay(100); iCountDelay += 100; } while (iCountDelay <= iOffTime); return 0; }</p><p>int DispalyTempCount(){ //strip.setPixelColor(n, red, green, blue); --> 0 = Off 255 = Full if (iLastTemp >= 1) { strip.setPixelColor(7, 0, 25, 0); } if (iLastTemp >= 2) { strip.setPixelColor(6, 0, 25, 0); } if (iLastTemp >= 3) { strip.setPixelColor(5, 0, 25, 0); }</p><p> if (iLastTemp >= 4) { strip.setPixelColor(4, 0, 25, 0); } if (iLastTemp >= 5) { strip.setPixelColor(3, 0, 25, 0); }</p><p> if (iLastTemp >= 6) { strip.setPixelColor(2, 0, 25, 0); } if (iLastTemp >= 7) { strip.setPixelColor(1, 0, 25, 0); } if (iLastTemp >= 8) { strip.setPixelColor(0, 0, 25, 0); } strip.show(); delay(1000); TurnLightsOff(); return 0; }</p><p>///////////////////////////// Lighting Subroutines ///////////////////////////// int TurnLightsOff(){ //strip.setPixelColor(n, red, green, blue); --> 0 = Off 255 = Full strip.setPixelColor(0, 0, 0, 0); strip.setPixelColor(1, 0, 0, 0); strip.setPixelColor(2, 0, 0, 0); strip.setPixelColor(3, 0, 0, 0); strip.setPixelColor(4, 0, 0, 0); strip.setPixelColor(5, 0, 0, 0); strip.setPixelColor(6, 0, 0, 0); strip.setPixelColor(7, 0, 0, 0); strip.show(); return 0; }</p><p>int TurnLightsRedHalf(){ //strip.setPixelColor(n, red, green, blue); --> 0 = Off 
255 = Full strip.setPixelColor(0, 50, 0, 0); strip.setPixelColor(1, 50, 0, 0); strip.setPixelColor(2, 50, 0, 0); strip.setPixelColor(3, 50, 0, 0); strip.setPixelColor(4, 50, 0, 0); strip.setPixelColor(5, 50, 0, 0); strip.setPixelColor(6, 50, 0, 0); strip.setPixelColor(7, 50, 0, 0); strip.show(); return 0; }</p><p>int TurnLightsRedFull(){ //strip.setPixelColor(n, red, green, blue); --> 0 = Off 255 = Full strip.setPixelColor(0, 200, 0, 0); strip.setPixelColor(1, 200, 0, 0); strip.setPixelColor(2, 200, 0, 0); strip.setPixelColor(3, 200, 0, 0); strip.setPixelColor(4, 200, 0, 0); strip.setPixelColor(5, 200, 0, 0); strip.setPixelColor(6, 200, 0, 0); strip.setPixelColor(7, 200, 0, 0); strip.show(); return 0; }</p><p>int TurnLightsWhiteQuarter(){ //strip.setPixelColor(n, red, green, blue); --> 0 = Off 255 = Full strip.setPixelColor(0, 25, 25, 25); strip.setPixelColor(1, 25, 25, 25); strip.setPixelColor(2, 25, 25, 25); strip.setPixelColor(3, 25, 25, 25); strip.setPixelColor(4, 25, 25, 25); strip.setPixelColor(5, 25, 25, 25); strip.setPixelColor(6, 25, 25, 25); strip.setPixelColor(7, 25, 25, 25); strip.show(); return 0; }</p><br></adafruit_neopixel.h>
Step 7: Create the P-AC Wrist Mount
A definite optional step as the peltiers used to make the P-AC tend to draw quite a bit of power, for only a marginal gain of utility, since unfortunately they can't actually change your body temperature like a real AC. However, what the P-AC can do* which unlike my previous version of the concept is also now a P-Heater** too, via a flip of the switch, can do is make someone notably more "comfortable" in environments that are between sweating and shivering temperatures.
While the peltiers are optional the brace is not, of which note the photos for how to cut the brace apart and then glue it back together to make the wrist brace Velcro flap starp work in reverse of how in normally does, otherwise such a heavy unit rather notably be constantly rocking back and forth on your wrist when walking.
If you do want to do add the P-AC/P-Heater use thermal epoxy to glue two peltiers to the heat sink with a copper pad in between said peltiers. Of note is that only one peltier is strictly need, though I used two plus the copper pad to achieve the correct height for the unit to be able to be constantly pressed against my skin, but no tighter. Either way use a bit of epoxy to coat the sharp edges of the upper most peltier so that it does not scratch your skin when inserting your hand into the brace.
.
.
*More testing needed but results with this more powerful peltier system in prototype 2.0 seem to defiantly indicate that P-ACs can have a definite mind/over matter effect, and that goes even more so for P-Heaters.
**Only use the P-Heater on the lowest settings as it can become hot enough to give mild burns if left too long on the higher settings.
Step 8: Finally Assembly
Feed the wires from the P-AC wrist mount into the frame and then with careful positioning attach both halves together with a very generous amount of hot-glue set to high heat or other equivalent adhesives.
If assembled Correctly, the front of the comm hub should halfway overlay your fingers with should provide optimal balance of the unit while still allowing for free grasping of items, such as my Apocalypse Mechanics Machete ;) Once verified that everything fits and works install the final component which is the 5v fan, that is last to go on as it greatly hinders installing everything else, I found. By carefully being bent back a few fins of the heat sink I found that #6-32x.75" machine screws could be used to securely mount the fan to the heat sink.
.
Conclusions:
While still an experimental design, that admittedly needs a slimmer shape on the next design iteration. This design greatly improved on its predecessor and is now what I use as my GPS mount & Bluetooth audio booster for road trips. More importantly, however, having a way to keep in contact with no-grid infrastructure plus having a myriad of way to keep everything charged built in, will see me use this current protopye, as is, should I ever get chance to help in a natural disaster area, or get caught in a Zombie Apocalypse!
Great idea ! But it seems we can't rely on this type of counter.
Also worth noting is that next build I'm going to use one these Geiger Counters so that it can be used without a smartphone, and also they are supposedly quite precise
How so? I know these mini Geiger Counters
are not precision instruments, but at the same time I was under the
impression that is was accurate enough to tell if the general
environment you are walking through is radioactive, especially if you deploy it outward so that it has a clear reading of the environment,
should you find yourself in a Fallout wasteland, but forgot your Pipboy
;)
Also of note is that there is a bigger pro version of that
counter you can substitute instead that's supposed to be more precise,
and it should work with the design by making a few slight modifications,...
I didn't saw the zip file. Is it there?
Sorry about that uploaded the PDF and not the zip file by the same name. Anyhow it should now show up just below the bill of materials.
Thank you man! I think it take quite a long time to do it and it looks great. Now I can add it to my prepper "to do" list!
Your welcome and also I should note I'm doing a few updates to the model so that it will work with an off the shelf solar panel will let, as well as adding in a few missing components I did not have time to draw originally, so I will let you know when its updated. | http://www.instructables.com/id/Off-Grid-Comm-Hub-With-Geiger-Counter-Flashlights-/ | CC-MAIN-2018-05 | refinedweb | 3,410 | 55.07 |
If you saw the amount of work you had to do in my past article, you'll appreciate how much work you DON'T have to do using the new Flex Component Kit for Flash CS3 (now on labs with 50% more beta!). I've modified my example from that article using the new kit to show you how much easier it is both to create Flash assets for Flex and to use them IN Flex 2. Deadline at work, so too little time to do another Captivate; hopefully images'll do.
First, let’s take a look at the FLA in CS3 on my MacBook. Notice, nothing on the root timeline / stage.
Going into the BackgroundPanel symbol in the library, notice no code on frames. You can see the bottom layer is where I put my bounding box. This is a convention used since Flash 5 (before my time, hehe) which gives the MovieClip a “default width and height”. Now, for Flash components, this was typically used for the size methods written in ActionScript. In Flex, it does a whole lot more. Flex has a layout engine that measures MovieClip’s, and that way “knows” when to show scrollbars. By default UIMovieClip measures every frame so if your Flash animation increases in size, so too will your content in Flex. When you are playing inside of containers inside of Flex, this is VERY important. Here, though, I actually WANT to remain square. Therefore, I make a 800 x 600 MovieClip with an instance name of “boundingBox”.
Now, the contract you have to abide by as a Flash Designer when creating assets for Flex comes down to 2 things: states and transitions. You create different "states" that your component / animation exists in. In my panel's case, these are: hidden, intro, main, and spin. Flash Developers typically would use either the timeline or different redraw methods; a Flex Developer uses the state tags. These 2 approaches are now combined: the Flash component "represents" the state tags in Flex via frame labels on the MovieClip symbol's timeline. This way, a Flex Developer can use the familiar syntax on your component:
panel.currentState = “spin”;
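To ground that: once the symbol has been converted with the kit's convert-to-Flex-component command and published as a SWC on your Flex project's library path, it shows up as a class extending mx.flash.UIMovieClip, and you drop it into MXML like any other component. A minimal sketch of the Flex side — the namespace and the Button wiring are my assumptions for illustration, not from the FLA:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Assumes BackgroundPanel was compiled into a SWC by the kit
     and that SWC is on this project's library path. -->
<mx:Application xmlns:
	<!-- The Flash symbol comes in as a class extending mx.flash.UIMovieClip -->
	<local:BackgroundPanel id="panel"/>
	<mx:Button label="Spin it" click="panel.currentState = 'spin'"/>
</mx:Application>
```

The nice part is the Flex Developer never has to know the guts are timeline animation; it's just another component with states.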
The second is transitions. Transitions are the changes that happen form one state to the next. For example, in my Login Form example, I show how you can change from one state to the other and give visual feedback to the user you’ve changed state. You represent transitions in Flash via specifically named frame labels.
For good or ill, transitions are (at least with the version I have) all or nothing. Most Flash Designers who are timeline junkies will read that and laugh. That's the attitude. I've found the component acts weird if you only define transitions for some states and not others. Best to define at least a default transition to each state.
One quirky thing is that the UIMovieClip will go to your state frame label when it’s done playing the transition, if any. I didn’t really like this. The compromise Adobe came up with is to put your state frame label on the same frame as your ending transition frame label. Now, most of us Flash Purists think putting 2 frame labels on the same frame is blasphemy… madness!
…however, it actually works out well here (thanks Glenn Ruehle!). It makes it easier for me as an animator to have my ending transition “end” on the state. However, for more complex animations, like the one Grant Skinner showed at 360Flex, it actually behooves you to animate unique transitions for each state change.
For example, if I were animating a human character, he could have 3 states: stand, run, and sit. If I were to set the current state from sit to run, I wouldn’t want him to immediately start running; I’d have to animate him to stand first. For non-animators, this can seem like a lot of work. For those of us in the know, it’s necessary for a good looking component.
So, again, you may find another frame label timeline convention you dig; no worries.
The syntax for states is “my state”; basically whatever you want. The syntax for transitions is a little more complex. It follows the way transitions in Flex work. They have a “fromState” property and a “toState” property. Transitions are always bound to a state change. You can choose a state name as a string OR a *. A * means “any state”. So, if you write fromState="*" toState="spin", that means anytime the state changes to spin, play this transition.
Frame labels in Flash use a “from-to:state” syntax. Dissecting this, the from is your from state, and can be the name of the state, OR *. Same goes for the to; the to state name or a *. The state is the name of the state, and must be a state name. Always put a start and end set of transition frame labels. If you misspell the frame label, currently there is no JSFL to help you out, and the Flex Dev will most likely get an error dialogue that isn’t easily debug-able. He’ll probably blame you, so just make sure for every “start” you see an “end”. Keep in mind too that for some transitions, it’ll use the end frame to play them backwards depending on what state you are changing to, so animate accordingly.
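For instance, a panel with the states used in this example might carry labels like these (the frame numbers are invented here for illustration):

```text
frame 1     hidden                     (state label)
frame 2     *-intro:start              (transition into "intro" begins)
frame 25    *-intro:end  +  intro      (transition end and state label share a frame)
frame 26    main                       (plain state, no transition)
frame 27    *-spin:start
frame 55    *-spin:end   +  spin
```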
You’ll notice for my simple example, I’ve been slack and just put the state frame labels at the end of the transition animations.
For example, if the Flex Developer were to use this component, and set the currentState = “intro”, then it would do a gotoAndPlay for the “*-intro:start” frame.
Same goes for the spin, regardless of what state you were on previously:
Notice that the main state doesn’t really have any transition; it’s just the main state where you mainly are, and thus it is plain Jane, reflective of a main type of state. Word. Yes, it does have an event sound, though.
Now, when creating this component, you do what you typically do in Flash. Create a new MovieClip symbol, name it, and hit ok. You’ll notice the Linkage dialogue in Flash CS3 has a new field: All Your Base Class are Belong to Adobe. When you create a MovieClip in Flash CS3, it’ll default to flash.display.MovieClip. Good, leave it be, let it create your symbol.
Later, you can run the Flex Kit JSFL command, and it’ll put the needed SWC in the Library, and then convert your symbol for you to have the correct base class (mx.flash.UIMovieClip); all you have to do is select it in the Library. It’ll tweak your FLA settings (in a good way) too.
Poof; it’s exported as an SWC where your SWF usually is. Done!
Few more quirks. Flex states are based around the Flash Player 9 DisplayList model. DisplayObjectContainers are basically MovieClip-like things. Except, instead of createEmptyMovieClip, attachMovie, duplicateMovieClip, unloadMovie, and removeMovieClip, we have a brand new API. This API allows us to create MovieClips, but NOT have them draw. In the past, we’d stop its animations, turn it invisible, move it off stage, and pray. This didn’t work so well with a ton of MovieClips. Now, we can just go:
var a:MovieClip = new MovieClip();
And call methods on it, but not actually have it automatically have to be drawn in some depth. Now, we go:
addChild(a);
And THEN it’ll draw. Cool, right? You can even re-parent it, and remove it without killing all the code on it. Dope.
Problem? Flex states are built atop this. When you go to a state, things are removed from the DisplayList and others are added. What happens to those that are removed? Flash Player does what it was built to do; it removes them from memory as needed.
Great for application developers, suck for designers. From the day I got into this industry, I’ve made it my mission to ensure my stuff plays hot, and if it means taking all of your CPU and RAM to do so, so be it; it’s there for the taking Blackbeard style.
The hack I showed at 360Flex has been modified slightly. Basically, I’d put all of the frames on a MovieClip, since those have to be preloaded before drawn, and then put it before the animation. Now, I’ve found better results doing the same thing, but having it last the entire tween. Notice the spin and intro preloader layers in the above pictures below the animations. You can see what’s inside of those MovieClips here. Technically, you should just put them all on one frame, but I ran out of time. I then just made them alpha 0. If you move them off-stage, the measuring code will think your Flash component got really really big, and then you’ll see scrollbars in Flex, and go wtf?
So, what do you do with your SWC? You give it to the Flex Dev, and smoke ‘em if ya got ‘em. The Flex Dev then, if he’s a purist, will throw it in a lib directory. For this example, I just throw it in the root project. You then link it to your Flex Project via adding it to the Flex Build Path as a Library via Project > Properties, and then Flex Build Path, Library Path tab, and then the Add SWC button.
You’ll notice a sample implementation here with Buttons setting the current state of the panel. Notice the lack of strong-typing; I’m such a rebel (-es=true -as3=false -strict=false 4 uber h@X!!!).
Now, I think I broke something, but the convention is “boundingBox”… or its ilk. You name your MovieClip that, and that’ll define your component’s rect. In MXML (or AS if you were so inclined) you can set the bounding box instance name to use. I don’t recommend this, but hey, freedom’s rad, choose what you want.
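A sketch of what that looks like in MXML (the property name here, boundingBoxName, is from the Flex 3 UIMovieClip API as I recall it; the namespace prefix and symbol name are made up to match this example):

```xml
<!-- "comps" assumed to map to the package the SWC symbol lives in -->
<comps:BackgroundPanel id="panel" boundingBoxName="boundingBox"/>
```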
This’ll compile to something like this. If you’d like to play, click the image. WARNING: Has sound - she’s loud.
To give you a visual example of what this would look like in Flex (uber simplified btw), here’s the states defined in MXML instead.
And the corresponding (optional) transitions that match those states.
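In rough form, a sketch of the Flex side using the state names from this example (not the original listing; each Transition would normally wrap effect tags, omitted here):

```xml
<mx:states>
    <mx:State name="intro"/>
    <mx:State name="main"/>
    <mx:State name="spin"/>
</mx:states>

<mx:transitions>
    <mx:Transition fromState="*" toState="intro"/>
    <mx:Transition fromState="*" toState="spin"/>
</mx:transitions>
```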
Some dig MXML, some dig Timelines, I dig both.
Few final tips. Every time you recompile a new SWC (Test Movie / Publish, whateva’), don’t forget to Refresh the Flex Project, or you’ll get the cached version. I usually use an Ant script to just copy the new .SWC from the assets directory (or however you setup your Flex projects). Also, not sure if it’s fixed, but if your Flex Dev starts getting code hints for “frame31():Object”, don’t be alarmed. If you put code on the timeline, I’m the only guy I know that fires people for that (except stops of course… I’m not THAT mean). These methods are merely created from code you have on that specific frame (thanks Robert Penner!).
One thing to notice in some of the above images of Flash’s timeline is the Flex friendly layer vs. the Flash friendly. You un-guide the Flex friendly one for development and testing. That way, Flex Builder doesn’t chug loading all of those uncompressed PNGs into memory, thus making Flex Builder compiling really slow. When ready, you just un-guide the Flash friendly one, and re-guide the Flex friendly one, and finally recompile. Naturally, most people are using video nowadays or hand animated bitmaps & vectors which are immensely less RAM intensive. Then again, good to know.
Also, this whole example didn’t use any ActionScript. There is no reason I couldn’t make an ActionScript 3 class for my component. If you don’t write a class, Flash CS3 will make one for you. So, I didn’t have to make a “BackgroundPanel2.as” file. If I wanted to, I could have written an AS3 class to maybe play some other scripted animations, or play sounds… video, whatever.
Finally, don’t forget to remind the Flex Dev to up his framerate via the Application tag (or compiler… either or)!
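As a sketch, that reminder amounts to something like the following (31 fps is an assumed value here, matching a typical Flash authoring setting):

```xml
<mx:Application xmlns:
    <!-- application content -->
</mx:Application>
```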
another great post … thanks again for the useful tidbits. i think i went all googly-eyed when i read this part… heh.
‘…the from is your from state, and can be the name of the state, OR *. Same goes for the to; the to state name or a *. The state is the name of the state, and must be a state name. ‘
word.
Justin
April 24th, 2007
Hah! I shoulda been in bed instead of writing about complicated application state machines.
JesterXL
April 24th, 2007
I was wondering about the best policy for making button components for Flex using CS3. I know there are multiple options but sticking with the Flex Component kit, it’s convenient to create custom buttons using states on UIMovieClip just as for other types of components. This has the advantage of creating cool transitions on the buttons between the states. However, it is a pain for the Flex developer to wire up all the event listeners for rollover, click, etc. So do you think it is a good idea to create a custom base class which wires the events up for the Flex developer and can be shared across all buttons? As long as the designer uses the same state names then buttons can be used right from the swc by the Flex developer. I tested the below on a button and it worked.
package com.eui.cs3.components {
    import mx.flash.UIMovieClip;
    import flash.events.Event;

    public class MCButton extends UIMovieClip
    {
        public function MCButton() {
            super();
            addEventListener('mouseDown', handleDown);
            addEventListener('mouseOver', handleOver);
            addEventListener('mouseUp', handleUp);
            addEventListener('mouseClick', handleClick);
        }
        protected function handleDown(event:Event):void {
            this.gotoAndStop('down');
        }
        protected function handleOver(event:Event):void {
            this.gotoAndStop('over');
        }
        protected function handleUp(event:Event):void {
            this.gotoAndStop('up');
        }
        protected function handleClick(event:Event):void {
            this.gotoAndStop('click');
        }
    }
}
John Wright
May 4th, 2007
While you can use UIMovieClip controls as skins, I agree, it doesn’t work with the animations, so it’s better for the Flash Designer to code it himself to ensure it looks right. The downside is, this asks a lot of a Flash Designer to code their buttons, but… it’s either him, or the Flex Dev, and I’d have more faith in the Flash Designer getting it right, so…
JesterXL
May 5th, 2007
Hi Jesse, thanks for this tutorial.
Controlling of animation states are really helpful in terms of visual impact of a project,
I was wondering how can i get my SWC to pass out data to the main mxml in which it sits in? Like passing a string var?
Are there any examples for this?
Im just starting out on flex2 trying to learn =]
Cheers,
Ziig
Ziig
May 10th, 2007
When you export, Flash will create an AS3 class for you. You do, however, have the option of writing it yourself. It’s just an AS3 class that is treated like any other component. So…
And then in Flex:
JesterXL
May 10th, 2007
Hey Jesse,
I am wondering if you have any insight on why the code completion in flex for the added CS3 swc works at first then stops working completely. I can still build from flex and see the component show up correctly, I just don’t get any code completion for either the namespace it’s added under nor attributes on the CS3 component tag.
I’ve tried removing the swc and putting it back, quitting flex and starting it again. Clean build, refresh. Any ideas? I’m on a mac.
Thanks!
Shawn Makinson
May 11th, 2007
Jesse - you’re the man. Thanks for taking the time to write this up - it provides a great insight.
Question - have you found the performance of flash running slower when it is in a flex movie as opposed to running it outside of flex?
Thanks. Keep up the great work.
Austin
May 14th, 2007
Yes. Obviously the first reason is because Flex is set to like 24 fps by default (I think) whereas the Flash we usually create is 31. When it’s brought into Flex, it runs slower … because it was told to do so.
Even when you match Flex’s framerate though, yeah, a tad. This is uber subjective so I wouldn’t take my agreement as a scientific acknowledgement. Either way, it could just be the ‘_level0 is framerate god’. Since you are loading a SWF INTO another SWF that is _level0… then, well… ‘no high-framerate 4 j00!’.
JesterXL
May 14th, 2007
Hi,
very nice article. Thank you for it.
Is it possible to bind the value of a text field in the SWC to a bindable property in my application model?
I guess it requires some manual work but it would be nice to know if there is some clever way of doing it.
Best,
Sammi
Sammi
June 22nd, 2007
Hi Jesse,
THanks for this article. I’ve been looking into how to load Flash CS3 - flex kit components into a flex app dynamically - ie not compiling them into the swf. I’ve tried RSLs with no joy. Have you any suggestions.
Thanks
Ged
Ged McBreen
June 25th, 2007
Hi,
I have a dynamic text field in a movie clip, that I export an an SWC. I can load the SWC in Flex without any problems. But, how do I change the value of the text in the dynamic text field in Flex?
Thanks.
Vidya
October 10th, 2007
Great Article!
Looking foward for next tutorial.
verz
November 10th, 2007
All of this is well and fine for simplistic examples like the one in the article and the ones in the comments. Try using a more complex Flash movie in Flex like a dynamic wrapping panning world map with parts of symbols outside the viewable area, welcome to a hellish nightmare! I had to manually hide parts of the flash movie that are outside the Flex container and add tons of code to handle resizing the Flex container… anyone had this problem before? Also, I had to instantiate and initialise my Flash symbols from Flex, I got null errors if I didn’t (the script in the flash movie couldn’t find the symbol instances, including stage). I can’t even use creationComplete, I have to delay it later, else e.g. stage won’t be available to my Flash script.
Any comments/suggestions?
virgo_ct
December 2nd, 2007
I noticed that your example above has sound clips in the Flash symbol that you export to Flex. I am doing something a bit simpler that contains sound clips as well. I have been unable to get the sound clips to work in the Flex application. Everything else works great (states, transitions, etc), I just can’t get any sound. Is there some kind of trick to getting sound to work when using the Flex Component Kit? Would you share your source?
Thanks
ryanw
February 4th, 2008
Hi, when we make a linkage in Flash, the default base class is flash.display.MovieClip, but in this example it is mx.flash.UIMovieClip.
Is there any difference between the two classes?
nishit
April 6th, 2008
hey jesse,
I’ve compiled an SWC from Flash and imported it in Flex. It’s working fine, but when I apply a scale-9 grid transformation to that component, it’s not occurring. Do you have any inputs / ideas?
ravish
September 6th, 2008
Adding to the above point, I am using this in AIR on Flex 3.
ravish
September 6th, 2008 | http://jessewarden.com/2007/04/example-for-flex-component-kit-for-flash-cs3.html | crawl-002 | refinedweb | 3,316 | 73.07 |
Machine Learning is what drives Artificial Intelligence advancements forward. Major developments in the field of AI are being made to expand the capabilities of machines to learn faster through experience, rather than needing an explicit program every time. Supervised learning is one such technique, and this blog mainly discusses ‘What is Supervised Learning?’ Let’s define Supervised Learning and move further along with the topic: in supervised learning, a model learns to map inputs to outputs with the help of the labelled data.
Practice makes one perfect! The same applies to machines as well. As the number of practice samples increases, the outcomes produced by the machine become more accurate.
Following are the topics covered in this blog:
There are two types of supervised learning techniques, classification and regression. These are two vastly different methods. But how do we identify which one to use and when? Let’s get into that now.
The regression technique predicts continuous or real variables. For instance, the target variables could be ‘height’ or ‘weight.’ This technique finds its application in algorithmic trading, electricity load forecasting, and more. A common application that uses the regression technique is time series prediction. A single output is predicted using the trained data.
On either side of the line are two different classes. The line can distinguish between these classes that represent different things. Here, we use the classification method.
Whereas, regression is used to predict the responses of continuous variables such as stock price, house pricings, the height of a 12-year old girl, etc.
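To make the split concrete, here is a tiny self-contained sketch on invented height/weight numbers: the regression half fits a line and predicts a continuous weight, while the classification half predicts a discrete label. The data, cutoff, and labels are all made up for illustration.

```python
# Invented toy data: height (cm) -> weight (kg); the numbers lie exactly
# on the line weight = 0.7 * height - 55, so the least-squares fit is exact.
heights = [150, 160, 170, 180, 190]
weights = [50, 57, 64, 71, 78]

# Regression: predict a continuous value (slope/intercept by hand).
n = len(heights)
mean_h = sum(heights) / n
mean_w = sum(weights) / n
a = sum((h - mean_h) * (w - mean_w) for h, w in zip(heights, weights)) \
    / sum((h - mean_h) ** 2 for h in heights)
b = mean_w - a * mean_h
print(round(a * 175 + b, 1))   # predicted weight for 175 cm -> 67.5

# Classification: predict a discrete label instead (cutoff is made up).
def classify(height, cutoff=170):
    return "tall" if height >= cutoff else "short"

print(classify(175))           # -> tall
```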
Next, we are checking out the pros and cons of supervised learning. Let us begin with its benefits.
Data is the new oil. Hence, it is put to use in a variety of ways. We will now discuss one such interesting case: Credit card fraud detection. Here, we will see how supervised learning comes into play.
There are numerous applications of Supervised Learning including credit card fraud detection. Let us use exploratory data analysis (EDA) to get some basic insights into fraudulent transactions. EDA is an approach used to analyze data to find out its main characteristics and uncover hidden relationships between different parameters.
The digitization of the financial industry has made it vulnerable to digital frauds. As e-payments increase, the competition to provide the best user experience also increases. This nudges various service providers to turn to Machine Learning, Data Analytics, and AI-driven methods to reduce the number of steps involved in the verification process.
Let us load some data into Python:
#importing packages
%matplotlib inline
import scipy.stats as stats
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('ggplot')
df = pd.read_csv('creditcard.csv')
We can use different algorithms to get the results. But which one to use here? Let us try out these algorithms one by one and understand what each can offer.
pd.set_option('precision', 3)
df.loc[:, ['Time', 'Amount']].describe()
#visualizations of time and amount
plt.figure(figsize=(10,8))
plt.title('Distribution of Time Feature')
sns.distplot(df.Time)
This is among the most common Supervised Learning examples.
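The plots above only describe the data; the detection step itself is a supervised classifier trained on labelled transactions. Below is a minimal, self-contained sketch on synthetic, made-up data: a single ‘amount’ feature and a mean-midpoint threshold rule. It is not a real fraud model; real systems use many features and a proper library such as scikit-learn.

```python
# Minimal supervised fraud-detection sketch on synthetic, made-up data.
# Labels: 1 = fraud, 0 = legitimate. Fraud amounts skew high purely by
# construction of this toy generator.
import random

random.seed(42)

def make_data(n):
    data = []
    for _ in range(n):
        if random.random() < 0.1:                        # roughly 10% fraud
            data.append((random.uniform(400, 900), 1))
        else:
            data.append((random.uniform(5, 300), 0))
    return data

train = make_data(500)

def fit_threshold(data):
    """'Training': midpoint between the two class means."""
    fraud = [x for x, y in data if y == 1]
    legit = [x for x, y in data if y == 0]
    return (sum(fraud) / len(fraud) + sum(legit) / len(legit)) / 2

threshold = fit_threshold(train)

def predict(amount):
    return 1 if amount > threshold else 0

test = make_data(200)
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"threshold={threshold:.1f} accuracy={accuracy:.2f}")
```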
We had an in-depth understanding of ‘What is Supervised Learning?’ by learning its definition, types, and functionality. Further, we analyzed its pluses and minuses so that we can decide on when to use the list of supervised learning algorithms in real. In the end, we elucidated a use case that additionally helped us know how supervised learning techniques work. It would be great if we could discuss more on this technique. Share your comments below.
Hello,
I am a student and next year we get the subject Cisco. So, I was allowed to take an old Cisco router from my work and use it to practise my command line skills and learn some stuff so that next year would be easier.
Because I don't have any experience with the command line, I decided to configure the router first with the Cisco Configuration Assistant and safe a back-up from the settings and then go learning the command line and practise the command line.
So, I configured the router perfectly: a couple of vlans, wifi networks, every single thing works. If I connect a computer to it, it gets an IP address and everything works nicely. Except for one thing, I don't have internet access.
I have the following network setup:
Modem -> Sitecom Router (this one works) -> Cisco router
I hooked up one of the lan ports from the Sitecom router to the WAN port of the Cisco router, this should work, but it doesn't.
Then I tried connecting the WAN port of the Cisco router directly to the modem, as described in the Quick Start Guide, but that doesn't work either.
Yes, the WAN Port is active and configured as DHCP not Static or PPPoE.
NAT is turned off, but that shouldn't make a difference according to my colleague (if we are wrong feel free to tell).
If I look in the Sitecom router's connected computers list, I don't see the Cisco router either. It looks like it doesn't have a MAC address. But if I go to the command line and go to the interfaces bit, I see the "$FW_OUT" or something like that with a MAC address. I put that address with a static IP in my Sitecom router, but it doesn't recognize my Cisco router, so it can't give that IP.
Does anyone know how to fix this???
It should be possible to get it working right?
Kind Regards,
Maurice
Solved! Go to Solution.
My laptop is currently on vlan1, default, with the ip 192.168.10.11.
Probably not showing in the show run, because it is in standby, but I can connect to the router from my laptop as I can access it for example from the Configuration Assistant.
Can you ping 8.8.8.8 from your PC?
It could be a DNS issue you are facing that means you can't resolve IP addresses. Have you configured your PC IP address manually or picked it up from the DHCP scope on the router?
I have dhcp from the dhcp scope, I can ping 8.8.8.8.
But I don't have any dns servers, I checked it in the GUI because I don't know the commands (yet).
If not having any dns servers is the problem, how do I add a dns server from the command line, I know how it works in the GUI but I want to learn CLI.
And which dns server do I add, my sitecom router I think?
Because that is linked to the dns server from my internet service provider and if I want I can manually connect to Google's dns server in windows.
In your DHCP scope you are telling your PC's that they should use the router for DNS queries - this won't work. You can either change your DNS servers to public google ones, something like:
ip dhcp pool data
import all
network 192.168.10.0 255.255.255.0
default-router 192.168.10.1
dns-server 8.8.8.8 8.8.4.4
Or you can use your ISP assigned DNS servers.
Change that and renew your IP address and you should be on the internet!
What are the CLI commands?
Because #ip dhcp pool data doesn't work.
I'm running IOS 12.4 or something.
I did it anyway via the GUI, but I really want to know the right commands, so I can do it via the console.
And that access-list thing how does that work?
You said something about changing the settings for the access list.
By the way, THANK YOU SO MUCH!
It works!
I assigned every vlan's dns with the ip from my Sitecom router, and it just works nicely.
Now it's time for me to get ftp working and save a back-up from the settings and then I go messing with the CLI!
And if I want to put my Sitecom router behind the Cisco one, I have to do all that static ip stuff again I think?
I also want MAC address filtering, but I can figure that out myself... I think...
I just wanted to get it working and it works!
Thank you very very much for helping, I really appreciated it.
Maurice,
Just put:
#conf t
#ip dhcp pool data
#import all
#network 192.168.10.0 255.255.255.0
#default-router 192.168.10.1
#dns-server 8.8.8.8 8.8.4.4
You have to go into configuration mode before you can enter configuration commands. If your Sitecom router just connects via ethernet to your modem then there is no reason why you can't just get rid of it and change the Cisco router WAN interface address to 192.168.42.x and then modify your default route.
Glad you got it working - persevere with the CLI - it is complex but it pays off in the long run. GUI access is good to get started but understanding the CLI will pay in the long run.
Configuration mode.... right.
Sounds logical for configuration commands.
I only use GUI for get it working, but I noticed a lot of things that you can do with CLI you can't do with GUI. For school we have to learn it with the CLI, that's why I want to practice this summer vacation, so learning all the commands and getting used to the CLI would be easier.
Thank you again for all the help, I really appreciate it.
Euhm... mfurnival? The normal lan networks seem to work, but the wlan does not. I can connect, get an IP, but then I get DNS errors.
How do I fix this?
The wifi networks are connected to the correct vlan, so it should work, I think?
You need to add the DNS server entries under the wireless vlan dhcp pool.
I use three vlans for both lan and wlan, can that cause conflicts?
Because the dns servers are set correctly and exactly the same as vlan1.
The only difference between the settings of the vlans are the ip's that the dhcp gives out.
I even added the excluded IPs, because those were on the default vlan, and now they all should be exactly the same.
OK, I just looked at your config again. You need to add your wireless subnets to acl 1 to tell the router to nat them.
Commands? Please?
Seriously, the GUI helps with basic functions, but I can't find the acces-list or nat stuff.
Do you by any chance know how to configure smartports by using the CLI?
Because I think I kinda fucked up when I did it with the GUI...
I did it!!!
I changed the NAT settings for the vlans all by myself!!! jeeeeeeeej!
Now how does that ACL thingy work with commands?
I said that ACL2 to ACL6 should allow any ip. don't know why I did that, but it was worth a try.
So, how do I add all of the vlans to ACL1? or ACL2? or whatever I have to do to get it working?
And how do you save all the settings with the CLI?
I know how to do it in the GUI, but how do you do it in the CLI, so that when I turn the router off it has stored all the changes?
I changed wlan1 back to vlan3 instead of vlan1, but now I can't get to the internet anymore on vlan1.
Then I checked the nat settings, FastEthernet0/0 was still on outside, vlan1 was on "outside", while the rest was on "inside".
I don't know if this is good or bad, so maybe you know?
And now I can't connect to the internet on any vlan.
For vlan1 I think the problem is that the nat settings are on outside?
But I can't get it to inside weirdly, I tried your commands but that didn't help.
I tried adding the 0.0.0.0 255.255.255.0 to acl1, but that didn't help either.
Even worse, vlan1 doesn't generate IP addresses anymore (maybe because its NAT setting is on outside?).
I've uploaded a new show run log, because I don't know where to look and you do.
Hopefully we can fix this... again...
The dns is still good and untouched!
Maurice, wow you have made quite a mess
There are a few things wrong that I can see.
First, in each of your DHCP pools you have specified two DNS servers but they are in the wrong order. Anyway, to correct it do this:
#conf t
#ip dhcp pool data1
#dns-server 8.8.8.8 192.168.0.1
#ip dhcp pool psp
#dns-server 8.8.8.8 192.168.0.1
#ip dhcp pool gasten
#dns-server 8.8.8.8 192.168.0.1
Also do this:
#no access-list 1 permit any
Also do this:
#no ip nat inside source list 1 interface BVI1 overload
and then type this:
#ip nat inside source list 1 interface fa0/0 overload
and this:
#int BVI1
#ip nat inside
And let me know how you get on. | https://community.cisco.com/t5/switching/solved-cisco-uc520-unable-to-connect-to-internet/td-p/2233082/page/2 | CC-MAIN-2020-40 | refinedweb | 1,645 | 81.83 |
Herkulex servos move in unison
The other day, I configured four Herkulex servos for use at 57600 baud, so I could communicate with them from an Arduino‘s software serial port. Here’s what happened next.
I had previously started on my own Arduino library for communicating with these servos. Then my brother, whose google-jutsu is apparently stronger than mine, sent me a link to this page. It turns out that Dongbu has an Arduino HerkuleX library already available, and mine was still at the blinky-light stage, so I decided to download it and give it a try.
To use the library, you instantiate a Herkulex object, and call one of several “begin” methods (depending on which serial port you’re connecting to the servo chain). You can then call the following methods:
- torqueOn: activate torque mode
- torqueOff: deactivate torque mode (free spin)
- turn: start continuous rotation at the given speed
- getTurnSpeed: get current rotation speed
- movePos: move to a position, specified in 1024ths
- getPos: get current servo position, in 1024ths
- moveAngle: move to a position, specified in degrees
- getAngle: get current servo angle, in degrees
- getStatus: get current error state
- clear: clear the error state, if any
That’s it — pretty basic stuff. Each function takes the ID of the servo you want to talk to. The move, movePos, and turn methods take an optional “play time” (duration over which the motion should be done), and also let you set the LED color.
A few things are missing, in my opinion: no way is provided to get or set any register; you can’t move multiple servos with one command; and you can’t even set the LEDs unless you also issue a move or turn command. But there’s something to be said for simplicity, too.
The library comes with a test program that lets you interact with a single servo by typing simple commands in the serial window. After playing with that a bit, I wanted to try controlling all four at once. I wrote the following Arduino sketch:
#include <HerkuleX.h>

#define RX 10 // Connected with HerkuleX TX Pin
#define TX 11 // Connected with HerkuleX RX Pin

void setup()
{
  Serial.begin(9600);            // Open serial communications
  HerkuleX.begin(57600, RX, TX); // software serial
  delay(10);

  // Clear errors on all servos
  for (int i = 1; i <= 4; i++)
    HerkuleX.clear(i);

  // Torque ON for all servos
  HerkuleX.torqueOn(HERKULEX_BROADCAST_ID);

  // Reset all servos to center position, lights on green.
  for (int i = 1; i <= 4; i++) {
    HerkuleX.movePos(i, 512, 0x60, HERKULEX_LED_GREEN);
  }
  delay(1000);
}

void loop()
{
  for (int i = 1; i <= 4; i++) {
    HerkuleX.movePos(i, 512 + 64 * i, 0x90, HERKULEX_LED_BLUE);
  }
  delay(5000);

  for (int i = 1; i <= 4; i++) {
    HerkuleX.movePos(i, 512, 0x90, HERKULEX_LED_GREEN | HERKULEX_LED_RED);
  }
  delay(5000);
}
This initializes all four servos to their center position, and then in the main loop, cycles each one between two positions, with a brief delay. Each servo moves a different distance, but all over the same amount of time (about 3 seconds). Here’s a video — please forgive the poor quality of the video, which is entirely my own fault.
I inserted a little breadboard wire into each servo horn, to make it easier to see them turning. You can see that they do all move smoothly, and all arrive at the same time.
The experiment wasn’t entirely without a hitch, however. It turns out that my cheap 1.5-12V adjustable wall wart couldn’t supply enough current to service all four servos at once when they moved quickly. With simultaneous quick movements, the voltage would sag and the servos would reboot (which is easy to spot by the white LED flash always displayed on startup). The wall wart is rated at 0.3A, and these servos can draw about 0.5A each, so I guess this is not too surprising.
I worked around it by using a lower speed (and the fact that this works indicates some nice things about power management in the servos themselves). For the future, though, I’ll certainly need to bring out a beefier power supply.
All told, the Dongbu Arduino library works as advertised, as do the servos themselves. I might like to add a few more functions to the library for advanced use, but it’s certainly enough to get started. Arduino + Herkulex = easy robotic fun! | https://botscene.net/2012/12/02/herkulex-servos-move-in-unison/ | CC-MAIN-2017-34 | refinedweb | 728 | 70.63 |
Jacob Sonia wrote: public class Fizz {
int x = 5;
public static void main(String[] args) {
Fizz f1 = new Fizz(); //new object on the heap
Fizz f2 = new Fizz(); //new object on the heap
Fizz f3 = switchFizz(f1, f2); //f3 reference points on f1 object
if(f1 == f3)f1 = f2; //that's true - f3 reference points on f1 object (== checks if there are the same object)
System.out.println((f2 == f3) + " " + (f2.x == f3.x)); //false false - f2 doesn't point on the same object as f3
//f2.x = 6(because there are f1=f3) and f3.x=5;
}
static Fizz switchFizz(Fizz f1, Fizz f2){ //we have the copy of references passing to this method (copy and original //are the same!)
final Fizz z = f1; //OK this is only final reference. There are no final objects!
z.x = 6;
return z; //reference z points on f1
}
} | http://www.coderanch.com/t/452945/java-programmer-SCJP/certification/code-printing-false-false-book | CC-MAIN-2014-52 | refinedweb | 146 | 72.16 |
This document assumes that the reader has prior knowledge of LISP and its network components. For detailed information on LISP components, their roles, operation and configuration, refer to and the Cisco LISP Configuration Guide. To help the reader of this whitepaper, the basic fundamental LISP components are discussed here.
LISP uses a dynamic tunneling encapsulation approach rather than requiring the pre-configuration of tunnel endpoints. It is designed to work in a multi-homing environment and supports communications between LISP and non-LISP sites for interworking. A LISP-enabled network includes some or all of the following components:
• LISP Name Spaces, defining two separate address spaces:
– End-point Identifier (EID) Addresses: Consists of the IP addresses and prefixes identifying the end-points. EID reachability across LISP sites is achieved by resolving EID-to-RLOC mappings.
– Route Locator (RLOC) Addresses: Consists of the IP addresses and prefixes identifying the different routers in the IP network. Reachability within the RLOC space is achieved by traditional routing methods.
• LISP Site Devices, performing the following functionalities:
– Ingress Tunnel Router (ITR): An ITR is a LISP Site edge device that receives packets from site-facing interfaces (internal hosts) and encapsulates them to remote LISP sites, or natively forwards them to non-LISP sites.
– Egress Tunnel Router (ETR): An ETR is a LISP Site edge device that receives packets from core-facing interfaces (the transport infrastructure), decapsulates LISP packets and delivers them to local EIDs at the site.
Note: LISP devices typically implement ITR and ETR functions at the same time, to allow establishment of bidirectional flows. When this is indeed the case, the LISP devices are referred to as xTRs.
• LISP Infrastructure Devices:
– Map-Server (MS): An MS is a LISP Infrastructure device that LISP site ETRs register to with their EID prefixes. The MS stores the registered EID prefixes in a mapping database where they are associated to RLOCs. All LISP sites use the LISP mapping system to resolve EID-to-RLOC mappings.
– Map-Resolver (MR): An MR is a LISP Infrastructure device to which LISP site ITRs send LISP Map-Request queries when resolving EID-to-RLOC mappings.
– Proxy ITR (PITR): A PITR is a LISP Infrastructure device that provides connectivity between non-LISP sites and LISP sites by attracting non-LISP traffic destined to LISP sites and encapsulating this traffic to ETR devices deployed at LISP sites.
– Proxy ETR (PETR): A PETR is a LISP Infrastructure device that allows EIDs at LISP sites to successfully communicate with devices located at non-LISP sites.
EID namespace is used within the LISP sites for end-site addressing of hosts and routers. These EID addresses go in DNS records, just as they do today. Generally, EID namespace is not globally routed in the underlying transport infrastructure. RLOCs are used as infrastructure addresses for LISP routers and core routers (often belonging to Service Providers), and are globally routed in the underlying infrastructure, just as they are today. Hosts do not know about RLOCs, and RLOCs do not know about hosts.
LISP functionality consists of LISP data plane and control plane functions.
Figure 2-1 Communication between LISP Enabled Sites
1. The client sitting in the remote LISP enabled site queries through DNS the IP address of the destination server deployed at the LISP enabled Data Center site.
2. Traffic originated from the client is steered toward the local LISP enabled device (usually the client's default gateway). The LISP device first performs a lookup for the destination (10.17.1.8) in its routing table. Since the destination is an EID subnet, it is not present in the RLOC space, so the lookup fails, triggering the LISP control plane.
Note: In the current IOS and NX-OS LISP implementation, the LISP control plane is triggered if the lookup for the destination address produces no results (no match) or if the only available match is a default route.
3. The ITR receives valid mapping information from the Mapping Database and populates its local map-cache (the following "LISP Control Plane" section clarifies the control plane communication required for this to happen). Notice how the destination EID subnet (10.17.1.0/24) is associated with the RLOCs identifying both ETR devices at the Data Center LISP enabled site. Also, each entry has associated priority and weight values that are controlled by the destination site to influence the way inbound traffic is received from the transport infrastructure. The priority is used to determine whether both ETR devices can be used to receive LISP encapsulated traffic destined to a local EID subnet (load-balancing scenario). The weight allows tuning the amount of traffic received by each ETR in a load-balancing scenario (hence the weight configuration makes sense only when specifying equal priorities for the local ETRs).
4. On the data plane, the ITR performs LISP encapsulation of the original IP traffic and sends it into the transport infrastructure, destined to one of the RLOCs of the Data Center ETRs. Assuming the priority and weight values are configured the same on the ETR devices (as shown in Figure 2-1), the selection of the specific ETR RLOC is done on a per-flow basis, based on hashing performed on the 5-tuple L3 and L4 information of the original client's IP packet. The format of a LISP encapsulated packet is shown in Figure 2-2.
Figure 2-2 Format of a LISP Encapsulated Packet
As shown in Figure 2-2, LISP leverages a UDP encapsulation where the src port value is dynamically created and associated to each original flow to ensure better load-balancing of traffic across the transport infrastructure.
5. The ETR receives the packet, decapsulates it and sends it into the site toward the destination EID.
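The per-flow RLOC selection described in step 4 can be sketched as follows. This is a minimal illustration, not the actual IOS/NX-OS hashing internals (which are not specified here); the function and variable names are hypothetical, and SHA-256 stands in for whatever hash the platform really uses:

```python
import hashlib

def select_rloc(flow, rlocs):
    """Pick an ETR locator for a flow.

    flow:  5-tuple (src_ip, dst_ip, proto, src_port, dst_port)
    rlocs: list of (address, priority, weight); lower priority wins,
           and among equal-priority entries the weights bias the split.
    """
    best = min(p for _, p, _ in rlocs)
    candidates = [(a, w) for a, p, w in rlocs if p == best]
    # Deterministic per-flow hash so every packet of a flow uses one RLOC.
    h = int(hashlib.sha256(repr(flow).encode()).hexdigest(), 16)
    bucket = h % sum(w for _, w in candidates)
    for addr, w in candidates:
        if bucket < w:
            return addr
        bucket -= w

# Two ETRs with equal priority and weight, as in Figure 2-1.
rlocs = [("12.1.1.2", 1, 50), ("12.1.2.2", 1, 50)]
flow = ("10.16.1.5", "10.17.1.8", 6, 40000, 80)
print(select_rloc(flow, rlocs))
```

Because the hash is computed over the full 5-tuple, two flows between the same hosts but with different source ports can land on different ETRs, while packets within one flow always take the same path.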
While Figure 2-1 shows only the North-to-South flow, a similar mechanism would be used for the return traffic originated by the DC EID and destined to the remote client, where the LISP devices would exchange their roles of ITRs and ETRs.
Figure 2-3 shows the use of PxTR devices to establish communication between devices deployed in non-LISP sites and EIDs available in LISP enabled sites.
Figure 2-3 Communication between non-LISP Enabled Sites and LISP Enabled Sites
Once the traffic reaches the PITR device, the mechanism used to send traffic to the EID in the Data Center is identical to what previously discussed. For this to work, it is mandatory that all the traffic originated from the non-LISP enabled sites be attracted to the PITR device. This is ensured by having the PITR injecting coarse-aggregate routing information for the Data Center EIDs into the network connecting to the non-LISP sites.
Note
Detailed discussion on the deployment of a PxTR to provide services to non-LISP enabled location is out of the scope of this paper. Please reference the following document for more detailed PxTR configuration information:
Figure 2-4 describes the steps required for an ITR to retrieve valid mapping information from the Mapping Database.
Figure 2-4 LISP Control Plane
1. The ETRs register with the MS the EID subnet(s) that are locally defined and for which they are authoritative. In this example the EID subnet is 10.17.1.0/24. Map-registration messages are sent periodically every 60 seconds by each ETR.
2. Assuming that a local map-cache entry is not available, when a client wants to establish communication to a Data Center EID, a Map-Request is sent by the remote ITR to the Map-Resolver, which then forwards the message to the Map-Server.
Note: The Map-Resolver and Map-Server functionality can be enabled on the same device. More discussion about MS/MR deployment can be found in the "LISP Host Mobility Operation" section on page 3-5.
3. The Map-Server forwards the original Map-Request to the ETR that last registered the EID subnet. In this example it is the ETR with locator 12.1.1.2.
4. The ETR sends to the ITR a Map-Reply containing the requested mapping information.
5. The ITR installs the mapping information in its local map-cache and starts encapsulating traffic toward the Data Center EID destination.
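The registration-and-lookup exchange above can be modeled with a toy mapping database. This is an illustrative sketch only (the class and method names are invented, and real Map-Register/Map-Request messages are UDP control packets, not method calls):

```python
import ipaddress

class MapServer:
    """Toy mapping database: EID prefix -> list of (RLOC, priority, weight)."""

    def __init__(self):
        self.db = {}

    def register(self, eid_prefix, rloc, priority, weight):
        # Corresponds to an ETR's periodic Map-Register for its EID subnet.
        net = ipaddress.ip_network(eid_prefix)
        self.db.setdefault(net, []).append((rloc, priority, weight))

    def map_request(self, eid):
        # Corresponds to resolving a Map-Request for a specific EID.
        addr = ipaddress.ip_address(eid)
        covering = [p for p in self.db if addr in p]
        if not covering:
            return None  # not a registered EID: a negative reply in real LISP
        best = max(covering, key=lambda p: p.prefixlen)  # longest-prefix match
        return best, self.db[best]

ms = MapServer()
ms.register("10.17.1.0/24", "12.1.1.2", 1, 50)  # first DC ETR
ms.register("10.17.1.0/24", "12.1.2.2", 1, 50)  # second DC ETR
print(ms.map_request("10.17.1.8"))
```

The ITR would cache the returned (prefix, RLOC-set) pair in its local map-cache, so subsequent packets to 10.17.1.0/24 are encapsulated without another control-plane round trip.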
Kaminsky Bug Options Include "Do Nothing," Says IETF
DNS (Score:5, Funny)
Re: (Score:2, Informative)
It looks like you mixed up the resolver and the client.
Re:DNS (Score:5, Funny)
To know recursion, you must first know recursion.
Re: (Score:3, Funny)
Consider my mind officially blown.
Re:DNS (Score:5, Funny)
You keep that up, I might just blow my stack.
Re: (Score:2)
Why is this troll? This is some funny shit and does seem to be largely accurate.
DNS to slashdot has been hacked .... (Score:3, Funny)
and I am reading the wrong site. The aliens can return the real slashdot now. Surely IETF would never choose to "Do Nothing"
:-)
Re: (Score:2)
Re: (Score:2)
So what powers does the IETF have on this? (Score:3, Interesting)
I'm trying to figure out exactly what they're deciding. Yes, I understand it's a discussion about "upgrade to DNSSEC" vs. "implement the hacks". But these guys don't control the internet, and my understanding is they only make "recommendations", which nobody is obliged to follow.
So what exactly are these guys debating about "doing"? Is it really just "recommend DNSSEC" or "recommend the hack"?
Re:So what powers does the IETF have on this? (Score:5, Interesting)
No one likes patching sinking ships but it's better than nothing. Doing nothing and waiting for DNSSEC are nearly the same thing.
Re: (Score:2)
Re: (Score:3, Informative)
Re: (Score:3, Insightful)
Well, they don't support some other as-yet-nonexistent alternate security fix for the Kaminsky Bug, either.
Re: (Score:3, Insightful)
Re:So what powers does the IETF have on this? (Score:4, Informative)
I guess we have different definitions of "exists", unless you mean it exists as a list of as yet unsolved problems.
Re: (Score:3, Informative)
Ummm, it does exist. It just hasn't been deployed, due to the issues listed.
Car analogy alert:
I have my car (DNSSEC) sitting in the garage. It exists.
I want to drive (deploy) it, but my wife, teenage kids and I are all arguing over who gets to drive, where we are driving to, and what route we are going to take.
Hell, your own post states it:
...and deployments in various domains have begun to take place.
Re: (Score:2)
We have a national highway designed to permit or deny traffic based on road signs, traffic laws, and traffic law protocol and the police as a kind of ICMP.
The entire country, nay the entire world, would li
Re:So what powers does the IETF have on this? (Score:4, Informative)
you need to work on your reading comprehension skills.
DNSSEC exists plain and simple. it's already been deployed for a lot of domains and root nameservers. just because there are difficulties hampering its widespread adoption doesn't mean it doesn't exist. that's like saying IPv6 doesn't exist because it's still suffering from a lack of widespread adoption.
none of the factors preventing more widespread deployment are problems that need "solving." in fact, they're more social/political problems than they are technical problems. so the "solution" to these problems is simply to persuade/pressure/coerce DNS servers to adopt DNSSEC, which is what IETF is debating about.
Re: (Score:3, Interesting)
Like for example, dnscurve [dnscurve.org], which requires very little effort to set up, is actually backwards compatible with DNS, protects against some denial of service attacks (instead of creating them), and oh yeah doesn't require the cooperation of the parent zone.
DNSSEC is a joke. A bad bad joke. Replacing DNS with something not-DNS isn't any better an idea than replacing the Internet wi
Re: (Score:3, Insightful)
sound like recommending that everyone start playing Duke Nukem Forever.
Yes, with the limitation that only one can have the keyboard at a time.
Considering that both Europe and China are launching their own satellite navigation networks, largely as a distrust issue, the idea that a single signed DNS root will be politically digestible over anything but a very short term shows a certain... detachment... from the actual politics of the world.
I suspect that even if DNSSEC gets deployed to any large extent it'll
Re: (Score:2)
Who said YOU have to trust anybody?
Instead, we rid ourselves of all domain names that do not state country code (.com,
.net, .gov, ...). Then, each country sets up a national Pubkey Archive and authenticates via their country key. Each country sets their own up, so they are in charge, not some arbitrary country "somewhere else".
That idea also easily allows enterprising "hackers" and other types to have their own auth server and provide their own domain setup. All you need do is to aim your domain resolver a
Re: (Score:3, Insightful)
This sounds like a good idea on the surface -- it'll never happen, of course, because too many companies and individuals have too much invested in the
.com, .net, etc. without the country codes... but still, I like the consistency it all brings.
Re: (Score:2)
This sounds like a good idea on the surface -- it'll never happen, of course, because too many companies and individuals have too much invested in the
.com, .net, etc. without the country codes... but still, I like the consistency it all brings.
If you've got e-mobsters smacking companies around a lot with DNS cracks because
.com, etc. aren't signed, you will see migration to signed domain trees. And masses of bitching. (You'll always get that.) Or possibly people stopping the arguments over root domain signing and getting on with it by appointing someone to do it. But still with the bitching.
The biggest slip-up in DNS was how the
.us domain was managed for many years. And it was a policy mistake, not a technical one.
Re: (Score:3, Interesting)
With DNSSEC, if the person running the root were to sign incorrect data and publish it, this would be easily detected by the consumers of that data. So it would only serve to create an embarrassing international incident - you couldn't use it to actually fool anybody.
Re: (Score:2)
IETF participants pointed out that DNS software packages from BIND, Nominum, Microsoft and NLnet Labs have added patches for the Kaminsky bug, and 75% of DNS servers have been upgraded to thwart Kaminsky-style attacks. The IETF also is putting the finishing touches on a best-practices document that outlines ways for DNS server operators to protect against spoofing attacks like those that exploit the Kaminsky bug.
So you're correct. Patches are out and the IETF is just debating their stance.
Re: (Score:3, Informative)
Those patches are no fix, they only make the attack a little bit harder, and were easy to do without changing the current protocol or authoritative server software.
Most of the proposed interim solutions do require a change in the protocol and/or authoritative server software, and those will need to be supported until the end of time (or when DNS goes away, which is probably not before a decade after that), and make debugging of misconfigurations that much harder, especially when several of these additions w
Re: (Score:2)
Hesitant? Hesitant!?
Look, this isn't a bunch of ninnies holding back progress. DNSSEC is a replacement for DNS. It always has been, and for some god awful reason it's taken its architects over a decade to get nowhere. Deploying DNSSEC gains you nothing and costs you a lot: you have install costs, heavier hardware, changes to your internal infrastructure - those are the obvious ones - then you've also got the fact that the DNSSEC tokens will get your DNS packets stripped by some firewalls, which means you disapp
Re: (Score:2)
Argument by vigorous assertion? If people are interested in delving deeper, reading the namedroppers and dnsop mailing lists for the month around the release of the Kaminsky bug would be instructive.
Re: (Score:2)
The namedroppers list has, in the last 10 years I've been monitoring it, been a source of misinformation and frequently mismanaged.
Kaminsky's bug is a rehash of an old bug that non-BIND nameservers were already strong against.
If your sole source of information about DNS comes from the likes of Randy Bush, you sir are an embarrassment to network administrators everywhere.
1. According to the IETF [dnssec.net], DNSSEC was started in 1993. That's far longer than a decade.
2. A controlled, toplevel deployment of DNSSEC to
.SE
Re: (Score:3, Interesting)
When you read this article, I think it's virtually impossible to come away with a correct impression - this is a really bad case of the game of telephone.
The situation is that the major DNS vendors have all produced patches to their DNS server software that increase the entropy of the queries these servers send, so that instead of spoofing being something that can be done with the relatively trivial hack Dan came up with, you now have to pretty much bludgeon the network to death for about 24 hours to g
How many legs does the Kaminsky bug have? (Score:2, Funny)
If it's eight, then it's probably that perishing missing space station spider!
In which case, you go get the vacuum cleaner and I'll stand here shaking in the corner emitting arachnophobic screams...
Re:How many legs does the Kaminsky bug have? (Score:5, Funny)
It's a space station. You don't need a vacuum cleaner. Just open a window.
Re: (Score:2)
I concede defeat, sir.
I cannot think of a witty retort to one of the best "smartarse" replies I have ever had.
Re: (Score:2)
It hurts. And you owe me a new keyboard. Know that your life was not in vain, even if you never surpass this.
Re:How many legs does the Kaminsky bug have? (Score:4, Funny)
It's a space station. You don't need a vacuum cleaner. Just open a window.
No way! That would make a vacuum dirtier!
Re: (Score:2)
Groan!
Re: (Score:2, Funny)
If it's eight, then it's probably that missing space station spider!
I'll stand here shaking in the corner emitting arachnophobic screams...
I'm sorry, I can't hear you scream...
Re: (Score:2)
That was funny but cstdennis deserves drowning in positive "+1 Smartarse" moderations...
Old news? (Score:4, Informative)
As often, Ars Technica has had this for a while. [arstechnica.com]
I quote:
"This would be less of an issue if the widely released patch from two weeks ago had been fully deployed"
And:."
Yawn.
Re: (Score:2)
You know, for all of Kaminsky's brilliance, he's got math problems.
Going from "one in several hundred million to one in a couple billion" is not "tens of thousands of times harder". I guess it would make his quote a little less exciting.
"The exploit is now tens of times harder" just doesn't have any flair to it.
Re: (Score:3, Informative)
That's not what he meant. He meant that the chance is *now* between one in several hundred million and one in a couple billion.
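The numbers being argued about here are easy to check with back-of-envelope arithmetic (the port count is an assumption; real implementations vary in which ephemeral range they draw from):

```python
# Odds of spoofing a single DNS response, per forged packet.
txid_space = 2 ** 16             # 16-bit transaction ID: 65,536 values
port_space = 2 ** 16 - 1024      # assume all ports above 1024 are usable

before = txid_space              # fixed source port: guess the TXID only
after = txid_space * port_space  # randomized port: guess both together

print(before)  # 65536
print(after)   # 4227858432, i.e. roughly 4.2 billion
```

So the post-patch search space is on the order of a few billion, which is consistent with "one in a couple billion" per guess, while the pre-patch figure depends on how much (if any) port randomness a given resolver already had.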
Stupid, stupid, stupid! (Score:5, Insightful)
Now, when, and I mean EVER, has a security hole meant that people switch to a new platform? Or when has a severe security hole EVER caused people to even consider moving?
Windows has its leaks. But people keep using it. Why? Because they don't care, don't know or because "hey, what are the odds that it happens to me?". SMTP and POP have flaws, spam is running rampart because of it, and we switch to securer ways of mailing that can verify the sender... not! IPv4 has security problems and we're not even seriously considering switching to something more secure.
People will NOT switch to something else just because of a security problem. Because the people who could enforce it simply don't care. ISPs? ISPs don't even care about trojans running rampart in their network. Most don't even bother trying to block Sasser from spreading. The governments? Spare me that, currently I'd rather expect them to use the flaw themselves for better surveillance of their subjects.
Fix that damn bug! Nobody will move to a better platform just because of a "mere" security problem.
Re: (Score:2)
It's a protocol problem, and needs a protocol level fix.
DNSSEC is an extension of the DNS protocol, and if any party in the transaction doesn't support it or ask for it, DNS still acts just like DNS. If the resolver asks for DNSSEC information, and the DNS server supports it, then you get the normal DNS information + the DNSSEC information.
This is NOT switching to a new platform, it's adding a signature to the existing platform for verification. It can work precisely because it
Re:Stupid, stupid, stupid! (Score:4, Informative)
No, DNSSEC would fix the bug. IF, and only IF, everyone used it. Actually the fact that DNSSEC accepts insecure DNS requests makes this approach flawed.
It's not a technical problem. It's an economic one.
Switching to DNSSEC means additional costs for ISPs. Additional time for server admins, additional hassle to get the verifications, signatures and certs. In one word, expense. Expense without revenue.
Now, old school, insecure DNS works. The customer doesn't see a difference (most of all, he doesn't understand why DNSSEC would be a good idea, if he heard about it at all). Security has never been a selling point for ISPs. Price is. The customer won't request secure DNS and for almost every potential customer of an ISP the question whether a provider uses secure or insecure DNS is not going to influence his decision which one to take. If he has a choice at all, that is.
I do agree that switching to DNSSEC would be a damn good idea. But you, me, some others on
/. and a handful more understand the implications. That's not even a percent of a potential customer base for an ISP. So it doesn't matter.
As long as there is no meaningful pressure on ISPs to adopt DNSSEC, they won't do it. And by meaningful, I mean someone or something requiring you to come from a provider address using DNSSEC to do business with you (banks come to mind). But since they again don't want to lose customers (due to requiring it while some other bank/business doesn't), they won't press for it either.
If you want to force people to use DNSSEC, you have an ally in me. But you will not convince a sizable portion of the users, or even ISPs, just by keeping the alternative insecure. They won't care.
Re: (Score:2)
All the service providers are fed up with having to rush to patch all their DNS servers for a major break about once a year, and the time is right to convince them to switch after the largely publicized Kaminsky bug.
There are 2 major hold ups on DNSSEC adoption:
The root is not signed, and Microsoft doesn't support it.
Pretty much every other DNS server besides MS supports DNSSEC, and most now support the pri
Re: (Score:3, Insightful)
Actually, there are a lot more than two major holdups:
Re: (Score:3, Interesting)
1. No, it doesn't. The zone is signed once, and then queries are served out of cache. The server does not have to sign its response to each query.
2. Some firewalls are broken? What else is new?
3. This isn't true. Every name server out there except for djbdns supports dnssec. It took some real work to add that support, sure, but the work is already done. If you want DNSSEC support, you can have it now.
4. Hm. I had to add a single cron job to automatically resign the zone once an hour. Big who
Re: (Score:2)
1. Yes it does. Dodging the question by pointing out what content servers can do is irrelevant. Caches will have to check the packets each and every time. Furthermore, the content server does have to re-sign periodically.
2. There's no reason the protocol had to be changed. DNSCurve proved that. The fact is that the DNSSEC people have had since 1993 to figure this out, and this is very strong evidence that they're bonkers-wrong.
3. It's very true. Maradns doesn't support dnssec. LDAPDNS doesn't support DNS
Re: (Score:2)
1. DNSSEC does in fact make DOS attacks worse in the very same ways DNSCurve makes DOS worse: resource usage and bandwidth. However, DNSCurve makes authoritative server compute usage worse, while DNSSEC does not. DNSSEC with RSA makes bandwidth usage worse than DNSCurve with ECC, which is why DNSSEC will soon support ECC.
2. DNSCurve misses a lot of the extensibility and forward compatibility, including the ability to roll over keys, have offline signing, and switch to new crypto algorithms. All those thi
Re: (Score:2)
1. Wrong. Go read the DNSCurve implementation guide to see why.
2. Extensibility is a red herring. Adding a new keying system or cryptodevice still requires rolling out software and hardware changes on a scale similar to rolling out DNSSEC in the first place.
3. Yes it's true. All of those servers you named are based on the BIND source tree.
4. You're a tool problem. Verizon sent out tens of thousands of routers with an embedded DNS server on them. An embedded DNS server that drops DNSSEC information. All of t
Re: (Score:2)
Re: (Score:2)
1. Simple forgery resilience isn't minor. It is just as difficult to detect forgery as it is a valid key, with DNSSEC, so the cost is magnified by the number of requests. Being able to detect bad packets more easily than verifying good packets minimizes the amount of work clients perform.
2. That's retarded. "The hard work of making a protocol" is minimal to the hard work for deploying it.
3. I stand corrected about NSD, but reject the premise that "we've already implemented this far" is a good reason to switch pr
Re: (Score:2)
There is no new trust relationship; The parents simply sign the signing key. Exactly what trust can be inferred from that is independent of DNS.
There is a time in the roll-over where you have published a new key, but the old key is still cached. How does DNSCurve deal with this?
About new encryption options: Much like HTTP, DNSSEC can be extended by whatever both endpoints support. So a new client would get fancy ECC encryption, and older DNSSEC capable versions would get RSA. Over time as people upgrade, RSA gets turned off. That's how compatibility works in the real world.
DNSCurve has some good design decisions, but there are many corner
Re: (Score:2)
Pretty much every other DNS server besides MS supports DNSSEC
Can you see someone with deeeeeeeep pockets doing what they can to keep DNSSEC from becoming popular?
Re: (Score:2)
Pretty much every other DNS server besides MS supports DNSSEC
Can you see someone with deeeeeeeep pockets doing what they can to keep DNSSEC from becoming popular?
But why would they? Funnily enough, leaving their customers wide open in this area isn't actually in MS's best interest, and they've for sure got the developers to be able to fix this. I'm a cynic, but I expect that we'll see updates in this area.
Re: (Score:2)
Which makes the whole mess even worse. First, DNSSEC won't become the standard and second, MS will, maybe for the first time, be able to claim correctly that their software is more secure than any alternative. Any alternative in widespread use, that is.
Re: (Score:2)
DNSSEC support is in the current pre-beta releases of Windows 7, and I'm sure it will get added to their next server platform release also.
The requirement to sign all
.gov domains by next year means MS has to support it or lose government support. Though personally I hope no government agencies are using MS DNS servers, that's a different rant...
Re: (Score:2)
So Joe hacker can simply intercept DNSSEC packets and send forged DNS packets in their stead? He doesn't need to forge the signature, he can simply strip it off?
Re: (Score:2)
Which is to say, no, that attack won't work for any sane deployment.
Re: (Score:2)
The last thing I expected is that spam would "run rampart".
Re: (Score:2)
chinese farmers = spam and they are running ramps for loots....
Re: (Score:2)
If that is so, I ask you to explain the success of Microsoft Software.
Re: (Score:2)
Sure they do. At least a sizable portion of them does. But they don't get the money from the economy guys to do it. Because they do not understand it. All they understand is that the customer accepts a shoddy solution and doesn't care if you offer a better one, and the better one costs money (time, certs,...) to implement.
Translation.... (Score:2)
"Do nothing"
After applying CIS and corporate-speak filters:
"Aw man, do I have to get up and actually program some code?"
sensationalist nonsense - use 0x20 now! (Score:5, Interesting)
Stupid sensationalism.
You can right now use draft-vixie-dnsex-dns0x20 [ietf.org] to protect against the kaminsky bug. This option is already available in the unbound [unbound.net] nameserver.
Talk about taking things totally out of context. Fools!
If IETF does something to mitigate, the unbelievers scream "see we dont need dnssec"
If IETF does not do something, the unbelievers scream "you're blackmailing us into dnssec"
Stop whining and put your foot where your mouth is.
Re: (Score:2)
I think Slashdot is more in the believers community. The discussion is more along the lines of "How can we blackmail them into using DNSSEC? Will the Kaminsky bug be effective for that purpose?"
Re: (Score:2)
Re: (Score:2)
Why is this a troll? 0x20 is in fact one of the proposals for mitigating the Kaminsky weakness.
The idea is: you take each of the letters in the query and randomize the capitalization. The response from the server should have the same randomization. If it doesn't, someone may be trying to hack you so you fall back on a slower, heavier-weight TCP-based DNS query instead. In effect, each letter adds one bit to the 16-bit query ID for the purpose of calculating cryptographic entropy. Combined with source port r
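The mechanism described above is easy to sketch. This is an illustrative toy, not a resolver implementation; the function names are invented, and a real resolver would also compare the rest of the response against the question section:

```python
import random

def randomize_case(name, rng):
    """dns-0x20: randomly flip the case of each letter in the query name."""
    return "".join(c.upper() if rng.random() < 0.5 else c.lower()
                   for c in name)

def accept(query_name, response_name):
    """Accept a response only if it echoes our exact capitalization."""
    return query_name == response_name

rng = random.Random()
q = randomize_case("www.example.com", rng)
print(q)  # e.g. something like wWw.exAmPle.cOm

# The real answer echoes our casing; a blind spoof almost never does.
assert accept(q, q)
```

With 13 letters in this name, an attacker who cannot see the query has to guess 13 extra bits on top of the transaction ID and source port, since case-folding in DNS means legitimate servers echo the name byte-for-byte.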
Re: (Score:2)
I just read the draft. That has got to be one of the more awesome hacky ideas I've read about this year.
Thanks for sharing!
DNSSuCk? (Score:2)
1. It's very complicated.
2. It's error prone
3. It's not even going to protect you against many attacks
4. It's coming from the people who wrote bind 4.x, the steaming pile of dung that preceded bind 8.x, the rotting carcass that preceded bind 9.x, the most bloated decomposing corpse of a beached whale of the internet
5. Even sendmail looks better than bind nowadays
6. Last I heard you have to give some more money to Verisign. Sigh.
7. It took them, what, 12 tries to get it "right"? I mean last time they said it
Re: (Score:2)
Re: (Score:2)
1. Have you looked at BIND's implementation of DNSSEC? It's thousands of lines of code alone.
2. See #1.
3. RFC4033: DNSSEC (deliberately) doesn't provide confidentiality; RFC 4033: DNSSEC does not protect against denial of service attacks.
4. The bind people claim that BIND9 was written by "a whole new set of people" but at least thirteen of the developers have been identified to work on both [cr.yp.to].
5. I'm leaving this one alone.
6. CA certificates were planned for an earlier incarnation of DNSSEC
7. I don't think thi
Misreported (Score:5, Informative)
I was in the meeting. As I recall, one gentleman, I'll repeat that, one gentleman from the audience of a few hundred got up and expressed the opinion that we should do nothing so as to spur DNSSEC deployment.
There was rather more consensus for the view that we should avoid making quick hacks that might obstruct DNSSEC deployment since DNSSEC is currently the only approach on the table that we're reasonably sure ends the problem.
Re: (Score:2, Interesting)
I was also in this meeting. One of the following comments (from whom exactly, I don't remember) was that all of the DNS namespace will never be signed using DNSSEC, there will for a very long time if not always, be gaps in the namespace that won't be signed at all.
There are also other reasons to run an unsigned zone for shorter times, but I won't go into details about that.
So we should probably strengthen the DNS protocol in other ways than using DNSSEC, but those improvements must not be ugly hacks.
Re: (Score:3, Interesting)
Is DJB's DNSCurve [dnscurve.org] a viable solution?
Re: (Score:2)
But that's just personal bias.
Re: (Score:2)
DNSCurve trades off more compute resources and the need to have the signing key on the public DNS server to get encrypted DNS, while DNSSEC has a lower server compute load and can store the signing keys off the server, but communicates in the clear.
It's hard to make a case for the need to protect the DNS traffic from sniffing, the threat is modification, not sniffing.
I would like to see elliptic curve crypto standardi
Re: (Score:3, Interesting)
I disagree. DNSSEC isn't widely implemented, and the widest test [64.233.169.132] had numerous problems.
DNSCurve is 100% compatible with DNS. There's nothing a firewall could do that would be compatible with DNS that is incompatible with DNSCurve.
DNSSEC is not.
Re: (Score:3, Informative)
I disagree. DNSSEC isn't widely implemented, and the widest test [64.233.169.132] had numerous problems.
DNSSEC is currently deployed live in multiple countries,
.gov and .arpa are now signed (but only for testing purposes at the moment).
Yes, the number of DNSSEC hosts is only in the low 5 digits, but that's still way more then DNSCurve. 11 vendors have DNSSEC compatible DNS servers, which I believe is 11 more then DNSCurve. DNSCurve would have to be significantly
better in order to garner support at this stage, and I'm not seeing it.
DNSCurve is 100% compatible with DNS. There's nothing a firewall could do that would be compatible with DNS that is incompatible with DNSCurve.
DNSSEC is not.
This is a valid point. Only about 1/4 of recently tested home routers all
Re: (Score:2)
No it's not.
DNS security initiatives are about protecting clients. There are zero clients protected by DNSSEC, ergo, DNSSEC has zero deployment.
The big attempt at protecting clients looking up
.SE users failed miserably- partly due to a bind bug, and partly because clients didn't bother checking at all.
I'll highlight it for you:
Availability
Re: (Score:2)
Ah, DNSSurve seems to claim to protect against DOS attacks by a MITM. Honestly the MITM problem is the LEAST of the DOS attack problem, and there's no reason DNSSEC servers couldn't be trivially modified to behave the same way. In fact DNSSEC is silent on how to handle badly signed or corrupted packets, and that sort of DOS breaking capability is within the standards. The classes of DOS attack mentioned in RFC 4033 (resource and bandwidth consumption ) as well as the ability to use DNS as a DOS amplifier
Re: (Score:2)
I think you overestimate the value of having the signing key on a different machine.
The hypothetical attack where an attacker secretly changes some records on a content DNS server is unlikely, and there's nothing that says it has to be any less likely than breaking the machine with the signing key on it.
Meanwhile, denial of service attacks occur all the time.
It seems naive to protect against the attacks that don't happen, and ignore the attacks that do.
More importantly, you also seem under the impression th
Emphasis on *amateur* (Score:4, Interesting)
Even an amateur cryptographer would tell you that the more you know about the message, the easier it is to break it.
And a professional cryptographer would tell you to use a signature scheme that is provably secure (under standard cryptographic assumptions) against known plaintext signature forgery, and use a key big enough to satisfy you. Heck, you do all the crypto off-line, so you can pick a big one.
Confidentiality protections reduce the amount of knowledge, and thus protect against attacks that are yet unknown.
Prove the security of your signature scheme in the Universal Composability model and it's secure against all attacks, known and unknown.
I don't think you know what you're talking about.
Oh the iro... No, actually, you _do_ know what you're talking about: amateur cryptography.
DNSCurve protects against denial of service attacks [link]
So to back up your claim, you post a link to someone making the same claim. Now I'm convinced...
It requires far less compute-power than DNSSEC.
Yes, but it requires it on-line. It also requires caching keys for your clients unless you want to double your in- and outbound packet load.
Read the page about DNSCurve. It says "DNSCurve and DNSSEC have complementary security goals. If both were widely deployed then each one would provide some security that the other does not provide."
They're, taken at the word, not meant to replace each other.
Re: (Score:2)
Bzzt. Wrong.
Caches still have to verify the packets.
DNSSEC was designed by the same people who created the problem. Yet they keep saying "trust us, we've been doing this for a while".
DNSCurve was designed by cryptographers who went out to solve an actual problem that people are experiencing.
You on the other hand, are doing a lot of hand waving: You clearly don't have even a basic background
Re: (Score:2)
That's the great part about DNSCurve. It's so simple BIND could implement it in an afternoon.
DNSSEC isn't supported by DJBDNS, MaraDNS, LDAPDNS, or any other DNS server not based on BIND's codebase. In order to gain DNSSEC support, everyone effectively would have to switch to bind.
I cannot see why anyone would think that is a good thing.
By the way,
.SE's major problem was serious. It was really serious. The fact that it hadn't been noticed this entire time- with numerous people saying DNSSEC is almost ready
Re: (Score:2)
On the other hand, the emerging consensus in the BEHAVE and SOFTWIRE groups appears to be that port randomization isn't beneficial enough in mitigating the general class of port spoofing attacks to warrant rejecting on that grounds alone the various schemes in play for allowing service providers to aggregate multiple subscribers behind the same public IPv4 address while maintaining the network address translation at the subscriber site by slicing up the public port range available to each site.
Don't worry.
Re: (Score:2)
or else what?
Re: (Score:2)
they cant use the internet. make it not backwards compatible....
Re: (Score:2)
Re: (Score:2)
One of the ideas I had was fixing the randomization size (currently only 16-bit, iirc) only for IPv6 (64-bit, maybe?). Yes, it'd cause some headaches for those of us already using v6, but it would help eliminate the problem.
Of course DNSSEC is the real solution, but it's a ways off, probably after even IPv6 adoption. I know that in some places where I've worked, there's heavy reliance on MS DNS servers. Also, change is very slow to come. Yes, there are still a smattering of NT4 boxes around....and Win95
Minneapolis? (Score:2)
We're trusting Internet security to people who don't know any better than to schedule meetings in Minneapolis in the winter. It's 17 degrees and very windy out right now.
Re:Minneapolis? (Score:4, Funny)
I don't know much about this sort of thing, but I bet it's relatively cheap to book in cold-weather cities in the winter.
As a side benefit, it annoys Californians. Win all around.
Re:Minneapolis? (Score:4, Interesting)
Minneapolis has a "Skyway." [downtownmpls.com] Basically, many of he buildings downtown are connected via heated walkways between the second floors. These second floors form literally miles and miles of indoor pedestrian mall. The Hilton where the conference is held is connected to it.
So basically you can go everywhere without having to ever go outdoors. And we have a gig-e Internet link for the duration of the conference. Its computer geek heaven.
Re:Minneapolis? (Score:4, Interesting)
Eh, the whole downtown is covered in habitrails, so you can walk from building to building in short sleeves, because you don't ever have to go outside. It's kind of like living on a really big space station, only with gravity.
It was kind of cold in my hotel room, though.
Re: (Score:3, Funny)
*whoosh*...
You really should have your humor circuits checked you know ?
YHBT
Re: (Score:2)
oh come on. Internet Explorer Task Force, and then a whole bunch of guys falling over each other to spell out what IETF stands for (As if there is anybody here that doesn't know that. What ? oh, ok... well, never mind then
;) )
Re: (Score:2)
Re: (Score:2)
Re:sounds familiar (Score:4, Funny)
Trust me, there's very little need for Trojans at a typical IETF meeting. | http://tech.slashdot.org/story/08/11/20/2130236/kaminsky-bug-options-include-do-nothing-says-ietf | CC-MAIN-2013-20 | refinedweb | 5,719 | 73.78 |
Hi,
I’d like to use Deeplinks in an Ionic4 app.
A click on the link myapp://mysite.com/payment?id=888&msg=message should open the page MyPaymentsMethodsPage (this page is in the folder “customers”).
This is the code I wrote into the page app.components.ts:
import { Deeplinks } from '@ionic-native/deeplinks/ngx'; import { MyPaymentsMethodsPage } from './customers/my-payments-methods/my-payments-methods.page'; this.deeplinks.route({ '/': {}, '/payment': MyPaymentsMethodsPage }).subscribe((match) => { const jsonMatch = match; alert(JSON.stringify(jsonMatch)); }, (nomatch) => { console.log('@@@@@ NO MATCH:', JSON.stringify(nomatch)); });
The result is that the match with the /payment is found, the alert is shown but the app is not redirected to the payment page.
What is wrong with the redirection according to you?
Thank you very much
cld | https://forum.ionicframework.com/t/how-to-use-deeplink-routing-into-an-ionic4-app/173143 | CC-MAIN-2019-39 | refinedweb | 127 | 54.69 |
No doubt you noticed that Adobe acquired Omniture — a company that provides online business optimization software — starting with web analytics. One of the integration possibilities is to help companies track the activity inside their PDF documents — including forms. What pages did they view? Did they print? save? add annotations? sign? In the case of a form: what fields did they fill in? What buttons did they click? How far into the form did they get before they abandoned their session? Today we’ll work through a sample of adding tracking code to a PDF form.
Tracking code
When you want to track activity on a web side, the Omniture tools offer assistance for instrumenting your html pages. You give it your tokens and it returns the appropriate script to embed in your source. Similarly, we can generate code to add to ActionScript, Java and other environments. In this blog entry I’ll show you how to do the same for your PDF.
A word about privacy
Tracking activity is a sensitive business. End-users have the right to know that their actions are being tracked. They also have the right to opt out of tracking. Adobe Reader has a security policy that protects users. In practise, what this means is that when a PDF is hosted in the browser, the document may post data as long as it adheres to the cross domain restrictions. When a PDF is open stand-alone, it can perform http operations only if there is a level of trust. A couple of ways to establish trust are to use a certified PDF, or the user can explicitly allow http access via the "phone-home" dialog.
Today’s sample, limits the tracking experience to PDF forms that are open in the browser. Tracking a PDF in standalone Reader isn’t really recommended, because the phone home dialog is too ugly:
Data Insertion API
The API used by HTML JavaScript to do tracking is based on doing an HTTP Get operation from an image resource. However, there are other APIs.
Omniture exposes a Data Insertion API where you can http post simple XML fragments to the server. Once you’re logged in with a developer account, you can find this API described at:
The XML grammar used is fairly simple. The sample form constructs XML ‘pulse’ transactions that look like:
<request>
<prop1>Acrobat9.3:WIN</prop1>
<language>en_CA</language>
<visitorID>27585603</visitorID>
<pageURL>51cb51b4-535d-49a4-b6bd-1a975cc94f69</pageURL>
<pageName>firstname:changed</pageName>
<channel>PDF Form</channel>
<reportSuiteID>FormTracker</reportSuiteID>
</request>
Of course, you can format this data any way you like — as long as the reportSuiteID and the URL that you post to are correct.
A few notes about the various fields we populated:
prop1
The API allows us to include up to 50 user defined properties: prop<1> to prop<50>. In the sample, I’ve included some information about the version of Reader/Acrobat and the platform. I originally wanted to put this information under <userAgent>, but that value is applicable only to browsers.
visitorID
When tracking from a web page, the way to identify a visitor is with an IP address or with a script-generated id stored in a cookie. However inside the Acrobat object model, there is no equivalent property to uniquely identify a visitor. Ideally we’d be able to establish a constant visitorID between the users session in the browser and their session in Adobe Reader. There’s some more discussion about establishing a unique visitorID below in "the deep end".
pageURL
We need something to identify the PDF. Using the PDF name is not reliable, since PDFs are easily renamed. The sample below uses the xfa.uuid property. This value remains constant even if the form is renamed. For non-xfa PDFs we could use the doc.docID[0] property.
pageName
The form uses pageName to encode the action that has taken place. I adopted a scheme where the string is a combination of "field name : event : additional information"
channel
A way to categorize groups of transactions for better reporting.
Unfortunately I couldn’t include a fully functioning sample form. I have an Omniture sandbox set up for my own testing, but would rather not expose it to the world. The visitor namespace used in the example is fictitious. Instead, I’ve changed the code that would normally post data and instead it will populate a field with the xml that would otherwise have been posted to the server. To see the sample work — follow the link above and open it in the browser. Or download it and open it in Designer ES2 preview mode.
Detecting a browser
As stated earlier, the sample form will track user activity only when hosted in the browser. To detect when we are in the browser we look at the document path from the acroform object model: event.target.path. If the prefix includes a protocol scheme (e.g. http:) then we know we are hosted in the browser. (as an aside, Designer uses the browser plugin mechanism for hosting Acrobat/Reader when in preview mode. When testing the sample form in Designer preview, it will behave as if it were loaded in the browser. This explains why when you close your designer preview, the form itself doesn’t close — until the next preview. We get the browser behaviour where the document is kept open for a while in the event that the user navigates back to the page hosting that PDF.)
Designer Macro
When you look at the sample form you’ll see that I’ve injected lots of script to gather and emit pulse data:
- A hidden Tracker subform that contains a script object, and several other events
- enter and exit events on every field in order to track when field values change
Manually adding script for tracking would get very tedious. To make it easier, I wrote a designer macro that will instrument my form for tracking. The macro dialog looks like:
Once you select the options you want, the macro injects the required script. If you want to remove the tracking code from your form, de-select all the tracking options and press "Ok".
Here is a zip file with the macro JavaScript, SWF, and MXML.
HTTP Post
Posting from an XFA form is pretty straightforward, given that FormCalc includes a built in post() function. However posting from a non-XFA form is not so easy. I tried a number of options:
doc.submitForm() — While this uses HTTP post, it also displays the server response. In this case the Omniture server returns: <status>SUCCESS</status>.
Net.HTTP.request() — cannot be called from within a document. This function is available only in folder-level JavaScript.
Net.SOAP.request() — The documentation makes it look like it could be dumbed down to do a raw post, but in practise this is not the case.
The method I eventually cobbled together was to embed an XFA-based PDF as an attachment to the document I wanted to track. When the document wanted to initiate tracking, it opened the attachment in the background and called into the tracking functions defined there..
The Deep End
There are several interesting things about the markup injected into the form:
HTTP Post
The call to post data is made using the FormCalc post() function. In order to call post(), I added a "full" event to the tracker subform. We use xfa.event properties to hold the parameters to post() and invoke it with a call to
Tracker.execEvent("full"); This technique is described at: Calling FormCalc From JavaScript.
Multiple events
You might think that adding an enter and exit event to every field object would be a problem if the form happened to have its own enter and exit events. However, the XFA spec allows fields to have multiple events with the same activity. i.e. there’s no problem having two enter events. They’ll both fire. However, Designer will show you only one enter event.
Protos
To keep the markup as terse as possible, I made use of protos when injecting script. The tracking subform contained the source code for the enter and exit events:
<proto>
<event activity="enter" name="Track_enter" id="Track_enter">
<script contentType="application/x-javascript">
Tracker.Track.FieldEnterExit();
</script>
</event>
<event activity="exit" name="Track_exit" id="Track_exit">
<script contentType="application/x-javascript">
Tracker.Track.FieldEnterExit();
</script>
</event>
</proto>
Then when adding these events to field objects, the syntax is very terse:
<field>
<event use="#Track_enter"/>
<event use="#Track_exit"/>
</field>
Propagating Events
Instead of adding enter and exit events to every field, I could have used a single propagating enter/exit event for all fields. But since propagating events are available only since 9.1, I chose to add individual events so that the form would work in older releases of Acrobat/Reader.
Tracking validation errors is a different matter. In this case there is no easy workaround for older versions of Reader — unless you’ve implemented some kind of validation framework. In order to track validation failures the form uses the validation state change event. Any time it fires, the form posts to the Omniture tracking server. Note that the state change event also uses syntax not exposed by designer:
<event activity="validationState" ref="$form"
name="event__validationState" listen="refAndDescendents">
<script contentType="application/x-javascript">
…
</script>
</event>
Notice the attribute "ref="$form". Designer doesn’t expose the ref attribute. It would default to "$" — the current node. In our example we’re able to house this logic inside the Tracker subform, but have it monitor validation activity in the rest of the form by pointing it at the root form model.
Ideally the Designer macro would be able to query the target version and then would control whether logic to track validation failures is feasible.
Unique Visitor ID
There is one way to create a persistent id using the Acrobat object model — by way of the global object. I won’t bore you with all the details about how the global object works, but I will show you how I used it to create a persistent id:
/**
* Effective reporting needs a persistent visitor id — across
* all PDF documents.
* @return a persistent visitor id
*/
function getVisitorID() {
var sVisitorID = "";
// We use the global object to store/retrieve a visitor id.
for (var sVariable in global) {
// The global object security policy doesn’t
// allow us to examine the contents
// of all global variables, but it does allow us
// to enumerate them.
// We’re looking for a variable named:
// _OmnitureTracking_*
// The trailing digits will be our visitor id.
if (sVariable.indexOf("_OmnitureTracking_") === 0) {
sVisitorID = sVariable;
break;
}
}
if (sVisitorID !== "") {
// Strip off the prefix
sVisitorID = sVisitorID.replace(/^_\S*_/, "");
} else {
// Create a new visitor id
sVisitorID = Math.ceil(Math.random() * 100000000);
var sVisitorVar = "_OmnitureTracking_" + sVisitorID;
// Add this visitorID as a global, and make it persist
// so that it will be available next time in as well.
global[sVisitorVar] = "x";
global.setPersistent(sVisitorVar, true);
}
return sVisitorID;
}
In the scenario where the PDF is being tracked in the context of a web site, we might consider embedding the users web site visitorID into the form data. Then for the PDF tracking we’d concatenate the two values.
Hi John,Cool example!So if I am using Designer 8.2, does a newer version exist or is it just in Beta testing right now? Will the update be free?Thanks,Jeff in Columbus
Jeff:Designer gets released both in Acrobat and in LiveCycle. The latest version that I used for this sample was the Designer that shipped with LiveCycle ES2 (but note that the macro capability in Designer ES2 is not a formally supported feature). I don’t know the answer on the upgrade cost.John
Hi – if you are using Acrobat 9 Pro or Pro Extended, you’ll be on LCD ES (v8.2.1……)You can upgrade to LCD ES2 (v9.0.0. …even more redundant digits…) for a very reasonable price of around USD/GBP 30 – just visit the Adobe Store, order and you receive a disk via mail.HTH
Hi John,
The use of proto’s has been great in simplifying my forms; I have used them to implement field focus highlighting and to ensure all my fields have the same borders, fonts, etc. The problem I have with them is when trying to create fragments, this means the proto the controls in the fragment refer to has to be resolved in the source form, the one including the fragment. My goal is to have a fragment that can be styled by the document that is using it, all my forms have very similar parts but always different branding. This seems to be possible from my reading of the “XML Forms Architecture (XFA) Specification Version 3.0” on page 221, but I have had no success in implementing it. Hopefully I am just doing it wrong or is this just my wishful thinking when reading this document. Maybe proto’s could be added to your list of blog topics?
Thanks Dave
Dave:
I’m glad you’re poking at this. protos are a very powerful construct. I support your idea to use them to style your content. I’ll put it on my list.
John
Hi John,
Thanks for your support, we are finding more and more ways of using protos. But we have noticed that Designer 10 has changed the way protos are handled, it seems that any events that are defined on a proto are now generated where the proto is used … as if it was an override.
It has also been pointed out to us that the Designer 9.0 help refers to protos as deprecated.
So we aren’t sure if this is a bug with Designer 10 or not. Would you expect Designer 10 to generate overrides for proto properties?
Dave
Dave:
Historically Designer doesn’t support building documents with proto relationships. However, protos are well-supported by the runtime. I’m unaware of any changes there.
The one big change in the ADEP 10 designer is that the new stylesheet capability is based on protos. This may have caused other Designer behavior changes wrt protos.
As for whether Designer will generate overrides on protos or not — I’m not sure what the current behavior is. But as I said, Designer does not support arbitrary authoring based on protos. The best we can hope for is that it respects established proto relationships.
It’s entirely possible that you might have to cleanup some overrides after the fact. perhaps with a macro.
good luck
John | http://blogs.adobe.com/formfeed/2010/03/track_pdf_forms_with_omniture.html | CC-MAIN-2018-30 | refinedweb | 2,441 | 63.7 |
Fl_Group | +----Fl_Window | +----Fl_Double_Window, Fl_Gl_Window, Fl_Overlay_Window, Fl_Single_Window
#include <FL/Fl_Window.H> new window. If Fl_Group::current() is not NULL, the window is created as a subwindow of the parent window.
The first form of the constructor creates a top-level window and asks the window manager to position the window. The second form of the constructor either creates a subwindow or a top-level window at the specified location,.
Under Microsoft Windows this string is used as the name of the WNDCLASS structure, though it is not clear if this can have any visible effect. The passed pointer is stored unchanged. The string is not copied.
This method only works for the Fl_Window and Fl_Gl_Window classes.
The type Fl_Cursor is an enumeration defined in <Enumerations.H>. (Under X you can get any XC_cursor value by passing Fl_Cursor((XC_foo/2)+1)). The colors only work on X, they are not implemented on WIN32. | http://fltk.org/doc-1.1/Fl_Window.html | crawl-001 | refinedweb | 151 | 59.5 |
12 December 2008 13:00 [Source: ICIS news]
TOKYO (ICIS news)--Japan's Itochu Corp has established a joint venture with US company Bunge to produce bioethanol and sugar in the Brazilian state of Tocantins, the company said on Friday.
Itochu will have a 20% stake in the venture and Bunge will hold 80%, Itochu said.
This was Itochu’s second bioethanol/sugar project with Bunge in ?xml:namespace>
The costs for the two projects are expected to total $800m, the company said.
In September, Itochu signed an agreement with Bunge to buy a 20% stake in Bunge’s Brazilian subsidiary, Agroindustrial Santa Juliana, and partner in the production of bioethanol and sugar.
For the new venture, the two companies plan to construct a plant in Pedro Afonso,
The plant is scheduled to come on stream in 2010, with commercial production of sugar expected to begin in 2012, Itochu said.
The plant’s sugarcane crushing capacity will be 1.4m tonnes/year in the initial year, and the two companies planned to boost capacity to 4.2m tonnes/year at the peak of the plant’s operation, Itochu said.
The firms did not give a time frame for the expansion, however.
The power at the plant would be fuelled by biogases from sugarcane, and the companies plan to sell excess energy to other firms in
The new bioethanol/sugarcane project | http://www.icis.com/Articles/2008/12/12/9179221/japans-itochu-us-bunge-in-2nd-brazil-bioethanol-venture.html | CC-MAIN-2015-11 | refinedweb | 230 | 68.5 |
OpenShift Container Platform 3.x Tested Integrations
Tested Integrations are a defined set of specifically tested integrating technologies that represent the most common combinations that OpenShift customers are using or might want to run. For these integrations, Red Hat has directly, or through certified partners, exercised our full range of platform tests as part of the product release process. Issues identified as part of this testing process are highlighted in release notes for OpenShift Container Platform.
This list of tested integrations will expand over time; however, this page is limited to covering only what OpenShift supported at the time the minor version reached GA. This means that it is possible for other configurations to be "tested" and/or "supported" that are not listed.
For example, the table below shows that 7.3 and 7.4 are tested versions as of OCP 3.6, meaning that we tested these combinations at the time of the OCP 3.6 GA. Red Hat will provide support for the combination of OCP 3.6 and RHEL 7.5 or later.
Red Hat provides both production and development support for the tested integrations in the same major version family at or above the tested version according to your subscription agreement. Earlier versions of a tested integration in the same major version family are supported on a commercially reasonable basis.
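The "same major version family, at or above the tested version" rule above can be sketched as a small decision function (an illustrative simplification only; the subscription agreement is authoritative, and version strings are assumed here to be simple major.minor numbers):

```python
def support_level(tested_version, candidate_version):
    """Classify support for a candidate integration version, given the
    version that was tested at GA. Illustrative sketch of the policy text."""
    t_major, t_minor = (int(x) for x in tested_version.split("."))
    c_major, c_minor = (int(x) for x in candidate_version.split("."))
    if c_major != t_major:
        # Different major version family: not covered by this policy.
        return "not covered"
    if (c_major, c_minor) >= (t_major, t_minor):
        # Same family, at or above the tested version.
        return "production/development support"
    # Same family, earlier than the tested version.
    return "commercially reasonable basis"

# Example: RHEL 7.3 was tested with OCP 3.6.
print(support_level("7.3", "7.5"))  # production/development support
print(support_level("7.3", "7.2"))  # commercially reasonable basis
print(support_level("7.3", "8.0"))  # not covered
```

In practice version comparison is more nuanced (errata streams, z-streams), but the family/threshold distinction above is the core of the stated policy.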
Platform Components
- The Ansible packages that are tested/supported with OCP and that come from the OCP-provided channels and/or the RHEL Extras channel are denoted with '*'.
- The Ansible package that is tested/supported with OCP and that comes from the Ansible-provided channel is denoted with '**'.
- Other versions or offerings of Ansible, such as those from EPEL, are not recommended or tested and as a result are not supported.
- Note: Drivers denoted with '*' are provided as External Persistent Volume Provisioners.
- Note: Ceph RBD and Red Hat Ceph RBD are listed separately to distinguish upstream Ceph from the Red Hat-provided Ceph.
Client Tools
JBoss Developer Tools
JBoss Developer Tools are tested and supported based on the Red Hat CodeReady Studio Supported Configurations and Components.
Images
For a general list of supported OpenShift images, see the Red Hat Container Registry for openshift3 images. These images are provided as part of the OCP platform and are meant to provide functional capabilities or resources to OCP. They are not supported as general-purpose application images.
- Note: All images in this section (OCP System Container Images) are Technology Preview as of 3.6. With 3.10 we dropped support for system containers entirely; however, for Atomic Host we still provide the Ansible installer and ose-node system containers, without which installs/upgrades on Atomic Host would not be possible.
- Note: The use of Atomic Host (with 3.10) is deprecated as of 4.0, and you should plan to decommission hosts running this OS platform.
- Note: All Prometheus images listed in this section (Other Images) are Technology Preview in 3.7 and 3.9.
- Note: The local storage provisioner and snapshot images in this section (Other Images) are Technology Preview in 3.7 and 3.9.
- Note: With 3.10, the S2I Builder functionality was condensed into the Docker Builder, thus deprecating the S2I Builder image.
Below is a list of supported images that are intended to be used as base layer images, and provide functionality to developers on the OpenShift platform.
- Images denoted with a * above are found in the S2I and Database Images sections; they are released outside of OpenShift [14] and have an independent life cycle from OpenShift.
- Note: Images in the S2I and Database Images sections found under the openshift3 repository were deprecated with 3.5 in favor of newer versions of such images found in SCL repositories.
- Note: The Middleware for OpenShift section lists the product imagestream versions that have been tested with specific OpenShift versions at the time of an OpenShift release. Newer versions of middleware product images may be used across versions of OpenShift. Specific incompatibilities may be designated in this table from time to time.
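As a rough sketch of the namespace distinctions in the notes above, an image reference can be classified by its registry namespace (the mapping below is an assumption drawn only from this section's notes, not an official rule):

```python
# Hypothetical mapping from registry namespace to the image category
# described in the notes above.
NAMESPACE_CATEGORY = {
    "openshift3": "platform component (OCP release lifecycle)",
    "rhscl": "S2I/database base image (independent lifecycle)",
    "dotnet": "S2I base image (independent lifecycle)",
}

def classify_image(reference):
    """Return the category for an image reference such as
    'registry.access.redhat.com/rhscl/python-36-rhel7'."""
    parts = reference.split("/")
    # Drop the registry host if present (it contains a dot or colon).
    if "." in parts[0] or ":" in parts[0]:
        parts = parts[1:]
    namespace = parts[0]
    return NAMESPACE_CATEGORY.get(namespace, "unknown namespace")

print(classify_image("registry.access.redhat.com/rhscl/python-36-rhel7"))
# → S2I/database base image (independent lifecycle)
print(classify_image("openshift3/ose-node"))
# → platform component (OCP release lifecycle)
```

The image names used here are examples; consult the registry itself for the authoritative list of namespaces and tags.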
Tested Integration Points:
Tested Platforms:
A full range of platform tests has been performed on the following tested configurations. Red Hat has directly, or through certified partners, exercised our full range of platform tests as part of the product release process. Issues identified as part of this testing process are highlighted in release notes for each OpenShift Enterprise release. This list of tested integrations will expand over time.
Red Hat provides production support30 for the tested integration's in the same major version family at or above the tested version according to your subscription agreement. Earlier versions of a tested integration in the same major version family are supported on a commercially reasonable basis.
Technology Preview: Some Operating Systems were not fully supported with some releases, and support was only offered with Technology Preview Status. Please see for more details. ↩
Due to RHEL Extras and the docker package, changes to RHEL atomic host are needed to remain in a supported configuration. See Solution 3414221 for more details. ↩
-
With the GA of OpenShift 3.7 we improperly documented that Ansible 2.4.0* and 2.4.1* were supported with this version of OCP. While these versions were tested, they were tested to help ensure compatibility with the new Ansible release channels, and were not intended to be used to install/upgrade the product. Please see the 3.7 Release Notes for where we documented this change. ↩
With OpenShift 3.2.1 we added support for Docker 1.10 ↩
Plug-in Component: The product version denoted is not shipped or provided by Red Hat but is detected as plug-ins shipped with the OpenShift product are tested with this third party product. ↩
NFS versions that we test with are the same for NFSv3 and NFSv4, however the configuration determines what protocol version we use when testing. ↩
GlusterFS comparability as it pertains to CNS/OCS can be found on article 2356261 ↩
Images (denoted by image name) used for installation on Atomic Host or in pure docker installs. ↩
Support for these images did not start until OpenShift Container Platform (OCP) 3.1.1. In OCP 3.1, these were in a Technology Preview status. ↩
Images denoted in this section apply to the openshift3, rhscs, and dotnet namespace on the registry ↩
We introduced a versioning change with the 3.2 (s2i and database) images that no longer is tied to the OpenShift release. Image version are now tagged based on the component version they represent. ↩
-
Images denoted in this section apply to the rhscl and dotnet namespace on the registry ↩
xPaaS Image Support, or usage, is limited to the OpenShift platform. ↩
Image version with this designation may only be supported on OCP minor point releases (3.0.2), due to incompatibility issues. ↩
-
-
This is not meant to be used in production, but is provided for POC and Development Usecases. ↩
Supported after v3 API version (3.0.2.902-0) of Keystone as provided by Red Hat OpenStack. ↩
-
-
-
-
-
-
-
-
-
Development support can also be offered through Layered Product Entitlements. ↩
Red Hat supports many Certified Cloud Providers and any cloud provider on the list is “supported” so long as it supports RHEL as a base OS (see above for versions). However, we do not explicitly test OCP on all CCPs, so the scope of support may be limited. ↩
Red Hat provides support for Red Hat Enterprise Linux based on the hypervisors listed in as such OpenShift as a platform is supported on these hypervisors but may not be explicitly listed as a tested Infrastructure Layer. ↩
Red Hat has limits on the number VMs that can be run with a subscription for KVM. Review for limit restrictions. ↩
Red Hat OpenStack has limits on the various components. Be sure that any OpenShift deployment on top of OpenStack fits within the defined limits by reviewing. ↩
- Product(s)
- Red Hat OpenShift Container Platform
- Category
- Supportability
24 Comments
Supported ansible versions should also be mentioned.
Agreed. The supported ansible version should be listed.
+1
This is now addressed by the document.
The latest openvswitch is 2.4.0. 2.5.0 is not available in repos rhel-7-server-ose-3.3-rpms.
Looks like the version for Elasticsearch in Openshift 3.5 is wrong (copy/paste error from the kibana version?) 4.6.4 is not even a valid upstream version.
According to docker inspect, this is: ES_VER=2.4.4
This has been corrected.
The article shows the community version of Ceph and Gluster being supported for open shift container platform. Can we update to RH version , might avoid confusion.
I wonder if Fuse 6.3 will be supported by xPaaS 3.6 in the next future?
Hi, when to uprade the 3.7 Tested Integrations results?
why I see the prerequisites below for installation OCP 3.6 is different over here ?
Masters/nodes Physical or virtual system, or an instance running on a public or private IaaS. Base OS: RHEL 7.3 or 7.4 with the "Minimal" installation option and the latest packages from the Extras channel, or RHEL Atomic Host 7.3.6 or later.
Jimmy Zhang
I am not sure I see a miss alignment with the 3.6 docs, in this article and those docs we state both 7.3, 7.4 are supported/tested OS's for OpenShift 3.6
In the xpaas section "EAP 7.1" is not supported for Openshift 3.6, this seems not alligned with what is stated here "This image can be used with OpenShift Container Platform 3.6, OpenShift Container Platform 3.7, and OpenShift Container Platform 3.9." Could you update / or explain ?
Hi Roland, This page shows the integrations that have been tested by Red Hat in regards to specific Openshift releases. It does not imply a level of support. You should be able to run the EAP 7.1 image with Openshift 3.6 and the image is supported there. If you encounter an issue, please open a support case to report it. I'm discussing the errata wording with product management. Thank you for adding this comment and I hope this clears up any confusion.
Thanks for your answers! If you get notified by the wording change & remember, let me know. I'll be following the page, but there is a lot of content so an automatic might slip through...
Assuming we want to incorporate the latest CVE security fixes using the EAP 6.4 container (1.8.x ), according to the matrix I need to run Openshift 3.7+ to be supported. So if I want to incorporate a middleware CVE fix for EAP 6 that comes out tomorrow in a new container images, I need to upgrade my PAAS platform first (assuming I am on Openshift 3.5) ? That is not a realistic use case that an application will ask the PaaS : “you need to update, so I can update“. In the intro it says “Earlier versions of a tested integration in the same major version family are supported on a commercially reasonable basis.”, but I do not know what that means.
I cover some of this in my prior comment. This page shows what we have tested in regard to integrations and specific versions of Openshift. There is no implication of a need for upgrade.
How about RHEL 7.6? Is OCP 3.9 supported on RHEL 7.6?
How about latest 3.11 on RHEL 7.6?
I see a generic statement now that clarifies this, Copied below. So basically this means the combination in question is Tested and Supported..
So....couldn't we update this doc with what becomes supported after GA? It is nice to have a one stop shop.
Does Cinder 2.0 at Storage Drivers section refer to Cinder block storage API? It says v2 is already obsoleted while I found there were mentions that Storge driver only supports Cinder block storage API v2 at OCP 3.7 release note or Ansible Playbook OpenStack Configuration but not v3. But, OCP 3.11 seems it would support Cinder block storage API v1, v2, and v3, here.
Hi, I think there is some mismatch regarding "AMQ Broker 7.2" link that points to an unknown repository and I was not able to find any reference about AMQ OnLine
Please add a statement regarding RHEL 7.7 and, if applicable, RHEL 8! | https://access.redhat.com/articles/2176281 | CC-MAIN-2019-39 | refinedweb | 2,067 | 58.79 |
I have a method that returns an IEnumerable of this type:
public class ProductUpdate { public string ProductId { get; set; } public DateTime DueDateTime { get; set; } }
and I have a
List<string>
which has a list of dates as strings.
What I am trying to do is check for any Product that has a DueDate value which matches an item in the List of strings. Remove it if there is a match.
Ex:
Let's say a ProductUpdate item, PU1, in the IEnumerable has a DueDate 06/07/2015 and the List of strings contains 60/07/2015, then remove PU1 from the IEnumerable collection.
I could get it done using a foreach but am looking at a solution with LINQ.
Thanks in advance. | http://www.howtobuildsoftware.com/index.php/how-do/dg0/c-linq-removing-items-from-an-ienumerable-which-match-items-in-a-list-using-linq | CC-MAIN-2019-39 | refinedweb | 122 | 77.27 |
In mathematics, the Fibonacci numbers or Fibonacci sequence are the numbers in the following integer sequence:
1,1,2,3,5,8,13,21,34,55,89,144..
A simple way is to generate Fibonacci numbers until the generated number is greater than or equal to ‘x’. Following is an interesting property about Fibonacci numbers that can also be used to check if a given number is Fibonacci or not.
The question may arise whether a positive integer x is a Fibonacci number. This is true if and only if one or both of 5x^2+4 or 5x^2-4 is a perfect square. (Source: Wiki)
bool isPerfectSquare(int x) { int s = sqrt(x); return (s*s == x); } // Returns true if n is a Fibonacci Number, else false bool isFibonacci(int x) { return isPerfectSquare(5*x*x + 4) || isPerfectSquare(5*x*x - 4); }
I am a lecturer by profession.congratulations.
glad to let you know tat you have a good and rare collection of puzzle. It will be a great help if you include more problems. | http://www.crazyforcode.com/check-number-fibonacci-number/ | CC-MAIN-2016-50 | refinedweb | 177 | 52.29 |
My colleague Sergey is working on a really nice package around CardSpaces. Watch his blog for updates...
There we go. Doors are open for NRW06!
20 Speakers, max. 250 attendees a lot of community and networking.
After you did you can put this onto your blog or website
Michael hat das Februar Editorial für das Security Portal von MSDN Germany geschrieben und wirft dabei interessante Vorschläge in den Raum:
Wie wäre es, wenn bei den allseits bekannten Programmtests der Fachzeitschriften ein Non-Admin-Test hinzu käme?
Wenn ein Programm auch danach beurteilt würde, ob es mit einem ganz normalen Benutzeraccount einwandfrei funktioniert?
Meiner Meinung nach: Recht hat er.
In one of my current projects (yes, there are more at the moment and yes that is the reason why it's a bit quiet around here) i neede to write an encrypted file to the hard disc using DPAPI (Data Protection API). After I unsuccessfully searched the web and the msdn (the sample reads all bytes to the buffer at once - not so nice), I wrote the following sample app:
using System;
using System.IO;
using System.Security.Cryptography;
public class DataProtectionSample
{
public static void Main()
{
using(MemoryStream ms = new MemoryStream())
{
StreamWriter swriter = new StreamWriter(ms);
swriter.WriteLine("Text to encrypt to file.");
swriter.Flush();
Console.WriteLine("Protecting data ...");
DataProtection.Protect("D:\\_temp\\DPAPI.dat", ms, false);
}
Console.WriteLine("Unprotecting data ...");
using(MemoryStream ms2 =
(MemoryStream)DataProtection.Unprotect("D:\\_temp\\DPAPI.dat", false)) {
StreamReader sreader = new StreamReader(ms2);
Console.WriteLine("");
Console.WriteLine("Decrypted string: " + sreader.ReadToEnd());
}
Console.ReadLine();
}
}
public class DataProtection
private static byte[] _additionalEntropy = { 9, 8, 7, 6, 5 };
private static int _bufferLength = 1024;
public static void Protect(string filename, Stream stream,
bool machineLevel)
if (File.Exists(filename))
File.Delete(filename);
using (FileStream fs = new FileStream(filename, FileMode.CreateNew))
byte[] buffer = new byte[_bufferLength];
long byteCount;
stream.Position = 0;
while ((byteCount =
stream.Read(buffer, 0, buffer.Length)) > 0)
{
buffer = ProtectedData.Protect(buffer, _additionalEntropy,
((machineLevel) ? DataProtectionScope.LocalMachine :
DataProtectionScope.CurrentUser));
fs.Write(buffer, 0, buffer.Length);
fs.Flush();
}
public static Stream Unprotect(string filename, bool machineLevel)
MemoryStream ms = new MemoryStream();
using (FileStream fs = new FileStream(filename, FileMode.Open))
byte[] buffer = new byte[_bufferLength + 146];
fs.Read(buffer, 0, buffer.Length)) > 0)
buffer = ProtectedData.Unprotect(buffer, _additionalEntropy,
((machineLevel) ? DataProtectionScope.LocalMachine :
ms.Write(buffer, 0, buffer.Length);
ms.Flush();
ms.Position = 0;
return ms;
Michael Willers, our security expert, just pointed me to an interesting resource related to ASP.NET Security.
Carefully said I do not like that sharepoint "hijacks" the Internet Information Server. When you create a virtual directory it is just not accessable because SharePoint took over IIS.
Funny fact: This is the second post how to fix issues with IIS and "extension" that cause issues
So i decided to hack a small utility serving my needs:
ExcludeFromSharepoint.zip (3.46 KB)
Enables to exclude applications from sharepoint services through the directory context menu. Install using the "-install" switch; Uninstall using "-uninstall" switch.
Because I'm running my machine under a LUA (Limited User Account) i wrote the tool in a way that you can install and uninstall it without administative rights - the contextmenu will be installed per user!
if(args[0]=="-install")
RegistryKey _rkey = Registry.CurrentUser;
.
Via Willem Odendaal I opend the following web site. It holds an interesting collection of bookmarklets (Javascript commands that can be saved as bookmarks so they can be applied to every page that is opend in your browser).
For example: "remove MaxLength" ... shows how important it is to use ASP.NET Validation Controls in your Web Applications.
Yesterday I arrived in Frankfurt with a delay of 2 hours (thanks to the Deutsche Bahn). Monday is Workshop day and so I just sat arround and did the same stuff that I would normally do in the office. I'm currently working on an ASP.NET project that uses v. 1.1 but will be converted to 2.0 with it's "Go-Live". So I need to make sure that I don't do things that will stand in the way in the next version. Here are a few questions I'm currently asking myself:
In germany we say: "Kommt Zeit, kommt Rat".
. | http://www.lennybacon.com/CategoryView,category,Security.aspx | crawl-001 | refinedweb | 696 | 51.34 |
A Remote Data Request API in Elm
John Kelly
Jan 22
This post is about the core abstractions found in the elm-postgrest package and how those abstractions may be relevant to similar packages.
In Elm, the design space of remote data request APIs has seen its fair share of work.
We have APIs like
lukewestby/elm-http-builder which provide a thin convenience layer over
elm-lang/http.
addItem : String -> Cmd Msg addItem item = HttpBuilder.post "" |> withQueryParams [ ("hello", "world") ] |> withHeader "X-My-Header" "Some Header Value" |> withJsonBody (itemEncoder item) |> withTimeout (10 * Time.second) |> withExpect (Http.expectJson itemsDecoder) |> withCredentials |> send handleRequestComplete
We have APIs like
krisajenkins/remotedata which model the various states remote data can take.
type RemoteData e a = NotAsked | Loading | Failure e | Success a
And, we have APIs like
jamesmacaulay/elm-graphql,
jahewson/elm-graphql,
dillonkearns/graphqelm,
mgold/elm-data,
noahzgordon/elm-jsonapi, and others which abstract over
elm-lang/http to provide an API which is nice in the domain language of their respective specification. We'll refer to this group of APIs as backend specific request builders.
In addition to community efforts, Evan himself wrote up a vision for data interchange in Elm. And although the API for this specific vision likely sits on the same level of abstraction as
elm-lang/http,
Json.Decode, and
Json.Encode rather than backend specific request builders, it legitimized the exploration around "how do you send information between clients and servers?"
Design Space
What is in the design space of remote data request APIs? More specifically, what is in the design space of backend specific request builders?
For the sake of this post, we'll define the design space as:
A means to describe the capabilities of a data model and subsequently build requests against that data model for client-server applications.
With the following design goals:
- Domain Language vs HTTP - We want to interact with our backends in their own terms rather than their raw transfer protocol. For example, in the context of GraphQL, this means queries, mutations, selection sets, fragments, etc.
- Selections vs Decoders - We want to speak in terms of what we wish to select rather than how we wish to decode it.
- Resources vs JSON - We want to speak in terms of the abstract representation of our data model rather than its specific interchange format and/or storage format.
- Typed vs Untyped - We want to compose our requests using the values of our application rather than the concatenation of query strings.
Let's take a second look at these design goals but this time in the form of a diagram:
The dividing horizontal line in the diagram represents an abstraction barrier. The barrier, in this case, separates backend specific request builders (above) from their implementation (below). Users at one layer should not need to concern themselves with the details below. The remainder of this post will examine an Elm API at the abstraction level of backend specific request builder.
elm-postgrest
I'm the author of
john-kelly/elm-postgrest; a package that abstracts over
elm-lang/http,
Json.Decode, and
Json.Encode to provide a nice API in the context of PostgREST. Like previously stated, this package falls into the category of backend specific request builders.
This post is about the core abstractions found in the elm-postgrest package and how those abstractions may be relevant to similar packages. All examples will be based on the work from
john-kelly/elm-postgrest-spa-example, which is an almost complete port of
rtfeldman/elm-spa-example to PostgREST. For those unfamiliar with PostgREST, here's an excerpt from their official documentation:
PostgREST is a standalone web server that turns your PostgreSQL database directly into a RESTful API. The structural constraints and permissions in the database determine the API endpoints and operations ... The PostgREST philosophy establishes a single declarative source of truth: the data itself.
The mental model for how these 3 pieces fit together:
elm-postgrest (client) ⇄ PostgREST (server) ⇄ PostgreSQL (db)
In case you're wondering, no knowledge of PostgREST is necessary to make it through this post, however, intermediate knowledge of Elm and technologies like REST, GraphQL, JSON API, Firebase, Parse, or other remote data server specifications will be helpful.
Alright. Now that we have some context, let's dig into the code.
Our First Request
Our first request will retrieve all articles from our remote data server.
For this example, we'll assume that we have a collection of article resources at
example.com/api/articles. Each article has a title, a body, and the count of the number of favorites.
I take a top down approach for the code in this example. Keep this in mind! Later sections will help you better understand earlier sections.
Types
We're going to start out by looking at 4 of the core types in elm-postgrest. I provide the internal implementation of each type, however, don't get bogged down in the definition. I show the implementation in an attempt to ground the new PostgRest types to something you're familiar with.
import PostgRest as PG exposing ( Request , Selection , Schema , Attribute )
- Request - A fully constructed request. The only thing left to do is convert this value into an
Http.Requestand send it off to the Elm runtime. As we'll learn, a
Requestcan be constructed with a
Selectionand a
Schema.
type Request a = Read { parameters : Parameters , decoder : Decode.Decoder a }
- Selection - The
Selectionis one of the primary means to build requests against the data model. Specifically, the
Selectionrepresents which fields to select and which related resources to embed.
type Selection attributes a = Selection (attributes -> { attributeNames : List String , embeds : List Parameters , decoder : Decode.Decoder a } )
- Schema - The
Schemais the means to describe the capabilities of a data model. Capabilities means what we can select, what we can filter by, and what we can order by. We're only going to cover selection in this post.
type Schema id attributes = Schema String attributes
- Attribute - An individual select-able unit of a
Schema. For example, the article resource has a title
Attribute String.
type Attribute a = Attribute { name : String , decoder : Decode.Decoder a , encoder : a -> Encode.Value , urlEncoder : a -> String }
Request
Here we're constructing a
Request which will result in a
List String. The mental model for this type should be the same as that of an
Http.Request: "If we were to send this
Request, we can expect back a
List String."
getArticles : Request (List String) getArticles = PG.readAll articleSchema articleSelection
Let's take a look at the function signature of
PG.readAll before moving on to the next section.
readAll : Schema id attributes -> Selection attributes a -> Request (List a)
As we can see by the signature of
readAll, a
Request can be constructed with a
Selection and a
Schema. Let's now take a look at our
Selection.
Selection
The
Selection type has 2 type parameters:
attributes and
a. The mental model for reading this type is "If given a
Schema of
attributes, a value of type
a could be selected."
articleSelection : Selection { attributes | title : Attribute String } String articleSelection = PG.field .title
Things will look vaguely familiar if you've worked with
Json.Decode.field. This is intentional. Overall, you'll find that the
Selection API is quite similar to the
Decoder API. Let's examine the signature of
PG.field:
PG.field : (attributes -> Attribute a) -> Selection attributes a
A field
Selection is composed of a dot accessor for an
Attribute. If we remember back to the mental model for a
Selection, we'll recall that we're in need of a
Schema to fulfill the
Selection. Given that the first type parameter of our
articleSelection is
{ attributes | title : Attribute String }, our
Schema will likely itself have this record of
Attributes. Let's take a look!
Schema
In theory, we could pass anything as the second parameter to the
PG.schema function, but in practice this value will always be an Elm record of
Attributes.
articleSchema : Schema x { title : Attribute String , body : Attribute String , favoritesCount : Attribute Int } articleSchema = PG.schema "articles" { title = PG.string "title" , body = PG.string "body" , favoritesCount = PG.int "favorites_count" }
PG.schema takes a
String which corresponds to the path to our resource (ex: example.com/api/articles) and a record of
Attributes. This record of
Attributes describes the capabilities of a data model. In our specific case, it describes what we are able to select!
Let's take a look at how
Schema and
PG.schema are defined internally:
type Schema id attributes = Schema String attributes schema : String -> attributes -> Schema id attributes schema name attrs = Schema name attrs
At first glance, we'll see that a
Schema is nothing more than a wrapper around a record of
Attributes. And this is true, but it's important to highlight that it's an opaque wrapper around a record of
Attributes. It may not be immediately obvious, but it is this API that guides users towards a separation of the description of capabilities (
Schema) from the building of requests (
Selection). A user can't just write something like
PG.field mySchema.title because the record is wrapped, and a user can't just unwrap the
Schema because it's opaque! They are forced to use the functions provided by the package to compose things (namely
PG.field). This API guides users towards writing selections in terms of an eventual record of attributes!
Hopefully the previous explanation sheds a bit of light on why
PG.field takes a dot accessor for an
Attribute rather than an
Attribute directly.
Before moving on, let's review a few of these type signatures side by side:
PG.readAll : Schema id attributes -> Selection attributes a -> Request (List a) articleSelection : Selection { attributes | title : Attribute String } String articleSchema : Schema x { title : Attribute String , body : Attribute String , favoritesCount : Attribute Int }
Just take a moment to take this all in. It's pretty cool how the pieces fit together, and we can thank Elm's extensible record system for that!
Just to wrap things up for those who are curious, there exists a function of type
PG.toHttpRequest : PG.Request -> Http.Request. From there you can convert to a
Task with
Http.toTask or directly to a
Cmd with
Http.send.
Conclusion
Did we meet our design goals?
Yes! In our example, we built a request to read all the titles (Request Builder) of our article collection resource (Schema Representation) as opposed to making an HTTP GET request to the
api/articles?select=title URL (Transfer Protocol) and decoding the JSON response (Interchange Format). The former is how we expressed our request in the example, and the latter is an implementation detail.
What has this design bought us?
- Type Safety
- Reuse
Type Safety
If the
Schema is valid, our
Request will be valid. Our
Selection is defined in terms of a
Schema, and we can only construct a
Request if the
Schema and
Selection agree statically. Put another way, a subset of request building errors become static errors rather than logic errors.
For example, let's say we mistype
.title when we're constructing our
Selection. If our
Schema correctly describes our remote resource, we'll get a nice compiler message. Let's take a look at that error message!
The definition of `articleSelection` does not match its type annotation. 18| articleSelection : 19| Selection 20| { attributes 21| | title : Attribute String 22| } 23| String 24| articleSelection = 25|> PG.field .titl The type annotation for `articleSelection` says it is a: Selection { attributes | title : ... } String But the definition (shown above) is a: Selection { b | titl : ... } a
Pretty cool. However...
Close readers will argue that we've just moved the logic error to the
Schema from the
Decoder. This is true, however, the difference is that we only have 1
Schema for an entire entity as opposed to a
Decoder for each way we wish to decode the entity. A
Schema represents a single source of truth for all
Selection capabilities of a remote resource. This in turn reduces the surface area of decoding logic errors.
So, in summary: If the
Schema is valid, our
Request will be valid.
Reuse
A
Selection can be reused to construct
Requests with any
Schema that has the proper
Attributes! For example, if our remote data server had both article resources and book resources:
articleSchema : Schema x { title : Attribute String , body : Attribute String , favoritesCount : Attribute Int } articleSchema = PG.schema "articles" { title = PG.string "title" , body = PG.string "body" , favoritesCount = PG.int "favorites_count" } bookSchema : Schema x { title : Attribute String , pages : Attribute Int , authorName : Attribute String } bookSchema = PG.schema "books" { title = PG.string "title" , pages = PG.int "pages" , authorName = PG.string "author_name" }
We could use the same
Selection:
titleSelection : Selection { attributes | title : Attribute String } String titleSelection = PG.field .title
To construct our 2 separate requests:
getArticles : Request (List String) getArticles = PG.readAll articleSchema titleSelection getBooks : Request (List String) getBooks = PG.readAll bookSchema titleSelection
Pretty cool. However...
To be completely honest, I have not yet had a need for this reuse feature. With that being said, there's still something about it that makes the API feel right.
So, in summary: Extensible records in
Selection API grant us reuse.
Which ideas could find their way into similar projects?
Schemaas single source of truth for
Selectioncapabilities
- Separation of
Schemaand
Selection
- Extensible records central to design of this separation
SelectionAPI similar to that of
DecoderAPI
- And more.. we'll discuss those in the future posts
Future
In the interest of space, time and boredom, I have not included all of the API designs of the
elm-postgrest package in this post. In the future, I may write posts to highlight the concepts which were left out here. For example:
- Combining Selections
- Schema Relationships and Embedding Selections
- Conditions and Orders
- Create, Update, and
Thanks for reading.
If you'd like to view some more simple examples, here's a link to the examples on github. Take a look at each individual git commit.
If you'd like to see a more "RealWorld" example application, here's a link to john-kelly/elm-postgrest-spa-example.
If you're interested in taking a look at the development of
john-kelly/elm-postgrest, head over to the dev branch.
How to begin a project with just an idea
Hi, I have an idea of an application, I'm excited about it, I'm a junior, I'm g...
| https://dev.to/johndashkelly/a-remote-data-request-api-in-elm-3e5e | CC-MAIN-2018-22 | refinedweb | 2,400 | 56.66 |
Tuesday, January 05, 2016
353Solutions - 2015 in Review
Happy new year!
First full calendar year that 353solutions is operating. Let's start with the numbers and then some insights and future goals.
First full calendar year that 353solutions is operating. Let's start with the numbers and then some insights and future goals.
Numbers
- 170 days of work in total
- Work day is a day where I billed someone for some part of it
- Can be and hour can be 24 hours (when teaching abroad)
- There were total of 251 work days in 2015
- There were some work days that are not billable (drafting syllabuses, answering emails ...) but not that many
- 111 of days consulting to 4 clients
- 1st Go project!!!
- 58 days teaching 14 courses
- Python at all levels and scientific Python (including new async workshop)
- In UK, Poland and Israel
Insights
- Social network provided almost all the work
- Keep investing in good friends (not just for work :)
- Workshops pay way more than consulting
- However can't work from home in workshops
- Consulting keeps you updated with latest tech
- Had to let go of a client due to draconian contract
- No regrets here, it was the right decision
- Super nice team. Sadly lawyers had final say the company
- Python and data science are big and in high demand
- Delegating overhead to the right person helps a lot
- Accounting, contracts ...
Future Goals
- Keep positioning in Python and Scientific Python area
- Drive more Go projects and workshops
- Works less days, have same revenue at end of year
- Start some "public classes" where we rent a class and people show up
- Some companies don't have big enough data science team
- Need to invest in advertising
- Publish my book (more on that later)
Posted by Miki Tebeka at 19:34 0 comments
Blog Archive
- ▼ 2016 (17)
- ► 2015 (18)
- ► 2014 (24)
- ► 2013 (35)
- ► 2012 (22)
- ► 2011 (29)
- ► 2010 (17)
- ► 2009 (38)
- ► 2008 (45)
- ► 2007 (26)
| http://pythonwise.blogspot.com/2016_01_01_archive.html | CC-MAIN-2017-47 | refinedweb | 322 | 56.22 |
two dimensional - Java Beginners
two dimensional Write a program to create a 3*3 array and print the sum of all the numbers stored in it. Hi Friend,
Try the following code:
import java.io.*;
import java.util.*;
public class matrix
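The answer's code is cut off after the class declaration. A minimal complete sketch of a 3*3 array-sum program (class and method names are mine, not the thread's original code):

```java
public class MatrixSum {
    // Sum every element of a two-dimensional array.
    static int sum(int[][] m) {
        int total = 0;
        for (int[] row : m) {
            for (int value : row) {
                total += value;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] matrix = {
            {1, 2, 3},
            {4, 5, 6},
            {7, 8, 9}
        };
        System.out.println("Sum of all numbers: " + sum(matrix)); // prints 45
    }
}
```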
Difference in two dates - Java Beginners
Hello there once again. Dear Sir, the thing is that I need to find the difference between two dates in Java.
Please advise on that.
Algorithm_2 - Java Beginners
Sort, please visit the following link:
Thanks... The input set (call it S) is split into two disjoint groups L and R around a chosen pivot v.
L = { x ∈ S − {v} | x ≤ v }
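The two-group split described above (L gets the elements at most the pivot v, R gets the rest, with one occurrence of the pivot removed) can be sketched as code; the names here are illustrative, not the thread's own implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class Partition {
    // Split s (minus one occurrence of pivot v) into L (<= v) and R (> v).
    static List<List<Integer>> partition(List<Integer> s, int v) {
        List<Integer> left = new ArrayList<>();
        List<Integer> right = new ArrayList<>();
        boolean pivotSkipped = false;
        for (int x : s) {
            if (!pivotSkipped && x == v) {
                pivotSkipped = true; // drop one occurrence of the pivot
            } else if (x <= v) {
                left.add(x);
            } else {
                right.add(x);
            }
        }
        return List.of(left, right);
    }

    public static void main(String[] args) {
        System.out.println(partition(List.of(3, 1, 4, 1, 5), 3)); // [[1, 1], [4, 5]]
    }
}
```

Quick sort would then recurse on L and R and put the pivot back between them.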
Algorithm_3 - Java Beginners
the following links:... The array is traversed from index 0 to length−1 and each pair of adjacent values is compared.
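That traversal — walking the array and comparing adjacent values — reads like the inner pass of bubble sort; an illustrative implementation (assumed, since the thread's own code is not shown):

```java
import java.util.Arrays;

public class BubbleSort {
    static void sort(int[] a) {
        // Repeated passes: compare each adjacent pair and swap if out of order.
        for (int pass = 0; pass < a.length - 1; pass++) {
            for (int i = 0; i < a.length - 1 - pass; i++) {
                if (a[i] > a[i + 1]) {
                    int tmp = a[i];
                    a[i] = a[i + 1];
                    a[i + 1] = tmp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 1, 4, 2, 8};
        sort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 4, 5, 8]
    }
}
```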
java sorting codes - Java Beginners
java sorting codes I want Java sorting codes. Please be kind enough and send me the codes immediately. Hi Friend,
Please visit the following link:
merge sorting in arrays - Java Beginners
,
Please visit the following link:
Thanks
java - Java Beginners
.
http...:
http
to calculate the difference between two dates in java - Java Beginners
to calculate the difference between two dates in java to write a function which calculates the difference between 2 different dates
1.The function...) {
// Creates two calendars instances
Calendar calendar1 = Calendar.getInstance
java - Java Beginners
:
Thanks
java - Java Beginners
://
Here you
programs - Java Beginners
information. Array Programs How to create an array program in Java? Hi public class OneDArray { public static void main (String[]args){ int
java - Java Beginners
link:... in JAVA explain all with example and how does that example work.
thanks
... Search:
java - Java Beginners
information.
Two compilation errors.Can anyone help soon. - Java Beginners
Two compilation errors.Can anyone help soon. a program called Date.java to perform error-checking on the initial values for instance fields month, day and year. Also, provide a method nextDay() to increment the day by one
compare two strings in java
compare two strings in java How to compare two strings in java...)
{
System.out.println("The two strings are the same.");
}
}
}
Output:
The two strings are the same.
Description:-Here is an example of comparing two
java
the following link:
array manipulation - Java Beginners
example at:
Java - Java Beginners
Java How to add and print two numbers in a java program single...;
System.out.prinln(a+b);
Hi friend,
Code to add two number in java
class... :
Thanks
insertionSort - Java Beginners
));
}
}
For more information on Java Array visit to :
Thanks
java beginners - Java Beginners
java beginners
to Write a program to convert entered number into words.
Output : You have entered number = 356
The number in words...[] Number1 = {""," Hundrad"};
static final String[] Number2 = {"","One","Two
How to concatenate two arrays in Java?
How to concatenate two arrays in Java? How to concatenate two arrays in Java
java related - Java Beginners
.
The best place for learning Java is "" and visit the below
two link.../
Thanks...java related Hello sir,
I want to learn java. But I don't
java - Java Beginners
Visit to :
Thanks... in an array...
I have to determine if each cell in a two dimensional array is alive...; Hi friend,
Code for two dimension Array :
class
Delphi to java - Java Beginners
Delphi to java The program in Java or should I use Java beans or exist it any program that translate delphi code to java...Delphi to java I have done a program in delphi with two edits
java code - Java Beginners
.
Thanks...java code Dear
sir
i need matris form like
1 2 3
4 5 6
7 8 9... , dowhile and while any one only use display two D arry like matrix Hi
matrices - Java Beginners
matrices Write a program to add the two matrices Hi Friend,
Please visit the following link:
Hope that it will be helpful for you.
Thanks
java"oop" - Java Beginners
:// OOPs Concept What is OOPs programming and what it has to do with the Java? Hi i hope you understand it.//To print the even numbers
java threads - Java Beginners
java threads What are the two basic ways in which classes that can be run as threads may be defined
program1 - Java Beginners
;
}
}
}
}
}
----------------------------------
Visit for more information.
Thanks
Core Java - Java Beginners
Microsystems. We generally introduce java in two ways, core java and advance java. But you need not to confuse in between two, as both the are java ;)When we...Core Java What is Java? I am looking for Core Java Training
core java - Java Beginners
-in-java/
it's about calculating two numbers
Java Program Code for calculating two numbers java can we write a program for adding two numbers without
java - Java Beginners
that it is always 2(two)
Hi friend,
For more information on Java visit...java what is the version of java(latest),is java a open source...://
Thanks
java program - Java Beginners
);
}
}
To get the program of product of two matrix,please visit the following link:
Thanks...java program Pl. let me know about the keyword 'this' with at least
programmes - Java Beginners
://
java programming - Java Beginners
java programming asking for the java code for solving mathematical equation with two unknown .thnx ahead.. Hi Friend,
Please clarify your question. Which mathematical equations you want to solve?
Thanks
java programe - Java Beginners
java programe Write a java program that does the following:
1. Switch the values of two variables using a method
2. Print the switched values onto the screen
java concepts - Java Beginners
java concepts i need theory for designing a java interface for adt stack .develop two different classes that implement the interface one using array and another using linkedlist
java-io - Java Beginners
://
Thanks...java-io Hi Deepak;
down core java io using class in myn
Java - Java Beginners
Java Hi friends,
I want to compare two time objects.
the first time object is before or not How can i do this in java code
Java basics - Java Beginners
Java basics I have two classes
class cat {
int height, weight... to visit...
Thanks... question is why java is complaining (L3 through L8) when main method signature
reverse two dimensional arrays in java
reverse two dimensional arrays in java reverse array elements in two dimensional array such that the last element becomes the first
java - Java Beginners
java All the data types uses in java and write a program to add 2... all the data type used in Java :
To add two numbers program please visit the following
java - Java Beginners
may refer to different methods.
In java,there are two type of polymorphism... the following links:
http
array in java - Java Interview Questions
Friend,
Please visit the following link:
Thanks...array in java array is a object in java. is it true, if true
java beginners - Java Beginners
the following links: beginners what is StringTokenizer?
what is the funciton
java - Java Beginners
java Hi , roseindia
I got small doubt in java, now my problem is i want difference between two dates, now i am storing these two dates... ="03.11.2009"....i want result 2..difference of these two dates...but i tried much
java swings - Java Beginners
java swings Hi,
I have two listboxes .Move the one listbox value into another listbox using add button.Move the value one by one into another listbox.Please give the code using jpanel.
Thanks,
Valarmathi
java loops - Java Beginners
java loops Q1 print the following pyramid... a recursive function to add two integers?
Q3- Write a recursive function to find the HCF of two positive integers
java beginners
java beginners Q1: Write a method named showChar. The method should accept two arguments: a reference to a String object and an integer. The integer argument is a character position within the String, with the first character
Java to html - Java Beginners
Java to html
Hi ,
I have to compare two text files, and put..., i need help for writing the results from the java application to the html file... in java to write data as html content.
So write ur data with PrintWriter
core java - Java Beginners
means class to have two or more methods with same name in the but with the different... and Overriding:
java - Java Beginners
and calculating sum of two numbers");
System.out.println("Sum is : "+c... links:
java script - Java Beginners
java script hi sir,
the program code that i need for is :
1)there are two button in javascripting if you click the Ok button the cancel should disappear . Hi Friend,
Try the following code:
function
Java coding for beginners
This article is for beginners who want to learn Java
Tutorials of this section are especially for the beginners who want to incept
the Java program from very beginning. Tutorial Java coding for beginners will
enable you to know about
java - Java Beginners
one end of a two-way communications link
between two programs running...:
Thanks
Java Code - Java Beginners
Java Code Given two arrays named numbers1 and numbers2, code an if clause that tests if the two arrays are of the same type and have elements with the same values. Hi friend,
Code to tests if the two arrays
Java Program - Java Beginners
Java Program Write a java program to find out the sum of a given number by the user? Hi Friend,
Try the following code:
import... num2=input.nextInt();
int sum=num1+num2;
System.out.println("Sum of two numbers
java code - Java Beginners
java code dear
i need one java code
display 2d array with one loop dosent use two loops
but use any one loop
mens do while.../java/
Thanks
Java Swings - Java Beginners
Java Swings hi ,
I am doing project using netbeans. I have... items from executing a method. For this i found two option in the combobox model... code:
java answerURGENT! - Java Beginners
, and 'z'.
f.Write a Java statement that prints the value returned by method two...java answerURGENT! consider folowing method headings:
public static... is the type of the method test?
b.How many parameters does method two have?What
java - Java Beginners
java write a package for college which has two classes teacher and subject .teacher has two methods accept() and display and student has two methods accepts() and display().display the information about teacher and subject
java - Java Beginners
java Q>: Why two folders get installed during installation of JDK?Wht is the use of two folders?
use advance jdk installer jdk-6u12-windows-i586-p.exe
Java - Java Beginners
Java prime number program How to show the prime number in Java? Can... { public static void main(String[] args){ int power_of_two = 2; for(int n=2; n<31; n++) { int mersenne = power_of_two - 1
java multithread - Java Beginners
java multithread Hi,
Thanks for your multithreading java code...) {
System.out.println(e);
}
}
}
}
I want two objects i.e ob1 and ob2 as two diferent threads and should run simultaneously without affecting
Java Project - Java Beginners
Java Project Dear Sir,
I have to do Project in "IT in HR i.e.... these modules. All these modules are using more than two common master files i.e. employee master.
Now, can you tell in Java programming with Mysql How can
java program - Java Beginners
java program i have two classes like schema and table. schema class has fields like schema name and set of tables(3), table class has fields like name, catelogue,columns, primarykeys, foreignkeys. so i need to write 2 java basics - Java Beginners
Java basics I have two classes
class cat {
int height, weight... java is complaining when main method signature is commented out though ?I do... object/class/variable work. new to java.
-Thanks
javanewbie
Java Abstraction - Java Beginners
Java Abstraction suppose we have an interface & that interface contains five methods. if a class implements that interface then we have to bound... that class as abstract then can we call only two methods to give the deinition
java operators - Java Beginners
java operators Hello...........Can we perform addition of two numbers without using any arithmatic operator? Hi Friend,
Yes, you can use BigInteger class to add, subtract,multiply,divide the numbers:
import
java program - Java Beginners
java program write a program that asks the user for a starting value and an ending value and then writes all the integers (inclusive) between those two value. Hi Friend,
Try the following code:
import java.util. code - Java Beginners
java code plese provide code for the fallowing task
Write a small record management application for a school. Tasks will be Add Record, Edit... be stored in one or two files.
Listing records should print the names of the users
java class - Java Beginners
java class hi sir,
i have to compile two classes(eg:controller.java and client.java) and i have imported some packages in it using jar files(like Activation.jar and servlet.jar) when ever i am running in command promt
Java Coding - Java Beginners
Java Coding Two overloading methods that returns average using following headers
A) public static int average(int[] array)
B) public static double average(double[] array
The program should prompt the user to enter
Concatenating two Strings in Java
Concatenating two Strings in Java
In this section we will help you, how to concatenate two strings in java.
String are sequence of character which is used... that concatenate the two strings.
String str1="hello";
String str2=" Java
JAVA - Java Beginners
of code and the action listeners. You will need two panels ? one for the drawing... and will have two methods: public void paintComponent(), public void setMouth... or sad ? to draw.
Java Hint
The action listener you produce to respond
java applets - Java Beginners
java applets 1.write main method for display clock applet including... to implement arithematic operationsby clicking the button +,-,*,/ for two no.s and result... calculator using java codes?...
4.write a java application to open the file
arrays in java - Java Beginners
arrays in java Hi All,
I have two arrays. in this two array some name are same. I want to merge those arrays into single. But while merging I want to delete duplicate entries. How merge those arrays.
Thanks,
mln15584
java - Java Beginners
java consider two panel.one consoting of 5 java different component.When user drag that component to other panel at that time that dragged component... information.
Java - Java Beginners
to the instructions given in the lecture series.
(c) Convert the above two pseudo codes (sorting and partitioning) to Java. It should be able
to execute any set... and contrast it to the quicksort
algorithm
(f) Write a java program
java code - Java Beginners
java code
Sir
Ineed one code for Matrix
"Take a 2D array and display all elements in matrix form using only one loop "
request... with using two loops i need only one loop
thanking you.
Hi
java - Java Beginners
java How can I write a java program to do the following:-
1. Switch the values of two variables using a method
2. Print the switched values onto the screen
thanks my homey Hi Friend,
Try the following code
core java - Java Beginners
core java write a program to add two numbers using bitwise operators? Hi friend,
i am sending running code.
public class...://
Thnaks.
Amardeep
java - Java Beginners
java write a package for college which has two classes teacher and subject .teacher has two methods accept() and display and student has two methods... of the college package Hi Friend,
Directory Structure:
C://
|
java
java program - Java Beginners
java program "Helo man&sir can you share or gave me a java code hope....
b. Write the definition of method TWO as follows:
I. Read a number... A JAVA CODE.THANKS Hi friend,
Having some doubts on Your
Java programming - Java Beginners
Java programming Write a program that reads a character and a string... of the first two occurrences of the character in the string.
c. Finally, print...!");
}
}
}
For more information on Java visit to :
Comparing two dates in java
Comparing two dates in java
In this example you will learn how to compare two dates in java.
java.util.Date provide a method to compare two dates... date. The
example below compares the two dates.
import | http://www.roseindia.net/tutorialhelp/comment/55579 | CC-MAIN-2014-10 | refinedweb | 2,693 | 56.15 |
I was faced with an interesting situation recently. A customer had a request to discover and monitor an application which installs a service, among other things. Whoopty-doo right? Well, the unique part of this request was that the service name varied across different computers. The service name was not static, but contained some static data, and included the computer name in the service name as well, which would be different for each computer/agent.
So the service might look like: AppName<ComputerName>Version, where <ComputerName> was random depending on the computer name of the agent. What is worse is that sometimes the <ComputerName> value did not match the actual NetBIOS name of the machine!
The challenge we are faced with, is that the built in Service Unit Monitor does not support wildcards. This is a specialized monitor type which requires the passing of a service name to the monitor workflow. The native module/API to monitor services in OpsMgr is very efficient, but there is no mechanism to support a wildcard. So the challenge is – how do I monitor this application service?
The typical solution would be to write a custom monitor, which runs a script, and have the script query WMI for the service name and state, as WMI does support wildcards. The problem with doing so is that this method requires someone to develop the script, test the script, support changes to the script down the road, and running scripts on a very frequent timer (like once a minute) will consume a lot of OS resources during script startup, runtime, and teardown.
So, another solution, is to discover the service name, and then pass that service name to the service monitor workflow using an XPATH query variable, which resolves as the actual service name (since we discovered it). Brian Wren wrote up a very similar solution here. Brian's solution to this challenge involved using the Windows Service Template to handle this…. The Windows Service Template creates a lot of monitoring from a simple wizard. First – it takes your service name as input. Then it creates a class for that service, and a discovery to discover instances of this new class. Then it creates the monitor for the service, and passes the service name to the monitoring workflow. His example works very well to accomplish the goal of monitoring just services with random naming using a wildcard.
The downside to this approach is that we will have to create a new class for each service with a unique name, and any tweaks to the monitor will have to be made for EACH monitor individually. Additionally – what if we are writing a complete management pack for our application, and want to treat all the services identically, just monitoring them as if the service monitor did support wildcards?
Here is another approach:
What we can do, is to create a new class for our application, and write a discovery for our class. Then – we can add a new property to our class, for the “Service Name”. Then – write another discovery, to simply discover the service name as an attribute of the class. This way – once we discover the attribute of Service Name, we can pass this to the monitoring workflow using the standard service unit monitor. This is a much cleaner approach, but requires getting your hands a little dirtier in MP authoring to accomplish it.
Ok, that’s enough background – lets get started! So, for this example, I will walk through the entire thing, in 4 easy steps:
Step 1: First, we will create a class for our Application.
Step 2: Next, we will write a discovery to discover instances of the class.
Step 3: Then, we will add a property to the class for our service name. We will have to create the additional WMI discovery to discover the service name property and populate the class with that.
Step 4: Lastly – we will create the service monitor and pass this discovered property to the service unit monitor.
Step 1: Create a new Class
We will start with the authoring console. Open the Authoring console and create a new, empty management pack.
Give the MP an Identity (ID) which cannot contain spaces. This is the MP ID that all workflows and classes will leverage. I will call mine Example.Application
Give the MP a Display Name (and a description optionally). I chose “Example Application”. This will be the display name of the management pack.
Select the “Service Model” pane. Select Classes. In the actions pane on the right – choose New > Custom Class.
Give the new class an ID. Every object in the MP will begin with the ID of the management pack, in my case Example.Application.xxxx. For my example application, I don’t have a good service with random text in the name, so I will just use the Opalis Services for my example here. So my Class ID will be Example.Application.OpalisActionServer.
For the base class – we pretty much always start with Microsoft.Windows.LocalApplication, as that class was designed to be a good base class for locally installed applications on Microsoft Windows servers.
Click OK to save the new class.
Done!
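Behind the wizard, the Authoring Console writes a short class definition into the MP XML. A sketch of roughly what gets generated for this step is below (IDs from my example; the Windows! alias refers to the Microsoft.Windows.Library reference, and the exact attribute set comes from the console, so treat this as illustrative rather than a literal copy):

```xml
<!-- Illustrative sketch of the generated class definition -->
<ClassType ID="Example.Application.OpalisActionServer"
           Accessibility="Public"
           Abstract="false"
           Base="Windows!Microsoft.Windows.LocalApplication"
           Hosted="true"
           Singleton="false" />
```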
Step 2: Create a new discovery to discover instances of our class
Now – go to the Health Model pane, Discoveries, and in the Action pane choose New > Custom Discovery
We need to provide an ID for the Discovery which will populate instances of the Example.Application.OpalisActionServer class. I will use “Example.Application.OpalisActionServerDiscovery”
Fill in a good display name for the discovery. I like to always add the word “Discovery” at the end of any discovery workflow display name – it makes more sense when looking at it in the console down the road. I do the same for Group, Rule, Monitor, etc…. A good naming standard is critical to successful MP authoring and long term management.
For the Target – this is the class that we want to run the discovery on. Since we need to check ALL servers to see whether they have this application (Opalis Action Server), we will choose a good seed discovery target class, such as Microsoft.Windows.Server.Computer (Windows Server).
On the Discovered Classes Tab – add you new class. This is just saying that this discovery will discover instances of the chosen class:
On the Configuration Tab – select “Browse for a type”. We need to choose a predefined module for this discovery – such as the Registry, WMI, or Script. I will use the “Microsoft.Windows.FilteredRegistryDiscoveryProvider” type. This is a pre-created module for inspecting the registry on a target, and then adding an expression to add it to our new class if it finds a “match” in the registry. Name the Module ID “DS”, which is simply giving our DataSource a name:
Next – we need to configure the registry discovery. Set the interval to something short for testing (like 60 seconds), but for production use this should never be more frequent than once every 4 hours (14400 seconds)
Next – add a registry attribute on the Registry Probe configuration tab. We can use a key, or a value. We can inspect it for existence, or for a specific expression against the contents of the value. For my example – I am looking for the existence of the “HKLM\SOFTWARE\Opalis\Opalis Integration Server\Action Server” key. I choose Key, and then give my attribute a name like “OpalisASExists”. Fill in the registry info – taking care to notice that “HKLM\” is already assumed. I set my attribute type to “Check if exists” in this case.
On the Expression page – we will Insert, and choose our new attribute, setting it to “Equals” and “true”. This simply means that the existence of our reg key will be true or false, and we want to discover instances that have that specific key. That means they are Opalis Action Servers.
Lastly – on the discovery mapper tab – select your new custom class ID. We need to fill in the key and non-key properties using a variable from the flyout on the right. I will match up the Key property of Microsoft.Windows.Computer\PrincipalName to the appropriate variable on the right – which will be $Target/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$. For the non-key property, this is optional, but if we don’t fill this out – our discovered property for this column “DisplayName” will default to the ID of the class, which is ugly and provides no benefit. For this reason, I like to set this to $Target/Property[Type="Windows!Microsoft.Windows.Computer"]/NetworkName$ using the flyout on the right.
Let’s Click OK, and save our work to a file. This is also a good time to review what we have done thus far.
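If you open the MP XML at this point, the registry data source looks roughly like the trimmed sketch below. Element names are per the FilteredRegistryDiscoveryProvider module reference; the ClassId, InstanceSettings and Expression sections are omitted since the wizard fills those in from the tabs above – verify against your own generated XML:

```xml
<!-- Trimmed, illustrative sketch of the registry discovery data source -->
<DataSource ID="DS" TypeID="Windows!Microsoft.Windows.FilteredRegistryDiscoveryProvider">
  <ComputerName>$Target/Property[Type="Windows!Microsoft.Windows.Computer"]/NetworkName$</ComputerName>
  <RegistryAttributeDefinitions>
    <RegistryAttributeDefinition>
      <AttributeName>OpalisASExists</AttributeName>
      <Path>SOFTWARE\Opalis\Opalis Integration Server\Action Server</Path>
      <PathType>0</PathType><!-- 0 = registry key -->
      <AttributeType>0</AttributeType><!-- 0 = boolean "check if exists" -->
    </RegistryAttributeDefinition>
  </RegistryAttributeDefinitions>
  <Frequency>14400</Frequency>
  <!-- ClassId, InstanceSettings and Expression omitted for brevity -->
</DataSource>
```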
We have created a new MP. We have created a class for our Opalis Action Server. We have then created a discovery to discover and populate the class instances, and any properties of the class we want. At this point – if we imported the management pack in a test environment – we should be able to discover any Opalis Action servers. We can verify this using Discovered inventory.
Step 3: Add a new property to our existing class, and create a discovery for that new WMI property
Next up – adding a property to the class – to discover the service name. If the service name varies from computer to computer, we need to discover the name. This way – we can pass the discovered service name to the Unit Monitor workflow which monitors the service status.
Back to the authoring console!
Select the Service Model pane, then Classes, then bring up the properties of our newly created class.
Choose the Properties tab. Right Click – and choose “Add Property”. Let’s call this one “OpalisASServiceName”. Then – give it a nice display name to be seen in the console, such as “Action Service Name”. Leave the rest at defaults and click OK.
Now – we need to add an additional discovery to populate this class property. If we could easily get the service name from the registry, we could simply extend the existing registry discovery that already populates instances of the class. Instead, in this example – we will author an additional discovery, and use WMI to populate the information.
Go to the Health Model pane, and Discoveries. Add a new > Custom Discovery. I will call mine “Example.Application.OpalisActionServiceNameDiscovery”
Give the discovery a good display name. I used “Opalis Action Server Service Name Discovery”
Choose an appropriate target. For this discovery – we ONLY want it to run on previously discovered Opalis Action Servers, so we can target the discovery to the same class we are adding the property information to.
For the discovered classes – we will add our existing custom class. Then we can right click that class – and add the property which we will discover in THIS specific discovery – which is the OpalisASServiceName property.
For the Configuration tab – we need to Browse for a Type. We know we need to get this information from WMI – so in the “Look for:” box – type in WMI. We want to use the “Microsoft.Windows.Discovery.WMISinglePropertyProvider2”. How do you know that? Well – we are discovering a property – so that makes sense. The other ones sound overly complex. So – we can guess – or we can look up the different providers and make sure we are picking a good one from the MSDN Module Reference, where we can read about the specifics of the Microsoft.Windows.Discovery.WMISinglePropertyProvider2
Picking the right module type is probably the hardest part…. the best recommendation I can give (and what I often do) is to look through other MPs and see how they do it, what modules they used, and how they passed data to them, as a reference. You will find there is only a handful that you will use on a regular basis.
So – pick this module, and give your Module ID (Data Source) a name. I typically just put in “DS” for DataSource.
Click OK, and we will see the default, unconfigured module type config. We could fill this all in on this screen, but at this point it is just easier to go to XML. Click the “Edit” button below, which will bring up the XML snippet in notepad. If you haven’t ever used the Auth Console before – you will need to choose Notepad as your default editor.
Here is the default sample of XML for this provider:
<NameSpace>NameSpace</NameSpace>
<Query>Query</Query>
<Frequency>0</Frequency>
<ClassID>ClassID</ClassID>
<PropertyName>PropertyName</PropertyName>
<InstanceSettings></InstanceSettings>
We need to pass the correct information to the provider, for what we want to discover, which is the Opalis Action Server Service Name.
Here is an example of how to get this information from WMI. Open WBEMTEST on the server which has the service you are looking to discover.
Connect to root\cimv2
Develop a query – using wildcards – which will allow you to discover a service that might have random text in the service name. My service name for Opalis is OpalisActionService. For this example I will pretend that part of the name contains random characters:
Hit the “Query” button in WBEMTEST and use something like: Select Name from Win32_Service where Name like 'OpalisAc%Service'
Test your query and ensure it only returns the discovery property data you are looking for, and won’t return multiple results.
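As an aside, the WQL LIKE operator supports more than just % (which matches any run of characters, including none): _ matches exactly one character, and [charlist] matches one character from a range. If you know the shape of the random portion, you can tighten the pattern. The service names in the last two examples below are made up, purely to show the syntax:

```sql
-- '%' matches zero or more characters
Select Name from Win32_Service where Name like 'OpalisAc%Service'

-- '_' matches exactly one character (hypothetical: one random character)
Select Name from Win32_Service where Name like 'OpalisAction_Service'

-- '[0-9]' matches one character in the range (hypothetical: one random digit)
Select Name from Win32_Service where Name like 'OpalisAction[0-9]Service'
```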
Now that we have a working query – we are ready to input the XML, replacing the default snippet in between the <Configuration…..> and </Configuration>
<NameSpace>root\cimv2</NameSpace>
<Query>Select Name from Win32_Service where Name like 'OpalisAc%Service'</Query>
<Frequency>60</Frequency>
<ClassID>$MPElement[Name="Example.Application.OpalisActionServer"]$</ClassID>
<PropertyName>OpalisASServiceName</PropertyName>
<InstanceSettings>
  <Settings>
    <Setting>
      <Name>$MPElement[Name="Example.Application.OpalisActionServer"]/OpalisASServiceName$</Name>
      <Value>$Data/Property[@Name='Name']$</Value>
    </Setting>
    <Setting>
      <Name>$MPElement[Name="Windows!Microsoft.Windows.Computer"]/PrincipalName$</Name>
      <Value>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</Value>
    </Setting>
  </Settings>
</InstanceSettings>
Namespace is the WMI namespace to connect to.
Query is the WMI WHQL query to execute which returns the property you are looking for.
ClassID is a reference to which class the discovery is returning data for. It will be in the format of $MPElement[Name="ClassID"]$
PropertyName is just a name for the discovered property; in this case it doesn’t matter what you use.
InstanceSettings are the class properties we need to populate, and the source of the information we will get it from. In this case – you can see we are populating the “OpalisASServiceName” class property with $Data/Property[@Name='Name']$ which is “Name” from the WMI select statement.
So – paste this into your discovery configuration in notepad – it should look something like this below, then hit ok:
Hit OK to save you discovery configuration. Now we should have two complete discoveries, and 1 class.
At this point, it might be a good idea now to increment our MP Version (File > Management Pack Properties > Version) and then save it to a file. Then – we can import it into a test environment/LAB where we can ensure it is working as designed.
I will import it, wait about 15 minutes for it to be delivered to my agents, for the discoveries to run, and for data to be reported back. Then – I will open Discovered Inventory, and change the target type to my custom class “Opalis Action Server”
IT WORKED!!!!
I now have a new property, and it discovered the full service name of the service!
Whew! The hard work is over.
Step 4: Creating a service monitor for our wildcard based service.
This part is really simple. We will be creating a standard service unit monitor, just like any other, however, instead of typing in a service name for the monitored service, we will be using an XPATH variable, which will resolve the service name from the discovered class property.
Back in the authoring console. Select the Health Model pane > Monitors > New > Windows Services > Basic Service Monitor.
Give the monitor an ID, I used Example.Application.ActionServiceMonitor
Give a display name for the monitor, you will see this in the console. I used “Opalis Action Server Service Monitor”
On a side note – I like to always end each workflow ID and Display Name with the name of the workflow type. I find this makes life a lot easier when reading XML, or searching in the console. So rules end with “Rule”, monitors end with “Monitor”, discoveries end with “Discovery”, groups end with “Group” etc….
For the target – chose your custom class we just created.
For Parent Monitor, it is CRITICAL you NOT accept the default – choose a parent. The console defaults to the root, and this is a HUGE mistake…. nothing should ever be saved there. Most of your custom monitors will fall under Availability State or Performance State. Since this monitors core service availability, I will choose System.Health.AvailabilityState
Set the category to match your Parent Monitor.
For the service name – NORMALLY we would input the service name verbatim. This module does NOT support wildcards like * or %. So – in this case, we will instead input an XPATH variable, which is essentially the discovered property for the service name, that we used wildcards to discover.
My example will be:
$Target/Property[Type="Example.Application.OpalisActionServer"]/OpalisASServiceName$
All you would need to change is the Class ID, and the Property Name in most cases for your custom MP’s, from this example. What happens here, is during runtime of the service monitor, it will resolve this to the name of the discovered property for the service name, and use that in the monitoring workflow, as if it was added explicitly.
Click Finish and we are done!
Now – this is another good time to increment our MP version, and save our work to a file. Then – import it into a test environment, and test for the desired functionality.
If all goes well, our discovered instances of the Opalis Action Server class should now show as monitored and healthy:
And when you stop the Action Server service, we should see a state change for the service monitor:
That’s all there is to it! If you have more than one service, then you would use more than one class property using this technique. Always try and combine multiple items in the same discovery whenever possible.
If you run into trouble at any step of the way – you can compare your XML to my example, which I am attaching below.
Example.Application.xml.zip
Hello,
Many thanks for this publication, I just want to know if this might serve to add the newly created service as a component in a distributed application.
Thanks in advance
@Marlon – there is a link above which takes you right to the authoring console:…/details.aspx
I have a bit of a noob question. I have several systems that have multiple regisrty entries with multiple services associated. What I initially did was when I created the discovery (with only one class and one property) I added all the reg entries into the one discovery. Then I wildcarded the multiple services. Will that even work, or do I need to create a separate property, and discovery for each? Or can I just create a new discovery with each of the properties included? I'm thinking I would be able to add all of the properties to the discovery and then change the xml to reflect that. Or is that flawed?
Hi Kevin,
Tx for all your articles, they have been a big help for me in using SCOM
I use SCOM2012(test environment) and SCOM2007 R2(production) but the only panes available in th Authoring Console for both the versions are :
Administration
Authoring
Monitoring
My Workspace
Reporting
How do I get the Health Model, Serivce Model and Type Library visible/imported/available in the Authoring console ??
Regards, Marlon
@Donked
I too would like to know the answer to that question.
Kevin's 2nd last paragraph (That’s all there is to it! If you have more than one service, then you would use more than one class property using this technique…) states to use multiple classes. Does that not defeat the purpose of using wild cards?
Very nice article, thanks for taking the time to put this together. Like anything you do it's thorough which is great. I do prefer to use the Service template when possible to accomplish a task like this due to the fact it creates perf. collection rules for each discovered service. This saves the author quite a bit of time.
Happy new year to you Kevin, and the rest of the forum members.
Just returning from my holidays.
I will check the link, many thanks
Maybe a stupid question, but I get stuck with scom 2012…
I want to do the same thing.. but… for instance the create class is nowhere to be found in 2012…… 🙁
Any idea how to implement this in this version??
Thx a lot Kevin, the xml file saved me a lot of time ! | https://blogs.technet.microsoft.com/kevinholman/2011/01/20/how-to-monitor-a-service-with-unique-names-across-multiple-computers-using-a-wildcard/ | CC-MAIN-2018-39 | refinedweb | 3,515 | 63.39 |
[Solved] KDChart crash on constructor.
Hi guys,
I just came across KDChart and I'm trying it's functionality.
I built the library (for windows) following the instructions and compiled an example, a very easy one:
@
#include <QApplication>
#include <KDChartWidget>
int main( int argc, char** argv ) {
QApplication app( argc, argv );
KDChart::Widget widget; //Crash here widget.resize( 600, 600 ); QVector< qreal > vec0, vec1, vec2; vec0 << -5 << -4 << -3 << -2 << -1 << 0 << 1 << 2 << 3 << 4 << 5; vec1 << 25 << 16 << 9 << 4 << 1 << 0 << 1 << 4 << 9 << 16 << 25; vec2 << -125 << -64 << -27 << -8 << -1 << 0 << 1 << 8 << 27 << 64 << 125; widget.setDataset( 0, vec0, "Linear" ); widget.setDataset( 1, vec1, "Quadratic" ); widget.setDataset( 2, vec2, "Cubic" ); widget.show(); return app.exec();
}
@
Apparently it crashes, at runtime, when I try to instantiate the widget causing the main to return 1.
Any Idea what I did wrong?
Using Qt 4.8 on VS2010 and Windows 7
Edit:
Ok, I'm just dumb, I forgot I compiled KD Chart for release and tried compiling the example as debug. That it's what caused the crash.
Leaving it here if somebody else is silly enough to run into my same problem | https://forum.qt.io/topic/33553/solved-kdchart-crash-on-constructor | CC-MAIN-2017-47 | refinedweb | 196 | 75.1 |
CFNull Reference
Inheritance
Not Applicable
Conforms To
Not Applicable
Import Statement
Swift
import CoreFoundation
Objective-C
@import CoreFoundation;
The CFNull opaque type defines a unique object used to represent null values in collection objects (which don’t allow
NULL values). CFNull objects are neither created nor destroyed. Instead, a single CFNull constant object—
kCFNull—is defined and is used wherever a null value is needed.
The CFNull opaque type is available in OS X v10.2 and later.
Returns the type identifier for the CFNull opaque type.
Return Value
The type identifier for the CFNull opaque type.
Import Statement
Objective-C
@import CoreFoundation;
Swift
import CoreFoundation
Availability
Available in OS X v10.2 and later.
Copyright © 2015 Apple Inc. All rights reserved. Terms of Use | Privacy Policy | Updated: 2005-12-06 | https://developer.apple.com/library/mac/documentation/CoreFoundation/Reference/CFNullRef/index.html | CC-MAIN-2015-35 | refinedweb | 131 | 52.56 |
From: David Allan Finch (sarum_at_[hidden])
Date: 2001-04-02 09:07:01
Phlip wrote:
> Does this library use the "slots and signals" system that Qt invented and
> which I suspect Gtk+ emulates?
I don't know about Qt or Gtk+ sorry.
> I posit that a "quasi-standard" library, such as Boost always tries to be,
> should avoid using an implementation of the Observer Design Pattern that
> sucks, such as MFC's terribly heavy Message Maps.
I agree :-)
Here is an very old example. Note this does not show the libraries
off to it full advantage, as it also has 'derived method callbacks'
and 'observer pattern' on each event as well as 'C style'
callbacks (I do have examples of these somewhere).
We where also adding in a View-Controller system just before we
ended the project.
BTW the library was called 'Sticky' :-)
---8<----
//
// Input Test
//
// this is to test the new SSlider class
// DAF - 13 Mar 96
#include <sticky.hpp>
#include <math.h>
SSlider<double>* s1;
SSlider<double>* s2;
SSlider<double>* s3;
SSlider<double>* s4;
// **********************************************************************
void set_sliders()
{
double d1;
double d2;
double d3;
double d4;
d2 << *s2;
d4 << *s4;
d1 = sqrt( d2 * d4 );
d3 = sqrt( (d2 * d2) + ( d4 + d4 ) );
*s1 << d1;
*s3 << d3;
}
// **********************************************************************
bool slider_changed( SSlider<double>* s )
{
set_sliders();
return( true );
}
// **********************************************************************
int StickyMain( int& argc, char* argv[] )
{
// THE SYSTEM SETUP
SApplication app( argc, argv );
// THE FIRST WINDOW
// This have a vertical layout panel
STopWindow main_window;
SWorkspace workspace;
SVPane top_holder;
SVPane Vholder;
SHPane Hholder;
SSlider<double> slider1;
SSlider<double> slider2;
SSlider<double> slider3;
SSlider<double> slider4;
s1 = &slider1;
s2 = &slider2;
s3 = &slider3;
s4 = &slider4;
slider1.min_value = 0;
slider1.max_value = 1000;
slider1.read_only = true;
slider1.orientation = Horizontal;
slider2.min_value = 0;
slider2.max_value = 1000;
slider2.read_only = false;
slider2.orientation = Horizontal;
slider2.move += (SCallback)slider_changed;
slider3.min_value = 0;
slider3.max_value = 1000;
slider3.read_only = true;
slider3.orientation = Vertical;
slider4.min_value = 0;
slider4.max_value = 1000;
slider4.read_only = false;
slider4.orientation = Vertical;
slider4.move += (SCallback)slider_changed;
slider2 << (double)110;
slider4 << (double)110;
set_sliders();
app += main_window;
main_window += workspace;
workspace += top_holder;
top_holder += Vholder;
Vholder += slider1;
Vholder += slider2;
top_holder += Hholder;
Hholder += slider3;
Hholder += slider4;
main_window.name = "SSlider Test";
return( app() );
}
---8<----
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2001/04/10487.php | CC-MAIN-2020-45 | refinedweb | 374 | 52.36 |
About tgext.less
LESS is a dynamic stylesheet language that extends CSS with dynamic behaviour such as variables, mixins, operations and functions.
tgext.less is a middleware aimed at making TurboGears2 development easier, tgext.less converts regular less files to css using the official less compiler (lessc), thus it currently requires is to be installed.
tgext.less is based on tgext.scss by Alessandro Molina and is under the same license (MIT).
Installing
tgext.less can be installed both from pypi or from bitbucket:
easy_install tgext.less
You will also need to install the less compiler, for instructions on this check the less website under the server side usage section.
Enabling tgext.less
Using tgext.less is really simple, you edit your config/middeware.py and just after the #Wrap your base TurboGears 2 application with custom middleware here comment wrap app with LESSMiddleware:
from tgext.less import LESS = LESSMiddleware(app) return app
Now you just have to put your .less files inside public/css and they will be served as CSS.). | https://bitbucket.org/clsdaniel/tgext.less/src | CC-MAIN-2018-26 | refinedweb | 172 | 58.08 |
- is a configuration setting to make $_GET and $_POST not global, so you are forced to declare it global or pass it :) It's supossed to force you to do "the correct thing".
In the default configuration, "index.jsp?action=delete" automagically creates in index.jsp a variable called 'action' with content 'delete'. PHP is funny and damn easy.No need to retrieve the variables from the request, because PHP does it for you! :P Of course, this is dangerous because visitors can set any variable to anything in your program.
Admin
and you're well known to be a tosser, it's been seen on many other posts on this site. Your point?
Admin
Seriously, the code is not the WTF. I imagine I wrote the original code. 'switch' cannot do everything 'if' can, so there's probably a reason I used 'if' in the first place. For whatever reason, the code ended up the way it was, as a long chain of 'if' and 'ifelse'. However, it works and is tested.
And now O.C. tells me, I have to use a switch, because... uhm... because... well, because he says so? I would either turn in exactly the code submitted ("You wanted a switch?! You get a switch!") or implement a refactoring tool, prove it correct (this might take a week or so, at the very least) and then apply "turn-if-into-case" exactly once. ("I had to be absolutely sure not to introduce an error in the transformation O.C. requested.")
See the WTF? Yeah, stupid code review practice it is.
Admin
Scheme is a Lisp dialect (nobody uses the original Lisp nowadays, people use dialects built from lisp with structures such as loops already included [Lisp doesn't have anything like while or for loops originally, it only has something like 7 statements anyway], and an OOP implementation). The most common dialects of Lisp are probably Scheme and CLisp (Common Lisp), but there are many more.
Smalltalk, on the other hand, is a completely different beast (and - as a side note - the language that birthed unit tests thanks to Kent Beck's sUnit, which was later reimplemented in Java by the same Kent, thus spawning jUnit and the whole xUnit legacy).
Admin
The general idea is to put up stuff that is absolutely indefensible - where doing it right would be easier and quicker as well as more efficient and clean.
Your mind must be a sad place then, because what's actually here are people that will quite often admit to having made similar mistakes, and nearly always try hard to find justification for the code besides simple stupidity and incompetence.
Recognize a bit too much of your own coding practices here? What's more retarded and pointless, making fun of other people's really bad mistakes and discussing how it should be done ideally, or getting all worked up and morally outraged about some totally harmless site on the internet?
Admin
Last time i checked, the only thing switch/case couldn't do that if/else could was non-equality (not as in "!=", as in "not ==") comparisons. A common operation, but one which doesn't occur in this code.
On the other hand, switch/case feature fallthrough, which would allow here for a much cleaner syntax overall, and an enhanced readability (compared to the current version, or a version using in_array in order to get rid of the duplicate code)
And using a switch/case structure prevents you from hitting the "=" comparison issue.
There is a quite cool process that i've known has been in existence for the last few hundreds of millenia (sp?), it's called 'a question'.
It's widely used when you don't understand the reason for an instruction, it allows you to get more informations about the aforementioned reasons and stop being a dumbfuck
Admin
Whoa! Lighten up Francis!
Admin
I think you're referring to the register_globals configuration option. Yes, disabling this stops the variable $action from automatically appearing your scripts global variables, but this configuration option by no means affects the globality or superglobality of the $_GET or $_POST arrays. (In fact, if they were not global, how would you be supposed to get at them?)
But you're right, that's what it does and why you should have it disabled.
Admin
Funny enough, driving a nail into a piece of wood is not considered an art, but designing and constructing opulent buildings is and always has been. I leave the moral of this up to the reader.
Admin
That was obviously a typo - not a grammatical error.
Sincerely,
Richard Nixon
Admin
Just as a BTW, I'd like to present a situation where I feel that
?:is better than
if-
then-
else-- Where you're doing the same thing with the result of the conditional, regardless of which branch is taken. For instance, given:
my ($id, $name, $change, $page) = split /\s*,\s*/, $line;
(ie $id, $name, $change and $page are the first four CSVs from $line)
...you can do the obvious:
if (!$id ) { die "No id" }
elsif (exists $types{$id} ) { die "Id clash" }
elsif (!$name ) { die "No name" }
elsif ($change ne "+" && $change ne "-") { die "Change not +/-" }
elsif (!exists $pages{$page} ) { die "Page nonexistant" }
...in which case you can be glad the thing you're doing with all the values is called something nice and short (ie
die()). Yes, you could muck about with assigning the results of each of those blocks to
$error something, but then you've got single-use variables floating around, and I prefer to avoid that.
Instead, I would do:
CHECK: {
}
die
!$id ? "No id" :
exists $types{$id} ? "Id clash" :
!$name ? "No name" :
$change ne "+" && $change ne "-" ? "Change not +/-" :
!exists $pages{$page} ? "Page nonexistant" :
last CHECK # escape the die()
This saves you from an unncessary variable, having to type "
die" or "
$err =" over and over again, and the - in my mind, unncessary and crossed-eyes-inducing - noise of repeated
elsifs and their associated punctuation. Yes, it does involve using loop control to escape from the parameter list of a function to stop it being called, which looks disturbingly hackish, but it's so much easier on the eyes.
Let's see what y'all think's Evil And Wrong (TM) about this, then. I'm sure there'll be something.
Admin
It's not for that case that elseif is considered bad...It's for the case where you have 20 more elseifs, and a few nested if statements. Besides that, switches are generally more readable; your pretty formatting makes the elseif look just as readable as the switch, but with an elseif block, you're generally counting {}'s, whereas with a switch, you're just going between big obvious breakpoints.
Admin
It does:
Case Expressions
Admin
What he wrote was more or less Python, no braces there, please move along.
(Oh, and i don't really see where you get "big obvious breakpoints" in switch/case statements)
Thanks sir.
Admin
Why is he typecasting in PHP? This guy is retarted.
Admin
We have been clearly led to believe that the coder was new to this lanauge, while O.C. is an expert. While there are many situations where switch/case cannot repalce if/elseif/else, given the above I would assume that O.C. is correct when he says this code can be changed to switch/case
In compiled languages a switch/case is turned into a hash table for large amounts of cases (about 10). Therefore a switch/case is much faster than if/elseif/else when there are many cases.
I don't know how php does things, but someone examined the output from the C# compiler last time we had this discussion and verified that it really does create a hash table.
siwtch/case is also more readable because it is clear from the start that you are only looking at one variable. (Note, for some languages this doesn't apply) With an if/else you need to watch for the variable being exampled changing, which makes reading code hard:
if x==5
doSomething()
else if y == 5
SoemthingElse()
else if x == 6
doSomethingElse()
Of course if you really need to mix variables to get the logic correct you need the if/elseif sequence. (and some comments so I understand why...)
Admin
I agree that both are equally bad/good. Long switches are hideous monstrosities as well. Their performance advantage over the else if chains hardly justifies their existence. If an else-if chain smells bad enough to justify going to a switch, then you may as well go into a strategy or command design pattern.
Admin
Even for strings? I thought compilers could only do that for ordinal types.
Admin
<Pun forewarning>
Because he's switching to the dark side, just in case...
Admin
Or a jump table.
Even in xBASE (which uses expressions), it is useful. I sometimes have to iterate through the four possibilities of truth values of two conditions. I find it much clearer to write
do case
case p and q
case p and !q
case !p and q
otherwise && !p and !q
endcase
than cascading if-else statements. Less often, I have have to handle more combinations. Cascading if-elses are thoroughly nasty then.
Sincerely,
Gene Wirchenko
Admin
I was under the impression that it was to keep a consistent coding style in the codebase. I mean, christ, who hasn't come across that project maintained by 5 people before you, all of whom had distinct coding styles and preferred to precision-insert changes instead of rewriting a block? Welcome to readability hell. Style standards are a good thing.
On the other hand, sometimes the coding equivelent of a middle finger is also a good thing. =D
Admin
No. That is also bad code.
if(IsTrue(true) == true)
Admin
It most certainly CAN be an art form. Developing elegant and efficient code takes skill and drive. I'd suggest that many programmers put into their code what people put into painting, music, or other "arts."
Admin
Actually, no, you can't. Not even jump tables work here. If you had values for 1 and 20008, your jump table would be larger than any CPU would support and would have a lot of junk pointers to the default routine. It always compiles to an if/else chain. Strings don't fit in registers, so could you imagine the size of a jump table if you representing the strings as integer values? No CPU has registers that large, and I'll go out on the bill gates branch here and say that no cpu ever will have registers that large, and certainly will never support an accompanying jump table (could you imagine a jump table for all possible strings? Not freaking likely).
Admin
I think it works for him, too. I bet he thought exactly the same thing as you. I don't believe that he didn't know how to use a switch statement, I think the programmer figured that for an if/else with only THREE outcomes, it wasn't actually worth a rewrite into a real case statement.
This looks like a case of rebellious programming, not bad programming.
Admin
Admin
Actually in C I believe the switch statement has fall-through for each of the cases, so e.g.
switch(c) {
1: { ... }
2: { ... }
3: { ... }
Each one just "falls though" to the next case? In this way switch looks more like a series of gotos. All cases should be mutually exclusive anyway, so else isn't required either.
Admin
In PHP, you have to prepend always all variables with "$", or PHP won't recognize it as a variable. Same with arrays.
In unix-like shells like bash, sh, tcsh, zsh, etc. you have to do the same, except when doing an assignment.
Admin
You know what binary search is? You know what a hashtable is? Apparently not, otherwise you wouldn't write such nonsense.
Admin
Aha. This is the kind of person that if you ask him "Say hello to Alice, Peter!" responds with: "Hello to Alice, Peter!".....
Admin
As a professional PHP developer I resent your generalization. Not all PHP developers employ bad coding practices. We are actually some coders that put pride and thought into our code.
Remember, it's not the tool that makes a bad coder.
Admin
Yep, I don't understand this either.
The world is full of strange things, you know ...
Admin
I work with C and C++ which only allow switch on ordinal types so I'm, not sure.
However I see no reason you can't do a hash table on a string. The has function is a little more complex, but it doesn't have to be perfect. There are plenty of papers on hashing of strings, just pick something that works when you implement the compiler.
Admin
Why won't a branch table work? You just need to be a little intelligent about it. For your example you don't jump on the full word, just the least significant bit.
foo[] = [(function for 20008), (function for 1)
foo[x & 0x0001]
Of course your functions then need to test for to be sure x is the correct value, so for this simple case it is a pointless optimization.
The C# compiler was tested some months back. When the number of cases is less than 10 it turns a switch into an if/elseif. When the number of cases is more than that it builds a hash/jump table.
Notice that we did not specify any particular table implimentation. A jump table with all possible values does not fit into memory. However compiler writers are smarter than that. Both hash tables, and binary searches will fit into memory (If it won't you have bigger problems, and need to think about hand optimized assembly).
Admin
Certainly there are programmers who use PHP and don't write poor code but by saying that PHP is just a tool and the people are responsible, you fail to take into account the often-present culture that surrounds a language and the users of a language. In that respect, I think the assessment of PHP which you objected to is correct.
Sincerely,
Richard Nixon
Admin
<FONT face="Courier New" size=2>was i the only one who caught the =/== mistake? or was that just a typo?</FONT>
Admin
It isn't that you are too old, but that your experience has likely been in the dos/windows world or the mainframe world. The family of unix shell languages that evolved from the Bourne shell use the dollar sign to indicate a variable, much like dos uses the percent signs. Perl was developed as a better unix scripting language and so absorbed many syntactic elements of the Bourne shell languages. PHP in turn was developed as a better Perl for the web, and so absorbed many syntactic elements from Perl.
It is interesting to observe how many widely used languages were specifically designed to broadly resemble some preceding successful language. Anyone who is comfortable with shell scripting, sed, awk, etc. can naturally absorb Perl in small easy to digest chunks. Meanwhile the Perl developer can use PHP and Python more easily than a non-Perl developer.. Likewise C begat C++, which begat Java and Java's near clone C#. Meanwhile languages like Ruby are slower to gain acceptance. There is a certain feeling of comfort when much of the syntax is recognizable.
Admin
Not quite ;p
Admin
Yeah, that's much better than the readable switch-version ;-) Job security via code obscurity
Admin
Just saying, but whitespace is not everything. I am one of those people that prefers the coding style
if (x)
Just because that lines up the brackets, and actually puts the block on the same level as the rest.
And a quick question:
(x)
Would that compile? If so, that's my counter argument ;-)
Admin
heh... looks like something I would have done... 'you want a switch statement!? I got your stupid switch statement right here buddy...'
-lo
Admin
Hey, uh.. all your corrections that have been made, they don't stay true to the original code.
Admin
I discourage people from trying PHP because it turns out dickheads like you who think they know it all and attempt to correct real programmers.
$_GET is not a String[][]. String[][] -- even though it doesn't exist in PHP since it's simply called 'array' -- would be indexed via $_GET['foo']['bar'], so you can see why you're wrong without me even explaining type to you. You also confuse declare and define, more evidence you only know PHP. (I refer to your comments here.)
Also, PHP does not create an empty variable whenever you mistype a variable name. I present sample code, $foo = $bar all by itself:
PHP Notice: Undefined variable: bar in test.php5 on line 2
Learn C and take an English class, then try to hock a programmer discussion. Seriously.
Admin
Nope. Need an x. If you throw in an
int
=
;
x
0
before it...then it should, yes. Talk about token fun!
Admin
Being a junior or transitioned is one thing. . . lacking common sence is another
Well here is one more for the desert (from another "transitioning" programmer):
<FONT color=#0000ff>String</FONT> fName = <FONT color=#800080>null</FONT>;
<FONT color=#800080>if</FONT> (fname == <FONT color=#800080>null</FONT>). . . . .
The
"=="I can understand. . . after all, he is "transitioning", but . . . . WTF
Admin
Hrmm...
def IsTrue(true):
if(IsTrue(true))...
Admin
Was this anonymized to hide a Delphi snippet perhaps? Not sure what other languages might be affected, but Borland's "native" string type in 32-bit environments was implemented as an instance of AnsiString class. Objects were not allowed in case statements, so the code just wouldn't fail. In such a case, the WTF would be that O.C. didn't know this.
Admin
Yeah, you may want to replace it too...
Admin
No - he was just being cute or a smart a**. "OK then I'll use switch, but I'll do it my way". Any of those reviewing his code the second time would have realised this. | https://thedailywtf.com/articles/comments/Having_a_Hard_Time_Switching/3 | CC-MAIN-2022-21 | refinedweb | 3,050 | 73.47 |
NAME
link - make a new name for a file
SYNOPSIS
#include <unistd.h> int link(const char *oldpath, const char *newpath);.)
CONFORMING TO
SVr4, 4.3BSD, POSIX.1-2001 (except as noted above).
NOTES
Hard links, as created by link(), cannot span filesystems. Use symlink(2) if this is.
BUGS
On NFS file systems, the return code may be wrong in case the NFS server performs the link creation and dies before it can say so. Use stat(2) to find out if the link got created.
SEE ALSO
ln(1), linkat(2), open(2), rename(2), stat(2), symlink(2), unlink(2), path_resolution(7)
COLOPHON
This page is part of release 2.77 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.ubuntu.com/manpages/hardy/en/man2/link.2.html | CC-MAIN-2014-52 | refinedweb | 134 | 77.64 |
Microsoft Scripting Guy Ed Wilson here. Our alarm clock died a slow, but rather painless death. All of a sudden, it began to lose track of time. It would forget to wake us up in the morning, and worst of all, it would at times just randomly display time from different time zones and even alternate universes. At last, it was time to bid it a fond farewell. The demise of the alarm clock was not a major inconvenience, because the Scripting Wife and I actually set four alarms each morning. I have an alarm set on my Windows Mobile Smart phone, the Scripting Wife has one set on her Windows Mobile Smart phone, there is a windup backup alarm, and lastly the battery-backed electric alarm clock about whom we were previously speaking. You may think I am paranoid, but I call it just being prudent. A windup, backup alarm clock is of little use if you forget to wind it up. An electric alarm clock is of no use if the battery runs down and the electricity goes off. A Windows Mobile Smart phone is no good if you forget to recharge it or leave it in your computer bag downstairs. Therefore, with multiple redundant alarm systems in place, we are pretty much assured of avoiding the potentially embarrassing “I overslept” situation.
Anyway, trying out a new electric alarm clock is always a bit of a surprise that is sometimes pleasant, sometimes not. This morning was one of the “not” occasions. I am not sure where the Scripting Wife obtained the new alarm clock, but when it went off this morning, the results were alarming! (I’ll be here all week. Try the veal.) It is the loudest, most obnoxious sound I have ever heard, excepting the drill sergeant banging on a metal garbage can at oh dark thirty. I wonder if anyone has ever suffered a heart attack from one of these alarm clocks. Oh, well.
While I lingered over a pot of English Breakfast tea, I focused on something I had been wanting to look at for a long time—the concept of enumerations. I have used enumerations numerous times in the scripts I have written for the Hey, Scripting Guy! posts, particularly in connection with the scripts I have written about automating Microsoft Word, Microsoft Excel, Microsoft PowerPoint, and Microsoft Outlook. Rather than simply hardcoding a numeric value into a method call, I create an instance of the enumeration, and then use the enumeration property names in the script. This promotes readability, and points the way to successfully modifying the script to accomplish other purposes because one can refer to the documentation on MSDN.
What if I want to create an enumeration for use in my Windows PowerShell scripts? How would I go about accomplishing that? One could always store the values in a hash table, and use the contains method to check for values, as shown here:
PS C:> $hash = @{"a" = 5; "b" = 7}

PS C:> $hash.Contains("b")
True
PS C:>
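The transcript above shows a key lookup; as a side note (my own sketch, not from the original post), the same hash table also supports value checks, though it gives none of the compile-time safety that a real enumeration provides:

```powershell
# A hash table mapping names to values, as in the example above
$hash = @{"a" = 5; "b" = 7}

# Look up a value by key
$hash["b"]                  # 7

# Test for a value rather than a key
$hash.ContainsValue(7)      # True

# Nothing prevents adding an entry that breaks the convention;
# a real enumeration would reject this when the type is compiled
$hash["c"] = "not a number"
```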
A better approach would be to create a real instance of the System.Enum .NET Framework class. There is no direct way to do this, but because Windows PowerShell 2.0 has the Add-Type cmdlet that allows me to use C# code, I can create my own enumeration. This is not very difficult to do. Understanding how to use the enumeration after it is created is more of a challenge, but creating the enumeration is simple. The Create-FruitEnum.ps1 script illustrates the technique to create an enumeration. The complete script is shown here.
Create-FruitEnum.ps1
$enum = @"

namespace myspace

{

    public enum fruit

    {

        apple = 29, pear = 30, kiwi = 31

    }

}

"@

Add-Type -TypeDefinition $enum -Language CSharpVersion3
The key to using the Add-Type cmdlet to create the enumeration is to use the –TypeDefinition parameter. You also need to specify the language. Permissible language values are CSharp, CSharpVersion3, VisualBasic, and Jscript.
The TypeDefinition is a string that contains the code that will be used to create the enumeration. The first thing that needs to be done is to specify a namespace that will contain the enumeration. Namespaces are used to group similar classes and enumerations in the .NET Framework. If I were creating a number of enumerations that I would use in a project, I would ensure they were all stored in the same namespace. For this example, I am using a rather silly namespace called myspace. This namespace does not exist, I would imagine, before running this script. Keep in mind that C# is case sensitive. The command is namespace, and the name of the namespace to create is myspace.
After the namespace declaration, the enum command is used to create the enumeration. Because I want to be able to use the enumeration in other places, I use the public keyword to make it available. After the public command, the enum command is followed by the name of the enumeration to create. Because my enumeration will consist of several different types of fruit, I call it Fruit.
The last step involved in creating an enumeration is to list each enumeration and its associated value. Even though each different kind of fruit is a string, you do not place quotation marks around the name of the fruit. That will cause an error to be generated. Therefore, I have the name of a type of fruit, an equal sign, and the value I wish to assign to the enumeration.
When I run the code, nothing is displayed, which is demonstrated in the following image.
The question may arise: How can I tell if it worked or not?
One of the easiest ways is to put the enumeration namespace and name inside square brackets and press ENTER. This is done in the command pane. If the enumeration has been created properly, you will see the output shown here:
PS C:Usersedwils> [myspace.fruit]
IsPublic IsSerial Name BaseType
——– ——– —- ——–
True True fruit System.Enum
If you want to see which enumeration values have been created, you can use the GetValues static method from the system.enum .NET Framework class. This is shown here:
PS C:Usersedwils> [enum]::GetValues([myspace.fruit])
apple
pear
kiwi
If you wish to retrieve a specific value from the fruit enumeration, you access the properties as if they were static properties. This is shown here:
PS C:Usersedwils> [myspace.fruit]::pear
pear
As you can see, the results are not too terribly exciting. What is cool, however, is being able to retrieve the numeric value we assigned to the enumeration property when we created the fruit enumeration. To do this, use the value__ (that is a double underscore trailing the word value) property shown here:
PS C:Usersedwils> [myspace.fruit]::pear.value__
30
It looks like it is going to be a nice day here in Charlotte. I think I will head out to the woodworking shop for a while. I will catch you later. Have a great day.
Join the conversationAdd Comment | https://blogs.technet.microsoft.com/heyscriptingguy/2010/06/06/hey-scripting-guy-weekend-scripter-the-fruity-bouquet-of-windows-powershell-enumerations/ | CC-MAIN-2016-36 | refinedweb | 1,169 | 64.1 |
This article documents the development of a small exploratory
project for flowchart visualization and editing that is built upon SVG and AngularJS. It makes good use of the MVVM pattern so that UI logic can be unit-tested.
After so many articles on WPF
it may come as a surprise that I now have an article on web UI. For the
last couple of years I have been ramping up my web development skills.
Professionally I have been using web UI in some pretty interesting
ways connected to game development. For example building game dev tools
and in-game web UIs, but I'm not talking about that today.
It seemed only natural that I should take my NetworkView WPF article
and bring it over to web UI. I've always been interested in
visualization and editing of networks, graphs and flow-charts and it is
one of the ways that I put my skills to the test in any particular area.
During development of the code I have certainly moved my skills forward in many areas, including Javascript, TDD,
SVG and AngularJS. Specifically I have learned how to apply the
goodness of the MVVM pattern to web UI development. In this article I
will show how I have deployed the MVVM concepts in HTML5 and Javascript.
A
little over a year ago I started developing using TDD, something I
always wanted to do when working with WPF, but never got around to it
(or really appreciated the power of it). TDD has really helped me to
realize the full potential of MVVM.
My first attempt at NetworkView, in WPF, took a long time.
Over two years (in my spare-time of course!) I wrote a series of 5
articles that were all building up to NetworkView. A lot of effort went
into achieving those articles! This time around the development and
writing of the article has been much quicker - only a few months
(stealing 30 minutes here and there from my busy
life). I attribute the faster development time to the following
reasons:
So let's re-live my exploration of SVG + AngularJS flowcharts.
This is an annotated screenshot of the flowchart web app. On the
left is an editable JSON representation of the flowchart's
data-model. On the right is the graphical representation of the
flowchart's view-model. Editing either side (left as text, right
visually) updates the other, they automatically stay in sync.
So who should read this article?
If you are interested in developing graphical web applications using
SVG and AngularJS, this article should help.
You should already know a bit of HTML5/Javascript or be willing
to learn quickly as we go. A basic knowledge of SVG and AngularJS will
help, although I'll
expect you are learning some of that right now and I'll do my best to
help you get on track with it. I'll expect you already know something
about MVVM, I have talked about it extensively in previous articles, if
not then don't worry I'll give an overview of what it is
and why it is useful.
I will also mention TDD as well to help you understand how it might help you as a developer.
This article is about a web UI redevelopment of my original NetworkView WPF control.
As mentioned the new code isn't as feature rich or general
purpose as the original WPF control. Developing something that
was completely functional wasn't the intention; I really was just looking for a way to
exercise my skills in Javascript, TDD, AngularJS and SVG and consolidate my web development skills.
I really enjoy working with web UI. Since I was first looking at web
technologies in the early days of my career to now I have seen many
changes in the tech landscape. Web technologies have progressed a
remarkably long way and the community is alive and brimming with
enthusiasm.
My NetworkView article was popular and a rebuild in web
UI seemed like a good idea. I was building
something I already knew about so I could achieve it much quicker
than if I had started something new. However there are many parts of the original article that don't have a
counterpart in the new code. There is no zooming and panning, there is
no equivalent to adorners to provide feedback. There is no templating
for different types of nodes.
In summary, this code will be useful to you if:
Whatever your reason for reading this article, you have some work
ahead of you either in understanding or modifying my code. If you are trying to make progress with
AngularJS + SVG or even just web UI graphics in general, then I'm sure this will help.
First up let's look at the live demo. This allows you to
see what you are getting without having to get the code locally and run
your own web server (which isn't difficult anyway).
Here is the main demo:
Here are the unit-tests:
Everything you need to run the code locally is attached to this article as a zip file. However I recommend going to
the github repository for the most up-to-date code. I recommend using SourceTree as an interface to Git.
Running the sample app requires that you run it through a local web
server and view it through your browser. You could just disable your
browser's web security and load the app in your browser directly from
the file system using file://. However I can't recommend that
as you would have to override the security in your web browser, besides
it is easier than ever to run a local web server. Let me show you how.
I have provided a simple bare-bones web server (that I found on StackOverflow) that is built on NodeJS. Once you have NodeJS installed, open a command line, change directory to where the code is and run the following command:
node server.js
You now have a local web server. Point your web browser at the following URL to see the web app:
And to run the unit-tests:
Update: I found an even easier way to run a web server. Install http-server using the following command:
npm install http-server -g
Navigate to the directory with the code and run the web-server:
http-server
Javascript is the language of the internet and recently I have
developed an appreciation for it. Sure, it has some bad
parts, but if you follow Crockford's advice you can stick to the good parts.
First-class functions are an important feature and very powerful.
I'm really glad we (kind of) have these in C# now. Even the
latest C++ standard supports lambdas; it seems that functional programming is creeping in everywhere these days. Coming to Javascript from a classical language you may find prototypal inheritance rather unusual, but it is more powerful than classical inheritance, even if harder to understand at first.
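As a quick illustration (my own minimal sketch, not code from the flowchart project), prototypal inheritance lets one object inherit directly from another, with no classes involved:

```javascript
// A base object that other objects can inherit from directly.
var animal = {
    speak: function () {
        return this.name + " says " + this.sound;
    }
};

// Create a new object whose prototype is 'animal'; 'dog' inherits
// 'speak' through the prototype chain.
var dog = Object.create(animal);
dog.name = "Rex";
dog.sound = "woof";

dog.speak(); // → "Rex says woof"
```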
Once you are setup and used to it, it's hard
to beat the Javascript workflow. Install Google Chrome, install a good
text editor, you now have a development environment! Including
profiling and debugging. Combine this with node-livereload
and a suite of unit-tests
and you have a system where your web application and unit-tests will re-run automatically as you type your code.
I can't emphasize enough how important this is for productivity.
Extremely fast feedback cycles are essential for effective Agile development.
Test-driven development
has been one of the most positive changes in my career so far. It was
always hard to keep design simple and minimize defects in code
that is rapidly evolving. As the code grows large it becomes harder to
manage, harder to keep under control and difficult to refactor. This
problem is only worse when using a language like
Javascript.
Unit-testing is hard. It takes effort and discipline. You can't
afford to leave it until after you have developed the code - it is too easy to be lazy and just skip the
tests when things seem to be working. TDD turns this around. You have to be disciplined, you have to slow
down, you are forced to write your unit-tests (otherwise you just
aren't doing TDD). You have to think up front about your coding, there
is no way around it. Thinking before coding is always a good thing and generally all too rare.
TDD makes you design your code for testability. This sometimes means
slightly over-engineered code, but TDD means you can safely refactor on
the go, this keeps the design simple and under control. The trick
to refactoring is to make the design look perfect for the evolved
requirements, even though the code has changed drastically above and beyond
the original design. When I say slightly over-engineered that's exactly what I mean, only
slightly. I have seen and participated in massively over-engineered
coding. TDD for the most part has a negative effect on
over-engineering. TDD means you only code what you are
testing. This ensures your code is always needed, always to the point.
Your efforts are focused on the end requirements and you don't end up
coding something you don't need (unless you're testing for something you
don't need, and why would you do that?). This attitude of code only what you need solves
one of the most insidious problems that developers have ever faced: it
helps to prevent development of code that will never be used. Eliminate waste is principle number 1 in lean software development.
Creating a permanent scaffolding of unit-tests for your program prevents code rot and enables refactoring. Another thing it is good for: preserving your sanity and increasing your confidence in the code.
Now granted that this web application is smallish and not overly
complicated, however professionally I have used TDD on much more
complex
programs. This web application was built in my spare time, only
spending 30-60 minutes at a time on it. Occasionally I took a week or
two off to concentrate on other things. Switching projects takes
significant mental effort, but TDD makes it much easier to switch back.
When you come back to the project you run the tests and then pick a
simple
next test to
implement. There is no better way of getting back into it again from a
cold start.
TDD has helped me keep the code-base in check as it changed, adding
feature after feature, heading towards my end goal. Along the way
I refactored aggressively without adding defects. This is important. So
often I have experienced refactoring go horribly wrong, I'm talking
about the kind of event that causes defects for weeks if not months.
One of TDD's most attractive benefits is a reduction in the pain
associated with constant code evolution.
I once heard someone say TDD is like training wheels for
programmers. I laughed at the time, but after some thought I decided
this comment, though funny, was far from the truth. I have worked on a
TDD team for a year now and I can honestly say that TDD is
significantly harder than the usual fire from the hip programming. It takes effort to learn and makes you slower (what I would call the true cost of development).
TDD is very powerful and it isn't right for every project (it has
little value in prototype or exploratory projects), but the payoff for
longer term projects is potentially enormous if you are willing to make
the investment.
Last word. Having the unit-tests was essential for making the app
work across browsers. I didn't have to do cross-browser testing
during development. Near the end it was mostly enough to get the
unit-tests working under each browser.
Four years ago when I was first learning WPF
I never would have imagined how far down the rabbit hole I was going to
end up. Initially the learning curve was steep, but after two years and
multiple articles I had a good understanding of WPF and MVVM.
MVVM is a pattern evolved from MVC
and it isn't actually that difficult to understand, although I think
something about the combination of MVVM and WPF and the resulting
complexities gives people (including myself) a lot of trouble in the
beginning.
The basic concept of MVVM is pretty simple: Separate your UI logic from your UI rendering so that you can unit-test your UI logic. That's essentially it! It answers the question: how do I unit-test my GUI?
Fitting MVVM into Javascript looks a bit different, but is similar
and simpler than MVVM under WPF. Javascript/HTML may not
be the ideal way to build an application, but it is more
productive than working with C#/WPF. I hope to
show that MVVM + web UI gives you the benefits of MVVM, minus the
complexity of WPF (although you may not appreciate this unless you have
worked with WPF).
I am very pleased to have discovered AngularJS right at the point where I was moving into web UI.
What is AngularJS? Probably best to learn that direct from the source.
Why use AngularJS? Well in this article I'm mostly interested in its data-binding capabilities. AngularJS provides the magic necessary to glue your HTML view to your view-model and its data-bindings are trivial to use.
Why else would you use AngularJS? Google reasons to use AngularJS or benefits of AngularJS and you will find many.
How does AngularJS fit in with MVVM? I'm glad you asked. It looks like this:
The controller sets up a scope. The scope contains variables that are data-bound to the HTML view. In the flowchart application the scope contains a reference to the flowchart's view-model, which in turn wraps up the flowchart's data-model.
Hang on, there is a controller? Doesn't
that mean it is MVC rather than
MVVM? Well to be sure AngularJS is a bit different to what we know of
as MVVM. The AngularJS pattern is also different to traditional MVC.
This happens all the time: new patterns are created, old patterns are
evolved or built-on, that's part of progress in software development.
It comes down to professional developers making their own patterns as
they need them and they do it all the time mostly without even thinking
much about it. In some cases they are based on established patterns
like
MVC, other times they are completely unique to the problem at
hand. So it's no mystery that these two patterns are different
even though they are similar on a deeper level. In the same way that
Microsoft gave birth to the MVVM pattern through WPF, Google have
created their own MVC-like pattern through AngularJS.
In the end it is just terminology and semantics and I consider the AngularJS controller to simply be a part of the
view-model. The way I see it, the application is comprised of a data-model, a
view-model and the view. Ultimately it really depends on how you think
about things, I come from a MVVM/WPF background so I see my work in
light of that, you will no doubt see it differently.
AngularJS has been pleasantly simple to work with and I have encountered very few issues.
Since I have been using it there have been multiple releases that have actually fixed problems I was having. I have even delved into the source code from time to time to gain a better understanding. Although not trivial, the code is certainly very readable and understandable.
I'll talk more about AngularJS issues and solutions at the end of the article.
Want more info about AngularJS? They have awesome documentation.
I have only paid attention to SVG
in recent years, but it is impressive that it has
actually been around for a long time (since 1999 according to wikipedia).
After working with XAML I was amused to discover how many
features Microsoft lifted straight out of SVG. That's how things
work: we wouldn't get anywhere in particular if we weren't
innovating on top of previous discoveries and inventions.
I suspect that SVG was somewhat forgotten and
is now experiencing something of a renaissance. These days, SVG has good browser support
although there are still issues to be aware of and some features to
stay clear of. To a certain extent you can embed SVG directly in HTML and
treat it as though it were just HTML! Unfortunately you get bitten by
bad library support (I'm looking at you jQuery)
but I'm pleased to say that AngularJS have made progress with their SVG
support whilst I have been developing the flowchart web application.
I'll talk about the SVG issues at the end of the article.
This is a quick outline of my development environment.
Core tools:
Core libraries:
Other tools:
Honorable mentions:
In this section we walk-through the HTML and code for
the flowchart application and understand how the application
interacts with
the flowchart view-model. We will mostly be looking at index.html and app.js.
In the process we'll get a feel for how an AngularJS application works.
The following diagram shows the AngularJS modules in the application and the dependencies between them:
The next diagram overviews the files in the project:
And drilling down into the flowchart directory:
Our entry point into the application is index.html. This contains the HTML that defines the application UI and references the necessary scripts.
Traditionally scripts are included within the head element, although here only a single script is included in the head. This is the script that enables
live reload support:
<script src=""></script>
Live reload enables automatic refresh of the page within the browser
whenever the source files have changed. This is what makes
Javascript development so productive, you can change the code and have
the application reload and restart automatically, no compilation is
needed, no manual steps are needed. The feedback loop is substantially
reduced.
Live reload can also be achieved by using a browser plugin instead of adding a script. I opt for the script usually
so that development can happen on any machine without requiring a browser plugin. You probably don't want live reload
in production of course, so your production server should remove this script.
To try out live reload locally, ensure you have installed the node-livereload NodeJS package and run node-live-reload from the directory that contains the web page.
All other scripts are included from the end of the body
element. This allows the scripts to be loaded asynchronously as the
body of the web page is loaded. Whether scripts are included in the head or the body depends how you need your application to work.
The first two scripts are jQuery and AngularJS, the core libraries that this application builds on:
<script src="lib/jquery-2.0.2.js" type="text/javascript"></script>
<script src="lib/angular-1.2.3.js" type="text/javascript"></script>
Next are the scripts that contain reusable code, including SVG, mouse handling and the flowchart:
<script src="debug.js" type="text/javascript"></script>
<script src="flowchart/svg_class.js" type="text/javascript"></script>
<script src="flowchart/mouse_capture_directive.js" type="text/javascript"></script>
<script src="flowchart/dragging_directive.js" type="text/javascript"></script>
<script src="flowchart/flowchart_viewmodel.js" type="text/javascript"></script>
<script src="flowchart/flowchart_directive.js" type="text/javascript"></script>
The application code is included last:
<script src="app.js" type="text/javascript"></script>
Now back to the top of index.html, the body element contains a number of important attributes:
<body
ng-app="app"
ng-controller="AppCtrl"
mouse-capture
ng-app designates the root element that contains the AngularJS application. The value of this attribute specifies the AngularJS module that contains the application code. This is the most important attribute in the application because this is what bootstraps AngularJS. Without ng-app there is no AngularJS application. In this instance we have specified app, which links the DOM to our app module registered in app.js; we'll take a look at that in a moment. With the ng-app
and the AngularJS source code included in the page, the AngularJS app
is bootstrapped automatically. If necessary, for example to
control initialization order, you can also manually bootstrap the AngularJS app. It is interesting to note here that ng-app
is applied to the entire body of the web page. This suits me because I
want the entire page to be an AngularJS application, however it is
also possible to put ng-app on any sub-element and thus only allow a portion of your page to be controlled by AngularJS.
ng-controller assigns an AngularJS controller to the body of the page. Here AppCtrl is assigned which is the root controller for the entire application.
mouse-capture is a custom attribute I have created to manage mouse capture within the application.
ng-keydown and ng-keyup link the DOM events to Javascript handler functions.
If you already know HTML but don't know AngularJS, by now you may
have guessed that AngularJS gives you the ability to create custom HTML
attributes to wire behavior and logic to your declarative user interface.
If you don't realize how amazing this is I suggest you go and do some
traditional web programming before coming back to AngularJS. AngularJS
allows the extension of HTML with new elements and attributes using AngularJS directives.
The flow-chart element is a custom element used to insert a flowchart into the page:
<flow-chart
</flow-chart>
The flow-chart element is defined by a directive that
injects a HTML/SVG template into the DOM at this point. The directive
coordinates the components that make up the flowchart. It attaches
mouse input
handlers to the DOM and translates them into actions performed against
the view-model.
Let's look at app.js to see the application's setup of the data-model. The first line registers the app module:
angular.module('app', ['flowChart', ])
Using the module function the app
module is registered.
This is the same app that was referenced by ng-app="app" in index.html.
The first parameter is the name of the module. The second parameter is
a list of modules that this module depends on. In this case the app module depends on the flowChart module. The flowChart module contains the flowchart directive and associated code, which we look at later.
After the module, an AngularJS service is registered.
This is the simplest example of a service and in a moment you will see
how it is used. This service simply returns the browser's prompt function:
.factory('prompt', function () {
return prompt;
})
Next the application's controller is registered:
.controller('AppCtrl', ['$scope', 'prompt', function AppCtrl ($scope, prompt) {
// ... controller code ...
}])
;
The second parameter to the controller function is an array that contains two strings and a function. The parameters of the
function have the same names as the strings in the array.
If it wasn't for minification we could define the controller more simply like this:
.controller('AppCtrl', function AppCtrl ($scope, prompt) {
// ... controller code ...
})
;
In the second case the array has been replaced by just the function, which is simpler but works only during development and not in production.
AngularJS instances the
controller by calling the registered Javascript constructor function. AngularJS knows to instance this particular controller because it was specified by name in the HTML using ng-controller="AppCtrl". The controller's parameters are then satisfied by dependency injection based on parameter name. The AngularJS implementation of dependency injection is so simple,
seamless and reliable that it has convinced me in general that a good
dependency injection framework should be a permanent part of my
programming toolkit.
Of course the simple case doesn't work in production where the
application has been minified. The parameter names will have been
shortened or mangled, so we must provide the
explicit list of dependency names before the constructor function. It
is a pity really that we have to do this, as the implicit method of
dependency specification is more elegant.
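To make the name-based injection idea concrete, here is a toy injector of my own (a simplification for illustration only; this is not how you use AngularJS, and real AngularJS also supports the explicit array annotation for minified code). It reads a function's parameter names out of its source text and resolves them from a registry:

```javascript
// A registry of named dependencies, standing in for AngularJS services.
var registry = {
    $scope: { items: [] },
    prompt: function (msg) { return "stub:" + msg; }
};

// Toy injector: extract the parameter names from the function's own
// source text, look each one up in the registry, then call the function.
function inject(fn) {
    var src = fn.toString();
    var params = src.slice(src.indexOf('(') + 1, src.indexOf(')'))
        .split(',')
        .map(function (p) { return p.trim(); })
        .filter(function (p) { return p.length > 0; });

    var args = params.map(function (name) { return registry[name]; });
    return fn.apply(null, args);
}

var result = inject(function ($scope, prompt) {
    $scope.items.push(prompt("name?"));
    return $scope.items.length;
});
```

You can also see from this sketch exactly why minification breaks implicit injection: once $scope and prompt are renamed to a and b, the registry lookup fails, hence the explicit string array.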
The $scope parameter is the scope automatically created by
AngularJS for the controller. In this case the scope is
associated with the body element. The dollar prefix here indicates that $scope
is provided by AngularJS itself. The dollar sign in Javascript is
simply a character that can be used in an identifier, it has no special
meaning to the interpreter. I suggest you don't use $ for your
own variables because then you can't easily identify the variables
provided by AngularJS.
The prompt parameter is the prompt service that we saw a moment ago. AngularJS automatically instances the prompt service from the factory we registered earlier. The question you might be asking now is why decouple the prompt service from the application controller? Well generally it is so that we can unit-test
the application controller, even though I don't bother testing the
application code in this case (although I do test the flowchart code,
which you'll see later). The decoupling means the prompt service can be
mocked thus isolating the code that we want to test. In this case, the only reason I decoupled the prompt service is simply because I wanted to take the opportunity to demonstrate in the simplest scenario how and why to use a service.
Now let's break down the application controller from app.js:
.controller('AppCtrl', ['$scope', 'prompt', function AppCtrl ($scope, prompt) {
// ... Various private variables used by the controller ...
// ... Create example data-model of the chart ...
// ... Define application level key event handlers ...
// ... Functions for adding/removing nodes and connectors ...
// ... Creation of the view-model and assignment to the AngularJS scope ...
}])
;
For the moment we will skip the details of the chart data-model. We will come back to that in the next section.
The most important thing that happens in the application controller
is the instantiation of the view-model at the end of the function:
$scope.chartViewModel = new flowchart.ChartViewModel(chartDataModel);
ChartViewModel wraps the data-model and is assigned to the scope making it accessible from
the HTML. This allows the chart attribute on the flow-chart element in index.html to be data-bound to chartViewModel.
The application controller creates the flowchart view-model
so that it may have direct access to its services. This was an important
design decision. Originally the application created only the data-model
which was passed directly to the flowchart directive, internally then the
flowchart directive wrapped the data-model in the view-model. I found
that this strategy gave the application inadequate control over the UI. As an
example consider deleting selected flowchart items. The delete key is handled and the application must call into the view-model to delete the currently selected
flowchart items. The initial strategy was to delete the elements
directly
from the data-model and have the directive detect this and update the
view-model accordingly, however this failed because there is no way to
know from the data which items are selected! In addition it made the
flowchart directive more complicated because it would now have to watch the data-model changes,
normally it just watches the view-model and this happens automatically
anyway. A naive approach would have been to add fields to
the data-model to indicate which items are selected,
but this would be bad design: polluting the data-model with
view specific
concepts! In any case, changing the data-model to support selection (or
other view features) would mean that you can't then share the
data-model
between completely different kinds of views, so you can see that even
in
principle it is just wrong to combine the view-model and data-model
concepts. The better solution is to have a
view-model that is distinct from the flowchart
directive and mimics the structure of the data-model. The application
is then put in direct control of that
view-model so it can be manipulated directly.
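A stripped-down sketch of that separation (hypothetical code, far simpler than the real ChartViewModel) might look like this, with selection state living only on the view-model and never polluting the data-model:

```javascript
// Selection state lives only on the view-model; the wrapped data-model
// stays free of view-specific concepts.
function NodeViewModel(nodeDataModel) {
    this.data = nodeDataModel;
    this.selected = false;      // View state, never serialized.
}

function ChartViewModel(chartDataModel) {
    this.data = chartDataModel;
    this.nodes = chartDataModel.nodes.map(function (nodeData) {
        return new NodeViewModel(nodeData);
    });
}

// Delete selected nodes from the view-model, then propagate the
// change down to the wrapped data-model.
ChartViewModel.prototype.deleteSelected = function () {
    this.nodes = this.nodes.filter(function (node) {
        return !node.selected;
    });
    this.data.nodes = this.nodes.map(function (node) {
        return node.data;
    });
};

var chart = new ChartViewModel({ nodes: [{ id: 0 }, { id: 1 }, { id: 2 }] });
chart.nodes[1].selected = true;   // Selection known only to the view-model.
chart.deleteSelected();           // Both models now hold nodes 0 and 2.
```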
The following diagram indicates the dependencies between the application and the flowchart components:
As an example of how the application interacts with the view-model we will look at the previously mentioned delete selected feature, that allows deletion of flowchart items. ng-keyup is handled for the body element:
ng-keyup="keyUp($event)"
The browser's onkeyup event is bound to keyUp in the application scope. The $event object is made available for use by AngularJS and is passed as a parameter to keyUp. This should be pretty much the same as the jQuery event object, although the AngularJS docs don't have much to say about it.
The keyUp function is defined in app.js and assigned directly to the application scope:
$scope.keyUp = function (evt) {
if (evt.keyCode === deleteKeyCode) {
//
// Delete key.
//
$scope.chartViewModel.deleteSelected();
}
// ... handling for other keys ...
};
The keyUp function simply calls deleteSelected on the
view-model. This is an example of the application directly manipulating
the flowchart view-model, later we'll have a closer look at this
function.
Let's back up and look at the setup of the flowchart's data-model.
The example-data model is defined inline in app.js:
var chartDataModel = {
nodes: [
// Nodes defined here.
],
connections: [
// Connections defined here.
]
};
Then it is wrapped by the view-model, as shown earlier where the application controller assigns $scope.chartViewModel.
We could also have asynchronously loaded the data-model as a JSON file.
Before digging further into the structure of the data-model, you may want to develop
a better understanding of the components of a flowchart. Rather
than prepare fresh diagrams, I'll refer to those from my older
article. Please take a look at the Overview of Concepts in that article and then come back. ...
Ok, so you read the overview right? And you know the difference between nodes, connectors and connections.
Here is the definition of a single node as defined in app.js:
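The node definition itself is not reproduced here, so the following is a representative sketch based on the structure described in this article; the id, x, y and name fields appear in the view-model code later, while the connector array layout is an assumption inferred from the template's ng-repeat over connectors:

```javascript
// Representative node data-model (connector array field names are assumed).
var exampleNodeDataModel = {
    id: 0,
    name: "Example Node",
    x: 0,
    y: 0,
    inputConnectors: [
        { name: "A" },
        { name: "B" },
    ],
    outputConnectors: [
        { name: "C" },
    ],
};
```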
Here is the definition of a single connection:
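Again as a sketch: the source/dest objects with nodeID fields are taken from the deleteSelected code shown later, while the connectorIndex field name is an assumption based on the index-referencing scheme described below:

```javascript
// Representative connection data-model: each end references a node by ID
// and a connector on that node by index.
var exampleConnectionDataModel = {
    source: {
        nodeID: 0,
        connectorIndex: 1, // second connector of node 0 (field name assumed)
    },
    dest: {
        nodeID: 1,
        connectorIndex: 0, // first connector of node 1
    },
};
```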
Connections in the data-model reference their attached nodes by IDs. Connectors are referenced by
index. An alternative approach would be to drop the node reference and
reference only the connector by an ID that is unique for each connector
in the flowchart.
This section examines the implementation of the flowchart directive, controller, view-model and template.
An AngularJS directive is registered with the name flow-chart. When AngularJS bootstraps and encounters the flow-chart element in the DOM it automatically instantiates the directive. The directive then specifies a template and this replaces the flow-chart tag in the HTML. The directive also specifies the controller and dictates the setup of its scope.
The flowchart directive controls and coordinates the other components as shown in the following diagram:
The flowchart directive and controller are defined in flowchart_directive.js under the flowchart directory. The first line defines the AngularJS module:
angular.module('flowChart', ['dragging'] )
The module depends on the dragging module, which provides mouse handling services.
This module actually contains two AngularJS directives:
.directive('flowChart', function() {
// ...
})
.directive('chartJsonEdit', function () {
// ...
})
The flowChart directive specifies the SVG template and the flowchart controller. We will look at this in detail in the next section.
The chartJsonEdit directive is a helper that allows us to see and edit the flowchart's JSON
representation alongside the visual SVG representation. This is mostly
for testing, debugging and helping to understand how the flowchart works;
you probably won't use it in production, but I have left it in as it provides a good example of how two
views can display the same view-model, which we'll look into in more detail later.
After the two directives, the flowchart controller takes up the majority of this file:
.controller('FlowChartController', ['$scope', 'dragging', '$element',
function FlowChartController ($scope, dragging, $element) {
// ...
}
])
;
In the coming sections we will cover each of the flowchart components in detail.
Using a directive to implement the flowchart is essentially making it into a reusable control. The entire directive is small and self-contained:
.directive('flowChart', function() {
return {
restrict: 'E',
templateUrl: "flowchart/flowchart_template.html",
replace: true,
scope: {
chart: "=chart",
},
controller: 'FlowChartController',
};
})
The directive is restricted to use as a HTML element:
restrict: 'E'
This effectively creates a new HTML element; such is the power of
AngularJS that you can extend HTML with your own elements and attributes.
Other restriction codes can be applied here, for example, restricting to use as a HTML
attribute (effectively creating a new HTML attribute):
restrict: 'A'
The next two lines specify the flowchart's template and that it should replace the flow-chart element:
templateUrl: "flowchart/flowchart_template.html",
replace: true,
This causes the template to be injected into the DOM in place of the flowchart element:
Next, an isolated scope is setup:
scope: {
chart: "=chart",
},
This has the effect of creating a new child scope
for the directive that is independent of the application's scope.
Normally, creation of a new scope (say by a sub-controller) results in a child scope being nested under the parent scope.
The child scope is linked to the parent via the prototypal inheritance chain, therefore the fields and functions of the parent are available via the child and may even be overridden by the child.
An isolated scope breaks this connection, which is important for a reusable control like the flowchart as we don't want the two scopes interfering with each other.
Note the line:
chart: "=chart",
This causes the chart attribute of the HTML element to be data-bound to the chart
variable in the scope. In this way we connect the chart's
view-model from the application scope to the flowchart scope in a declarative manner.
The last part of the directive links it to the controller:
controller: "FlowChartController",
AngularJS creates the controller by name when the directive is instantiated.
Most examples of directives you see in the wild have a link function. In this case I use a controller instead of a link function to contain the directive's UI logic; I'll soon explain why.
The other directive defined in the same file is chartJsonEdit, which displays the flowchart's data-model as editable JSON text. This is really just a helper and not a crucial flowchart component.
I use it for debugging and testing and it can also be useful to
understand how things work generally. I include it here mainly because
it is interesting to see how two separate views (if we consider the directives as views) can display the same view-model and stay synchronized.
.directive('chartJsonEdit', function () {
return {
restrict: 'A',
scope: {
viewModel: "="
},
link: function (scope, elem, attr) {
//
// Serialize the data model as json and update the textarea.
//
var updateJson = function () {
if (scope.viewModel) {
var json = JSON.stringify(scope.viewModel.data, null, 4);
$(elem).val(json);
}
};
//
// First up, set the initial value of the textarea.
//
updateJson();
//
// Watch for changes in the data model and update the textarea whenever necessary.
//
scope.$watch("viewModel.data", updateJson, true);
//
// Handle the change event from the textarea and update the data model
// from the modified json.
//
$(elem).bind("input propertychange", function () {
var json = $(elem).val();
var dataModel = JSON.parse(json);
scope.viewModel = new flowchart.ChartViewModel(dataModel);
scope.$digest();
});
}
};
})
The purpose of the controller is to provide the input event handlers that are bound to the DOM by the template. Event handling is then generally routed to the view-model. As the UI logic is
delegated to the view-model, the controller's job is simply to translate input
events into view-model operations. This job could easily have been done by the directive's link function, however separating the UI logic out to the controller
has made it much easier to unit-test as the controller
can be instantiated without a DOM.
The controller is registered in flowchart_directive.js after the two directives and takes up most of the file. The controller itself is a Javascript constructor function registered via the flowchart module's controller function, as shown above.
The controller is registered with the name FlowChartController, which is the name used to reference the controller from the directive:
The controller parameters are automatically created and dependency injected by AngularJS when the controller is instantiated. As we saw with the application
controller, the names of the parameters are specified twice. If we
didn't need minification
we could get by with the names specified only once, as the names of the parameters themselves.
$scope is the directive's isolated scope, containing a chart field that is the view-model that has been transferred
over from the application's scope.
dragging is a custom service that helps with mouse handling, which is so interesting it gets its own section.
$element is the HTML element that the controller is attached to. This parameter is easily mocked for unit-testing, which allows testing of the controller without actually instantiating the DOM.
In the first line of the controller we cache the this variable as a local variable named controller:
var controller = this;
This is the same as Javascript's usual var that = this idiom and is required so that the this variable, i.e. the flowchart controller, can be accessed from anonymous callback functions.
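A minimal illustration of why the captured reference is needed (this example is mine, not taken from the flowchart code):

```javascript
// Inside a plain callback, 'this' no longer refers to the controller,
// so the captured 'controller' variable is used instead.
function ExampleController() {
    var controller = this;
    this.invocations = 0;
    this.makeCallback = function () {
        return function () {
            // 'this' would be wrong here; the closure variable is reliable.
            controller.invocations += 1;
        };
    };
}

var example = new ExampleController();
var callback = example.makeCallback();
callback();
```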
Next we cache a reference to the document and jQuery:
this.document = document;
this.jQuery = function (element) {
return $(element);
}
This enables unit-testing as document and jQuery are easily replaced by mock objects.
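For illustration, a unit test could substitute trivial stand-ins for these seams; the mock shapes below are hypothetical, just enough surface area for the logic under test:

```javascript
// Hypothetical test doubles for the controller's document/jQuery seams.
var mockDocument = {
    elementFromPoint: function (x, y) {
        return null; // pretend nothing is under the mouse
    },
};
var mockJQuery = function (element) {
    return {
        attr: function (name) { return null; },
    };
};

// A controller under test could have these assigned in place of the
// real browser objects.
var controllerUnderTest = {
    document: mockDocument,
    jQuery: mockJQuery,
};
```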
Next we setup the scope variables, followed by a
number of the controller's functions. Then event handlers, such as mouseDown, are assigned to the scope
to be referenced from the template.
That's all the detail on the controller for now; there is still a lot to cover and we'll deal with it
piece by piece in the coming sections.
The template defines the SVG that makes up the flowchart visuals. It is entirely self-contained with no sub-templates. Sub-templates are of course possible with AngularJS (and usually desirable), but they can cause problems with SVG.
The template generates the UI from the view-model and determines how
DOM events are bound to functions in the scope.
The template can be found in flowchart_template.html. After understanding the flowchart directive we know that the template's content completely replaces the flow-chart element in index.html.
The entire template is wrapped in a single root SVG element:
<svg
class="draggable-container"
xmlns=""
ng-
<defs>
<!-- ... -->
</defs>
<!-- ... content ... -->
</svg>
Mouse handling is performed at multiple levels in the DOM. Mouse down and mouse move
are handled on the SVG element to implement drag selection and mouse
over. Other examples of mouse handling can be found throughout the
template as it underpins multiple features, such as selection of nodes and connections, dragging of
nodes and dragging of connections.
The defs element defines a single reusable SVG linearGradient that
is used to fill the background of the nodes. The remainder of the
template is the content that displays the nodes, connectors and
connections. Near the end of the template graphics are defined for the dragging connection (the connection the user is dragging out) and the drag selection rectangle.
The view-model closely wraps the data-model and represents it to the
view. It provides UI logic and coordinates operations on
the data. The view-model can be found in flowchart_viewmodel.js.
So really, why have a view-model at all?
It's true that all the flowchart code could live in the flowchart
controller, or even in the flowchart directive. We already know that
the flowchart controller is separate for ease
of unit-testing. Separating the view-model also helps
unit-testing, as well as improving modularity and simplifying the code.
However, the primary reason for separation of the view-model is that it
allows the application code to interact directly with
the view-model, which is much more convenient than interacting with the directive or controller.
Simply put, the application owns the view-model which it passes to the directive/controller.
The application is then free to directly manipulate the view-model
and the application code doesn't interface at all with the directive
or controller.
All of the constructor functions take, at the least, the data-model to be wrapped as a parameter. In the simplest case the data-model can be an empty object:
var chartDataModel = {};
var chartViewModel = new flowchart.ChartViewModel(chartDataModel);
When the data-model is empty, the view-model will flesh it out as necessary.
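A sketch of what that fleshing-out plausibly looks like (this is not the actual ChartViewModel source, just the pattern it implies):

```javascript
// Ensure the arrays the view-model relies on exist before wrapping them.
function ChartViewModelSketch(chartDataModel) {
    chartDataModel.nodes = chartDataModel.nodes || [];
    chartDataModel.connections = chartDataModel.connections || [];
    this.data = chartDataModel;
}

// An empty data-model gets its nodes/connections arrays filled in.
var viewModel = new ChartViewModelSketch({});
```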
A view-model can also be created from a fully or partially complete data-model,
for example one that is AJAX'd as JSON:
var chartDataModel = {
nodes: [
// ...
],
connections: [
// ...
]
};
var chartViewModel = new flowchart.ChartViewModel(chartDataModel);
View-models for each node are created in a similar way:
var nodeViewModel = new flowchart.NodeViewModel(nodeDataModel);
Connectors are a bit different, the x, y coordinates of the
connector are computed and passed in, along with a reference to the
view-model of the parent node:
var connectorViewModel = new flowchart.ConnectorViewModel(connectorDataModel, computedX, computedY, parentNodeViewModel);
Connections are different again and given references to the view-models for the source and dest connectors they are attached to:
var connectionViewModel = new flowchart.ConnectionViewModel(connectionDataModel, sourceConnectorViewModel, destConnectorViewModel);
The following diagram illustrates how the view-model wraps the data-model:
In summary, the flowchart view-model wraps up numerous functions for
manipulating and presenting the flowchart, including selection, drag
selection, deleting nodes and connections and creating new connections.
TDD and the unit-tests have kept this project alive and kicking
from the start. The unit tests really came into their own and saved the day when it was
time to make my code run on multiple browsers (arguably I should have
been doing this from the beginning, but I'm pretty new to the
cross-browser stuff).
As is standard, unit-test files have the same name as the source file under test, but with .spec on the end. For example the unit-tests for flowchart_viewmodel.js are in flowchart_viewmodel.spec.js.
Jasmine is a fantastic testing framework. Along with the code I have included the Jasmine spec runner, the HTML page that runs the tests. It is under the jasmine directory. When you have the web server running you can point your browser at to run the unit-tests.
In this section I discuss each element of the flowchart and what is required to represent it in the UI.
To render a collection of things, eg flowchart nodes, we use AngularJS's ng-repeat. Here it is used to render all of the nodes in the view-model:
<g
    ng-repeat="node in chart.nodes"
    ng-mousedown="nodeMouseDown($event, node)"
    ng-attr-transform="translate({{node.x()}}, {{node.y()}})"
>
    <!-- ... node content ... -->
</g>
ng-repeat causes the SVG g element to be expanded out and repeated once for each node. The repetition is driven by the array of nodes supplied by the view-model: chart.nodes. At each repetition a variable node is defined that references the view-model for the node.
ng-mousedown binds the mouse down event for nodes to the controller's nodeMouseDown which contains the logic to be invoked when the mouse is pressed on a node, the node itself is passed through as a parameter.
ng-attr-transform sets the SVG transform attribute to a translation that positions the node according to x, y coordinates from the view-model.
ng-attr-<attribute-name> is a new AngularJS feature that sets a given HTML or SVG attribute after evaluating an AngularJS expression.
This feature is so new that there doesn't appear to be any
documentation for it yet, although you will find a mention of it
(specifically related to SVG) in the directive documentation. I'll talk more about the need for ng-attr- in the section Problems with SVG, meanwhile we will see it used throughout the template.
The background of each node is an SVG rect:
<rect
    ng-attr-class="{{node.selected() && 'selected-node-rect' || (node == mouseOverNode && 'mouseover-node-rect' || 'node-rect')}}"
    ng-attr-width="{{node.width()}}"
    ng-attr-height="{{node.height()}}"
    fill="url(#nodeBackgroundGradient)"
>
</rect>
ng-attr-class conditionally sets the SVG class
depending on whether the node is selected, unselected or whether the
mouse is hovered over the node. Other methods of setting the SVG class (via jQuery/AngularJS) that normally work for the HTML class don't work so well, as I will describe later.
ng-attr-width and -height set the width and height of the rect.
fill sets the fill of the rect to nodeBackgroundGradient which was defined early in the defs section of the SVG.
Next an SVG text displays the node's name:
<text
    ng-attr-x="{{node.width() / 2}}"
    text-anchor="middle"
>
    {{node.name()}}
</text>
The text is centered horizontally by
anchoring it to the middle of the node. The example here of ng-attr-x really starts to show the power of AngularJS expressions.
Here we are doing a computation within the expression to determine the
horizontal center point of the node, the result of the expression sets
the x coordinate of the text.
After the text we see two separate sections that display the node's
input and output connectors. Before we look deeper into the visuals for
connectors let's have an overview of how the rendered node relates to
its SVG template.
The ng-repeat:
Node background and name:
Input and output connectors are roughly the same and so I will only discuss input connectors and point out the differences.
Here again is a use of ng-repeat to generate multiple SVG elements:
<g
    ng-repeat="connector in node.inputConnectors"
    class="connector input-connector"
>
    <!-- ... connector content ... -->
</g>
This looks very similar to the SVG for a node having an ng-repeat and a handler for mouse down. This time a static class is applied to the SVG g element that defines it as both a connector and an input-connector. If it were an output connector it would instead have the output-connector class applied.
Each connector is made from two elements. The first is a text element to display the name:
<text
ng-
{{connector.name()}}
</text>
The only difference between the input and output connectors is the expression assigned to the x coordinate. An input connector is on the left of the node and so it is offset slightly to the right. An output connector is on the opposite side and therefore it is offset to the left.
The second element is a circle shape that represents the connection anchor point, this is an SVG circle positioned at the connector's coordinates:
<circle
ng-
ng-attr-class is used to conditionally set the class of the connector depending on whether the mouse is hovered over it. The other attributes set the position and size of the circle.
The following diagram shows how the rendered connectors relate to the SVG template. First the ng-repeat:
And the content of each connector:
Connections are composed of a curved SVG path with SVG circles attached at each end.
Multiple connections are displayed using the now familiar ng-repeat:
<g
    ng-repeat="connection in chart.connections"
>
    <!-- ... connection content ... -->
</g>
The coordinates for the curved path are computed by the view-model:
<path
ng-attr-class="{{connection.selected() && 'selected-connection-line' || (connection == mouseOverConnection && 'mouseover-connection-line' || 'connection-line')}}"
ng-attr-d="M {{connection.sourceCoordX()}}, {{connection.sourceCoordY()}}
C {{connection.sourceTangentX()}}, {{connection.sourceTangentY()}}
{{connection.destTangentX()}}, {{connection.destTangentY()}}
{{connection.destCoordX()}}, {{connection.destCoordY()}}"
>
</path>
Each end of the connection is capped with a small filled circle.
The source and dest ends look much the same, so let's look at the
source end only:
<circle
ng-
</circle>
Now some diagrams to understand the relationship between the rendered connections and the template.
The content of a connection:
In this section I will cover the implementation of a
number of UI features. The discussion will cross-cut through
application, directive, controller, view-model and template to examine
the workings of each feature.
Nodes and connections can be in either the selected or unselected state. A single left-click selects a node or connection. A click on the background deselects all. Control + click enables multiple selection.
Supporting selection is a major reason for individually wrapping the
data-models for nodes and connections in view-models. These view-models
at their simplest have a _selected boolean field that
stores the current selection state. This value must be stored in
the view-model and not in the data-model, to do otherwise would
unnecessarily pollute the data-model and make it less reusable with
different types of views.
The view-models for nodes and connections, NodeViewModel and ConnectionViewModel, both have a simple API for managing selection, consisting of functions such as select, toggleSelected and selected.
ChartViewModel has a selection API for managing chart selection as a whole, including deselectAll, getSelectedNodes, applySelectionRect and deleteSelected.
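The per-item selection API can be sketched like this; the _selected field is named in the text above and the function names appear throughout the article, but the bodies here are my own minimal versions:

```javascript
// Selection state lives in the view-model, not the data-model.
function SelectableViewModel(dataModel) {
    this.data = dataModel;
    this._selected = false;
}

SelectableViewModel.prototype.select = function () {
    this._selected = true;
};

SelectableViewModel.prototype.deselect = function () {
    this._selected = false;
};

SelectableViewModel.prototype.toggleSelected = function () {
    this._selected = !this._selected;
};

SelectableViewModel.prototype.selected = function () {
    return this._selected;
};
```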
The visuals for nodes and connections are modified dynamically according to their selection state. ng-attr-class completely switches classes depending on the result of a call to selected(), for example, setting the class of a node:
<rect
ng-attr-class="{{node.selected() && 'selected-node-rect' || (node == mouseOverNode && 'mouseover-node-rect' || 'node-rect')}}"
...
>
</rect>
Of course the expression is more complicated because we are also setting the class based on the mouse-over state. If you are new to Javascript I should note that the kind of expression used above acts like the ternary operator.
When node.selected() returns true the class of the SVG rect is set to selected-node-rect,
a class defined in app.css, and modifies the node's visual to indicate that it is selected.
The same technique is also used to conditionally set the class of connections.
Nodes and connections can also be selected by dragging out a selection rectangle to contain the items
to be selected:
Drag selection is handled at multiple levels. Ultimately, the final action during drag selection is to select the nodes and connections that are contained within the drag selection rect. The coordinates and size of the rect are passed to applySelectionRect, which enumerates the nodes and connections and selects those that fall within the rect.
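The node-selection half of this can be sketched with a plain rectangle-intersection test; the intersection logic and the node bounds fields below are assumptions, not the article's exact implementation:

```javascript
// Axis-aligned rectangle intersection test.
function rectsIntersect(a, b) {
    return a.x < b.x + b.width && a.x + a.width > b.x &&
           a.y < b.y + b.height && a.y + a.height > b.y;
}

// Select every node whose bounding box intersects the selection rect;
// nodes outside the rect are deselected in the same pass.
function applySelectionRectSketch(nodes, selectionRect) {
    nodes.forEach(function (node) {
        node.selected = rectsIntersect(selectionRect, node);
    });
}
```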
The flowchart controller receives mouse events and coordinates the dragging operation. Mouse down is the event we are interested in here which is handled by mouseDown in the controller:
<svg
    class="draggable-container"
    xmlns=""
    ng-mousedown="mouseDown($event)"
>
    <!-- ... -->
</svg>
Looking into mouseDown we see the first use of the dragging service. This
is a custom service I have created to help manage dragging operations
in AngularJS. Over the next few sections we'll see multiple examples of
it and later we'll look at the implementation. The
dragging service is dependency injected as the dragging parameter to the controller and this allows us to use the service anywhere within the controller.
The first thing to note about mouseDown is that it is attached to $scope and this makes it available for binding in the HTML:
$scope.mouseDown = function (evt) {
// ...
};
mouseDown's first task is to ensure nothing is selected. This means that any mouse down
in the flowchart deselects everything. This is exactly the behavior
we want when clicking in the background of the flowchart:
$scope.mouseDown = function (evt) {
$scope.chart.deselectAll();
// ...
};
After deselecting all, startDrag is called on the dragging service to commence the dragging operation:
$scope.mouseDown = function (evt) {
// ... deselect all ...
dragging.startDrag(evt, {
// ...
});
};
The dragging operation will continue until a mouse up is detected, in this case a mouse up on the root SVG element. Note though that we don't handle mouse up explicitly; it is handled automatically by the dragging service. The draggable-container class on the SVG element identifies it as the element within which dragging will be contained.
Multiple event handlers (or callbacks) are passed as parameters and are invoked at key points in the dragging operation:
dragging.startDrag(evt, {
dragStarted: function (x, y) {
// ...
},
dragging: function (x, y) {
// ...
},
dragEnded: function () {
// ...
},
});
dragStarted sets up scope variables that track the state of the dragging operation:
dragging.startDrag(evt, {
dragStarted: function (x, y) {
$scope.dragSelecting = true;
var startPoint = controller.translateCoordinates(x, y);
$scope.dragSelectionStartPoint = startPoint;
$scope.dragSelectionRect = {
x: startPoint.x,
y: startPoint.y,
width: 0,
height: 0,
};
},
dragging: // ...
dragEnded: // ...
});
dragSelectionRect tracks the coordinates and size of the
selection rectangle and is needed to visually display the selection rect.
dragging is invoked on each mouse movement during the dragging operation. It continuously updates dragSelectionRect as the rect is dragged by the user:
dragging.startDrag(evt, {
dragStarted: // ...
dragging: function (deltaX, deltaY, x, y) {
var startPoint = $scope.dragSelectionStartPoint;
var curPoint = controller.translateCoordinates(x, y);
$scope.dragSelectionRect = {
                x: curPoint.x > startPoint.x ? startPoint.x : curPoint.x,
                y: curPoint.y > startPoint.y ? startPoint.y : curPoint.y,
                width: curPoint.x > startPoint.x ? curPoint.x - startPoint.x : startPoint.x - curPoint.x,
                height: curPoint.y > startPoint.y ? curPoint.y - startPoint.y : startPoint.y - curPoint.y,
};
},
dragEnded: // ...
});
Eventually the drag operation completes and dragEnded is invoked. This calls into the view-model to apply the selection rect and then deletes the scope variables that were used to track the selection rectangle:
dragging.startDrag(evt, {
dragStarted: // ...
dragging: // ...
dragEnded: function () {
$scope.dragSelecting = false;
$scope.chart.applySelectionRect($scope.dragSelectionRect);
delete $scope.dragSelectionStartPoint;
delete $scope.dragSelectionRect;
},
});
The selection rect itself is displayed as a simple SVG rect:
<rect
    ng-if="dragSelecting"
    ng-attr-x="{{dragSelectionRect.x}}"
    ng-attr-y="{{dragSelectionRect.y}}"
    ng-attr-width="{{dragSelectionRect.width}}"
    ng-attr-height="{{dragSelectionRect.height}}"
>
</rect>
The rect only needs to be shown when the user is actually dragging, so it is conditionally enabled using an ng-if that is bound to the dragSelecting variable. If you look back at dragStarted and dragEnded you will see that this variable is set to true during the dragging operation.
The rect is positioned by the ng-attr- attributes that set its coordinates and size.
Nodes can be dragged by clicking anywhere on a node and dragging. Multiple selected nodes can be dragged at the same time.
Mouse down is handled for nodes and calls nodeMouseDown:
<g
    ng-repeat="node in chart.nodes"
    ng-mousedown="nodeMouseDown($event, node)"
>
    <!-- ... -->
</g>
nodeMouseDown uses the dragging service to coordinate the dragging of nodes:
$scope.nodeMouseDown = function (evt, node) {
// ...
dragging.startDrag(evt, {
dragStarted: // ...
dragging: // ...
clicked: // ...
});
};
As we have already seen, a number of event handlers (or callbacks) are passed to startDrag which are invoked during the dragging operation.
dragStarted is invoked when dragging commences.
dragStarted: function (x, y) {
lastMouseCoords = controller.translateCoordinates(x, y);
if (!node.selected()) {
chart.deselectAll();
node.select();
}
},
When dragging a selected node all selected
nodes are also dragged and the selection is not changed. However when
dragging a node that is not already selected, only that node is
selected and dragged.
dragging is invoked repeatedly during the dragging operation. It computes delta mouse coordinates and calls into the view-model to update the positions of the selected nodes.
dragging: function (x, y) {
var curCoords = controller.translateCoordinates(x, y);
var deltaX = curCoords.x - lastMouseCoords.x;
var deltaY = curCoords.y - lastMouseCoords.y;
chart.updateSelectedNodesLocation(deltaX, deltaY);
lastMouseCoords = curCoords;
},
updateSelectedNodesLocation is the view-model function that
updates the positions of the nodes being dragged. It is trivial, simply
enumerating selected nodes and directly updating their coordinates:
this.updateSelectedNodesLocation = function (deltaX, deltaY) {
var selectedNodes = this.getSelectedNodes();
for (var i = 0; i < selectedNodes.length; ++i) {
var node = selectedNodes[i];
node.data.x += deltaX;
node.data.y += deltaY;
}
};
There is no need to handle dragEnded in this circumstance, so it is omitted and ignored by the dragging service.
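Internally, a dragging service like this typically treats a mouse down as a click unless the pointer moves beyond a small threshold before mouse up. The sketch below is my own illustration of that distinction; the threshold value and function are assumptions, not the service's actual code:

```javascript
// Classify a gesture: movement beyond the threshold makes it a drag,
// otherwise releasing the button counts as a click.
var dragThreshold = 5; // pixels (assumed value)

function classifyGesture(startX, startY, endX, endY) {
    var movedX = Math.abs(endX - startX);
    var movedY = Math.abs(endY - startY);
    return (movedX > dragThreshold || movedY > dragThreshold) ? "drag" : "click";
}
```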
The clicked callback is new: it is invoked when the mouse down results in a click rather than a drag operation. In this case we delegate to the view-model:
clicked: function () {
chart.handleNodeClicked(node, evt.ctrlKey);
},
handleNodeClicked either toggles the selection (when control is pressed) or deselects all and then only selects the clicked node:
this.handleNodeClicked = function (node, ctrlKey) {
if (ctrlKey) {
node.toggleSelected();
}
else {
this.deselectAll();
node.select();
}
var nodeIndex = this.nodes.indexOf(node);
if (nodeIndex == -1) {
throw new Error("Failed to find node in view model!");
}
this.nodes.splice(nodeIndex, 1);
this.nodes.push(node);
};
Notice the code at the end: it changes the order of nodes after each click. The node that was clicked is moved to the
end of the list. As the list of nodes drives an ng-repeat, as seen earlier, it actually controls the render order
of the nodes, usually known as Z order. This means that clicked nodes are always brought to the front.
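That reordering step, taken in isolation from handleNodeClicked:

```javascript
// Move the clicked node to the end of the array; since ng-repeat renders
// in array order, the last node draws on top (highest Z order).
function bringToFront(nodes, node) {
    var nodeIndex = nodes.indexOf(node);
    if (nodeIndex === -1) {
        throw new Error("Failed to find node in view model!");
    }
    nodes.splice(nodeIndex, 1);
    nodes.push(node);
}
```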
The UI for adding nodes to the flowchart is simple enough; I didn't spend much time on it. It is simply a button
in index.html:
<button
    ng-click="addNewNode()"
>
    Add Node
</button>
The ng-click binds the click event to the
addNewNode function. Clicking the button calls this function that is defined in app.js:
$scope.addNewNode = function () {
var nodeName = prompt("Enter a node name:", "New node");
if (!nodeName) {
return;
}
var newNodeDataModel = {
// ... define node data-model ...
};
$scope.chartViewModel.addNode(newNodeDataModel);
};
The function first prompts the user to enter a name for the new node. This makes use of the prompt service which is defined in the same file and is an abstraction over the browser's prompt
function. Next the data-model for the new node is setup, this is pretty
much the same as the chart's initial data-model. Finally addNode is called to inject the new node into the chart's view-model.
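addNode plausibly keeps the data-model and view-model lists in sync; a sketch, with NodeViewModelSketch standing in for the real NodeViewModel:

```javascript
// Wrap the new node's data-model and add it to both the data-model's
// and the view-model's node lists so the two stay synchronized.
function NodeViewModelSketch(nodeDataModel) {
    this.data = nodeDataModel;
    this._selected = false;
}

function addNodeSketch(chartViewModel, nodeDataModel) {
    chartViewModel.data.nodes.push(nodeDataModel);
    chartViewModel.nodes.push(new NodeViewModelSketch(nodeDataModel));
}

var chartSketch = { data: { nodes: [] }, nodes: [] };
addNodeSketch(chartSketch, { id: 0, name: "New node", x: 0, y: 0 });
```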
Adding connectors is very similar to adding nodes. There are buttons
for adding either an input or output connector. A function is
called on button click, the user enters a name and a data-model is created before adding the connector
to each selected node.
Nodes and connections are deleted through the same mechanism. You
select or multi-select what you want to delete then press the delete
key or click the Delete Selected button. Clicking the button calls the deleteSelected function, which in turn calls through to the view-model:
$scope.deleteSelected = function () {
$scope.chartViewModel.deleteSelected();
};
The delete key is handled for the body of the page using ng-keyup:
<body
    ng-keyup="keyUp($event)"
    ...
>
    <!-- ... -->
</body>
keyUp is called whenever a key is pressed, it checks the keycode for the delete key and it calls through to the view-model:
$scope.keyUp = function (evt) {
if (evt.keyCode === deleteKeyCode) {
//
// Delete key.
//
$scope.chartViewModel.deleteSelected();
}
// ....
};
This method of key event handling seems a bit ugly to me. I'm aware
that AngularJS plugins exist to bind hotkeys directly to scope
functions, but I didn't want to include any extra dependencies in this
project. If anyone knows a cleaner way of setting this up in AngularJS
please let me know and I'll update the article!
When the view-model's deleteSelected is called it follows a
few simple rules to determine which nodes and connections to delete and
which ones to keep, as illustrated in the following diagram:
deleteSelected has three main parts:
The first part:
this.deleteSelected = function () {
var newNodeViewModels = [];
var newNodeDataModels = [];
var deletedNodeIds = [];
for (var nodeIndex = 0; nodeIndex < this.nodes.length; ++nodeIndex) {
var node = this.nodes[nodeIndex];
if (!node.selected()) {
// Only retain non-selected nodes.
newNodeViewModels.push(node);
newNodeDataModels.push(node.data);
}
else {
// Keep track of nodes that were deleted, so their connections can also
// be deleted.
deletedNodeIds.push(node.data.id);
}
}
// ...
};
This code builds a new list that contains the nodes to be kept.
Nodes that are not selected are added to this list. A separate list is
built that contains the ids of nodes to be deleted. We hang onto the ids of deleted nodes in order to check
which connections are now defunct because an attached node has been
deleted.
And the second part:
this.deleteSelected = function () {
var newNodeViewModels = [];
var newNodeDataModels = [];
var deletedNodeIds = [];
// ... delete nodes ...
var newConnectionViewModels = [];
var newConnectionDataModels = [];
for (var connectionIndex = 0; connectionIndex < this.connections.length; ++connectionIndex) {
var connection = this.connections[connectionIndex];
if (!connection.selected() &&
deletedNodeIds.indexOf(connection.data.source.nodeID) === -1 &&
deletedNodeIds.indexOf(connection.data.dest.nodeID) === -1) {
//
            // The nodes this connection is attached to were not deleted,
            // so keep the connection.
//
newConnectionViewModels.push(connection);
newConnectionDataModels.push(connection.data);
}
}
// ...
};
The code for deleting connections is similar to that for deleting
nodes. Again we build a list of connections to be kept. In this case we
are deleting connections not only when they are selected, but also when
the attached node was just deleted.
The third part is the simplest, it updates the view-model and the data-model from the lists that were just built:
this.deleteSelected = function () {
// ... delete nodes ...
// ... delete connections ...
this.nodes = newNodeViewModels;
this.data.nodes = newNodeDataModels;
this.connections = newConnectionViewModels;
this.data.connections = newConnectionDataModels;
};
I have implemented mouse over support so that items in the
flowchart are highlighted when the mouse hovers over them. It is
interesting to look at this in more detail as I was unable to achieve
it using AngularJS's event handling (eg ng-mouseenter and ng-mouseleave). Instead I had to implement SVG hit-testing manually in order to determine the element under the mouse cursor.
The mouse-over feature isn't just cosmetic, it is necessary for
connection dragging to know which connector a new connection is being
dropped on.
The root SVG element binds ng-mousemove
to the mouseMove function:
<svg
    ng-mousemove="mouseMove($event)"
    >
    <!-- ... -->
</svg>
This enables mouse movement tracking for the entire SVG canvas.
mouseMove first clears the mouse over elements that might have been cached in the previous invocation:
$scope.mouseMove = function (evt) {
$scope.mouseOverConnection = null;
$scope.mouseOverConnector = null;
$scope.mouseOverNode = null;
// ...
};
Next is the actual hit-test:
$scope.mouseMove = function (evt) {
// ... clear cached elements ...
var mouseOverElement = controller.hitTest(evt.clientX, evt.clientY);
if (mouseOverElement == null) {
// Mouse isn't over anything, just clear all.
return;
}
// ...
};
Hit-testing is invoked after each mouse movement to determine the SVG element currently under the mouse cursor. When no
SVG element is under the mouse, because nothing was hit, mouseMove returns straight away because it has nothing more to do. When this happens the
cached elements have already been cleared so the current state of the
controller records that nothing was hit.
Next, various checks are made to determine what kind of element is under
the mouse, so that the element (if it turns out to be a connection,
connector or node) can be cached in the appropriate variable. Checking for connection mouse over is necessary only when connection dragging is not currently in progress, so connection hit-testing must be conditionally enabled:
$scope.mouseMove = function (evt) {
// ...
if (!$scope.draggingConnection) { // Only allow 'connection mouse over' when not dragging out a connection.
// Figure out if the mouse is over a connection.
var scope = controller.checkForHit(mouseOverElement, controller.connectionClass);
$scope.mouseOverConnection = (scope && scope.connection) ? scope.connection : null;
if ($scope.mouseOverConnection) {
// Don't attempt to mouse over anything else.
return;
}
}
// ...
};
After connection hit-testing is connector hit-testing, followed by node hit-testing:
$scope.mouseMove = function (evt) {
// ...
// Figure out if the mouse is over a connector.
var scope = controller.checkForHit(mouseOverElement, controller.connectorClass);
$scope.mouseOverConnector = (scope && scope.connector) ? scope.connector : null;
if ($scope.mouseOverConnector) {
// Don't attempt to mouse over anything else.
return;
}
// Figure out if the mouse is over a node.
var scope = controller.checkForHit(mouseOverElement, controller.nodeClass);
$scope.mouseOverNode = (scope && scope.node) ? scope.node : null;
};
The mouse over element is cached in one of three variables: mouseOverConnection, mouseOverConnector or mouseOverNode. Each of these are scope variables and referenced from the SVG to conditionally enable a special class on mouse over to make the connection, connector or node look different when the mouse is hovered over it.
ng-attr-class conditionally sets the class of the SVG
element depending on the mouse-over state (and also the selection-state):
ng-attr-class="{{connection.selected() && 'selected-connection-line' || (connection == mouseOverConnection && 'mouseover-connection-line' || 'connection-line')}}"
This convoluted expression sets the class to selected-connection-line when the connection is selected, to mouseover-connection-line when the mouse is hovered over it or to connection-line when neither of these conditions is true.
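If this expression ever grows unwieldy, the logic can be lifted into a helper on the scope, reducing the binding to something like ng-attr-class="{{connectionClass(connection)}}". The helper below is my own sketch, not part of the article's code; it just restates the inline expression:

```javascript
// Hypothetical helper: pick the CSS class for a connection based on its
// selection and mouse-over state (mirrors the inline template expression).
function connectionClass(connection, mouseOverConnection) {
    if (connection.selected()) {
        return 'selected-connection-line';
    }
    if (connection === mouseOverConnection) {
        return 'mouseover-connection-line';
    }
    return 'connection-line';
}
```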
mouseMove relies on the functions hitTest and checkForHit to do its dirty work. hitTest simply calls elementFromPoint to determine the element under the specified coordinates:
this.hitTest = function (clientX, clientY) {
return this.document.elementFromPoint(clientX, clientY);
};
checkForHit invokes searchUp which recursively searches up the DOM for the element that has one of the following classes: connection, connector or node. In this way we can find the SVG element that relates most directly to the flowchart component we are hit-testing against.
this.searchUp = function (element, parentClass) {
//
// Reached the root.
//
if (element == null || element.length == 0) {
return null;
}
//
// Check if the element has the class that identifies it as a connector.
//
if (hasClassSVG(element, parentClass)) {
//
// Found the connector element.
//
return element;
}
//
// Recursively search parent elements.
//
return this.searchUp(element.parent(), parentClass);
};
searchUp relies on the custom function hasClassSVG to check
the class of the element. jQuery would normally be used to check the
class of an HTML element, but unfortunately it doesn't work correctly
for SVG elements. I discuss this more in Problems with SVG.
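As a rough idea of what such a custom function involves, the sketch below reads the class attribute directly instead of relying on jQuery's hasClass. This is my own minimal version, assuming a jQuery-like attr method on the wrapped element:

```javascript
// Minimal sketch: jQuery's hasClass doesn't work on SVG elements (an SVG
// element's className is an SVGAnimatedString, not a string), so read the
// 'class' attribute and check it by hand.
function hasClassSVG(obj, cls) {
    var classes = obj.attr('class');
    if (!classes) {
        return false;
    }
    return classes.split(' ').indexOf(cls) !== -1;
}
```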
Both hitTest and checkForHit are implemented as separate functions so they are easily replaced with mocks in the unit-tests.
Connections are created by dragging out a connector, creating a
connection that can be dragged about by the user. Creation of the new connection is completed when its end-point has
been dragged over to another connector and it is committed to the view-model. When a connection is being dragged it is represented by an SVG
visual that is separate to the other connections in the flowchart.
ng-if conditionally displays the visual when
draggingConnection is set to true:
<g ng-if="draggingConnection">
    <path
        class="dragging-connection dragging-connection-line"
        ng-attr-d="M {{dragPoint1.x}}, {{dragPoint1.y}}
                   C {{dragTangent1.x}}, {{dragTangent1.y}}
                     {{dragTangent2.x}}, {{dragTangent2.y}}
                     {{dragPoint2.x}}, {{dragPoint2.y}}"
        >
    </path>
    <circle
        class="dragging-connection dragging-connection-endpoint"
        r="4"
        ng-attr-cx="{{dragPoint1.x}}"
        ng-attr-cy="{{dragPoint1.y}}"
        >
    </circle>
    <circle
        class="dragging-connection dragging-connection-endpoint"
        r="4"
        ng-attr-cx="{{dragPoint2.x}}"
        ng-attr-cy="{{dragPoint2.y}}"
        >
    </circle>
</g>
The end-points and curve of the connection are defined by the following variables: dragPoint1, dragPoint2, dragTangent1 and dragTangent2.
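To make the template binding concrete: the d attribute produced by AngularJS is a standard SVG path that moves to the first end-point and draws a cubic bezier curve through the two tangent (control) points. A small helper of my own, purely illustrative (the article builds the string directly in the template), shows the mapping:

```javascript
// Compose the SVG path data for a connection: move to the first end-point,
// then draw a cubic bezier curve through the two tangent (control) points
// to the second end-point.
function connectionPath(p1, t1, t2, p2) {
    return 'M ' + p1.x + ', ' + p1.y +
           ' C ' + t1.x + ', ' + t1.y +
           ' ' + t2.x + ', ' + t2.y +
           ' ' + p2.x + ', ' + p2.y;
}
```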
Connection dragging is initiated by a mouse down on a connector. The mouse down event is bound to connectorMouseDown:
<g
    ng-mousedown="connectorMouseDown($event, node, connector, $index, isInputConnector)"
    >
    <!-- ... connector ... -->
</g>
connectorMouseDown uses the dragging service to manage the dragging operation, something we have now seen multiple times:
$scope.connectorMouseDown = function (evt, node, connector, connectorIndex, isInputConnector) {
dragging.startDrag(evt, {
// ... handle dragging events ...
});
};
The end-points and tangents are computed when dragging commences:
dragStarted: function (x, y) {
var curCoords = controller.translateCoordinates(x, y);
$scope.draggingConnection = true;
$scope.dragPoint1 = flowchart.computeConnectorPos(node, connectorIndex, isInputConnector);
$scope.dragPoint2 = {
x: curCoords.x,
y: curCoords.y
};
$scope.dragTangent1 = flowchart.computeConnectionSourceTangent($scope.dragPoint1, $scope.dragPoint2);
$scope.dragTangent2 = flowchart.computeConnectionDestTangent($scope.dragPoint1, $scope.dragPoint2);
},
draggingConnection has been set to true enabling display of the SVG visual.
The first end-point is anchored to the connector that was dragged out.
The second end-point is anchored to the current position of the mouse cursor.
The connection's end-points and tangents are updated repeatedly during dragging:
dragging: function (x, y, evt) {
var startCoords = controller.translateCoordinates(x, y);
$scope.dragPoint1 = flowchart.computeConnectorPos(node, connectorIndex, isInputConnector);
$scope.dragPoint2 = {
x: startCoords.x,
y: startCoords.y
};
$scope.dragTangent1 = flowchart.computeConnectionSourceTangent($scope.dragPoint1, $scope.dragPoint2);
$scope.dragTangent2 = flowchart.computeConnectionDestTangent($scope.dragPoint1, $scope.dragPoint2);
},
Upon completion of the drag operation the new connection is committed to the flowchart:
dragEnded: function () {
if ($scope.mouseOverConnector &&
$scope.mouseOverConnector !== connector) {
$scope.chart.createNewConnection(connector, $scope.mouseOverConnector);
}
$scope.draggingConnection = false;
delete $scope.dragPoint1;
delete $scope.dragTangent1;
delete $scope.dragPoint2;
delete $scope.dragTangent2;
},
The scope variables that are no longer needed are deleted. draggingConnection is then set to false to disable rendering of the dragging connection visual.
Note the single validation rule: A connection cannot be created that loops back to the same connector. If this were production code it would likely have more validation rules or some way of adding user-defined rules.
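To illustrate what user-defined rules might look like, connection creation could be gated through a list of predicate functions. This is a design sketch of my own, not code from the flowchart:

```javascript
// Each rule is a predicate taking the source and destination connectors;
// a connection is allowed only if every rule passes.
function canCreateConnection(sourceConnector, destConnector, rules) {
    return rules.every(function (rule) {
        return rule(sourceConnector, destConnector);
    });
}

// The single rule the flowchart currently enforces: no self-loops.
function noSelfLoop(sourceConnector, destConnector) {
    return sourceConnector !== destConnector;
}
```

dragEnded could then consult canCreateConnection before calling createNewConnection, making new rules a one-line addition.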
If you are interested in the call to translateCoordinates, I'll explain that in Problems with SVG.
The flowchart relies on good handling of mouse input, so it was
really important to get that right. It is only the flowchart directive
that talks to the dragging service, the view-model has no knowledge of it. The dragging service in turn depends on the mouse capture service.
Dragging is necessary in many different applications and it is
surprisingly tricky to get right. Dragging code directly embedded
in UI code complicates things because you generally
have to manage the dragging operation as some kind of state machine.
This becomes more painful as different types of dragging
operations are required and complexity grows. There are Javascript libraries and plugins that already
do this kind of thing, however I wanted to make something
that worked well with HTML, SVG and AngularJS.
The flowchart directive makes use of the dragging directive in the
following ways and we have already examined how these work:
You can start to imagine, if the dragging wasn't in a
separate reusable library, how the flowchart directive
(though relatively simple) could get very complicated, having all
three dragging
operations handled directly. Event-driven programming
comes to our rescue, and Javascript has particularly good support for this
with its anonymous functions, which we use to define inline callbacks for
events.
startDrag must be called to initiate the dragging operation. This is intended to be called in response to a mouse down event. Anonymous functions to handle the dragging events are passed as parameters:
dragging.startDrag(evt, {
dragStarted: function (x, y) {
// ... event handler ...
},
dragging: function (x, y, evt) {
// ... event handler ...
},
dragEnded: function () {
// ... event handler ...
},
clicked: function () {
// ... event handler ...
},
});
dragStarted, dragging and dragEnded are invoked for key events during the dragging operation. clicked is invoked when a mouse down is followed by a mouse up but
no dragging has occurred (or
at least the mouse has not moved beyond a small threshold). This is considered to be
a mouse click rather than a mouse drag.
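The click-versus-drag distinction boils down to a movement-threshold test. A standalone version of that check might look like the sketch below. Note that my version uses Math.abs so leftward and upward movement also counts, whereas the service's actual code compares raw deltas:

```javascript
// Has the mouse moved far enough from where the button went down
// to be considered a drag rather than a click?
function movedBeyondThreshold(startX, startY, curX, curY, threshold) {
    return Math.abs(curX - startX) > threshold ||
           Math.abs(curY - startY) > threshold;
}
```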
The implementation of the service is in dragging_service.js. An AngularJS module is defined at the start:
angular.module('dragging', ['mouseCapture', ] )
The dragging module depends on the mouseCapture module. The rest of the file contains the definition of the service:
.factory('dragging', function ($rootScope, mouseCapture) {
// ...
return {
// ... functions exported by the service ...
};
})
;
The object returned by the factory function is the actual service. The service is registered under the name dragging so that AngularJS can instantiate the service when it needs to be dependency injected into the FlowChartController as the dragging parameter.
The service exports the single function startDrag which we have already used several times:
return {
startDrag: function (evt, config) {
// ...
},
};
The parameters to startDrag are the event object for the mouse down event and a configuration object containing the event handlers. startDrag captures the mouse for the duration of the dragging operation. The nested functions handle mouse events during the capture so that it may monitor the state of the mouse:
startDrag: function (evt, config) {
var dragging = false;
var x = evt.pageX;
var y = evt.pageY;
var mouseMove = function (evt) {
// ... handle mouse move events during dragging ...
};
var released = function() {
// ... handle release of mouse capture and end dragging ...
};
var mouseUp = function (evt) {
// ... handle mouse up and release the mouse capture ...
};
mouseCapture.acquire(evt, {
mouseMove: mouseMove,
mouseUp: mouseUp,
released: released,
});
evt.stopPropagation();
evt.preventDefault();
},
Calling mouseCapture.acquire captures the mouse and the
service subsequently handles mouse input events. This allows the
dragging operation to be initiated for a sub-element of the page (via a
mouse down on that element) with dragging then handled by events on a parent element (in this case the body element). In Windows programming mouse capture
is supported by the operating system. When working within the browser
however this must be implemented manually, so I created a custom mouse capture service which is discussed in the next section.
Note that startDrag stops propagation of the DOM event and
prevents the default action: the dragging service provides its own input
handling, so the browser's default behavior must be suppressed.
Let's look at the mouse event handlers that are active during dragging. The mouse move handler has two personalities. Before dragging
has started it continuously checks the mouse coordinates to see if they
move beyond a small threshold. When that happens the dragging operation
commences and dragStarted is called.
From then on dragging is in progress and the mouseMove continuously tracks the coordinates of the mouse and repeatedly
calls the dragging function.
var mouseMove = function (evt) {
if (!dragging) {
if (evt.pageX - x > threshold ||
evt.pageY - y > threshold) {
dragging = true;
if (config.dragStarted) {
config.dragStarted(x, y, evt);
}
if (config.dragging) {
// First 'dragging' call to take into account that we have
// already moved the mouse by a 'threshold' amount.
config.dragging(evt.pageX, evt.pageY, evt);
}
}
}
else {
if (config.dragging) {
config.dragging(evt.pageX, evt.pageY, evt);
}
x = evt.pageX;
y = evt.pageY;
}
};
The release handler is called when mouse capture has been released. This can happen in one of two ways. The mouse up
handler has stopped the dragging operation and requested that the mouse
be released. Alternatively if some other code has acquired the mouse capture
which forces a release. released also has two personalities: if dragging was in progress it invokes dragEnded; if dragging never commenced, because the mouse never moved beyond the threshold, clicked is invoked instead to indicate that the user simply mouse-clicked.
var released = function() {
if (dragging) {
if (config.dragEnded) {
config.dragEnded();
}
}
else {
if (config.clicked) {
config.clicked();
}
}
};
The mouse up handler is simple, it just releases the mouse capture (which invokes the release handler) and stops propagation of the event.
var mouseUp = function (evt) {
mouseCapture.release();
evt.stopPropagation();
evt.preventDefault();
};
Mouse capture
is used as a matter of course when developing a Windows
application. When mouse capture is acquired we are able to specially handle the mouse events for an element until
the capture is released. When working in the browser there appears to be no
built-in way to achieve this. Using an AngularJS directive and a service I was able to create my own custom attribute that attaches this behavior to the DOM.
The mouse-capture attribute identifies the element that can capture the mouse. In the flowchart application mouse-capture is applied to the body of the HTML page:
<body
    ng-app="app"
    ng-controller="AppCtrl"
    mouse-capture
    >
    <!-- ... -->
</body>
The small directive that implements this attribute is at the end of mouse_capture_directive.js. The rest of the file implements the service that is used to acquire the mouse capture.
The file starts by registering the module:
angular.module('mouseCapture', [])
This module has no dependencies, hence the empty array.
Next the service is registered:
.factory('mouseCapture', function ($rootScope) {
// ... setup and event handlers ...
return {
// ... functions exported by the service ...
};
})
This is quite a big one and we'll come back to it in a moment. At
the end of the file is a directive with the same name as the service:
.directive('mouseCapture', function () {
return {
restrict: 'A',
controller: function($scope, $element, $attrs, mouseCapture) {
mouseCapture.registerElement($element);
},
};
})
;
Both the service and the directive can have the same name
because they are used in different contexts. The service is dependency
injected into Javascript functions and the directive is used as an HTML
attribute (hence the restrict: 'A'), so their usage does not overlap.
The directive defines a controller that is initialized when the DOM is loaded. The mouseCapture service
itself is injected into the controller along with the DOM element. The
directive uses the service to register the element for mouse capture; this is the element for which mouse move and mouse up will be handled during the capture.
Going back to the service. The factory function defines several mouse event handlers before returning the service:
.factory('mouseCapture', function ($rootScope) {
// ... state variables ...
var mouseMove = function (evt) {
// ... handle mouse movement while the mouse is captured ...
};
var mouseUp = function (evt) {
// ... handle mouse up while the mouse is captured ...
};
return {
// ... functions exported by the service ...
};
})
The handlers are dynamically attached to the DOM when
mouse capture is acquired and detached when mouse capture is
released.
The service itself exports three functions:
return {
registerElement: function(element) {
// ... register the DOM element whose mouse events will be hooked ...
},
acquire: function (evt, config) {
// ... acquires the mouse capture ...
},
release: function () {
// ... releases the mouse capture ...
},
};
registerElement is simple, it caches the single element whose mouse events can be captured (in this case the body element).
registerElement: function(element) {
$element = element;
},
acquire releases any previous mouse capture, caches the configuration object and binds the event handlers:
acquire: function (evt, config) {
this.release();
mouseCaptureConfig = config;
$element.mousemove(mouseMove);
$element.mouseup(mouseUp);
},
release invokes the released event handler and unbinds the event handlers:
release: function () {
if (mouseCaptureConfig) {
if (mouseCaptureConfig.released) {
mouseCaptureConfig.released();
}
mouseCaptureConfig = null;
}
$element.unbind("mousemove", mouseMove);
$element.unbind("mouseup", mouseUp);
},
While the mouse is captured, mouseMove and mouseUp are invoked to handle mouse events and relay them to higher-level code (such as the dragging service).
mouseMove and mouseUp are pretty similar, so let's just look at mouseMove:
var mouseMove = function (evt) {
if (mouseCaptureConfig && mouseCaptureConfig.mouseMove) {
mouseCaptureConfig.mouseMove(evt);
$rootScope.$digest();
}
};
The $digest function must be called to make AngularJS aware of data-model changes made by clients of the mouse capture service. AngularJS needs to know when the data-model has changed so that it can
re-render the DOM as necessary. Most of the time when writing an AngularJS
application you don't need to know about $digest,
it only comes into play when you are working at a low-level in a
directive or a service and usually working directly with the DOM.
Client-side web development is fraught with problems and this is obvious to anyone who has been engaged in it.
Using
libraries such as jQuery and frameworks like AngularJS goes a long way
to avoiding problems. Using Javascript appropriately (thanks Mr
Crockford!) and having your code scaffolded by unit-tests goes even
further to avoiding traditional issues. Good software development
skills and an understanding of appropriate patterns help tremendously
to avoid the Javascript maintenance and debugging nightmares of the
past.
Even with all the problems associated with client-side web
development I think I actually prefer it to regular application
development. As a professional software developer I do a bit of both,
but if possible in the future I may consider developing desktop
applications as stand-alone web applications. The productivity boost
associated with not having to use a compiler (unless you want to) and
also the possibilities that arise from having a skinnable application
also the possibilities that arise from having a skinable application
can't be overlooked, although I do miss Visual Studio's refactoring support.
One thing I really missed from Windows desktop programming was being able to capture the mouse, and to achieve this I had to create my own DIY mouse capture system.
Although I had a few issues with AngularJS, I want to be completely
clear: AngularJS is awesome. It makes client-side web development so
much easier to the point where it has pretty much convinced me that
this is the better way to make UIs over and above WPF.
Since I first started this project AngularJS has evolved. Support for ng-attr-
was added recently and appears to exist specifically to solve problems with
data-binding attributes on SVG elements, exactly the problem I was
having! This feature was so new and so necessary that originally I had
to clone directly from the AngularJS repository to get early access to
it. It is still so new that the only documentation they appear to have
is part of the help for directives.
ng-if
was another feature that came along during this project and being able
to conditionally display HTML/SVG elements turned out to be very useful.
The learning curve was steep. This wasn't just AngularJS:
leveling up my web development skills took considerable effort. All
told though, there were very few problems with AngularJS, and the number
of problems it solves genuinely outweighed its learning curve and
any issues I had using it.
When I first started integrating AngularJS/jQuery and SVG I hit
many small problems. To help figure out what I could and couldn't do,
I made a massive test-bed that tested many different aspects of the
integration. This allowed me to figure out the problem areas that I
wanted to avoid, and find solutions for the areas that I couldn't avoid.
Creating the test-bed allowed me to work through the issues and improve my
understanding of SVG and how it interacts with AngularJS features such
as ng-repeat. I discovered that it was very difficult to create
directives that inject SVG elements underneath the root SVG element.
This appears to be due to jQuery creating elements in the HTML
namespace rather than the SVG namespace. AngularJS uses jQuery under
the hood so instantiating portions of SVG templates causes the elements
not to be SVG elements at all, which clearly doesn't help. This is a
well known problem when creating SVG elements with jQuery (if you guys
are listening, please just fix it!) and there is a fair amount of
information out there that will show you the hoops to jump through as a
workaround. In the flowchart application though I was able to avoid the
namespace problem completely by containing all my SVG under the one
single template with the namespace explicitly specified in the SVG
element.
Unfortunately the SVG DOM is different to the HTML DOM and so many jQuery functions that you might expect to work don't (although some do work fine). A notable example is with setting the class of an element. As this doesn't work for SVG when using jQuery, it doesn't work for AngularJS either, which builds on jQuery. So ng-class can't be used. This is why I have been forced to use ng-attr-class multiple times in the SVG for conditionally setting the class. This isn't such a bad option anyway as I think ng-attr-class is easier to understand than the alternatives, even though it does have the limitation of only being able to apply a single class to an
element at a time. In other cases (eg, the mouse over code)
I have worked around the class problem by avoiding jQuery and using custom functions for checking SVG class. Thanks to Justin McCandless for sharing his solution to this problem.
There are existing libraries that help deal with jQuery's bad SVG support. The jQuery SVG plugin looks good, but only if you want to create and manipulate SVG programatically. I was keen to define the SVG declaratively using an AngularJS template.
By implementing my own code for hit-testing and mouse-over, I avoided potential problems with jQuery's mouseenter/mouseleave events relating to SVG. For this, the extremely simple function elementFromPoint seemed like the most convenient option.
Another jQuery problem I had was with the offset function.
Originally I was using this to translate page coordinates into SVG
coordinates. For some reason this didn't work properly under Firefox. After research online I created the translateCoordinates function that uses the SVG API to achieve the translation.
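For reference, here is roughly what such a translation looks like using the SVG API's getScreenCTM and createSVGPoint calls. This is a sketch of mine rather than the article's exact code; svgElement is assumed to be the raw DOM node of the root svg element:

```javascript
// Map page (client) coordinates into the SVG element's local coordinate
// space by inverting the screen transformation matrix.
function translateCoordinates(svgElement, x, y) {
    var matrix = svgElement.getScreenCTM();
    var point = svgElement.createSVGPoint();
    point.x = x;
    point.y = y;
    return point.matrixTransform(matrix.inverse());
}
```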
Another issue under Firefox was that the fill property for an SVG rect cannot be set using CSS. This worked in other browsers, but for Firefox I had to change the code so that fill is set as an attribute of the rect rather than via CSS.
I had one other problem that is worth mentioning. It was very odd and I never completely figured it out. I had nested SVG g elements representing a connection with ng-repeat applied
to it to render multiple connections (that is a g nested within another
g). When there were no connections (resulting in the ng-repeat displaying nothing) every SVG element after the connections was blown away. Just gone! The nested g element was actually redundant so I was able to cut it down to a single g
containing the visuals for a connection. That fixed this very unusual
problem. I tested out the problem under HTML instead of SVG and didn't
get the issue, so I assume that it only manifests when using SVG under
AngularJS (or possibly something to do with jQuery).
This is the end of the article. Thanks for taking the time to read it.
Any
feedback or bug reports you give will be greatly appreciated and I'll
endeavor to update the article and code as appropriate. I'll
leave you with some ideas for the future and links to useful resources.
The future improvements that could be applied to this code are simply the features that were in NetworkView from the original article:
AngularJS:
jQuery:
SVG:
Jasmine:
Test Driven Development:
This article, along with any associated source code and files, is licensed under The MIT License
Learn how to use Ubidots with Arduino to build an IoT project. Step by step guide covering all the details and how to connect Android to Arduino
This article covers how to connect Ubidots to Arduino to build an IoT project. Moreover, this practical guide covers how to connect Arduino and Android so that an Android app can read data stored by Arduino in the IoT cloud platform. This aspect is important because it is possible to store data in the cloud and analyze it later by different means. Once the data, such as sensor values, is in the cloud, it is possible to access it using smartphones. This Arduino and Android integration project demonstrates how easy connecting Arduino and Android together is when implementing an IoT system. This IoT project can have several implementations and it can be really useful in real-life IoT projects. Usually, the integration between the mobile and the IoT ecosystem is a quite common scenario. In this tutorial, we want to cover how to use Ubidots with Arduino and then how to integrate Arduino and Android so that an Android app can interact with Arduino and read the sensor's data.
This project is made up of two parts:
- the first part describes how to use Ubidots with Arduino to collect data from sensors connected to the Arduino board and send this information to a cloud platform that stores it
- the second part describes how to access this information using an Android smartphone.

How to use Ubidots with Arduino
The picture below shows the project overview :
In this project, a DHT11 sensor is connected to the Arduino board, which, in turn, uses the Arduino Ethernet shield to connect to the network and send data. As a first step, we check that everything is connected correctly by trying to read the temperature and the humidity. The snippet below shows the Arduino sketch used to test the sensor:
#include "DHT.h"
#include <SPI.h>

#define DHTPIN 2
#define DHTTYPE DHT11

DHT dht(DHTPIN, DHTTYPE);

void setup() {
    Serial.begin(9600);
    dht.begin();
}

void loop() {
    delay(50000);
    float h = dht.readHumidity();
    // Read temperature as Celsius (the default)
    float t = dht.readTemperature();
    Serial.print("Humidity: ");
    Serial.print(h);
    Serial.print(" %\t");
    Serial.print("Temperature: ");
    Serial.print(t);
    Serial.println(" *C ");
}
One thing to remember is to import the DHT11 library into your Arduino IDE. Running the example, you should get the temperature and the humidity.
If everything works correctly, it is time to make things a little more complex and explain how to use Ubidots with Arduino.
As stated before, the purpose of this first part of the IoT project is to describe how to use Ubidots with Arduino so that values read by sensors connected to Arduino are sent to the cloud. In a second step, this IoT practical guide covers how to develop an Android app that reads the data stored in Ubidots.
Ubidots provides an example that can be useful. In Arduino, we have to develop an Arduino HTTP client that calls a JSON service passing the data we want to store in the cloud.
Referring to the Ubidots API documentation, the URL to call is:

http://things.ubidots.com/api/v1.6/collections/values
while the data in JSON format to send is:
[
    {"variable": "varId", "value": val, "timestamp": timestamp},
    {"variable": "varId1", "value": val1, "timestamp": timestamp1}
]

The skeleton of the sketch that opens the connection and sends this request is shown below:

#include <SPI.h>
#include <Ethernet.h>
#include "DHT.h"

byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
char server[] = "things.ubidots.com";
EthernetClient client;
IPAddress ip(192, 168, 1, 40); // Arduino IP address
IPAddress myDns(8, 8, 8, 8);
IPAddress myGateway(192, 168, 1, 1);

// ... setup() initializes the serial port, the DHT11 sensor and Ethernet ...

void loop() {
    // ... read temperature and humidity and build the JSON payload ...

    if (client.connect(server, 80)) {
        // ... send the HTTP POST request with the JSON payload ...
    }
    else {
        // if you didn't get a connection to the server:
        Serial.println("connection failed");
    }

    boolean sta = client.connected();
    Serial.println("Connection [" + String(sta) + "]");
    if (!client.connected()) {
        Serial.println();
        Serial.println("disconnecting.");
        client.stop();
    }

    Serial.println("Reading..");
    while (client.available()) {
        char c = client.read();
        Serial.print(c);
    }
    client.flush();
    client.stop();
}

Once the sketch is ready, it is time to configure the project in Ubidots.
Configuring Ubidots to receive data from Arduino
Now, it is necessary to configure the project on Ubidots so that the Arduino client can send data. The project can be configured using the Ubidots web interface.
Before configuring the variables, we have to create a Ubidots project:
As you can see, these two variables have the two ids that we used previously when we created the JSON request.
The IDs of these variables can be copied from the variable pages in the Ubidots web interface.
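When the Android app (or any other client) later reads these values back, it issues an authenticated GET against the Ubidots REST API. The sketch below only builds a request description; the host and the X-Auth-Token header follow Ubidots' v1.6 API, but the exact path and the token are assumptions you should check against your own account:

```javascript
// Hypothetical helper: describe the HTTP GET used to read a variable's
// stored values back from Ubidots.
function buildUbidotsReadRequest(variableId, apiToken) {
    return {
        method: 'GET',
        host: 'things.ubidots.com',
        path: '/api/v1.6/variables/' + variableId + '/values',
        headers: {
            'X-Auth-Token': apiToken,
            'Content-Type': 'application/json'
        }
    };
}
```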
Hi,
Thank you for the code! but it initially gave a error on like 62 saying stray , that was solved by putting an extra before the double quotes before tempvar
OLD(String varString = “[{“variable”: “” + tempVarId + “”, “value”:” + String(tempValue) + “}”;)
NEW (String varString = “[{“variable”: “” + tempVarId + “”, “value”:” + String(tempValue) + “}”;)
but the information is not being passed onto ubidots
Serial output]
disconnecting.
Reading..]
i used the sketch from ubidotsethernet shield an dthe data passes on to ubidots so i kn ow my network and configuration are allright
Thank You | https://www.survivingwithandroid.com/2015/12/how-to-use-ubidots-with-arduino-to-build-iot-projects.html | CC-MAIN-2019-18 | refinedweb | 803 | 51.07 |
Hi Jason, I don't know Python, but let me share some thoughts that you might find useful. First, a few questions about your manual translations. Are your functions curried? For example, can I partially apply zipWith? Also, you put a "thunk" around things like "cons(...)" --- should it not be the arguments to "cons" that are thunked? Now, on to an automatic translation. As you may know already, Haskell programs can be transformed to "combinator programs" which are quite simple and easy to work with. Here is what I mean by a "combinator program": p ::= d* (a program is a list of combinator definitions) d ::= c v* = e (combinator definition) e ::= e e (application) | v (variable/argument) | c (constant: integer literal, combinator name, etc.) As an example of a combinator program, here is one that reverses the list [0,1,2]. rev v acc = v acc (rev2 acc) rev2 acc x xs = rev xs (cons x acc) cons x xs n c = c x xs nil n c = n main = rev (cons 0 (cons 1 (cons 2 nil))) nil This program does not type-check in Haskell! But Python, being dynamically typed, doesn't suffer from this problem. :-) A translation scheme, D[], from a combinator definition to a Python definition might look as follows. D[c v* = e] = def c() : return (lambda v1: ... lambda vn: E[e]) E[e0 e1] = E[e0] (E[e1]) E[v] = v E[c] = c() Here is the result of (manually) applying D to the list-reversing program. def nil() : return (lambda n: lambda c: n) def cons() : return (lambda x: lambda xs: lambda n: lambda c: c(x)(xs)) def rev2() : return (lambda acc: lambda x: lambda xs: rev()(xs)(cons()(x)(acc))) def rev() : return (lambda v: lambda acc: v(acc)(rev2()(acc))) def main() : return (rev() (cons()(0)( cons()(1)( cons()(2)( nil()))))(nil())) The result of main() is a partially-applied function, which python won't display. 
But using the helper def list(f) : return (f([])(lambda x: lambda xs: [x] + list(xs))) we can see the result of main(): >>> list(main()) [2, 1, 0] Of course, Python is a strict language, so we have lost Haskell's non-strictness during the translation. However, there exists a transformation which, no matter how a combinator program is evaluated (strictly, non-strictly, or lazily), the result will be just as if it had been evaluated non-strictly. Let's call it N, for "Non-strict" or "call-by-Name". N[e0 e1] = N[e0] (\x. N[e1]) N[v] = v (\x. x) N[f] = f I've cheekily introduced lambdas on the RHS here --- they are not valid combinator expressions! But since Python supports lambdas, this is not a big worry. NOTE 1: We can't remove the lambdas above by introducing combinators because the arguments to the combinator would be evaluated and that would defeat the purpose of the transformation! NOTE 2: "i" could be replaced with anything above --- it is never actually inspected. For the sake of interest, there is also a "dual" transformation which gives a program that enforces strict evaluation, no matter how it is evaluated. Let's call it S for "Strict". S[e0 e1] = \k. S[e0] (\f. S[e1] (\x. k (f x))) S[v] = \k. k v S[f] = \k. k f I believe this is commonly referred to as the CPS (continuation-passing style) transformation. Now, non-strict evaluation is all very well, but what we really want is lazy evaluation. Let's take the N transformation, rename it to L for "Lazy", and indulge in a side-effecting reference, ML style. L[e0 e1] = L[e0] (let r = ref None in \x. match !r with None -> let b = L[e1] in r := Some b ; b | Some b -> b) L[v] = v (\x. x) L[f] = f I don't know enough to define L w.r.t Python. I haven't tried too hard to fully understand your translation, and likewise, you may not try to fully understand mine! But I thought I'd share my view, and hope that it might be useful (and correct!) in some way. Matthew. 
| http://www.haskell.org/pipermail/haskell-cafe/2008-October/049094.html | CC-MAIN-2013-48 | refinedweb | 697 | 72.05 |
23 September 2009 11:09 [Source: ICIS news]
Correction: In the ICIS news story headlined "Chinese PTA producers hike Sept prices by $161/tonne" dated 23 September 2009, please read the headline as "...producers drop Sept prices..." instead of "...producers hike Sept prices..."
SINGAPORE (ICIS news)--Major purified terephthalic acid (PTA) producers in the Chinese domestic market have lowered their September prices by yuan (CNY) 1,100/tonne ($161/tonne) from August values, market sources said on Wednesday.
Sinopec, Yisheng Petrochemical, Xiang Lu Petrochemical, Zhejiang Yuan Dong Petrochemical and BP Zhuhai have finalised their PTA prices at CNY7,100/tonne ?xml:namespace>
The lower prices for September were in line with falls in spot values of the material over the past month, company sources said.
Contracts were finalised at CNY8,200/tonne
Other PTA producers in China include Ningbo Mitsubishi Chemical and Oriental Petrochemical (
($1 = CNY6.83) | http://www.icis.com/Articles/2009/09/23/9249549/corrected-chinese-pta-producers-drop-sept-prices-by-161tonne.html | CC-MAIN-2014-42 | refinedweb | 147 | 51.89 |
Php paypal demo jobs
First thing need to give a demo of packaging in ubuntu. Take a example c code build a .deb package file. that package must automatically add dependencies when i am installing with apt-get. Need address all issues while packaging. Second Thing give a docker demo with above package. need to cover all issues with docker final outcome is step by step
Based on the work of the developer...
Hello, Sometime ago, we created this project: The awarded freelancer sent us 3 zip files (attached to this project) and installed them on our website. Today, this freelancer is not available anymore but we would like to be able to install and use this module again
angular2 file upload and master page -must show demo
I can provide db data during the project Now just make demo for test purpose, so I can see that you can develop this kind of projects Give your price for demo and let me know VB.Net
i need you to design a WPF fronthand demo it has to be done in less than 24 hours and the budget is very limited so only bid if you can do it on my requirements.
I have a website , i want to create a presentation of the website workflow.
Show all products , category , search filters ,
import demo content for a wordpress theme [url removed, login to view] the theme is already installed on the wordpress site and it is live tasks to complete [url removed, login to view] home page 7 style [url removed, login to view] 2. Add sidebar_layout for shop page [url removed, login to view] sidebar_layout=left&product_columns=4
We have 5 animated GIFs that we want to connect in to one small animated video to use as a demo on our website. In the "Notes" section below each powerpoint slide are notes that we want to use as a voiceover for each GIF.
Need a killer App Demo Video for the App [url removed, login to view] Must show all functionality with very sleek animation and catchy tunes and text.]
I can provide db data during the project Now just make demo for test purpose, so I can see that you can develop this kind of projects Give your price for demo and let me know VB.Net
30-60 seconds software demo advertisement voiceover for online text analysis tool: [url removed, login to view] Script will be provided. Own input and ideas are appreciated and will be considered.. | https://www.freelancer.com/job-search/php-paypal-demo/ | CC-MAIN-2018-09 | refinedweb | 420 | 65.15 |
Learn Game Programming using DirectX9 visit:
how can i make a game wit direct x sdk, and win 2005 visual basic c/c++Reply
I am getting compilation error because of above files. fatal error C1083: Cannot open include file: '../GameLib/Image.h': No such file or directory
The file isn't needed at all, just delete the line: #include "../GameLib/Image.h"Reply
I encountered the same problem,but when I followed your suggestion,my computer compiled succeddfully.Thanks a lot!Reply
I had the same problem. Luckily I found the file using explorer. I copied into the project file, added it to the header files in 'file view' and it worked ok. If you cannot find a copy on your machine, the contents are lested below #ifndef __IMAGE_H__ #define __IMAGE_H__ typedef struct { unsigned short imagic; unsigned short type; unsigned short dim; unsigned short sizeX, sizeY, sizeZ; char name[128]; unsigned char *data; } IMAGE; IMAGE *ImageLoad(char *); #endif /* !__IMAGE_H__! */Reply
If the user changes the display mode -- for example from 1024x768 pixels to 800x600 pixels -- the application cannot work anymore. But I think I have found the right way to get round this. When the user changes the resolution, all the windows receive the WM_DISPLAYCHANGE message. So I added this declaration in file ddraw_in_mfcwindView.h, class CDdraw_in_mfcwindView: afx_msg LRESULT OnDisplayChange(WPARAM wParam, LPARAM lParam); I updated the message map in file ddraw_in_mfcwindView.cpp like this: BEGIN_MESSAGE_MAP(CDdraw_in_mfcwindView, CView) //{{AFX_MSG_MAP(CDdraw_in_mfcwindView) ON_WM_LBUTTONDOWN() //}}AFX_MSG_MAP ON_MESSAGE(WM_DISPLAYCHANGE, OnDisplayChange) END_MESSAGE_MAP() And I added the code for OnDisplayChange method in file ddraw_in_mfcwindView.cpp like this: LRESULT CDdraw_in_mfcwindView::OnDisplayChange(WPARAM wParam, LPARAM lParam) { LRESULT _lResult; _lResult = CWnd::OnDisplayChange(wParam, lParam); ddobj.Terminate(); bDDrawActive = FALSE; if (!ddobj.Init(GetSafeHwnd())) { AfxMessageBox("Failed to Create DirectDraw Interface."); } else { bDDrawActive = TRUE; } return _lResult; } It seems to work pretty well.Reply
Originally posted by: Deutsch
I've tried to create a simple animation on the basis of this
tutorial. The problem is, that the function OnDraw in the view class must be called to renew the screen, and because of this the screen is blinking.
Originally posted by: Ren�
Nice code, however I can assure you it will not work in an.
Originally posted by: anita
hai
can you tell me where can I foind ddutil.h and resource.h
thank you
Originally posted by: Zygoman
Very good stuff !!!! I used it for a dialog-based application, and there was no problem at all !!!!! Thanks a lot, save a lot amount of time :-)Reply
Originally posted by: Tyrone Deane
Thanks for the tip about how to include the DirectX lib and include directories. My code now complies. Yippee!!!!Reply | http://www.codeguru.com/comment/get/48285110/ | CC-MAIN-2014-42 | refinedweb | 439 | 56.86 |
Swift is cross-platform, but it behaves differently on Apple platforms vs. all other operating systems, mainly for two reasons:
- The Objective-C runtime is only available on Apple platforms.
- Foundation and the other core libraries have separate implementations for non-Apple OSes. This means some Foundation APIs may produce divergent results on macOS/iOS and Linux (though the stated goal is implementation parity), or they may simply not be fully implemented yet.
Therefore, when you write a library that doesn’t depend on any Apple-specific functionality, it’s a good idea to test your code on macOS/iOS and Linux.
Test discovery on Linux
Any unit testing framework must be able to find the tests it should run. On Apple platforms, the XCTest framework uses the Objective-C runtime to enumerate all test suites and their methods in a test target. Since the Objective-C runtime is not available on Linux and the Swift runtime currently lacks equivalent functionality, XCTest on Linux requires the developer to provide an explicit list of tests to run.
The
allTests property
The way this works (by convention established by the Swift Package Manager) is that you add an additional property named
allTests to each of your
XCTestCase subclasses, which returns an array of test functions and their names. For example, a class containing a single test might look like this:
// Tests/BananaKitTests/BananaTests.swift import XCTest import BananaKit class BananaTests: XCTestCase { static var allTests = [ ("testYellowBananaIsRipe", testYellowBananaIsRipe), ] func testYellowBananaIsRipe() { let banana = Banana(color: .yellow) XCTAssertTrue(banana.isRipe) } }
LinuxMain.swift
The package manager creates another file named
LinuxMain.swift that acts as the test runner on non-Apple platforms. It contains a call to
XCTMain(_:) in which you must list all your test suites:
// Tests/LinuxMain.swift import XCTest @testable import BananaKitTests XCTMain([ testCase(BananaTests.allTests), ])
Manual maintenance is easy to forget
This approach is obviously not ideal because it requires manual maintenance in two places:
- Every time you add a new test, you must also add it to its class’s
allTests.
- Every time you create a new test suite, you must add it to the
XCTMaincall in
LinuxMain.swift.
Both of these steps are easy to forget. Even worse, when you inevitably forget one of them it’s not at all obvious that something is wrong — your tests will still pass on Linux, and unless you manually compare the number of executed tests on macOS vs. Linux you might not even notice that some tests didn’t run on Linux.
This happened to me numerous times, so I decided to do something about it.
Swift 4.1 can update
allTests for you
UpdateMay 31, 2018: Swift 4.1 includes a new package manager command for keeping the list of tests for Linux up to date. You can invoke it like this:
> swift test --generate-linuxmain
This will build the test target and then generate the code for the required
__allTests properties in a separate
XCTestManifests.swift file. Note that the command internally still uses the Objective-C runtime for test discovery, so you have to run it on macOS.
And you have to automate this step as part of your build process — otherwise you run the risk of your Linux and Darwin tests running out of sync. Installing a safeguard that warns you when this happens is still a good idea. Therefore, I believe the rest of this article also applies to Swift 4.1, although you may have to adjust some names (Swift switched from
allTests to
__allTests).
Safeguarding Linux tests against omissions
Let’s try to build a mechanism that automatically causes the test suite to fail when we forget one of the maintenance steps. We’re going to add the following test to each of our
XCTestCase classes (and to their
allTests arrays):
class BananaTests: XCTestCase { static var allTests = [ ("testLinuxTestSuiteIncludesAllTests", testLinuxTestSuiteIncludesAllTests), // Your other tests here... ] func testLinuxTestSuiteIncludesAllTests() { #if os(macOS) || os(iOS) || os(tvOS) || os(watchOS) let thisClass = type(of: self) let linuxCount = thisClass.allTests.count #if swift(>=4.0) let darwinCount = thisClass .defaultTestSuite.testCaseCount #else let darwinCount = Int(thisClass .defaultTestSuite().testCaseCount) #endif XCTAssertEqual(linuxCount, darwinCount, "\(darwinCount - linuxCount) tests are missing from allTests") #endif } // Your other tests here... }
This test compares the number of items in the
allTests array to the number of tests discovered by the Objective-C runtime and will fail if it finds a discrepancy between the two, which is exactly what we want.
(The dependency on the Obj-C runtime means the test only works on Apple platforms — it won’t even compile on Linux, which is why we need to wrap it in the
#if os(macOS) ... block.1)
A failing test when you forget to add a test to
allTests
To test this out, let’s add another test, this time “forgetting” to update
allTests:
import XCTest import BananaKit class BananaTests: XCTestCase { static var allTests = [ ("testLinuxTestSuiteIncludesAllTests", testLinuxTestSuiteIncludesAllTests), ("testYellowBananaIsRipe", testYellowBananaIsRipe), // testGreenBananaIsNotRipe is missing! ] // ... func testGreenBananaIsNotRipe() { let banana = Banana(color: .green) XCTAssertFalse(banana.isRipe) } }
When we now run the tests on macOS, our safeguard test will fail:
I really like this. Obviously, it only works if you want the
allTests array to contain every test, i.e. you’ll have to wrap any Darwin- or Linux-specific tests in conditional compilation blocks as we did above. I believe this is an acceptable limitation for most code bases.
Safeguarding
LinuxMain.swift
What about the other problem, verifying that
LinuxMain.swift is complete? This is harder.
LinuxMain.swift is not (and cannot be) part of the actual test target, so you can’t easily verify what gets passed into
XCTMain.
The only solution I can see would be to add a Run Script build phase to your test target and have the script parse the code in
LinuxMain.swift and somehow compare the number of items in the array to the number of test suites in the test target. I haven’t tried it, but it sounds quite complicated.
Conclusion
Even with the new test, things are far from perfect since there are still two things you can potentially forget. Every time you create a new
XCTestCase class, you must:
- Copy and paste the
testtest into the new class.
Linux Test Suite Includes All Tests
- Update
LinuxMain.swift.
Still, I think this is considerably better than the status quo because the new test covers the most common case — adding a single test to an existing test suite and forgetting to update the
allTests array.
I can’t wait for Swift’s reflection capabilities to become more powerful2, making all this unnecessary.
Appendix: Code generation with Sourcery
In what seems to be a recurring theme lately, Krzysztof Zabłocki pointed out that his excellent code generation tool Sourcery can also maintain the Linux test infrastructure for you. This is a great alternative and fairly easy to set up. Here’s how:
Install Sourcery. Adding it as a Swift Package Manager dependency didn’t work for me (build failed), but I suspect that’s a temporary problem related to Swift 3.1 being brand new. I ended up downloading the latest release and running the binary directly.
Create a file named
LinuxMain.stencilwith the following contents. Save it in a convenient place inside your project folder — I put mine in a
sourcery/subdirectory:
// sourcery:file:Tests/LinuxMain.swift import XCTest {{ argument.testimports }} {% for type in types.classes|based:"XCTestCase" %} {% if not type.annotations.disableTests %}extension {{ type.name }} { static var allTests = [ {% for method in type.methods %}{% if method.parameters.count == 0 and method.shortName|hasPrefix:"test" %} ("{{ method.shortName }}", {{ method.shortName }}), {% endif %}{% endfor %}] } {% endif %}{% endfor %} XCTMain([ {% for type in types.classes|based:"XCTestCase" %}{% if not type.annotations.disableTests %} testCase({{ type.name }}.allTests), {% endif %}{% endfor %}]) // sourcery:end
This is based on a template written by Ilya Puchka. I just added the
// sourcery:...annotations in the first and last lines, which determine the path of the generated file (requires Sourcery 0.5.9).
As you can see, this is Swift code mixed with a templating language. When you invoke Sourcery, it will parse the types in your projects source code and use them to generate code according to the template(s) you pass in. For example, the loop beginning with
{% for type in types.classes|based:"XCTestCase" %}will iterate over all classes that inherit from
XCTestCaseand generate an extension containing the
allTestsproperty for them.
Delete the existing definitions for
allTestsfrom your test classes. We’ll generate them with Sourcery in the next step. If you’ve already added
testmethods, you can delete them too or choose to leave them in. They don’t hurt and may still detect issues, e.g. when you don’t run Sourcery again after adding a test, but they aren’t strictly necessary anymore.
Linux Test Suite Includes All Tests
Run Sourcery from your project directory:
$ sourcery --sources Tests/ \ --templates sourcery/LinuxMain.stencil \ --args testimports='@testable import BananaKitTests' Scanning sources... Found 1 types. Loading templates... Loaded 1 templates. Generating code... Finished. Processing time 0.0301569700241089 seconds
This will overwrite the existing
Tests/LinuxMain.swiftfile with this generated code:
// Generated using Sourcery 0.5.9 — // DO NOT EDIT import XCTest @testable import BananaKitTests extension BananaTests { static var allTests = [ ("testYellowBananaIsRipe", testYellowBananaIsRipe), ("testGreenBananaIsNotRipe", testGreenBananaIsNotRipe), ] } XCTMain([ testCase(BananaTests.allTests), ])
In our minimal example there’s only a single class with two tests, but this will also work for multiple test classes.
And that’s it. Add the command that invokes Sourcery to a script that you run on every build and your Linux tests will always be up to date. Very cool!
I don’t know of a more concise way than this for checking that we’re on an Apple platform. Once SE-0075 (accepted but not implemented yet) lands, we should be able to replace this with
#if canImport(Darwin). ↩︎
Reflection (i.e. test discovery at runtime) is not the only way to tackle this problem. The bug tracking test discovery on Linux is SR-710, and after thorough discussion in the spring of 2016 it was decided to rely on SourceKit (i.e. source code parsing) for finding test methods. Porting SourceKit to Linux was and is a huge task, but as far as I can tell major progress has been made recently, so maybe the situation around testing on Linux will improve in Swift 4. ↩︎ | https://oleb.net/blog/2017/03/keeping-xctest-in-sync/?utm_campaign=iOS%2BDev%2BWeekly&utm_medium=web&utm_source=iOS_Dev_Weekly_Issue_295%3Cspan%20style= | CC-MAIN-2019-30 | refinedweb | 1,717 | 56.05 |
Feb 12, 2014 04:27 PM|lax4u|LINK
I have a asynchronous http handler that downloads the file. When user clicks on the download link it invokes the httphandler, and the broswer's download window pops up. Depending on the browser version, the window may be different. but the current page's content remains as it is and download starts.
If any exception occur on the server is there any way to inform the user without clearing the page? Becuase in Catch block if i do context.Response.Write("Something Bad Happened") it will clear the content of the existing page.
so i thought of sending alert so the current browser content will remain as it is
below is my code
public class DownloadFileHandler : IHttpAsyncHandler { public bool IsReusable { get { return false; } } public DownloadFileHandler() { } public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData) {) { try {
//Instantiate the Stream here
context.Response.ContentType = "application/x-zip-compressed"; context.Response.AddHeader("Content-Disposition", "attachment; filename=" + filename); int read = 0; //prevent infinite loop if user disconnects while (context.Response.IsClientConnected && (read = _stream.Read(buffer, 0, buffer.Length)) > 0) { context.Response.OutputStream.Write(buffer, 0, read); context.Response.Flush(); } } catch (Exception ex) { if (_context.Response.IsClientConnected && _context.Response != null) { //??? How to send Javascript alert here and
//??? what would be the content type and What
//??? What about the header(Conetent-Disposition which is already added in the response) _context.Response.Write("How to send javascript alert here??"); _context.Response.Flush(); } } finally { // Close the response if (_context != null && _context.Response != null) { _context.Response.Close(); } } _completed = true; _callback(this); } }
Questions:
1>When exception occur, i guess i need to change the content type if i want to write javascript back to user. What would be the contenttype? ( Please see inline comments in the code)
2>Is there any other way to inform user that something bad hapend without clearing the content?
Feb 12, 2014 04:41 PM|AidyF|LINK
You are sending a response which is triggered by a request. When you start the download you are telling the client it is receiving a zip file and you start to push the data. Once the headers are written, you can't then undo that and tell the browser "actually it's not a zip file you're getting any more so abort what you were doing, it's now a 200 OK Html page that I'm sending" The download is separate from the browser anyway, I can start downloading a file then close my browser but leave the download running, so where is your html going to show? This is the stateless nature of the internet. If there is a problem with the download the user will know because the download manager will tell them.
Feb 12, 2014 05:18 PM|lax4u|LINK
Thanks
then in above scenario how one should handle it? Should i rethrow the exception or eat the exception? how download manger would know?
try { Download() } catch(Exception ex) { //Log ex here
// rethrow exception here throw }
try { Download() } catch(Exception ex) { //Log ex here // Dont rethrow, eat the exception here }
Feb 12, 2014 05:20 PM|AidyF|LINK
Just ignore exceptions, the connection will be aborted and the user will be told the download failed.
Feb 12, 2014 05:23 PM|lax4u|LINK
But i have to log the exception so we can diagnose why download failed.
That means in catch block after logging just rethrow the exception as below ?????
try { Download() } catch(Exception ex) { //Log ex here
throw; } finally { if (_context != null && _context.Response != null) { _context.Response.Close(); } }
Feb 12, 2014 06:23 PM|AidyF|LINK
You can log the exception ok, you just can't really do anything to inform the client.
Feb 12, 2014 10:16 PM|lax4u|LINK
Thanks, I do understand that i cant infom client. But after logging is done, should rethrow the exception or just leave it to terminate the call? Also note that exception may occur before even download starts, and in that case there wont be any popup on client side.
Feb 13, 2014 04:27 AM|AidyF|LINK
I wouldn't bother rethrowing it. If the exception occurs before you have written anything to the output stream then in that case you can write out a normal html page instead. The issue is only trying to output html *after* data has been sent.
Feb 25, 2014 06:15 AM|rstrahl|LINK
When you're making a link or form submission request from the browser to the server you are requesting a new page and the server will always response with new HTTP content, regardless of whether the request works or not. The result is whatever the server returns and the original output is cleared and new content is set to the result or - if it's a server error a server or ASP.NET error display which in effect becomes the HTTP response.
If you want to request data from within the page, capture the output and take action from the current page you need to use AJAX calls to do that. You can a something like jQuery to make a call to the server and capture the output:
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js" type="text/javascript"> </script> <script> $("button").click(function() { $.ajax({ url: "myhandler.hnd" }) .done(function(resultText) { // update existing page content from server response $("#result").text(resultText); alert('done'); }) .fail(function(xhr, status) { alert('server call failed ' + status); }) .always(function() { alert('always'); }); }); </script>
If you just want to wait for the call to complete and then navigate you can take over the .done() handler and navigate from there with window.location='newpage.html' or whatever...
+++ Rick ---
8 replies
Last post Feb 25, 2014 06:15 AM by rstrahl | https://forums.asp.net/t/1967104.aspx?How+to+write+javascript+from+HttpHandler+s+HttpContext | CC-MAIN-2021-17 | refinedweb | 964 | 64.3 |
Proposed features/University Campus (tertiary education)
Vote
- I approve this proposal. User:David.earl 12.40, 19 December 2006 (UTC). ditto.
- I approve this proposal. --KristianThy 13:33, 19 December 2006 (UTC)
- I approve this proposal. --Batchoy 13:59, 19 December 2006 (UTC)
- I disapprove this proposal. I think we are overusing the amenities namespace. I'd prefer to have education=school, education=college, education=university, ... or something like it. Joto 22:02, 19 December 2006 (UTC)
- I approve this proposal. MikeCollinson 03:10, 20 December 2006 (UTC) I am using it already. I do however think that Joto has a good point and future grouping by retagging as amenity=education education=school / college ... has value. It would make creating maps or lists of specific types of point of interest much easier.
- I approve this proposal, and agree with Mike Collinson about future retagging. TomChance
- I approve this proposal and agree with Mike Collinson. FredB 07:50, 20 December 2006 (UTC)
- I approve the need for a tag, but sorta agree with Joto's point. (as MikeCollinson said) "amenity=education", "education=university" or something simliar. This would be similar way to the shop=bakery proposal. This vote appeared rather fast, budging in front of many others in the line; why? Ben. 03:30 23 Decemeber 2006 (UTC)
- I approve the use of tags such as amenity="education" and education="university|college|school" OlivierB | https://wiki.openstreetmap.org/wiki/Proposed_features/University_Campus_(tertiary_education) | CC-MAIN-2018-51 | refinedweb | 233 | 51.85 |
A package that I'm using in my python program is throwing a warning that I'd like to understand the exact cause of. I've set
logging.captureWarning(True) and am capturing the warning in my logging, but still have no idea where it is coming from. How do I also log the stack trace so I can see where in my code the warning is coming from? Do I use
traceback?
I've ended up going with the below:
import warnings import traceback _formatwarning = warnings.formatwarning def formatwarning_tb(*args, **kwargs): s = _formatwarning(*args, **kwargs) tb = traceback.format_stack() s += ''.join(tb[:-1]) return s warnings.formatwarning = formatwarning_tb logging.captureWarnings(True) | http://databasefaq.com/index.php/answer/263652/python-logging-warnings-stack-trace-log-stack-trace-for-python-warning | CC-MAIN-2019-09 | refinedweb | 111 | 60.61 |
Thymus-Timeseries
An intuitive library that tracks dates and timeseries together using NumPy arrays.
When working with arrays of timeseries, manipulation can easily produce sets of arrays that are mismatched in time or in the wrong order, slow down the analysis, and generally lead to extra time spent ensuring consistency.
This library attempts to address the problem in a way that enables ready access to the current date range, but stays out of your way most of the time. Essentially, this library is a wrapper around NumPy arrays.
This library grew out of the use of market and trading data. Such timeseries are typically composed of regular intervals, but with gaps such as weekends and holidays. In the case of intra-day data, there are interruptions during periods when the market is closed, as well as gaps in trading.
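To make that gap structure concrete, here is a small sketch using only the standard library and NumPy (not thymus itself) that builds an ordinal date series for business days, skipping a weekend:

```python
from datetime import date, timedelta

import numpy as np

start = date(2016, 1, 1)                                # a Friday
week = [start + timedelta(days=d) for d in range(7)]    # Fri .. Thu
business_days = [d for d in week if d.weekday() < 5]    # drop Sat/Sun
dseries = np.array([d.toordinal() for d in business_days])

print(np.diff(dseries))   # [3 1 1 1] -- the 3 is the weekend gap
```

Because the dates travel with the values as an explicit array, a gap like the 3-day jump above never has to be inferred from position alone.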
While the library grew from addressing issues associated with market data, the implementation does not preclude its use in other domains. Direct access to the underlying NumPy arrays is expected, and is part of the point of using the library.
Dependencies
Other than NumPy, there are no requirements.
Installation
```bash
pip install thymus-timeseries
```
A Brief Look at Capabilities.
Creating a Small Sample Timeseries Object
As a first look, we will create a small timeseries object and show a few ways that it can be used. For this example, we will use daily data.
```python
from datetime import datetime, timedelta
import numpy as np

from thymus.timeseries import Timeseries

ts = Timeseries()
```
Elements of Timeseries()
key: An optional identifier for the timeseries.
columns: Defaults to None but is an optional list of column names for the data.
frequency: Defaults to d (daily). Currently supported frequencies are sec, min, h, d, w, m, q, y.
dseries: This is a numpy array of dates in numeric format.
tseries: This is a numpy array of data. Most of the work takes place here.
end-of-period: Defaults to True indicating that the data is as of the end of the period. This only comes into play when converting from one frequency to another and will be ignored for the moment.
While normal usage of the timeseries object would involve pulling data from a database and inserting data into the timeseries object, we will use a quick-and-dirty method of inputting some data. Dates are stored as either ordinals or timestamps, avoiding clogging up memory with large sets of datetime objects. Because it is daily data, ordinals will be used for this example.
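As a side note, the ordinal scheme itself is just plain Python and NumPy; this small standalone sketch (independent of the library) shows the round trip:

```python
from datetime import datetime, timedelta
import numpy as np

# datetime.toordinal() maps a date to an int; fromordinal() maps back.
start = datetime(2015, 12, 31)
dseries = start.toordinal() + np.arange(10)   # ten consecutive days as ints

# Recover the calendar dates from the ordinals.
dates = [datetime.fromordinal(int(d)) for d in dseries]
assert dates[0] == datetime(2015, 12, 31)
assert dates[-1] - dates[0] == timedelta(days=9)
```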
```python
ts = Timeseries()

start_date = datetime(2015, 12, 31).toordinal()

ts.dseries = start_date + np.arange(10)
ts.tseries = np.arange(10)

ts.make_arrays()
```
We created an initial timeseries object. It starts at the end of 2015 and continues for 10 days. Setting the values in dseries and tseries can be somewhat sloppy. For example, a list could be assigned to dseries (the dates) and a numpy array to tseries (the values).
The use of the make_arrays() function converts the date series to an int32 array (because they are ordinal values) and tseries to a float64 array. The idea is that the data might often enter the timeseries object as lists, but then be converted to arrays of appropriate format for use.
The completed timeseries object is:
```python
print(ts)

<Timeseries>
key: 
columns: None
frequency: d
daterange: ('2015-12-31', '2016-01-09')
end-of-period: True
shape: (10,)
```
You can see the date range contained in the date series. The shape refers to the shape of the tseries array. key and columns are free-form, available to update as appropriate to identify the timeseries and content of the columns. Again, the end-of-period flag can be ignored right now.
Selection
Selection of elements is the same as numpy arrays. Currently, our sample has 10 elements.
```python
print(ts[:5])

<Timeseries>
key: 
columns: []
frequency: d
daterange: ('2015-12-31', '2016-01-04')
end-of-period: True
shape: (5,)
```
Note how the date range above reflects the selected elements.
```python
ts1 = ts % 2 == 0
ts1.tseries

[True False True False True False True False True False]
```
We can isolate the dates of even numbers: note that
tseries, not the timeseries obj, is explicitly used with
np.argwhere. More on when to operate directly on tseries later.
```python
evens = np.argwhere((ts % 2 == 0).tseries)
ts_even = ts[evens]
```
This just prints a list of date and value pairs; only useful with very small sets (or examples like this).

```python
print(ts_even.items('str'))

('2015-12-31', '[0.0]')
('2016-01-02', '[2.0]')
('2016-01-04', '[4.0]')
('2016-01-06', '[6.0]')
('2016-01-08', '[8.0]')
```
Date-based Selection
So let us use a slightly larger timeseries: 1000 rows, 2 columns of data. And, use random values to ensure uselessness.
```python
ts = Timeseries()

start_date = datetime(2015, 12, 31).toordinal()

ts.dseries = start_date + np.arange(1000)
ts.tseries = np.random.random((1000, 2))

ts.make_arrays()

print(ts)

<Timeseries>
key: 
columns: []
frequency: d
daterange: ('2015-12-31', '2018-09-25')
end-of-period: True
shape: (1000, 2)
```
You can select on the basis of date ranges, but first we will use a row number technique that is based on slicing. This function is called trunc() for truncation.
Normal Truncation
You will end up with a timeseries with row 100 through 499. This provides in-place execution.
```python
ts.trunc(start=100, finish=500)

# this version returns a new timeseries, effective for chaining
ts1 = ts.trunc(start=100, finish=500, new=True)
```
Truncation by Date Range
But suppose you want to select a specific date range? This leads to the next function, truncdate().
```python
# select using datetime objects
ts1 = ts.truncdate(
    start=datetime(2017, 1, 1),
    finish=datetime(2017, 12, 31),
    new=True)

print(ts1)

<Timeseries>
key: 
columns: []
frequency: d
daterange: ('2017-01-01', '2017-12-31')
end-of-period: True
shape: (365, 2)
```
As you might expect, the timeseries object has a date range of all the days
during 2017. But see how this is slightly different than slicing. When you use
truncdate() it selects everything within the date range inclusive of the
ending date as well. The idea is to avoid having to always find one day after
the date range that you want to select to accommodate slicing behavior. This
way is more convenient in this context.
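For comparison, the same inclusive-endpoint selection can be mimicked on a bare ordinal array with numpy.searchsorted (a sketch, not the library's implementation):

```python
from datetime import datetime
import numpy as np

dseries = datetime(2015, 12, 31).toordinal() + np.arange(1000)

start = datetime(2017, 1, 1).toordinal()
finish = datetime(2017, 12, 31).toordinal()

# side='left' on start and side='right' on finish keeps both endpoints.
i = np.searchsorted(dseries, start, side='left')
j = np.searchsorted(dseries, finish, side='right')
selected = dseries[i:j]

assert datetime.fromordinal(int(selected[0])) == datetime(2017, 1, 1)
assert datetime.fromordinal(int(selected[-1])) == datetime(2017, 12, 31)
assert len(selected) == 365
```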
You can also convert data from a higher frequency to a lower frequency. Suppose we needed monthly data for 2017 from our timeseries.
```python
start = datetime(2017, 1, 1)
finish = datetime(2017, 12, 31)

ts1 = ts.truncdate(start=start, finish=finish, new=True).convert('m')

print(ts1.items('str'))

('2017-01-31', '[0.1724835781570483, 0.9856812220255055]')
('2017-02-28', '[0.3855043513164875, 0.30697511661843124]')
('2017-03-31', '[0.7067982987769881, 0.7680886691626396]')
('2017-04-30', '[0.07770763295126926, 0.04697651222041588]')
('2017-05-31', '[0.4473657194650975, 0.49443624153533783]')
('2017-06-30', '[0.3793816656495891, 0.03646544387811124]')
('2017-07-31', '[0.2783335012003322, 0.5144979569785825]')
('2017-08-31', '[0.9261879195281345, 0.6980224313957553]')
('2017-09-30', '[0.09531834159018227, 0.5435208082899813]')
('2017-10-31', '[0.6865842769906441, 0.7951735180348887]')
('2017-11-30', '[0.34901775001111657, 0.7014208950555662]')
('2017-12-31', '[0.4731393617405252, 0.630488855197775]')
```
Or yearly. In this case, we use a flag that governs whether to include the partial period leading up to the last year. The default includes it. However, when the partial period is unwanted, the include_partial flag can be set to False.
```python
ts1 = ts.convert('y', include_partial=True)
print(ts1.items('str'))
...
('2018-09-25', '[0.7634145837512148, 0.32026411425902257]')

ts2 = ts.convert('y', include_partial=False)
print(ts2.items('str'))
...
```
Points
Sometimes when examining a
tseries, a particular point stands out and you want to investigate it further. When was it? Since this package separates dates and values by design, there needs to be a quick way to find this out.
There are two ways to do this. Suppose the value in question is row 100.
```python
row = 100

# would give you the ordinal/timestamp date
ts.dseries[row]

# gives a datetime object
datetime.fromordinal(ts.dseries[row])
```
This is not particularly difficult, but do it enough times and it feels laborious. To cut down on the typing, there is another way.
```python
# Usage: get_point(rowdate=None, row_no=None)
row = 100
point = ts.get_point(row_no=row)
print(point)

<Point: row_no: 100, date: 2020-04-10, [48.3886577  48.48543501 48.58221233 48.67898964 48.77576696] />
```
This gives all the information in one place, the row number, a meaningful date, and the values of interest.
The point object created contains attributes:
- ts: The originating timeseries.
- row_no: The location within the data.
- date: This ordinal/timestamp in the data
- date_str: This method shows the date in string format.
- datetime: This method shows the date as datetime object.
- values: The values contained in the row.
Note that the
Point class is designed to be an active window into your data. Changing an item in values is a direct change to the timeseries.
Changing the
row_no shifts contents of
values to reflect the data in the new row.
Columns
If you use columns in your timeseries, you can also improve your output.
```python
ts.columns = ["dog", "cat", "squirrel", "cow", "monkeys"]
print(point)

<Point: row_no: 100, date: 2020-04-10,
    dog: 48.38865769863544
    cat: 48.48543501403271
    squirrel: 48.58221232942998
    cow: 48.678989644827254
    monkeys: 48.77576696022452 />
```
The point object uses the columns of the timeseries to create attributes.
The point object now has created the following attributes:
- ts: The originating timeseries.
- row_no: The location within the data.
- date: This ordinal/timestamp in the data
- date_str: This method shows the date in string format.
- datetime: This method shows the date as datetime object.
- values: The values contained in the row.
New Attributes:
- dog: Column 0
- cat: Column 1
- squirrel: Column 2
- cow: Column 3
- monkeys: Column 4
Just as
values is a direct window, these attributes are also a direct window. Changing
point.dog affects the
tseries[row_no][0] value.
With just a few columns of data, it is not hard to remember which is which. However, more columns become increasingly unwieldy.
Iteration
Because the
Point class automatically changes as the row number changes, it can also be used for iteration. A subclassed Point can provide easy programmatic access for calculations and updates with meaningful variable names.
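To illustrate the "active window" design (a hypothetical minimal class, not the library's actual Point implementation):

```python
import numpy as np

class PointView:
    """A live view onto one row of a 2-D array; no data is copied."""
    def __init__(self, tseries, row_no):
        self.tseries = tseries
        self.row_no = row_no

    @property
    def values(self):
        return self.tseries[self.row_no]   # a NumPy view, not a copy

tseries = np.arange(12, dtype=float).reshape(4, 3)
point = PointView(tseries, row_no=1)

point.values[0] = 99.0          # writing through the view...
assert tseries[1, 0] == 99.0    # ...changes the underlying array

point.row_no = 2                # moving the window shifts the contents
assert list(point.values) == [6.0, 7.0, 8.0]
```

Because values is a NumPy view rather than a copy, writes through the point reach the underlying array, and changing row_no simply changes which row the view exposes.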
Combining Timeseries
Suppose you want to combine multiple timeseries together that are of different lengths? In this case we assume that the two timeseries end on the same date, but one has a longer tail than the other. However, the operation that you need requires common dates.
By combine we mean instead of two timeseries make one timeseries that has the columns of both.
```python
ts_short = Timeseries()
ts_long = Timeseries()

end_date = datetime(2016, 12, 31)

ts_short.dseries = [
    (end_date + timedelta(days=-i)).toordinal()
    for i in range(5)]
ts_long.dseries = [
    (end_date + timedelta(days=-i)).toordinal()
    for i in range(10)]

ts_short.tseries = np.zeros((5))
ts_long.tseries = np.ones((10))

ts_short.make_arrays()
ts_long.make_arrays()

ts_combine = ts_short.combine(ts_long)

print(ts_combine.items('str'))

('2016-12-31', '[0.0, 1.0]')
('2016-12-30', '[0.0, 1.0]')
('2016-12-29', '[0.0, 1.0]')
('2016-12-28', '[0.0, 1.0]')
('2016-12-27', '[0.0, 1.0]')
```
The combine function has a couple variations. While it can be helpful to automatically discard the unwanted rows, you can also enforce that combining does not take place if the number of rows do not match. Also, you can build out the missing information with padding to create a timeseries that has the length of the longest timeseries.
```python
# this would raise an error -- the two are different lengths
ts_combine = ts_short.combine(ts_long, discard=False)

# this combines, and fills 99 as a missing value
ts_combine = ts_short.combine(ts_long, discard=False, pad=99)

print(ts_combine.items('str'))

('2016-12-31', '[0.0, 1.0]')
('2016-12-30', '[0.0, 1.0]')
('2016-12-29', '[0.0, 1.0]')
('2016-12-28', '[0.0, 1.0]')
('2016-12-27', '[0.0, 1.0]')
('2016-12-26', '[99.0, 1.0]')
('2016-12-25', '[99.0, 1.0]')
('2016-12-24', '[99.0, 1.0]')
('2016-12-23', '[99.0, 1.0]')
('2016-12-22', '[99.0, 1.0]')
```
The combining can also receive multiple timeseries.
```python
ts_combine = ts_short.combine([ts_long, ts_long, ts_long])

print(ts_combine.items('str'))

('2016-12-31', '[0.0, 1.0, 1.0, 1.0]')
('2016-12-30', '[0.0, 1.0, 1.0, 1.0]')
('2016-12-29', '[0.0, 1.0, 1.0, 1.0]')
('2016-12-28', '[0.0, 1.0, 1.0, 1.0]')
('2016-12-27', '[0.0, 1.0, 1.0, 1.0]')
```
Splitting Timeseries
In some ways it would make sense to mirror the combine() function with a split() from an aesthetic standpoint. However, splitting is very straightforward without such a function. For example, suppose you want a timeseries that only has the first two columns from our previous example. As you can see in the ts_split tseries, the first two columns were taken.
```python
ts_split = ts_combine[:, :2]

print(ts_split.items('str'))

('2016-12-31', '[0.0, 1.0]')
('2016-12-30', '[0.0, 1.0]')
('2016-12-29', '[0.0, 1.0]')
('2016-12-28', '[0.0, 1.0]')
('2016-12-27', '[0.0, 1.0]')
```
Arithmetic Operations
We have combined timeseries together to stack up rows in common. In addition, we looked at the issue of mismatched lengths. Now we will look at arithmetic approaches and some of the design decisions and tradeoffs associated with mathematical operations.
We will start with the add() function. First, if we assume that all we are adding together are arrays that have exactly the same dateseries, and therefore the same length, and we assume they have exactly the same number of columns, then the whole question becomes trivial. If we relax those constraints, then some choices need to be made.
We will use the long and short timeseries from the previous example.
```python
# this will fail due to dissimilar lengths
ts_added = ts_short.add(ts_long, match=True)

# this will work
ts_added = ts_short.add(ts_long, match=False)

[ 1.  1.  1.  1.  1.  1.  1.  1.  1.  1.]
```
The add() function checks to see if the number of columns match. If they do not an error is raised. If the match flag is True, then it also checks that all the dates in both timeseries match prior to the operation.
If match is False, then as long as the columns are compatible, the operation can take place. It also supports the concept of sparse arrays as well. For example, suppose you have a timeseries that is primary, but you would like to add in a timeseries values from only a few dates within the range. This function will find the appropriate dates adding in the values at just those rows.
To summarize, all dates in common to both timeseries will be included in the new timeseries if match is False.
Because the previous function is somewhat specialized, you can assume that the checking of common dates and creating the new timeseries can be somewhat slower than other approaches.
If we assume some commonalities about our timeseries, then we can do our work in a more intuitive fashion.
Assumptions of Commonality
Let us assume that our timeseries might be varying in length, but we absolutely know what either our starting date or ending date is. And, let us assume that all the dates for the periods in common to the timeseries match.
If we accept those assumptions, then a number of operations become quite easy.
The timeseries object can accept simple arithmetic as if it is an array. It automatically passes the values on to the tseries array. If the two arrays are not the same length, the longer array is truncated to the shorter length. So if you were adding two arrays together that end at the same date, you would want to sort them latest date to earliest date using the function sort_by_date().
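Under those assumptions, the truncate-to-shortest convention amounts to something like this plain-NumPy sketch (independent of the library):

```python
import numpy as np

# Two series sorted latest-to-earliest, ending on the same date.
short = np.zeros(5)
long_ = np.ones(10)

n = min(len(short), len(long_))
total = short[:n] + long_[:n]    # the older tail of the longer series is dropped

assert len(total) == 5
assert list(total) == [1.0] * 5
```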
Examples
```python
# starting tseries
ts.tseries
[ 0.  1.  2.  3.  4.  5.  6.  7.  8.  9.]

(ts + 3).tseries
[ 3.  4.  5.  6.  7.  8.  9. 10. 11. 12.]

# Also, reverse (__radd__)
(3 + ts).tseries
[ 3.  4.  5.  6.  7.  8.  9. 10. 11. 12.]

# of course not just addition
5 * ts.tseries
[ 0.  5. 10. 15. 20. 25. 30. 35. 40. 45.]
```
Also, in-place operations. But first, we will make a copy.
```python
ts1 = ts.clone()
ts1.tseries /= 3
print(ts1.tseries)
[0.0 0.3333333333333333 0.6666666666666666 1.0 1.3333333333333333
 1.6666666666666667 2.0 2.3333333333333335 2.6666666666666665 3.0]

ts1 = ts ** 3
ts1.tseries
[0.0 1.0 8.0 27.0 64.0 125.0 216.0 343.0 512.0 729.0]

ts1 = 10 ** ts
ts1.tseries
[1.0 10.0 100.0 1000.0 10000.0 100000.0 1000000.0 10000000.0
 100000000.0 1000000000.0]
```
In other words, the normal container functions you can use with numpy arrays are available to the timeseries objects. The following container functions for arrays are supported.
```
__pow__ __add__ __rsub__ __sub__
__eq__ __ge__ __gt__ __le__ __lt__ __ne__
__mod__ __mul__ __radd__ __rmod__ __rmul__ __rpow__
__abs__ __pos__ __neg__ __invert__
__rdivmod__ __rfloordiv__ __floordiv__ __truediv__ __rtruediv__ __divmod__
__and__ __or__ __ror__ __rand__ __rxor__ __xor__
__rshift__ __rlshift__ __lshift__ __rrshift__
__iadd__ __ifloordiv__ __imod__ __imul__ __ipow__ __isub__ __itruediv__
__iand__ __ilshift__ __ior__ __irshift__ __ixor__
```
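The pass-through behind these container functions can be sketched with a hypothetical minimal class (not the library's code):

```python
import numpy as np

class MiniSeries:
    """Forwards arithmetic to an internal NumPy array, returning a new object."""
    def __init__(self, values):
        self.tseries = np.asarray(values, dtype=float)

    def _wrap(self, result):
        new = MiniSeries.__new__(MiniSeries)
        new.tseries = result
        return new

    def __add__(self, other):
        return self._wrap(self.tseries + other)

    __radd__ = __add__            # 3 + ts works the same as ts + 3

    def __mul__(self, other):
        return self._wrap(self.tseries * other)

ts = MiniSeries([0, 1, 2])
assert list((ts + 3).tseries) == [3.0, 4.0, 5.0]
assert list((3 + ts).tseries) == [3.0, 4.0, 5.0]
assert list((ts * 5).tseries) == [0.0, 5.0, 10.0]
```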
Functions of Arrays Not Supported
The purpose of the timeseries object is to implement intuitive usage of timeseries in a fashion consistent with NumPy. However, it is not intended to replace functions that are better handled explicitly with the dseries and tseries arrays directly. The difference becomes clear by comparing the list of functions on the timeseries object with those on a numpy array. The bulk of the thymus functions relate to maintaining the coordination between the date series and the time series; the meat of the numerical work still lies with the numpy arrays by design.
```
# timeseries members and functions:
ts.add                ts.daterange             ts.get_pcdiffs     ts.series_direction
ts.as_dict            ts.datetime_series       ts.header          ts.set_ones
ts.as_json            ts.dseries               ts.if_dseries_match ts.set_zeros
ts.as_list            ts.end_date              ts.if_tseries_match ts.shape
ts.clone              ts.end_of_period         ts.items           ts.sort_by_date
ts.closest_date       ts.extend                ts.key             ts.start_date
ts.columns            ts.fmt_date              ts.lengths         ts.trunc
ts.combine            ts.frequency             ts.make_arrays     ts.truncdate
ts.common_length      ts.get_date_series_type  ts.months          ts.tseries
ts.convert            ts.get_datetime          ts.replace         ts.years
ts.date_native        ts.get_diffs             ts.reverse
ts.date_string_series ts.get_duped_dates       ts.row_no

# numpy functions in the arrays
ts.tseries.T            ts.tseries.cumsum    ts.tseries.min          ts.tseries.shape
ts.tseries.all          ts.tseries.data      ts.tseries.nbytes       ts.tseries.size
ts.tseries.any          ts.tseries.diagonal  ts.tseries.ndim         ts.tseries.sort
ts.tseries.argmax       ts.tseries.dot       ts.tseries.newbyteorder ts.tseries.squeeze
ts.tseries.argmin       ts.tseries.dtype     ts.tseries.nonzero      ts.tseries.std
ts.tseries.argpartition ts.tseries.dump      ts.tseries.partition    ts.tseries.strides
ts.tseries.argsort      ts.tseries.dumps     ts.tseries.prod         ts.tseries.sum
ts.tseries.astype       ts.tseries.fill      ts.tseries.ptp          ts.tseries.swapaxes
ts.tseries.base         ts.tseries.flags     ts.tseries.put          ts.tseries.take
ts.tseries.byteswap     ts.tseries.flat      ts.tseries.ravel        ts.tseries.tobytes
ts.tseries.choose       ts.tseries.flatten   ts.tseries.real         ts.tseries.tofile
ts.tseries.clip         ts.tseries.getfield  ts.tseries.repeat       ts.tseries.tolist
ts.tseries.compress     ts.tseries.imag      ts.tseries.reshape      ts.tseries.tostring
ts.tseries.conj         ts.tseries.item      ts.tseries.resize       ts.tseries.trace
ts.tseries.conjugate    ts.tseries.itemset   ts.tseries.round        ts.tseries.transpose
ts.tseries.copy         ts.tseries.itemsize  ts.tseries.searchsorted ts.tseries.var
ts.tseries.ctypes       ts.tseries.max       ts.tseries.setfield     ts.tseries.view
ts.tseries.cumprod      ts.tseries.mean      ts.tseries.setflags
```
Other Date Functions
Variations on a theme:
```python
# truncation
ts.truncdate(
    start=datetime(2017, 1, 1),
    finish=datetime(2017, 12, 31))

# just start date, etc.
ts.truncdate(
    start=datetime(2017, 1, 1))

# this was in date order, but suppose it was in reverse order?
# this result will give the same answer
ts1 = ts.truncdate(
    start=datetime(2017, 1, 1),
    new=True)

ts.reverse()

ts1 = ts.truncdate(
    start=datetime(2017, 1, 1),
    new=True)

# use the date format native to the dateseries (ordinal / timestamp)
ts1 = ts.truncdate(
    start=datetime(2017, 1, 1).toordinal(),
    new=True)

# suppose you start with a variable that represents a date range
# date range can be either a list or tuple
ts.truncdate(
    [datetime(2017, 1, 1), datetime(2017, 12, 31)])
```
Assorted Date Functions
```python
# native format
ts.daterange()
(735963, 735972)

# str format
ts.daterange('str')
('2015-12-31', '2016-01-09')

# datetime format
ts.daterange('datetime')
(datetime.datetime(2015, 12, 31, 0, 0), datetime.datetime(2016, 1, 9, 0, 0))

# native format
ts.start_date(); ts.end_date()
735963 735972

# str format
ts.start_date('str'); ts.end_date('str')
2015-12-31 2016-01-09

# datetime format
ts.start_date('datetime'); ts.end_date('datetime')
2015-12-31 00:00:00 2016-01-09 00:00:00
```
Sometimes it is helpful to find a particular row based on the date. Also, that date might not be in the dateseries, and so, the closest date will suffice.
We will create a sample timeseries to illustrate.
```python
ts = Timeseries()
ts.dseries = []
ts.tseries = []

start_date = datetime(2015, 12, 31)
for i in range(40):
    date = start_date + timedelta(days=i)
    if date.weekday() not in [5, 6]:   # skipping weekends
        ts.dseries.append(date.toordinal())
        ts.tseries.append(i)

ts.make_arrays()

# row_no, date
(0, '2015-12-31')
(1, '2016-01-01')
(2, '2016-01-04')
(3, '2016-01-05')
(4, '2016-01-06')
(5, '2016-01-07')
(6, '2016-01-08')
(7, '2016-01-11')
(8, '2016-01-12')
(9, '2016-01-13')
(10, '2016-01-14')
(11, '2016-01-15')
(12, '2016-01-18')
(13, '2016-01-19')
(14, '2016-01-20')
(15, '2016-01-21')
(16, '2016-01-22')
(17, '2016-01-25')
(18, '2016-01-26')
(19, '2016-01-27')
(20, '2016-01-28')
(21, '2016-01-29')
(22, '2016-02-01')
(23, '2016-02-02')
(24, '2016-02-03')
(25, '2016-02-04')
(26, '2016-02-05')
(27, '2016-02-08')

date1 = datetime(2016, 1, 7)    # existing date within date series
date2 = datetime(2016, 1, 16)   # date falling on a weekend
date3 = datetime(2015, 6, 16)   # date prior to start of date series
date4 = datetime(2016, 3, 8)    # date after the end of date series

# as datetime and in the series
existing_row = ts.row_no(rowdate=date1, closest=1)
5

existing_date = ts.closest_date(rowdate=date1, closest=1)
print(datetime.fromordinal(existing_date))
2016-01-07 00:00:00

# as datetime but date not in series
next_row = ts.row_no(rowdate=date2, closest=1)
12

next_date = ts.closest_date(rowdate=date2, closest=1)
print(datetime.fromordinal(next_date))
2016-01-18 00:00:00

prev_row = ts.row_no(rowdate=date2, closest=-1)
11

prev_date = ts.closest_date(rowdate=date2, closest=-1)
print(datetime.fromordinal(prev_date))
2016-01-15 00:00:00

# this will fail -- date is outside the date series
ts.closest_date(rowdate=date3, closest=-1)

# this will fail -- date is outside the date series
ts.closest_date(rowdate=date4, closest=1)
```
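Outside the library, a closest-date lookup over a sorted ordinal series can be sketched with the stdlib bisect module (closest_row here is a hypothetical helper, not the library's API):

```python
from bisect import bisect_left
from datetime import datetime

dseries = [datetime(2016, 1, d).toordinal() for d in (4, 5, 6, 7, 8, 11, 12)]

def closest_row(dseries, when, closest=1):
    """Row of `when`, or nearest row after (closest=1) / before (closest=-1)."""
    target = when.toordinal()
    i = bisect_left(dseries, target)
    if i < len(dseries) and dseries[i] == target:
        return i                       # exact hit
    if closest == 1:
        if i == len(dseries):
            raise ValueError("date after end of series")
        return i                       # first row at or after target
    if i == 0:
        raise ValueError("date before start of series")
    return i - 1                       # last row before target

assert closest_row(dseries, datetime(2016, 1, 7)) == 3              # exact
assert closest_row(dseries, datetime(2016, 1, 9), closest=1) == 5   # next: Jan 11
assert closest_row(dseries, datetime(2016, 1, 9), closest=-1) == 4  # prev: Jan 8
```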
Functions by Category
Output
Timeseries
ts.to_dict()
Returns the time series as a dict with the date as the key.
Usage: self.to_dict(dt_fmt=None, data_list=False)
This has been reworked to include all fields of the timeseries rather than just dates and times, so header information is now included.
For flexibility, the date can be formatted in various ways:
dt_fmt=None Native format depending on frequency but converted to string.
dt_fmt='datetime' Datetime objects.
dt_fmt='str' Converts dates to strings using the constants timeseries.FMT_DATE or timeseries.FMT_IDATE, depending on the timeseries type.
data_list A boolean that signals whether dates should be used as keys in a dict for the values, or whether the dates and values are output as a list.
This matters because some operations are necessary to target specific dates, but it does not preserve order. Or, if data_list is True, then the combination of dates and values are output as a list and order is maintained.
ts.to_json()
This function returns the timeseries in JSON format.
Usage: self.as_json(indent=2, dt_fmt=str, data_list=True)
dt_fmt options are the same as for to_dict
ts.to_list()
Returns the timeseries as a list.
Point
point.to_dict()
This function returns a dict of the point variables.
Usage: to_dict(dt_fmt=None)
Parameters: dt_fmt: (None|str) : Format choice is "str" or "datetime"
Returns: point (dict)
Typical output:
```python
point.to_dict(dt_fmt="str")

{
    "row_no": 100,
    "date": "2020-04-10",
    "dog": 48.38865769863544,
    "cat": 48.48543501403271,
    "squirrel": 48.58221232942998,
    "cow": 48.678989644827254,
    "monkeys": 48.77576696022452
}
```
Miscellaneous
ts.header()
This function returns a dict of the non-timeseries data.
ts.items(fmt=None)
This function returns the date series and the time series as if it is one list. The term items is used to suggest the iteration of dicts, where items are the key-value combinations.
if fmt == 'str': the dates are output as strings
ts.months(include_partial=True)
This function provides a quick way to summarize daily (or less) as monthly data.
It is basically a pass-through to the convert function with more decoration of the months.
Usage:
months(include_partial=True) returns a dict with year-month as keys
ts.years(include_partial=True)
This function provides a quick way to summarize daily (or less) as yearly data.
It is basically a pass-through to the convert function with more decoration of the years.
Usage:
years(include_partial=True)
returns a dict with year as keys
ts.datetime_series()
This function returns the dateseries converted to a list of datetime objects.
ts.date_string_series(dt_fmt=None)
This function returns a list of the dates in the timeseries as strings.
Usage: self.date_string_series(dt_fmt=None)
dt_fmt is a datetime mask to alter the default formatting.
Array Manipulation
ts.add(ts, match=True)
Adds two timeseries together.
If match is True, there must be a one-to-one corresponding date in each time series; if not, an error is raised. If match is False, timeseries with sporadic or missing dates can be added.
Note: this does not evaluate whether both timeseries have the same number of columns. It will fail if they do not.
Returns the timeseries. Not in-place.
ts.clone()
This function returns a copy of the timeseries.
ts.combine(tss, discard=True, pad=None)
This function combines timeseries into a single array. Combining in this case means accumulating additional columns of information.
Truncation takes place at the end of rows. So if the timeseries is sorted from latest dates to earliest dates, the older values would be removed.
Usage: self.combine(tss, discard=True, pad=None)
Think of tss as the plural of timeseries.
If discard: Will truncate all timeseries lengths down to the shortest timeseries.
if discard is False: An error will be raised if the all the lengths do not match
unless: if pad is not None: the shorter timeseries will be padded with the value pad.
Returns the new ts.
ts.common_length(*ts)
This static method trims the lengths of timeseries and returns the timeseries trimmed to the same length.
The idea is that in order to do array operations there must be a common length for each timeseries.
Reflecting the bias for using timeseries sorted from latest info to earlier info, truncation takes place at the end of the array. That way older less important values are removed if necessary.
Usage:

```python
ts1_new, ts2_new = self.common_length(ts1, ts2)
[ts1, ts2, ..., ts_n] = self.common_length(*ts)
```
ts.convert(new_freq, include_partial=True, **kwargs)
This function returns the timeseries converted to another frequency, such as daily to monthly.
Usage: convert(new_freq, include_partial=True, **kwargs)
The only kwarg is weekday=<some value>
This is used when converting to weekly data. The weekday number corresponds to the datetime.weekday() function.
ts.extend(ts, overlay=True)
This function combines a timeseries to another, taking into account the possibility of overlap.
This assumes that the frequency is the same.
This function is chiefly envisioned to extend a timeseries with additional dates.
Usage: self.extend(ts, overlay=True)
If overlay is True then the incoming timeseries will overlay any values that are duplicated.
ts.trunc(start=None, finish=None, new=False)
This function truncates in place, typically.
truncate from (start:finish) remember start is lowest number, latest date
This truncation works on the basis of slicing, so finish is not inclusive.
Usage: self.trunc(start=None, finish=None, new=False)
ts.truncdate(start=None, finish=None, new=False)
This function truncates in place on the basis of dates.
Usage: self.truncdate(start=None, finish=None, new=False)
start and finish are dates, input as either datetime or the actual internal format of the dseries (ordinals or timestamps).
If the dates are not actually in the list, the starting date will be the next viable date after the start date requested. If the finish date is not available, the previous date from the finish date will be the last.
If new is True, the timeseries will not be modified in place. Rather a new timeseries will be returned instead.
ts.replace(ts, match=True)
This function replaces values where the dates match an incoming timeseries. So if the incoming date on the timeseries matches, the value in the current timeseries will be replaced by the incoming timeseries.
Usage: self.replace(ts, match=True)
If match is False, the incoming timseries may have dates not found in the self timeseries.
Returns the modified timeseries. Not in place.
ts.reverse()
This function does in-place reversal of the timeseries and dateseries.
ts.get_diffs()
This function gets the differences between values from date to date in the timeseries.
ts.get_pcdiffs()
This function gets the percent differences between values in the timeseries.
No provision for dividing by zero here.
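One way such a computation could guard against zero denominators is sketched below (hypothetical code, not the library's implementation):

```python
import numpy as np

def pct_diffs(values):
    """Percent change from one row to the next; NaN where the base is zero."""
    values = np.asarray(values, dtype=float)
    prev, curr = values[:-1], values[1:]
    with np.errstate(divide='ignore', invalid='ignore'):
        out = np.where(prev != 0, (curr - prev) / prev * 100.0, np.nan)
    return out

diffs = pct_diffs([10.0, 11.0, 0.0, 5.0])
assert round(diffs[0], 6) == 10.0      # 10 -> 11 is +10%
assert diffs[1] == -100.0              # 11 -> 0 is -100%
assert np.isnan(diffs[2])              # base of zero has no percent change
```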
ts.set_ones(fmt=None, new=False)
This function converts an existing timeseries to ones using the same shape as the existing timeseries.
It is used as a convenience to create an empty timeseries with a specified date range.
if fmt use as shape
usage: set_ones(self, fmt=None, new=False)
ts.set_zeros(fmt=None, new=False)
This function converts an existing timeseries to zeros using the same shape as the existing timeseries.
It is used as a convenience to create an empty timeseries with a specified date range.
if fmt use as shape
usage: set_zeros(self, fmt=None, new=False)
ts.sort_by_date(reverse=False, force=False)
This function converts a timeseries to either date order or reverse date order.
Usage: sort_by_date(self, reverse=False, force=False)
If reverse is True, then order will be newest to oldest. If force is False, the assumption is made that comparing the first and last date will determine the current order of the timeseries. That would mean that unnecessary sorting can be avoided. Also, if the order needs to be reversed, the sort is changed via the less expensive reverse function.
If dates and values are in no particular order, with force=True, the actual sort takes place.
This function changes the data in-place.
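The cheap direction check described above can be sketched as follows (hypothetical code, not the library's implementation):

```python
import numpy as np

def sort_by_date(dseries, tseries, reverse=False, force=False):
    """Return (dseries, tseries) in date order (or reverse date order)."""
    if force:
        order = np.argsort(dseries)           # full sort only when forced
        if reverse:
            order = order[::-1]
        return dseries[order], tseries[order]
    ascending = dseries[0] < dseries[-1]      # compare first and last date
    if ascending == (not reverse):
        return dseries, tseries               # already in the right order
    return dseries[::-1], tseries[::-1]       # cheap reversal

d = np.array([3, 2, 1])
t = np.array([30.0, 20.0, 10.0])
d2, t2 = sort_by_date(d, t)
assert list(d2) == [1, 2, 3] and list(t2) == [10.0, 20.0, 30.0]
```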
Evaluation
ts.daterange(fmt=None)
This function returns the starting and ending dates of the timeseries.
Usage:
```python
self.daterange()
(735963, 735972)

self.daterange('str')
('2015-12-31', '2016-01-09')

self.daterange('datetime')
(datetime.datetime(2015, 12, 31, 0, 0), datetime.datetime(2016, 1, 9, 0, 0))
```
ts.start_date(fmt=None)
This function returns the starting date of the timeseries in its native value, timestamp or ordinal.
If fmt is 'str', returns in string format. If fmt is 'datetime', returns as a datetime object.
ts.end_date(fmt=None)
This function returns the ending date of the timeseries in its native value, timestamp or ordinal.

If fmt is 'str', returns in string format. If fmt is 'datetime', returns as a datetime object.
ts.get_duped_dates()
This function pulls dates that are duplicated. This is to be used to locate timeseries that are faulty.
Usage: get_duped_dates()
returns [[odate1, count], [odate2, count]]
ts.series_direction()
If a lower row is a lower date, returns 1 (ascending); if a lower row is a higher date, returns -1 (descending).
ts.get_date_series_type()
This function returns the date series type associated with the timeseries. The choices are TS_ORDINAL or TS_TIMESTAMP.
ts.if_dseries_match(ts)
This function returns True if the date series are the same.
ts.if_tseries_match(ts)
This function returns True if the time series are the same.
Utilities
ts.date_native(date)
This awkwardly named function returns a date in the native format of the timeseries, namely ordinal or timestamp.
ts.row_no(rowdate, closest=0, no_error=False)
Shows the row in the timeseries
Usage:

```python
ts.row_no(rowdate=<datetime>)
ts.row_no(rowdate=<date as either ordinal or timestamp>)
```
Returns an error if the date is not found in the index
If closest is invoked:

closest = 1: find the closest date after the rowdate
closest = -1: find the closest date before the rowdate
If no_error returns -1 instead of raising an error if the date was outside of the timeseries.
ts.get_datetime(date)
This function returns a date as a datetime object. This takes into account the type of date stored in dseries.
Usage: self.get_datetime(date)
ts.lengths()
This function returns the lengths of both the date series and time series. Both numbers are included in case a mismatch has occurred.
ts.shape()
This function returns the shape of the timeseries. This is a shortcut to putting in ts.tseries.shape.
ts.fmt_date(numericdate, dt_type, dt_fmt=None)
This static method accepts a date and converts it to the format used in the timeseries.
ts.make_arrays()
Convert the date and time series lists (if so) to numpy arrays
ts.get_fromDB(**kwargs)
This is just a stub to suggest a viable name for getting data from a database.
ts.save_toDB(**kwargs):
This is just a stub to suggest a viable name for saving data to a database.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/thymus-timeseries/ | CC-MAIN-2021-21 | refinedweb | 5,702 | 58.99 |
Introduction: How to Control a Motor Through a Serial Port Without a Microcontroller
You don’t need an Arduino, PIC or any other microcontroller to make a robot. It’s possible to connect your project directly to a computer through a usb port, using a serial-usb adapter (or directly through a serial port if your computer has one). This will let you attach wheels to your laptop and drive it around your apartment without having to buy an extra Arduino, or whatever project you can imagine. USB-Serial adapters can be very cheap ($4 USD) so they make a nice alternative to microcontrollers for simple projects.
Need:
USB-Serial adapter (RS-232)
Motor bridge (TA7291P or others)
One or Two DC motors (this project uses 3v motors)
2x 1kΩ resistors
And one of the below (not needed if you have a 5v motor)
5Ω (or less) resistor with 2w rating or higher
Lots of 100Ω or 50Ω resistors with 1/4w rating (what I used)
Step 1: Hacking the Serial Plug (RS-232)
The first thing to do is hack the USB-Serial adapter. There are nine pins, but you only need three. The way a serial connector works is by sending 5v logic signals to say it’s ready to send or receive a message, along with that message itself. Instead of sending any messages, we’ll just be using those 5v logic pins. The data transmit pin is possible to use, but would require an IC to decode it (like a shift register), or a transistor to amplify the weak current it sends. If you do that, binary ‘0’ is a high voltage on RS-232 and binary ‘1’ is low.
On the serial plug, there is a guard around the pins which you might want to remove to make it easier to solder the pins, but you don’t need to. I was lucky that the pins on the serial plug were hollow. I could clip off the ends and slide a wire inside them which held it in place as I soldered. Solder a wire to the DTR, RTS and ground pins. Solder one to the Transmit Data pin (TD) if you want to play around with that too. Since this part will not get hot when operating the circuit, you can insulate it using a glue gun if need be.
Step 2: The Circuit
Next we need to make the circuit. It is a simple motor bridge connection. The only problem is that TA7291P needs 4.5v to operate, but the motors are 3v. There are two ways to fix this: get higher voltage motors or put in a resistor to limit the current. The resistor would need to be 2 - 5Ω with high wattage (2w or more). However if you bought 1/4 watt resistors in bulk (like I did) you can can put enough in parallel to divide that wattage since eight 1/4 watt resistors can take 2w. Since resistors in parallel divide their resistance, 20 resistors at 100Ω = 5Ω (as would 10 resistors at 50Ω). That is a lot of resistors, but at about $0.02 each, that beats buying an extra 2w resistor. You can see my 100Ω resistor cluster in the photo. You don’t need the switch I’ve put into the circuit, but sometimes the serial pins are set to high when the USB turns on which will make your motor(s) rev before your program can start up and turn them off.
With only one motor bridge, to go forward the robot must ‘waddle’ with each motor being turned on for a few ms before the other is turned on. You could also just get another USB-to-Serial cable and set up another motor bridge for better forward control and power.
Step 3: The Basic (Testing) Program
Now all you need left is the program. I used Python, but it is possible to write it in almost any language. Just Google for how to control a serial port in your preferred language. My program connected to the internet to get commands, but you could also use UDP, TCP/IP, or store the commands in an array. Here is the basic code to test if everything is working. After plugging in your USB device, you have to look in your device manager to see what the connection is called. If you’re in Windows, it will be COM6 or something like that. In Linux, it will be something like /dev/ttyUSB0.")
###main loop
c = 1
while c:
ser.setRTS(False)
ser.setDTR(True)
time.sleep(2)
ser.setRTS(True)
ser.setDTR(False)
time.sleep(2)
Step 4: A Sample Program
If you have two motors on one motor bridge, here is sample code for how you can move forward. There are many ways to feed the robot commands like making a list of them or communicating directly with the robot with UDP or TCP/IP. I used an indirect method with AJAX and storing the next command on a web server. Here is a video of my robot moving. The lag in response is because I'm slowly punching in the commands on my webserver (located in Canada) while the robot fetches them (from Japan). The waddle as it moves forward is hardly detectable. It is running without the 5 ohm resistor, and the right and left turns are also operated in pulses so that they aren't too much more powerful than the forward drive. You'll notice the wheels are different in the video because the treads (seen in the other photos) were too weak to support the weight of a laptop reliably. More about the project can be found here.")
### function to waddle forward (for one motor bridge with two motors)
def move_forward(times):
while times > 0:
ser.setRTS(True)
ser.setDTR(False)
time.sleep(0.2)
ser.setRTS(False)
ser.setDTR(True)
time.sleep(0.2)
times -= 1
###main loop
c = 1
while c:
move_forward(6)
time.sleep(2)
5 Discussions
Product Summary
The TA7291P is a bridge driver with output voltage control.
Parametrics
TA7291P absolute
maximum ratings: (1)Supply voltage, Vcc: 25V; (2)Motor Drive Voltage,
Vs: 25V; (3)Reference voltage, Vref: 25V; (4)Operating temperature,Topr:
-30-75 ℃; (5)Storage temperature, Tstg:-55-150 ℃.
Features
TA7291P features:
(1)4 modes avaliable(CW/CCW/STOP/BRAKE); (2)Output current; (3)Wide
range of operating voltage; (4)Build in thermal shutdown, over current
protector and punch=through current restriction circuit; (5)Stand-by
mode avaliable(STOP MODE); (6)Hysteresis for all inputs.
Diagrams
hey dude your on hack-a-day! go take a look!
Woohoo! I get +5 nerd points!
Very interesting.
I suppose that using this method, with some modifications, you could control a step motor, isn't so?
Anything that takes only two logic signals to control would be possible. I assume the data-out plug can be read by a shift register IC, so the sky's the limit! | https://www.instructables.com/id/How-to-Control-a-Motor-Through-a-Serial-Port-Witho/ | CC-MAIN-2018-39 | refinedweb | 1,171 | 69.72 |
Opened 5 years ago
Last modified 5 years ago
#13009 new New feature
provide django.forms field type info for use in templates
Description
My use case is that it would be useful to have access to this info from templates when generating forms, eg:
{% for field in form.fields %} {% if field.type == 'checkbox' %} {# render one way... #} {% else %} {# render another way #} {% endif %} {% endfor %}
FWIW, django.contrib.admin seems to get around this problem in AdminField by adding an is_checkbox attribute, but it seems to me that the type of field being rendered should be easily available in the template, in other words, IMHO this makes sense as a core feature of django.forms.
Change History (5)
comment:1 Changed 5 years ago by russellm
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Triage Stage changed from Unreviewed to Accepted
comment:2 Changed 4 years ago by lukeplant
- Type set to New feature
comment:3 Changed 4 years ago by lukeplant
- Severity set to Normal.
Broadly the idea has merit, but it needs a bit of finesse - specifically, what determines the "type" of the field? Do we just use the type of the input element on the widget? What do we do in the case of multi-widgets?
Unfortunately, the solution used in admin isn't a great reference point. Admin has control of all it's widgets, and the way in which they are used. This isn't true of the broader widget framework, which must integrate with arbitrary external widgets, used in unknown ways. | https://code.djangoproject.com/ticket/13009 | CC-MAIN-2015-18 | refinedweb | 257 | 60.85 |
The .NET Language Strategy
I am constantly aware of the enormous impact our language investments have on so many people’s daily lives. Our languages are a huge strength of the .NET platform, and a primary factor in people choosing to bet on it – and stay on it..
This post is meant to provide that additional context for the principles we use to make decisions for each language. You should consider it as guidance, not as a roadmap.
C#
C# is used by millions of people. As one data point, this year’s Stack Overflow developer survey shows C# as one of the most popular programming languages, surpassed only by Java and of course JavaScript (not counting SQL as a programming language, but let’s not have a fight about it). These numbers may well have some skew deriving from which language communities use Stack Overflow more, but it is beyond doubt that C# is among the most widespread programming languages on the planet. The diversity of target scenarios is staggering, ranging across games in Unity, mobile apps in Xamarin, web apps in ASP.NET, business applications on Windows, .NET Core microservices on Linux in Azure and AWS, and so much more.
C# is also one of the few big mainstream languages to figure on the most loved top 10 in the StackOverflow survey, joining Python as the only two programming languages occurring on both top 10s. After all these years, people still love C#! Why? Anecdotally we've been good at evolving it tastefully and pragmatically, addressing new challenges while keeping the spirit of the language intact. C# is seen as productive, powerful and easy to use, and is often perceived as almost synonymous with .NET, neither having a future without the other.
Every new version of C# has come with major language evolution: generics in C# 2.0, language integrated queries (and much functional goodness) in C# 3.0, dynamic in C# 4.0, async in C# 5.0 and a whole slew of small but useful features in C# 6.0. Many of the features served emerging scenarios, and since C# 5.0 there’s been a strong focus on connected devices and services, the latency of those connections and working with the data that flows across them. C# 7.0 will be no exception, with tuples and pattern matching as the biggest features, transforming and streamlining the flow of data and control in code.
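As a rough illustration (my sketch, not from the post), the two headline C# 7.0 features mentioned above, tuples and pattern matching, look like this:

```csharp
using System;

class CSharp7Demo
{
    // Tuples: return multiple values without declaring a carrier type.
    public static (int Min, int Max) MinMax(int[] values)
    {
        int min = values[0], max = values[0];
        foreach (var v in values)
        {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        return (min, max);
    }

    // Pattern matching: branch on the shape of data, not just its value.
    public static string Describe(object o)
    {
        switch (o)
        {
            case int n when n < 0: return "a negative int";
            case int n:            return "a non-negative int";
            case string s:         return $"a string of length {s.Length}";
            default:               return "something else";
        }
    }

    static void Main()
    {
        // Deconstruction pulls the tuple apart into locals.
        var (min, max) = MinMax(new[] { 3, 1, 4, 1, 5 });
        Console.WriteLine($"min={min}, max={max}");   // min=1, max=5
        Console.WriteLine(Describe(-7));              // a negative int
        Console.WriteLine(Describe("hello"));         // a string of length 5
    }
}
```

Together these streamline exactly the data-and-control flow the post describes: data fans out as tuples, and pattern matching routes it without boilerplate type tests.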
Since C# 6.0, language design notes have been public. The language has been increasingly shaped by conversation with the community, now to the point of taking language features as contributions from outside Microsoft.
The C# design process unfolds in the dotnet/csharplang GitHub repository, and C# design discussions happen on the csharplang mailing list.
Visual Basic
Visual Basic is used by hundreds of thousands of people. Most are using WinForms to build business applications in Windows, and a few are building websites, overwhelmingly using ASP.NET Web Forms. A majority are also C# users. For many this may simply be because of language requirements of different projects they work on. However, outside of VB's core scenarios many undoubtedly switch to C# even when VB is supported: The ecosystem, samples and community are often richer and more abundant in C#.
The Stack Overflow survey is not kind to VB, which tops the list of languages whose users would rather use another language in the future. I think this should be taken with several grains of salt: First of all this number may include VB6 developers, whom I cannot blame for wanting to move on. Also, Stack Overflow is not a primary hangout for VB developers, so if you’re there to even take the survey to begin with, that may be because you’re already roaming Stack Overflow as a fan of another language. Finally, a deeper look at the data reveals that the place most VB developers would rather be is C#, so this may be more about consolidating their .NET development on one language, and less of a rejection of the VB experience itself.
All that said, the statistic is eye-opening. It does look like a lot of VB users feel left behind, or are uncertain about the future of the language. Let's take this opportunity to start addressing that!
This is a shift from the co-evolution strategy that we laid out in 2010, which set C# and VB on a shared course. For VB to follow C# in its aggressive evolution would not only miss the mark, but would actively undermine the straightforward approachability that is one of VB’s key strengths.
In VS 2015, C# 6.0 and VB 14 were still largely co-evolved, and shared many new features: null-conditional operators (?.), NameOf, etc. However, both C# and VB also each addressed a number of nuisances that were specific to the language; for instance VB added multi-line string literals, comments after implicit line continuation, and many more. C#, on the other hand, added expression-bodied members and other features that wouldn't address a need or fit naturally in VB.
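In C# terms, the shared C# 6.0 / VB 14 features look like this (a sketch of mine, not from the post):

```csharp
using System;

class CSharp6Demo
{
    // ?. short-circuits to null instead of throwing; ?? supplies a fallback.
    public static int LengthOrZero(string s) => s?.Length ?? 0;

    static void Main()
    {
        Console.WriteLine(LengthOrZero("abc"));   // 3
        Console.WriteLine(LengthOrZero(null));    // 0
        // nameof keeps identifier strings in sync with the code under refactoring.
        Console.WriteLine(nameof(LengthOrZero));  // LengthOrZero
    }
}
```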
VB 15 comes with a subset of C# 7.0’s new features. Tuples are useful in and of themselves, but also ensure continued great interop, as API’s will start to have tuples in their signatures. However, VB 15 does not get features such as is-expressions, out-variables and local functions, which would probably do more harm than good to VB’s readability, and add significantly to its concept count.
The Visual Basic design process unfolds in the dotnet/vblang GitHub repository, and VB design discussions happen on the vblang mailing list.
For more details on the VB language strategy see this post on the VB Team Blog.
F#
F# is used by tens of thousands of people and shows great actual and potential growth. As a general purpose language it does see quite broad and varied usage, but it certainly has a center of gravity around web and cloud services, tools and utilities, analytic workloads, and data manipulation.
F# is very high on the most loved languages list: People simply love working in it! While it has fantastic tooling compared to most other languages on that list, it doesn’t quite measure up to the rich and polished experience of C# and VB. Many recent initiatives do a lot to catch up, and an increasing share of the .NET ecosystem – inside and outside of Microsoft – are thinking of F# as a language to take into account, target and test for.
F# has a phenomenally engaged community, which takes a very active role in its evolution and constant improvement, not least through the entirely open language design process. It's been an absolute front runner for open-source .NET, and continues to have a large proportion of its contributions come from outside of Microsoft.
On top of the strong functional legacy from the ML family of languages and the deep integration with the .NET platform, F# has some truly groundbreaking language features. Type providers, active patterns, and computation expressions all offer astounding expressiveness to those who are willing to take the jump and learn the language. What F# needs more than anything is a focus on removing hurdles to adoption and productivity at all levels.
Thus, F# 4.1 sees vastly improved tooling in Visual Studio through integration with Roslyn’s editor workspace abstraction, targeting of .NET Core and .NET Standard, and improved error messages from the compiler. Much of the improved Visual Studio tooling and especially the improved error messages are a direct product of the strong F# open source community. Looking down the road, we intend to work both with the F# community and other teams at Microsoft to ensure F# tooling is best-of-breed, with the intention of making F# the best-tooled functional programming language in the market.
F# language design unfolds in the language suggestion and RFC repositories.
Conclusion
Hopefully this post has shed some light on our decision-making framework for the .NET languages. Whenever we make a certain choice, I’d like you to be able to see where that came from. If you’re left to fill in the gaps, that easily leads to unnecessary fear or speculation. You have business decisions to make as well, and the better you can glean our intentions, the better informed those can be.
Happy hacking!
Mads
Comments
I am sincerely happy to see that C++/CLI did not make this list.
C++ is run/managed by another group at Microsoft, who lost all love for .NET a long time ago. Your unwarranted concern about C++/CLI is largely misplaced, for more than one reason…
An inclusive sentiment would do better. It's one of the strengths of the .NET platform (a core fundamental idea of CIL) to support many programming languages… With an open-source mindset, the .NET ecosystem would do well to spread as far and wide as possible… Take a moment, think about it!
Well said, Mike (hey, nice name!). What seems particularly interesting to me is that the infrastructure isn’t in place to co-evolve these languages (and others) with as little friction as possible when new ideas (and manifested features) emerge. I know, pie-in-the-sky thinking, but .NET truly is about the ability to support multiple languages and code towards the same set of libraries, tooling, and frameworks. Roslyn is a good example in helping to assist in this goal. Seems like this should be something that the top of the top architects are living and breathing each and every day, to ensure that all .NET languages are supported and nurtured with the same passion and energy, leaving none at risk of ever falling behind.
If you are afraid of C++/CLI don’t use it.
I like to dive into the internals of technologies.
If I know something of the internals of .NET, I owe it to C++/CLI and … surprise … even more to Managed C++
How do you do the following in C# or VB.NET?
__gc class G
{
public:
int i;
};
__value class V
{
public:
int i;
};
V __gc * __nogc * __nogc * pppV; // pointer to pointer to handle to class
G __gc * __gc * __nogc * pppG; // pointer to handle to handle to class
Can you do this in C#, VB.NET or whatever you like?
With C++/CLI (what I show is C++/CLI’s ancestor) I can do ANYTHING you can in C#, VB.NET or whatever.
I posted the above in the wrong place.
That was a response to “kantos”.
It's surely a great advantage of C# that it does not allow such constructions.
I think that it’s wonderful that we have a portfolio of languages on .NET that appeal to different people. I don’t think this is the place to be judgmental about other people’s choice of favorite language.
So sorry.
Please delete my comment.
Very elegantly put, Mads.
I use and rather like the VB syntax, and I also care a lot about programming and code craftsmanship. What I personally don’t care for is anti-VB snobbery, or the need to dismiss developers who chose different paths.
At the beginning of .Net, we were mainly using C++, so C# helped the transition. Since then, I have used VB .Net and find it hard to go back to C# which in my opinion should be deprecated.
VB .Net is easier to learn, write, maintain so they say “it’s for beginners”. Wrong. It’s actually more efficient. It’s for those who care about efficiency.
Microsoft chooses to artificially give more “advantages” to C# over VB .Net.
I am sincerely Unhappy to see that C++/CLI did not make this list.
Mads, I might be wrong, but C# feels like it's getting more functional features piece by piece, without any cohesion. Many features like tuples, nested functions, pattern matching etc. aren't as powerful as they could be, but are mere code reducers rather than being part of a paradigm. I would personally want C# to approach C++, which became a cool language again, rather than becoming functional (I believe functional programming is the future, though). Remember the original mission of C#: to be something in the middle ground between Java and C++. If I want real functional programming I would go with F# rather than trying to survive with those half-baked functional features. C#, on the other hand, should focus on more imperative and OOP features.
For example, it is tedious to implement IDisposable; the language could certainly help with this. Also, non-reified generics (which exist in F#) would be immensely helpful. Another thing is Java streams, which are a breeze. GoF patterns: what is the most efficient way to implement a Singleton correctly? Just like async/await, which changed the entire industry, you see? There are lots of patterns like these waiting to be exploited by the language; otherwise they are difficult to do correctly. There's definitely some room for these.
Sounds like you’re looking for a framework. Plenty of those exist, Google it.
To be fair some languages have some of the features baked in e.g. scala has support for singletons out of the box. Otherwise yes agree most of the don’t sound like language features to me. And I’m not sure what is meant about F# and non-reified generics either – the person who drove the whole push for reified generics in the first place was Don Syme.
Maybe Onurg mentions compile time resolved generic code, like F# SRTP or C++ templates.
I meant Generics like java, c++ or f# static generics.
See this :
Basically Nemerle uses lisp style macro generators to ensure the pattern is implemented easily.
What library would allow me to implement IDisposable correctly?
public class CustomDisposable : IDisposable
{
    private readonly Action onDispose;
    public CustomDisposable(Action onDispose) { this.onDispose = onDispose; }
    public void Dispose() => onDispose();
}
Implementing a Dispose Method:
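For reference, a minimal sketch of the standard dispose pattern that the linked doc describes (the class name here is illustrative, my own):

```csharp
using System;

// Sketch of the standard dispose pattern for a type owning resources.
class Resource : IDisposable
{
    private bool disposed;

    public void Dispose()
    {
        Dispose(disposing: true);
        GC.SuppressFinalize(this); // nothing left for a finalizer to do
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;      // Dispose must be safe to call repeatedly
        if (disposing)
        {
            // release managed resources here
        }
        // release unmanaged resources here
        disposed = true;
    }
}

class Demo
{
    static void Main()
    {
        using (var r = new Resource())
        {
            // work with r
        } // Dispose runs here, even if an exception is thrown
        Console.WriteLine("disposed cleanly");
    }
}
```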
How about eliminating IDisposable completely!? The oversight surrounding deterministic disposal of non-memory resources (aka copying Java's oversight) takes the really great, modern C# language and reduces it to the C-style malloc/free pattern in this regard. All the burden is on the programmer to not only implement IDisposable correctly but to also USE IDisposable objects correctly. If you think people are good at this, just look at the number of snippets on Stack Overflow where people forget to dispose or wrap their code in using(). Later some helpful commenter points out that they should have used using. It doesn't have to be this way.
> Java streams, which are breeze
we have had LINQ for many years
See the difference
Or here:
That really looks like the Reactive Framework to me.
I’ll take that farther… many of the new features in C# feel like they’re trying to turn C# into the next Python/Perl/Javascript, which worries me a lot. The core problem with those languages is that they were designed on the back of a napkin and became successful without any real management. As a result, they’re a hodgepodge of design concepts that change regularly.
While the idea of pattern matching built into the language sounds good – where does the line get drawn? Pattern matching can be ‘baked’ into the language by way of operator definitions. That’s the proper mechanism for extending the language. If it’s not up to it, then fix that. But this idea that tons of practice specific features should be glued into the language is a bad one.
The link to the F# RFC’s is pointing to the same URL as the language suggestions, I think it should be:
Oops! Thanks for catching that. Fixed!
Possibly C++ isn’t mentioned here because it isn’t really a Microsoft language unlike the others mentioned here. There are also other languages on the CLR I’d like to see improve, particularly Python. The JVM is much better for choice and the .NET Framework really could do with at least one dynamic language being a first class citizen.
I’m a strongly-typed-language person, myself, but I can see value in promoting a dynamic language within the .NET Framework.
Powershell is .Net’s dynamic programming language
That to me implies you see "dynamic languages" as merely a euphemism for "scripting languages", i.e. not something you'd want to write proper software in?
Ugh, yeah I forgot about that one. It’s so bloody verbose though and not a language anyone is going to use to build anything. I meant something like Python or Ruby.
I would count PowerShell as the dynamic language for .NET… and I'm saying that as a person who uses C# daily at work and Python at home for my own projects.
I am reluctant to count PowerShell as a programming language.
PowerShell is dynamic C# with dollar signs and, as you say, not a full programming language.
PowerShell is a dynamic .NET language!
I posted my comments about VB on the VB blog if you care to read them. It’s a powerful language that shouldn’t be left behind.
Sorry, but I would politely disagree. As someone currently working on cleaning up a tangled VB/C# mess for a client that has a mixed VB/C# code-base developed by a host of "dotnet" developers, it really does need to be end-of-life'd. It's just not different or distinct enough from C# to warrant continued investment (by MS), given the costs I see first-hand that are incurred by keeping both around.
Dual-language projects may be at risk of turning messy, but there are surely less drastic ways to manage that than eradicating one of the languages involved!!
I absolutely agree that more languages with different strengths/weaknesses make the world richer. But C#/VB.NET are, for all practical purposes, the same language with different syntax, and very often organizations end up with mixed projects because they are both ".NET languages". My point is that, working with both daily, I see no benefit in having multiple languages, and I incur the cost of dealing with both. Plus the communities are split and MS incurs costs supporting both. (Moral of the story: find a different job.)
Diversity of tools is good, so why not another language that’s quite different than either C#/VB.NET? Let’s say a clojure like language, with a bit more syntax, immutable data, organized by unit of code, interpreted at dev time for interactive dev and compiled with corert for runtime? I can dream LOL 🙂
Microsoft has been trying for years to make VB6 programming go away. And VBA programming too. But Microsoft failed.
As you say “Having more languages does make the world more complicated, but it also makes it richer and more beautiful. Let’s respect and embrace that people have different preferences, and make different choices!”
So now bring back VB6 and forget the imposter.
I’ve been doing C# for almost fifteen years now, and now VB kind of reads better to me, while I hated it when I had to start dealing with it instead of C#.
But more importantly, VB caters to a lot of people who don't frequent Stack Overflow and aren't primarily developers. Basic is also a language with a long history at MS.
C# users should try to see that C# is not the panacea for everyone (and learn languages other than C-based ones).
I'd argue VB should consider adding global type inference and work on reducing the amount of ceremony before getting code to run; for users who aren't primarily developers, having to specify types instead of just using them is a barrier to getting their job done.
Mike, VB also has better type inference than C# (C# 7 brought local functions, but try to declare a local lambda Func/Action in VB and C#; VB doesn't need the type name).
There are things in VB that are still more elegant than in C#.
I was a VB6 developer and I loved it; I moved to VB.NET and loved it. Last year I moved to C# and it's the greatest. All new development is in C#, but we will never migrate our VB.NET apps to C# because they just work. These apps will be around for many decades. Microsoft should support the language forever, but should stop adding features to VB.NET, because we don't need new features; we need solid environments to maintain these systems. Billions were invested in VB6 applications. I still see VB6 applications everywhere; these applications will still be around for another 100 years because they just work.
The planet will be uninhabitable in another 100 years. Maintaining your VB 6 code will be the least of your problems.
LOL now THAT’S the spirit, Jon!!!
@mike, completely agree: having two languages doing the same thing, from the same company, helps nobody; all resources spent on improving VB.NET could be spent on making C# better. Another issue today is that different languages are used for the client (JavaScript) and server (C#) side. Browsers could support client-side C#. A white list of namespaces and methods, similar to JavaScript, is needed. With C# running on the client side, so many math, encryption and other functions would be available immediately. There would be huge savings on development costs.
@Evaldas, FWIW WebAssembly (WASM) is 20x faster than JavaScript and is already deployed in Chrome. There is a (very popular) vote out there for MSFT to embrace this and bring .NET back into the browser:
This would make .NET accessible in the browser, and when paired with Xamarin’s contributions of iOS/Droid accessibility, would truly make .NET a ubiquitous platform.
What you are fighting with is bad programming, not a bad programming language. I’m sure everyone here understands this, but it seems to be forgotten in these discussions: C# and VB both run against the .Net CLR. There is no difference. The difference is all in the syntax (keywords, language constructs, etc.). If you want to start condemning languages because people have made messy code using them, where does it end? I know that I have had to clean up some pretty horrible C# projects in the past. The fact that the project you are fighting through (and I do feel for you, btw) is a mixed-language one tells me that it was not well planned or maintained and would be bad in any language. To blame VB (or C# for that matter) doesn’t make much sense.
I hear you. It’s nice to see that it remains a first class citizen. I started in VB almost 15 years ago. I did some pretty interesting Enterprise applications in the language, Winform and Webform. Some pretty big apps. Totally RAD!
I spend more time in C# now, and every once in a while I would see some hip developers bashing the language. I can understand some devs are passionate. Here is the thing… in some companies, turnaround must be fast for internal apps. VB allowed me to deliver. Yeah, C# can be lightning fast, but most of the time internal users don't really care for something being 1 second faster.
I once did an entire EDI application in 4 days…complete with handling invoices, credit memos, acks, AS2, certificate and a bundle of other features…all in VB.
I also taught several people to code in VB. They had no experience whatsoever in programming. In 8 months they were fluent. There is something about the language structure which allows me to explain it much more easily to someone vs. C#… but that's just me.
8 months to become fluent in a language?
I'm a C++, Java, JavaScript, C# and VB developer. The language I prefer is VB. I always develop Windows applications and ASP.NET MVC in VB. Quicker, more powerful, more readable, more organized.
Regarding your VB/C# dev community, I would imagine most are stuck with VB because of a legacy codebase. VB served its purpose as a bridge from classic ASP. I think the community would be best served if you provided tooling through Roslyn to safely port VB to C# and swapped VB for F# in the #2 spot for languages that Microsoft cares about. F#'s small community is vibrant in spite of what some would consider neglect on Microsoft's part. VB's community is more stagnant. I don't mean to sound confrontational to VB devs, but VB is hard to hire for and somewhat sad to have to train junior devs in. Functional programming may be hard to hire for, but at least the paradigm pays dividends in terms of improved testability, correctness, etc. Apologies ahead of time if this kicks off a flame war. It was not my intention.
I find it interesting that VB is specifically called out as a first class citizen but F# is not.
It also seems strange that F# is owned by the F# Foundation, whereas C# and VB.NET are part of the .NET Foundation.
Are there any plans to bring F# into the first class citizen category in the future?
The “citizen” comment was in the narrow context of ensuring that C# and VB continue to work great together across assembly boundaries. That story is a little more complex with F#, because it already puts many things on the assembly boundary that C# and VB cannot understand.
We don’t actually have categories of 1st and 2nd class citizenship that we operate by :-).
F# was way ahead of C# and VB on the Open Source front, so they had a nice foundation going before the .NET Foundation was even thought of. So that’s more a historical thing.
Interesting. Thanks for the response (and the awesome work on C#!)
Any update on non-nullable types? I have eagerly been waiting for years (Spec# c. 2004?), but it keeps being delayed. I remember it being planned for C# 6 or 7, but there must have been a NRE somewhere along the way. Any updates on the status would be appreciated.
Ben, check out the Cobra language, which has been out since 2008 and has no nil references, plus plenty of other cool stuff.
Microsoft should have hired the guy behind that language.
I encourage you to take a look at the new repo for C# language design. While we’re still “moving in”, the proposal for non-nullable types was actually the one I used as a “guinea pig” when setting up the repo. So there’s a proposal, and some relevant language design notes to be found there.
The short story is that we are eager to do this feature (in some form) in C# 8.0, whenever that will be.
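For readers who haven’t followed the proposal, the rough shape under discussion (the directive and warning behavior here are taken from the draft proposal and may well change before C# 8.0 ships) is that reference types become non-nullable by default, with `?` opting in to nullability and flow analysis producing warnings on unguarded dereferences:

```csharp
#nullable enable  // opt-in switch as drafted; the final mechanism may differ

string name = null;       // warning: null assigned to a non-nullable reference
string? maybeName = null; // fine: explicitly nullable

static int Length(string? s)
{
    // Dereferencing s directly would produce a warning;
    // the null check below lets flow analysis see it is safe.
    return s == null ? 0 : s.Length;
}
```

Crucially, these are warnings rather than errors, so existing code keeps compiling while it is annotated incrementally.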
Really glad to hear about non-nullable types. This is the biggest reason why we are moving away from C# and writing more projects in F#. Most of the developers on my team are on-call and it’s no fun to get called in the middle of the night for null pointer exceptions 🙂 While it’s great the C# is getting other features that are nice to have, issues that stem from null, and immutability are real problems for us even though we are aggressive with null checks and other workarounds (creating maybe types). Even though we are doing more projects in F# that has help tremendously, we will never be 100% F# so it’s good that you guys are looking at solving this problem.
Lee, has your team considered Kotlin or server-side Swift?
They are both object-oriented and perhaps years ahead of where C# sits on nullability. It mostly depends on what frameworks you are counting on: are you looking to leverage ASP.NET/EF, or is it more general job processing and JSON workloads that don’t need a full-blown web server?
This is especially interesting when it can be used to make arrays of non-nullable types that have a memory layout similar to arrays of structs! There are a lot of huge performance gains to be made related to CPU cache misses.
so C++ is dead?
Finally.
3… 2… 1… another poster comments: “One does not simply kill C++!”
Well, I never thought that I could witness a language way better than C#. But then I started to learn F#. And now I’m beginning to understand what the hype in the Unity feedback section is all about. I’m not done learning F#. But so far there is no chance in hell that I would want to go back to C# for game development.
I started to make a finite state machine in C#. Now I have made one in F#, and C# looks like a mess compared to F#. Enums appear pathetic compared to unions.
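To make the comparison concrete, here is a minimal sketch (using a made-up door example, not the commenter’s actual code) of what an F# discriminated union buys over a plain enum: cases can carry data, and the compiler warns when a match forgets a case:

```fsharp
// States of a hypothetical door FSM; Locked carries data, which a C# enum cannot.
type DoorState =
    | Open
    | Closed
    | Locked of code: int

type DoorEvent = Push | Pull | Lock of int | Unlock of int

let transition state event =
    match state, event with
    | Open, Push -> Closed
    | Closed, Pull -> Open
    | Closed, Lock code -> Locked code
    | Locked c, Unlock code when c = code -> Closed
    | s, _ -> s  // any other event leaves the state unchanged

// transition Closed (Lock 42) evaluates to Locked 42
```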
Indeed, once tooling catches up to F# (VS and otherwise — but more importantly something from JetBrains), I will be tempted to make the switch. I’ve been keeping an eye on Mark Seemann’s great blog and his adventures in F#. Here is the first post that made me realize F#’s value, mainly in that it essentially forces you to make better code, leading to less development time (and therefore lower TCO):
I highly recommend subscribing to his blog if you haven’t already.
F# is missing a vital tool of the ML toolbox: functors (or in other words: parameterised modules). Without them, we’re so to speak ‘stuck on the first floor’ of type-driven abstractions. Don Syme has I think been reluctant to add them for fear of breaking compatibility with C#. But the thing is–we already have things in F# that C# can’t do, like units of measure. And, functors in F# could be ‘defunctorised’ when compiled to MSIL–i.e. erased and left only with concrete usage. As evidence: consider SML.NET ( ), a precursor to F# which did accomplish this. So technically there doesn’t seem to be a showstopper.
What does seem to be blocking F# is this mindset that they have to be in lockstep with C#. I think it would go a long way if the .Net languages team would give the signal that F# should go ahead and explore the ML-language design space fully. I think that would definitely galvanise the F# community.
I think it’s less about compatibility and more about how much mental baggage is required to understand most of the language. F# is certainly not the most complex functional language in existence (honors there would probably be Scala/Haskell/Idris types), but it has quite a few foreign concepts for the typical OO developer to understand before that can be productive.
In my opinion it is important to evolve the language even if not every person would be able to understand its every aspect. There is a difference between understanding language features well enough to write new libraries and well enough to use them. For example, many C# developers do not understand in detail, even today, how exactly LINQ works (I have interviewed many of them), and that does not stop them from using LINQ. Another example is type providers. How many F# devs really understand how they work, or can actually create their own provider? Not many, yet they happily use type providers that smarter people created. So my point is: let smart people have the tools they need, and everybody will benefit.
I don’t think it’s realistic to expect to understand most of a language before you try to become productive in it. In F# you can be quite productive knowing a few basic functional concepts like ‘modules can contain types and values’, ‘functions are values’, and ‘function parameters are curried by default but can also be tupled’.
About Don Syme’s exact reasoning, I may have misread him slightly. Here’s his detailed explanation of why he so far hasn’t wanted to add parameterised modules to F#:
Personally I feel some of those reasons can be addressed by a slight shift in thinking. Imho, they’re not insurmountable.
Great to hear that you will make F# the best-tooled functional language on the market! Give us the tooling and perhaps it would not be long till we take on VB (in terms of numbers).
Although it says “Visual Basic is used by hundreds of thousands of people,” I think that is just a way to avoid saying “there are about 1 million VB users,” the same way that saying “C# is used by millions of people” is just a way to avoid saying “there are almost 2 million C# users.”
I wish they disclosed the amount of F# users, is it 90k or 10k?
Fair point. Our usage numbers are aggregated from non-public sources, which is why I am only at liberty to publish these vague figures. However, I can say that the implication of those figures is not misleading: there really IS roughly an order of magnitude from C# down to VB, and again from VB down to F#.
Latest TIOBE index has VB.NET climbing in usage to #6. C# is #4. Together they exceed C++. That is something that brings great pride to the .NET community.
F# is a beautiful and elegant language that makes all your transformations from one immutable structure to another immutable structure dreams come true. Type providers – amazing.
Back to VB – any other organization with a top-10 language would be aggressively promoting it to as many platforms and use cases as possible. Wherever there is .NET there should be C#, VB, and F# support. Period. This is what should define a “first class language” – tooling support, platform support, use case support, aggressively improving the language.
I know there are a lot of metrics out there and you feel it useful to continue to cite the StackExchange opinion poll conflating all versions of VB together (I was at Build when you did that). While you do speak to that here in your post it might also be reasonable to include something like TIOBE in the future.
I’ll be back for my 6th Build and I’ll be wearing my shirt.
That’s supposed to say: I’ll be wearing my VB shirt (there were superhero brackets flanking VB).
TIOBE is extremely unreliable; VB.NET placed above JavaScript (are you kidding me, lol). MS usage numbers would be a lot more accurate, although they aren’t public. I would guess that VB.NET has 1/10th the users that C# has. And JavaScript has more users than C#…
Take a quick look at github.com, Stack Overflow, etc. and you’ll see how small a user base VB.NET has.
It would be better to use RedMonk or IEEE instead of TIOBE.
C# is the most advanced language I’ve encountered, and I have thought so every time I’ve used it since its debut some 17 years ago. I love it. Unfortunately, I mostly work with JavaScript (Node.js / web…) and Java, but I do occasionally work on smaller projects in C#, and it feels a whole generation or more ahead of Java/JS. Features that appeared in C# two or more versions ago are only now appearing in Java, usually crippled and less daring. And JavaScript feels so awkward; I wish TypeScript would evolve into something close to C#.
Anyway, great work!
It is great to see C# and the other .NET languages move forward and broaden, but it’s a shame the Visual Studio IDE is walking in the opposite direction, with features like the annoying, unremovable light bulb (a.k.a. the paper clip of Visual Studio) or the atrocious new auto-formatting system, which forces you to either use Microsoft’s approved style or switch to Notepad++ to edit your files.
Stop automodifying my code!
Check out the Code Style Configurations in the new Visual Studio 2017 release –
“Stop automodifying my code!” – I agree completely, but surely you must be able to turn off auto-modification of code in VS2017? Because if not, then I can’t use VS2017 for health reasons (my blood boils after just a few seconds of auto-modification).
F# and C# are good languages, but I really miss a Prolog. A logic programming language would be helpful for the upcoming ecosystem of 3D for everyone, and for visualization and simulation with HoloLens etc. Adding a logic programming part to my C#/F# software would be powerful. My 2 cents.
Ugh!
A language that does not know when a line has ended is not a sophisticated programming language. And a language whose name cannot be pronounced is not a language at all…
VB is the more advanced language.
VB is good in the sense of “yesterday I bought a computer and today I want to be a programmer.” But for real projects, for the big guys?!! C’mon, leave your toy Basic alone and don’t dirty IT!
F# – nothing but hype, AND MS WAS CAUGHT. “Functional programming” is history, a past era of IT’s growth. We had many things like “hierarchical databases”, PL/1, Fortran, “stored procedures”… now all of that is obsolete. OOP proved its applicability to complex systems, and FP made no revolution here. Mainly because WE THINK IMPERATIVELY. Deep thinking in functions is alien to humans; that’s why MS just WASTES TIME on old rubbish like Caml.
If thinking in functions is so difficult, then why is JavaScript the most-programmed language today? JavaScript is technically classified as OO, but the majority of JavaScript is written in the functional style. The popularity of functional programming languages has only grown over the last couple of years, so don’t write off F# just yet. Multi-core/multi-threaded programming is a lot easier in F# due to functional programming’s preference for immutable data structures, so that alone can drive plenty of future growth as all development shifts to more and more cores.
I’ve only known one or two people who prefer C# over F# among those who have honestly tried and programmed in both; the productivity boost is unbelievable. So if MSFT starts tooling and promoting the language properly, it can grow a lot faster than C# and other mature OO languages and finally achieve some critical mass.
This is the point where I will once again urge folks not to fight over which language is best – at least not here!
I am extremely proud of the fact that we have three unique languages on .NET (and in fact many more, as some people have kindly pointed out), all of which have passionate and devoted followers.
I simply cannot understand what can possibly be achieved by going out of your way to state that somebody else’s favorite language or paradigm is inferior!
This is the point where I kindly remind everyone that having a commenting system that provides a way of upvoting (and in this case downvoting) is incredibly useful and valuable in filtering this kind of content, and ensuring the quality bubbles to the top, where it belongs. 🙂
+1 🙂
+1
Most people who bash VB.NET don’t realize that it is not the same as VB6. VB.NET has the same capabilities as C#. Everybody who says it’s not for the “big guys” has missed the whole concept of VB.NET. It is easy to learn, easy to read, and allows a developer to improve and use new features for years on end. There are currently no restrictions on writing your code using the most modern concepts, just like in C#. But its language design, which is just a bit different from C#’s, is easier to learn and easier to read.
We have been developing in VB.NET for 12 years. The biggest program suite now has 1.5 million lines of code. All of our developers who started in our company with a preference for C# changed their minds and became enthusiastic about VB.NET after programming in the language for a couple of months.
Hopefully Microsoft will reconsider this decision. It is fantastic to be able to develop a huge program and everybody, beginners and experts, can use the same language.
I can’t even tell you how many times I have had the pleasure of unwinding craptastic C# code. The Internet is absolutely filled with terrible, inefficient C# code examples. I also have large-scale LOB systems in Visual Basic that are much more efficient and maintainable than your over-engineered C# code bases will ever be.
One of the worst myths ever allowed to fester in this profession is the idea that somehow your language of choice makes you more professional. It doesn’t. If you write crap code, it won’t matter what you choose. If you strut around with your arrogant tones dissing others so that you can feel better, you will be found out and you will end up on the side lines wondering why you aren’t getting work anymore.
Seriously? VB is not for real project?
I use VB on every one of my own projects because it allows me to code 1.5-2 times faster than C# on a large scale.
Now here is an interesting fact: most VB.NET devs are also professional C# users. But not vice versa: most C# devs are guys who came from other platforms, attracted only by the C-like syntax of C#; due to a lack of fundamental .NET knowledge they make huge errors in design: no architecture, memory leaks, and CPU-killing loops… but they are so proud of being C# “devs”.
C’mon, grow up: it is not the language that determines success. It is possible to write first-class, high-load services in VB.NET, just as it is possible to fail completely with C#.
I’m thankful Microsoft is allowing your team to keep VB.Net parallel with C#. Where I currently work, management are huge VB.Net fans and our new MVC5 web applications are built with VB.Net. I’m a C# fan, but it’s not about me. Overall, it’s been a great experience and VB.Net is not all that bad.
Yip, I had the same experience. I had to do some work in VB for a client because the client requested VB, and it was actually not that bad; took me back.
Mads, while I agree with the general policy on language evolution, you’ve really understated a very important point. You are moving language design out of the Roslyn repo and into a separate repo that primarily relies on mailing lists? I’m not sure why you need to copy every aspect of other language design teams; I rather prefer easily searchable GitHub issues over an email list, which feels like a method of language design 20 years out of date. Not to mention GitHub can and does send you emails when an issue is updated. This feels like a major step backwards.
I also am disappointed that there is no announcement about new leadership for the language team or a better policy to involve the community. The very fact that this major decision to revamp the language process was made, again, with zero community input shows how sadly broken your model of “open source language development” is.
I did focus on the bigger news in the post, which was the language strategy. What mechanisms we use to organize our C# and VB language design process is a relatively inferior point, of little interest to most of the community.
This is probably worth a blog post at some point, but here’s the quick rundown of why this change happened:
– Cohabitation with the Roslyn implementation effort got incredibly and unsustainably noisy for both sides, so we needed to move language design out
– While we were at it, we decided to give C# and VB design each their own home, to facilitate more clarity about what happens with each
– Also while we were at it, we decided to separate broad discussion of ideas from the working documents of the Language Design Teams, which are the governing bodies of the two languages. That way, the casual observer has a much higher chance of gleaning what is actually moving forward towards implementation, without being overwhelmed by the vast amounts of discussion that we (thankfully) have.
– We picked mailing lists for the discussion part. Could have been something else. Yes, we had a big debate about it, and there were strong feelings on both sides, and there are pros and cons of all approaches.
– We did base this in part on broader community input, and also discussed the move ahead of time with the .NET Foundation Technical Steering Group.
Our overall aim here is to provide greater clarity and visibility around all aspects of the language design process. I look forward to your continued contribution in this new framework!
I appreciate and agree with the need to make changes from time to time, but I still take issue with this statement:
“We did base this in part on broader community input, and also discussed the move ahead of time with the .NET Foundation Technical Steering Group”
I follow the Roslyn Github repo and saw no discussion of the proposed change. In fact, you posted new design notes there very recently, giving absolutely no indication of a change being considered. I also don’t think the .NET Foundation Technical Steering Group is very representative of the community of those who contribute to the language discussions. It consists of Microsoft and a very small handful of lesser partners. Any suggestion that this was vetted by the broader community is disingenuous.
I’m not just complaining about this for the sake of it. This is the 2nd time in past few years that you guys have arbitrarily moved the language design process (CodePlex -> Github -> Mailing list). The community has collectively spent hundreds or perhaps thousands of hours writing feature proposals and vetting them, all of which now needs to be re-created for a 3rd time in yet another location. It is absolutely ridiculous how insular Microsoft is while claiming to have an open-source process.
Great that you want to maintain and improve the language itself and the editors. But the language itself is not enough; new tools and especially new platforms must be embraced. WinForms and WebForms are legacy platforms, and ASP.NET (non-Core) will also slowly join that pack. More relevant platforms are WPF and UWP, where VB has full support. But you should also extend support to newer, relevant and popular platforms, specifically Xamarin, ASP.NET Core and .NET Core. I know that C# is first to move to new platforms, but once C# is working on Xamarin, you can add support for VB. Mobile apps are increasingly popular, and VB is lacking here.
Without embracing new platforms, VB will be on the same trajectory as VB6 and its environment: maintained but not expanded. Then who will learn the language?
Hi Mads and co,
I appreciate the insight into the strategy on F#. One thing I’m wondering though is if the context for “F# demand” is being ascertained correctly. As background, let me cross-reference something from Anthony D. Green’s VB post.
“A worthwhile observation is that, contrary to how customers often think of the causal relationship between them, the investments we make aren’t the force pushing the languages in these new arenas. The arenas are the forces pulling on them and the investments are responses. It’s not “if you just pushed Visual Basic harder on Linux it would be a huge player there” but “Gee, they really want C# on Linux, let’s make it easier for them”
Now, I think the above is completely understandable and appropriate. However, do you guys take the following into consideration?
1. Release and adoption of Swift by Apple (both for internal development efforts and as tooling for external devs)
2. Release and interest in ReasonML (and related use of Ocaml) by Facebook (both for internal development efforts and as tooling for external devs)
3. Enthusiastic response to the Ocaml MOOC and resurgence of interest in Ocaml
I guess I’m wondering if demand and adoption for things “like” F# are taken into consideration at Redmond, and if you believe there is more you can do to position F# to take advantage of these emerging conditions?
Great question! Yes we take quite a broad view of the “growth potential”, if you will, of each language. That includes a look at the overall movement in the market, and many other factors. For F# specifically, there seems to be a disproportionately high level of interest from people who aren’t yet using it. Both because of its functional bent and its unique features, it is appealing in many cloud scenarios, for instance. Primary inhibitors seem to be skepticism about Microsoft’s commitment to it, and about the quality of the tooling. That’s why the strategy for F# above focuses a lot on tooling improvements: We want to make F# a viable choice for many more people and companies who are hesitant today.
Better tooling for F# is great, but in my opinion what is needed even more is that Microsoft markets the language more actively. I won’t say aggressively, because that would perhaps imply spending a lot of resources, and I don’t think that’s needed.
Lots of people are aware of F#, but sitting on the fence. The decision makers are frequently skeptical business people that need a push. It has never been easier to transition to a new generation language, so they don’t need much of a push, but they need it, and they need it from someone they trust – and these people trust Microsoft, not the community.
The language matured in the last few years, and now is the time for the marketing department to tell everybody “this is the future, this is a far more efficient language – use it! We’re behind you.”
If this isn’t done now, it may be too late when the competition gets up to speed. From comments all over the Internet, it seems clear that F# is one of the best liked languages ever. It deserves to win the battle of the functional languages in the mainstream arena. It deserves to be the major language in the coming years.
– Obsolete and remove outdated language constructs and features kept around since v1.0
– Remove obsolete .NET Framework classes/methods
– Consolidate the .NET collection types into 2 or 3 different ones
– Simplify the language so that there is one way to declare something; no implied ‘private’ for member variables
Is there anything the team has in mind for producing a successor to C#, built with syntax and paradigms similar to Swift/Kotlin/TypeScript: inverted type definitions, nullability and readonly-ness built in from the get-go (i.e., “let user: Person = …”), built-in enumerables/lists as part of the array syntax (i.e., “var customers: [Person] = …”), no brace syntax where not needed (“if itemWasPressed { … }”), required named parameter parts for methods (as in Swift), etc.? Let’s be honest, the list is too long for C# to mold into a next-gen language without going back to the drawing board and taking into account all of the wonderful lessons learned in the last decade-plus.
While we do have a Swift implementation on .NET with Silver (), it’s not first-party and lacks many of the core C# benefits of LINQ and expression/AST parsing (i.e., “Expression”) that make C# so powerful.
I would love to see Microsoft look at this time in our industry as an opportunity the same way it did when Java opened the door for a new generation of language with J# and ultimately C#. To experienced teams coding in Swift/Kotlin/Typescript the writing is already on the wall. It’s not a matter of “if” but “when”.
From the name of this article we were kind of hoping for a surprise announcement along these lines. But a simple yay/nay on plans for this would also be just as great. =)
Great work on everything team.
We don’t have another secret .NET language in the works. Have you looked at F#? While some of the concepts you mention can and will make it into C# over time, F# is a formidable alternative to Swift, Kotlin and their generation. On top of the stylistic leanings you mention (non-null by default, readonly by default, types on the right – or inferred, list comprehensions, punctuation-free syntax, etc), it has some absolutely unique and highly productivity-enhancing features: type providers, units of measure, monadic workflows and more. It is quite mature and rich, while still principled and easy to read.
Thanks Mads. Had another in-depth review of this in case we were missing something major with F#. There’s certainly a place for F# in the world (scripting? scientific programming? job/signal processing?), but not as a general-purpose, OO-based alternative to C#.
Guess we’ll have to see how Silver, C# 8 and server-side Swift progress and re-evaluate over time which has the stronger future potential. My personal suggestion is that the team DOES get an alternative Swift-generation language in the works at the first chance (perhaps after the VS2017 RTM release). We know how long these things take to get off the ground, and as an ex-BlackBerryite I know first-hand what starting too late can spell for technology ecosystems.
Thanks again for the all hard, awesome work the team’s been doing in the lead-up to C#7/VS2017/cross-platform .NET. You guys rock!
Mads one other thing I forgot about in terms of core motivation for a successor lang: the GC memory model.
ARC supersedes GC from a performance perspective while still maintaining programming safety and simplicity (that’s why the Mac deprecated GC once Obj-C and Swift adopted this better-on-every-front memory model).
I’m guessing there’s next to zero chance of C# ever moving to an ARC-based memory model and getting on the modern performance train against the alternatives??
You saw how quickly Node took over the programming world’s mindset, for better or for worse. You’ll be thankful to start on this early when server-side Swift hits full gear and takes over the rest of the non-Node converts (i.e., the ones that understand the value of type safety and the software-engineering scalability of strongly typed languages). We all want a future of cross-platform .NET… but for C#, the writing is already on the wall.
Not sure what you’re saying here. F# supports pretty much all of C#’s features currently, with quite a few extra features on top. It has OOP support, obviously, where you can define types and classes pretty easily once you get the hang of it. Besides unsafe and goto, I’m not sure what features C# supports that F# doesn’t. At least not in the apps I’ve written.
Is Microsoft ever going to add support for building ASP.NET apps using F#? Will it ever get to the point where it’s supported everywhere C# is?
sciBASIC#: Microsoft VisualBasic for Scientific Computing ()
this is amazing.
We were left behind many years ago as VFP developers, with 3 mainstream products in the market, wondering where to go, and we considered many options. We chose VB.NET, Windows Azure and DevExpress components, as we knew this was an absolutely critical transition path for us and we had a lot of work ahead in converting (now) legacy applications into modern, relevant software. With years of investment now in VB.NET, we expect to have (and see continued) full support of web and desktop using VB.NET. Yes, sometimes we look at C, but not out of choice; our choice is to stay with VB.NET and secure our investment in that development environment.
Thank you for honouring your commitment to the vb.net community.
Mailing lists are a brain-dead move. Trying to show code on mailing lists is a perpetual exercise in frustration. I strongly recommend that you correct this awful mistake before there’s too much momentum.
What would help VB.NET would be if Xamarin also supported it.
C# and VB are similar, but it is far easier to follow nested blocks in VB, as the endings are not just } but actual words.
Support for mobile devices from various manufacturers really is needed in VB.
So does Microsoft intend to level the field and provide support for the VB language in Xamarin?
PowerShell? I am building back-end processes and LOB WinForms apps with it every day. Am I confused? 🙂
I’m always sad to see people rip on Visual Basic. It’s a great language, especially in its current form. I use both C# and VB, and while both languages have their warts, C#’s weak points are more glaring IMO. Everything about events, as a prominent example, is worse in C#. There are lots of reasons people like to complain about VB. Most of them don’t apply to the current version of the language.
For the record, I programmed in C++ just about every day for 15 years before I touched .NET. You might have expected me to prefer C# for its similar syntax, and at first I did, but after some experience with both I found VB nicer to work in.
Yes, there are many things that are better in VB. One thing I miss very painfully in C# is that in VB you are free to implement interfaces very flexibly, as long as the signature matches.
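For anyone who hasn’t seen it, this refers to VB’s `Implements` clause, where the implementing member can have a different name and accessibility than the interface member, as long as the signature matches; a small sketch:

```vb
Interface IGreeter
    Function Greet(name As String) As String
End Interface

Class Greeter
    Implements IGreeter

    ' The implementing method may use any name and accessibility;
    ' the Implements clause wires it to the interface member.
    Private Function SayHello(name As String) As String Implements IGreeter.Greet
        Return "Hello, " & name
    End Function
End Class
```

C# only approximates this with explicit interface implementation, which fixes the name and hides the member from the class’s public surface.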
The whole problem with VB is its image. Maybe if you had named it V#, nobody would have a problem with it. I come from C++ and prefer VB over C#, and together with the also-dying C++/CLI it was the perfect combination.
And what about XAML and WPF?
I know that WPF and XAML are not .NET languages as such, but I want to know, like a lot of people do, what the future, improvements and advances in those technologies will be.
Right now we are stuck on XAML 2006 because XAML 2009 is not implemented in WPF or in Visual Studio. What about this?
I expected to see a new XAML, like a XAML 2017, with more features, improvements, performance and advances in XAML and WPF, like native rendering via DirectX 11 and beyond (not DirectX 12, because it is not supported on Windows 7, the last desktop Windows OS; Windows 8.x and 10 are mobile Windows OSes, not desktop ones).
Amen to this. I’d like to see this addressed as well.
+1. 😉 😉 😉
+1 vote for me to !! And better debug options and some more clearer exception messages.
It really is baffling that XAML2009 or its replacement isn’t prioritized. +1
It would be more reassuring if more Microsoft developers were working on VisualFSharp; it seems to be only the same few people. Community involvement is nice, but the lack of support from Microsoft creates doubt and uncertainty about the future of the language. In my experience this alone is a huge hurdle to adoption for .NET developers using C#.
I think it is better that you will not add new features to C#.
agree with Amir Saniyan.
We all know where dinosaurs ended up.
Diversity, diversity, diversity!
We’ve gotten a lot of feedback on our chosen venue for design discussions, and are proposing some changes. Add your voice here:.
I work on a hybrid ASP.NET project which combines both VB.NET and C# code. I’m pleased to hear both languages are being developed together. There are many developers from a VB background that like the familiarity with VB.NET and enjoy using it.
VB.NET continues to be the best language for beginners. Owning this crown jewel, Microsoft missed the opportunity to position it as a 1st class language for the windows phone/mobile area. This could have saved the windows phone by winning millions of developers in niche areas and therefore apps.
And at the moment the same is happening with Windows IoT: a crippled OS with C# as a hurdle for amateurs, hobbyists, and makers. Simply a marketing failure! You continue sending everybody to Python.
I'm really looking forward to the focus on modern languages.
Why would you start the language wars all over again!! KEEP VB and C# equivalent! You promised….
“VB 15 comes with a subset of C# 7.0’s new features”
..so sad about it, I really would like to see them always co-evolve. I want local functions! 🙂
I develop with VB. When searching for solutions on Stack Overflow I don't care if they are in C# or VB, since I can easily translate them by hand or with online converters. If the languages are not evolving the same way, that would not be possible anymore.
Please keep it present.
I use Visual Basic every day to build powerful and mission critical systems for my company. Periodically, I break ground on a new one and think “I should probably do this in C#, because then it will be so much easier to find code samples and won’t feel goofy posting my own code on StackOverflow”. But each time I go ahead create my new system in VB because I love it and find it much more expressive and maintainable than C#. I *like* the difference between End Function, End Module, End Class, End Namespace and }, }, }, }.
Microsoft’s decision to drop parity between VB and C# terrifies me, because I push the language as far as it can go to be able to write the best code I can. Having to switch to C# to express something will force power VB developers away from the language we love, because it’s not feasible to defend using a weaker language.
If Microsoft wants to do this because it’s expensive to maintain parallel languages, I could accept that. But dropping parity because of the belief that it will confuse newer developers is silly. *They just don’t have to use those features.* In VB even now there are a bunch of features that I’m aware of, but just don’t happen to use that much. If I were a new developer, there would be more such features, but they still wouldn’t scare me *because I wouldn’t know they were there*. But it would be comforting to know that if I ever became a serious developer, I wouldn’t have to learn a different language to do what I needed.
This reminds me of the early days of VB.NET as a second class citizen and will make me really sad if it comes to pass.
EXACTLY!!!
Just because there are 100 times more questions on Stack Overflow related to C#, that does not mean that the C# community is larger. It only means that VB devs are more professional and do not ask questions about every little thing.
I use C# only to write code which must be shared with non-VB devs. All the other time I use VB: it really lets you write much more readable code.
I have translated VB code to C# before posting questions on Stack Overflow because I get more responses. That tells me that
a) the C# community on Stack Overflow *IS* bigger than the VB community, but
b) the difference is not as big as the number of questions in each language would lead you to believe.
Hi, I’m a student and we have a project where we need to develop a new language. We can target any runtime, and we found a lot of information about developing a new language that targets the JVM, for example Xtext and Xtend, or plenty of reading about Groovy, Kotlin, and other languages being developed for the JVM.
But all the students in our group have .NET knowledge and background, and we would like to develop something that benefits the CLR. We could not find any documentation or info anywhere on how to develop a small DSL that targets the CLR, especially CoreCLR, so we can have cross-platform support.
There is only one book out there and it is very old.
any advice is greatly appreciated.
Thanks
Huseyin
I think it’s time to create the next big programming language. The new language should be created by lessons learnt from other languages such as C#, F#, Kotlin, Swift, Golang, Java, Scala etc. The new language should not try to add as much as possible new features or syntax sugar, but instead focusing on key features make it really the most suitable for building modern distributed / concurrency systems.
Mads, with all due respect, it does not feel like Microsoft has much love for the VB.Net (VB) community. Two years after launching VS Code, there’s still no true VB support. Xamarin is a VB free zone. A number of Microsoft’s tutorials are only in C#; when in VB, it’s not unusual for them to have breaking errors.
I took the hint and switched to C#, but most days I’m thinking, “boy, this would be quicker and easier in VB”.
Most VB haters have spent little time in VB and dismissively call it a beginner’s language, ignoring how much it’s changed from its roots and VB6 past. Personally, I found the transition from VB6 to VB.Net much steeper than the one to C#.
After all the above, VB is not perceived as a first class language; furthermore, Microsoft long ago stopped targeting VB to students. So its resilience is remarkable and a testament to the language. With some love, it could be brought back.
If I wasn’t such a fan of .Net and the VS IDE (including VS Code), I’d probably switch to Python — a venerable language growing in popularity with syntax closer to VB than C#. If Microsoft *fully* supported Python in .NET, they’d get many more users and retain their VB base. Thanks for listening.
And what about C++/CLI ?
Are people afraid of it?
Glad that VB.Net will continue to be a first-class citizen and co-evolve with C#. We have invested so much; it would be a bummer if this were not the case.
On 7/22/09 4:15 PM, "address@hidden" <address@hidden> wrote:
> I think this is pretty much ready to commit
>
> File lily/beam-scheme.cc (right):
>
> Line 2: beam-scheme.cc -- Retrieving beam settings
> could you call this beam-grouping-scheme.cc or something like that?
> beam-scheme sounds like it contains routines for manipulating Beam
> grobs.

Changed to beam-setting-scheme.cc. Changed beam-grouping.hh to
beam-settings.hh.

> Line 12: LY_DEFINE (ly_beam_settings, "ly:beam-settings",
> is this function really necessary?

Probably not. Instead of ly_beam_settings(context) we can do
context->get_property("beamSettings"); there's no error checking in the
current function. So I guess I'll drop it.

> Line 49: ly_grouping_rules(settings,time_signature,rule_type),
> formatting

Fixed.

> Line 61: SCM settings = ly_beam_settings(context);
> formatting

Fixed

On 7/22/09 5:23 PM, "address@hidden" <address@hidden> wrote:
>
> File input/new/grouping-beats.ly (right):
>
> Line 7: Beaming patterns may be altered with the @code{beatGrouping} property:
> new texidoc using \overrideBeamSettings
>
> File lily/beam-scheme.cc (right):
>
> Line 10: #include "beam-grouping.hh"
> swap

Fixed

> Line 26: " @var{rule-type} in @var{context}.")
> no context arg, doc settings

Fixed, 2 places (ly_grouping_rules and ly_beam_grouping).

> Line 28: LY_ASSERT_TYPE (ly_is_pair, time_signature, 2);
> scm_is_pair

Fixed

> Line 30:
> type check also for settings?

Settings needs a list? type check, and I haven't seen one available in
c++. It shouldn't segfault, because we do a pair check before we use it.
I can't use a pair check for the argument, because '() is valid for
settings. I could use pair_or_empty, but it doesn't exist, and when I
tried to add it to flower/include/guile-compatibility.hh, where all the
rest of the types are defined, it gave me errors. I'll put a FIXME in.
So I'm not doing a type check for settings, at least right now.

> Line 34: ly_assoc_get ((scm_list_2 (time_signature, rule_type)),
> excess parens

D'OH! I'm not in scheme anymore! Fixed.

> Line 43: "Return grouping for beams of @var{ beam-type} in"
> @var{beam-type}

Fixed

> Line 45: " @var{rule-type} in @var{context}.")
> no context arg, doc settings

Fixed

> Line 46: {
> type checks?

Put in for time_signature, rule_type. Can't do one for beam_type,
because it needs to be pair-or-symbol, and I couldn't figure out how to
add it. I don't think it would segfault, because it's not dereferenced.
I'll put a FIXME in.

> Line 57: {
> LY_ASSERT_SMOB (Context, context, 1);
>
> If you don't do this, then unsmob_context () will return NULL if this
> function is passed invalid arguments, leading to a null dereference for
> get_property ("timeSignatureFraction") -> segfault

Added. Thanks for teaching me about this.

> File lily/beaming-pattern.cc (right):
>
> Line 18: #include "beam-grouping.hh"
> sort

OK, done

> File lily/include/beam-grouping.hh (right):
>
> Line 8:
> To prevent multiple includes:
>
> #ifndef BEAM_GROUPING_HH
> #define BEAM_GROUPING_HH
>
> (+ line 14)
>
> Line 14:
> #endif // BEAM_GROUPING_HH

OK, done.

> File lily/measure-grouping-engraver.cc (right):
>
> Line 14: #include "beam-grouping.hh"
> sort

When I try to sort, it breaks compile because SCM is not defined when
beam-grouping.hh is included. The best thing to do would be to include
the proper file to define SCM if it hasn't already been loaded. But I
couldn't find the header file that defined SCM through git grep. Do you
know which file I need to include?

> Line 66: SCM time_signature_fraction = get_property
> ("timeSignatureFraction");
> move to if { } block, then it's only retrieved if required

Done. Nice catch.

> File ly/music-functions-init.ly (right):
>
> Line 20: (_i "Define @@var{music} as a quotable music expression named
> rogue extra @'s throughout file

Fixed.

> File python/convertrules.py (right):
>
> Line 2930: str = re.sub("\\set\w+#\'beatGrouping", "\\setBeatGrouping", str)
> won't get here due to search above (and regexp is broken)

OK -- fixed (and tested).

> File scm/beam-settings.scm (right):
>
> Line 48: ;; in 3 4 time:
> decided not to restore original setting?

I had decided to restore it in a different form. 1/8 beams are (6),
which is equivalent to (3) in (3 . 4). All shorter beams will be
(1 1 1). This is (almost) equivalent to

  ((* . (3))
   ((1 16) . (4 4 4))
   ((1 32) . (8 8 8))
   ((1 64) . (16 16 16))
   ((1 128) . (32 32 32)))

but it's written more succinctly. (At least it's equivalent, as far as I
can determine.) As far as beaming is concerned, it's equivalent. But the
measure-grouping-engraver uses the default for doing measure grouping.
So I changed my mind and went to the settings listed above.

> Line 155: ;;;; Functions for overriding beam settings
> indentation of function bodies

I think I've got it right now.

> File scm/define-context-properties.scm (right):
>
> Line 126: (beatGrouping ,list? "A list of beatgroups, e.g., in 5/8 time
> remove

Fixed.
Hi everyone,
I’m trying to dynamically change the color of an image on the screen. To accomplish this, I’m using visual.ImageStim and changing the colors with stimulus.color = [r, g, b]. This works great for any color brighter than neutral grey: light colors fade out to grey. However, it does not work for any colors darker than neutral grey. When I try to make the whole image black, for example (stimulus.color = [-1, -1, -1]), the colors in the image are inverted instead of displaying as black.

The math of the color maps makes sense here: multiply by a negative number to invert the color. However, the same outcome occurs regardless of whether I use the rgb or the rgb255 colormap.
Do you have any suggestions for how to dynamically recolor an image to a dark color?
I’m including a minimal working example here.
Thanks!
geoff
from psychopy import visual, core
import numpy as np

CS = 'rgb'  # ColorSpace
WHITE = [1, 1, 1]
LIGHT_GREY = [0.5, 0.5, 0.5]
GREY = [0, 0, 0]
BLACK = [-1, -1, -1]

## ---- Comment this section in to try a different colorspace
# CS = 'rgb255'  # ColorSpace
# WHITE = [255, 255, 255]
# LIGHT_GREY = [200, 200, 200]
# GREY = [128, 128, 128]
# BLACK = [0, 0, 0]

win = visual.Window([800, 800], monitor='testMonitor', color=LIGHT_GREY,
                    colorSpace=CS, units='pix')
img = np.array([[-1, 0], [0, 1]])  # Image bitmap
stimulus = visual.ImageStim(win=win, image=img, colorSpace=CS,
                            size=(100, 100), units='pix')

# Show the normal stimulus
stimulus.color = WHITE
stimulus.draw()
win.flip()
core.wait(1.0)

# I want to show the stimulus faded to black.
# Instead, this inverts the stimulus, showing
# black areas as white and vice versa, leaving
# neutral grey areas unchanged.
stimulus.color = BLACK
stimulus.draw()
win.flip()
core.wait(1.0)

# This strategy does work for fading to neutral grey, however.
stimulus.color = GREY
stimulus.draw()
win.flip()
core.wait(1.0)
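Not an official PsychoPy answer, but one workaround worth considering: since stimulus.color multiplies the signed image values, a fade to black can instead be done by blending the image array itself toward -1 (black in the signed rgb space) and reassigning it to stimulus.image. The helper name fade_toward_black below is mine, and the PsychoPy usage in the trailing comment is an untested assumption; the blending math itself is plain Python:

```python
def fade_toward_black(pixels, t):
    """Blend signed-rgb pixel values toward -1 (black).

    pixels: a float or nested lists of floats in [-1, 1] (the signed
    'rgb' values an ImageStim bitmap uses); t = 0.0 leaves the values
    unchanged, t = 1.0 makes everything -1 (black).
    """
    if isinstance(pixels, list):
        return [fade_toward_black(p, t) for p in pixels]
    return (1.0 - t) * pixels + t * (-1.0)

img = [[-1.0, 0.0], [0.0, 1.0]]  # same bitmap as in the example above

print(fade_toward_black(img, 0.0))  # unchanged
print(fade_toward_black(img, 1.0))  # every value is -1.0 (black)

# Hypothetical use inside the PsychoPy loop (untested sketch):
#   stimulus.image = numpy.asarray(fade_toward_black(img, t))
#   stimulus.draw(); win.flip()
```

Because the image values themselves move toward -1, the rendered stimulus darkens instead of inverting, and stimulus.color can stay at WHITE.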
@ferdinand Maybe the problem is not the code but the Cinema 4D application. Some friends also told me that when they load assets (especially from a large asset database, like 20 GB on disk), it always takes a long time or even fails to load unless they restart C4D.
You are right, I should call the WaitForDatabaseLoading() function to be safe. That was careless of me; I didn't notice in the status bar that the loading had failed.
I did check the Python examples on GitHub, but I think maybe it is a problem with my DCC configuration; some friends also said the Asset Browser is a little slow (perhaps because China's internet is far from the Maxon servers). I hope it will be fixed in an upcoming C4D release.
@ferdinand
Thanks for that. As I mentioned above, Otoy isn't providing support, and they didn't even reply to the topic.
And this is all I want: I just want to know what this code does and move on.
I will still try to ask Otoy for help. Thanks for your help.
@x_nerve
Yes, I know exactly what you mean, but unfortunately the Octane forum and Octane support are not as good as this PluginCafe. For some posts I cannot even open a topic. I don't know what caused this, but it happens.
Actually, I did post and ask; this is all they replied: not an example or any explanation, and not even the Python answer I asked for. By the way, I tried to add an AOV node like this C++ does (for the part I can read), and it did work for some things, like inserting a node into the AOV node graph and connecting it. I think it is also like the c4d.BaseMaterial structure, something like this:
-- Renderer
-- AOV out node
-- Beauty AOV
So I can do a little work with it; I just want to know what this code means. I know that Maxon would not support this.
Hello :
I want to create a 3rd-party AOV node for the Octane render engine. The Octane team provides a little example in C++, but unfortunately I only use Python and can't understand and translate it into a Python version.
Can you provide bare-bones code for this snippet in Python?
Any help will be great.
BaseList2D *addAOVnode(BaseList2D *parent, Int32 slotID, Int32 TYPE, Octane::RenderPassId passId, Bool incCnt)
{
BaseContainer *bc = parent->GetDataInstance();
Int32 pType = bc->GetInt32(AOV_NODE_TYPE);
BaseList2D *shd = addOctaneShaderToNode(parent, slotID, ID_OCTANE_AOV_NODE);
if(TYPE==AOV_TYPE_COMP_GROUP) { shd->SetName(AOV_GROUP_NAME); }
else if(TYPE==AOV_TYPE_COMP_AOV) { shd->SetName(AOV_COMP_NAME); }
else if(TYPE==AOV_TYPE_COMP_AOV_LAYER) { shd->SetName(AOV_COMPLAYER_NAME); }
else if(TYPE==AOV_TYPE_COLOR_OUT) { shd->SetName(AOV_COLOROUT_NAME); }
else if(TYPE==AOV_TYPE_IMAGE_AOV_OUT) { shd->SetName(AOV_IMAGEOUT_NAME); }
else if(TYPE==AOV_TYPE_COMP_LIGHTMIXER) { shd->SetName(AOV_IMAGEOUT_NAME); }
else if(TYPE==AOV_TYPE_RENDER_AOV_OUT)
{
shd->SetParameter(DescLevel(AOV_RENDER_PASS_ID), GeData((Int32)passId), DESCFLAGS_SET_0);
shd->SetName(AOV_RENDEROUT_NAME+S(":[")+ rPassName(passId)+"]");
}
setParameterLong(*shd, AOV_NODE_TYPE, TYPE);
if(incCnt)
{
Int32 cnt = bc->GetInt32(AOV_INPUT_COUNT);
setParameterLong(*parent, AOV_INPUT_COUNT, cnt+1);
}
return shd;
}
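Since the thread asks for a Python version, here is a rough translation of the C++ above, written as Python-flavored pseudocode: it is untested, it can only run inside Cinema 4D with the Octane plugin, the symbol IDs (ID_OCTANE_AOV_NODE, AOV_NODE_TYPE, AOV_RENDER_PASS_ID, AOV_INPUT_COUNT, and the AOV_TYPE_* / AOV_*_NAME constants) must be resolved from the Octane plugin headers, and add_octane_shader_to_node / render_pass_name stand in for the plugin's addOctaneShaderToNode and rPassName helpers, whose implementations are not shown in the snippet.

```
import c4d

def add_aov_node(parent, slot_id, node_type, pass_id, inc_cnt):
    """Rough Python translation of the C++ addAOVnode() above (untested)."""
    bc = parent.GetDataInstance()
    # addOctaneShaderToNode() is a plugin helper; assumed available here.
    shd = add_octane_shader_to_node(parent, slot_id, ID_OCTANE_AOV_NODE)

    names = {
        AOV_TYPE_COMP_GROUP: AOV_GROUP_NAME,
        AOV_TYPE_COMP_AOV: AOV_COMP_NAME,
        AOV_TYPE_COMP_AOV_LAYER: AOV_COMPLAYER_NAME,
        AOV_TYPE_COLOR_OUT: AOV_COLOROUT_NAME,
        AOV_TYPE_IMAGE_AOV_OUT: AOV_IMAGEOUT_NAME,
        AOV_TYPE_COMP_LIGHTMIXER: AOV_IMAGEOUT_NAME,
    }
    if node_type in names:
        shd.SetName(names[node_type])
    elif node_type == AOV_TYPE_RENDER_AOV_OUT:
        shd.SetParameter(c4d.DescLevel(AOV_RENDER_PASS_ID), int(pass_id),
                         c4d.DESCFLAGS_SET_0)
        # render_pass_name() stands in for the C++ rPassName() helper.
        shd.SetName(AOV_RENDEROUT_NAME + ":[" + render_pass_name(pass_id) + "]")

    # mirrors setParameterLong(*shd, AOV_NODE_TYPE, TYPE) in the C++
    shd[AOV_NODE_TYPE] = node_type
    if inc_cnt:
        # mirrors setParameterLong(*parent, AOV_INPUT_COUNT, cnt + 1)
        cnt = bc.GetInt32(AOV_INPUT_COUNT)
        parent[AOV_INPUT_COUNT] = cnt + 1
    return shd
```

The structure is the same as the C++: create the shader node under the parent, name it by type (setting the render pass ID for the render-output case), store the node type, and optionally bump the parent's input count.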
Hello
Here is my little Redshift API helper on GitHub; it has an AddTexture function to do this.
It is a bare-bones one, and I have no time to polish it, but it works for the basics for now.
It has a little example; welcome to improve it and have fun.
Hope it can help anyway.
I checked on my home PC in R 2023 and it works fine. For some reason it reported a None last time; I am pretty sure I had not inserted a layer field in the field list (I just minimized the scene for the test code). But when I restarted C4D today, the None warning was just gone.
Maybe it was just a mix-up on my part, something I did without knowing. Sorry for that.
And thanks for the tips. Next post I will give a more specific report.
cheers~
A None warning appears when modifying a FieldList in R 2023. Is this a bug, or did I miss something?
The None warning seems to happen only in R 2023. The script actually works, but it reports a warning like this:
The same code runs well in S26.107 without the warning.
Here is the code:
import c4d

def get_field_layers(op):
    """Returns all field layers that are referenced in a field list.
    """
    def flatten_tree(node):
        """Listifies a GeListNode tree.
        """
        res = []
        while node:
            res.append(node)
            res += flatten_tree(node.GetDown())
            node = node.GetNext()
        return res

    root = op.GetLayersRoot()
    if root is None:
        return []

    # traverse the graph
    return flatten_tree(root.GetFirst())

def main():
    fieldlists = [value for id, value in op.GetData()
                  if isinstance(value, c4d.FieldList)]
    if not fieldlists:
        return

    opname = op.GetName()
    doc.StartUndo()
    for layer in get_field_layers(fieldlists[0]):
        obj = layer.GetLinkedObject(doc)
        print(obj)
        if obj is None:
            continue
        doc.AddUndo(c4d.UNDOTYPE_CHANGE, obj)
        obj.SetBit(c4d.BIT_ACTIVE)
        # obj.do somethings
    doc.EndUndo()

if __name__ == '__main__':
    main()
    c4d.EventAdd()
@ferdinand
Thanks for your help; I think that is enough for this specific topic. It works as expected.
Thanks for the extremely detailed explanation, but I think maybe I gave a bad description, so you went deeper than I needed.
In my view, your answer is about how to get all active nodes in the scene. (It is a great explanation and very useful to learn from.)
But my goal is to get the last selected node. For a more specific minimal example: when a texture tag and a cube are both selected, assigning a material may be a problem, especially when a script supports both at the same time. So the goal is more like: when the cube is the latest selection, apply the material to the cube, and vice versa: when the texture tag is the latest selection, ignore the cube selection.
With your great code, it returns more of a state than an order.
And you said "There are however no equivalents for the selection order for materials, tags, etc." Does that mean I cannot achieve it?
It sounds like I would have to set harder conditions to decide which type is considered first. (It's an easy way, but not a user-friendly one, especially for C4D beginners; they don't want to remember so many priority conditions, such as tag > object > layer or something like that.)
I want to get the order of what I selected and activated last. Can I do this with Python?
For example , I select some thing and they can exist active at the sametime. e.g.
object A
tag A on object B
Mat A
layer B
attribute
If I want a quick function, for example rename, I don't want a GetActiveObjects script for the Object Manager and another GetActiveMaterials script for the Material Manager; that means too many scripts for the same task.
Instead, I want only one script that can judge which is my last selected thing, or which context my mouse is in, so that I can run the corresponding function (and even execute on tags and materials alike).
Can I do this with Python, and how?
Here is a picture to better explain what I mean by active at the same time.
Thanks for your explain .
The code you provided works fine and the black does not show anymore.
The old one also used the EdgeHtml viewer. Maybe rendering black in the Script Manager is one of the unsupported paths you mentioned?
Finally, here is the comparison result.
Here's a simple program that prints the names of the files in the current working directory:
#include <stdio.h>
#include <sys/types.h>
#include <dirent.h>

int
main (void)
{
  DIR *dp;
  struct dirent *ep;

  dp = opendir ("./");
  if (dp != NULL)
    {
      while (ep = readdir (dp))
        puts (ep->d_name);
      (void) closedir (dp);
    }
  else
    perror ("Couldn't open the directory");

  return 0;
}
The order in which files appear in a directory tends to be fairly random. A more useful program would sort the entries (perhaps by alphabetizing them) before printing them; see Scanning Directory Content, and Array Sort Function. | http://www.gnu.org/software/libc/manual/html_node/Simple-Directory-Lister.html#Simple-Directory-Lister | crawl-003 | refinedweb | 107 | 54.73 |
(Note that this mail isn't CC'd to the bug report. I will send a note there saying that the discussion will be / should be taken to the debian-ctte list.)

I agree that it is the porting team's primary responsibility to choose the name. I don't think that this is the dpkg maintainers' decision, although the dpkg maintainers do have a role in (for example) defining the permissible syntax.

I think that the TC do probably have the power to intervene because the issue is definitely covered by at least some of the things included explicitly in the list in s6.1(1) of the constitution (`technical policy') and maybe also because of s6.1(2) (about the need for compatibility).

But, the TC should base its decisions on technical criteria, and intervene only in technical decisions. If there are no good or relevant technical reasons to intervene then the TC should not interfere with the decision of the maintainers primarily responsible. It's not altogether clear that this is a technical decision, but we should consider whether there are relevant technical arguments:

* `Compatibility' with other distributions, Linux kernel, GNU triplet, LSB, RPM, etc.: I think these are very weak as technical arguments. Our packaging software doesn't directly use architecture names from other contexts without an opportunity for translation. So there is no need for us to pick exactly the same name as everyone else. If everyone else were 100% consistent and had picked a name which fitted into our syntax, then we should probably follow it. But in this case there seems at least to be substantial amounts of confusion or disagreement.

* User confusion: Users, it seems, will already have to cope with two names for the architecture. It is a shame that the chip companies' marketing people have muddied the waters, but I think it's probably too late to undo.

* One name or another is AMD's official name for the architecture: This is a marketing question, not a technical question. Marketing input should in general be ignored, as it serves to confuse as often as to clarify.

* Underscores in architecture names not permitted by dpkg: I think this is an excellent reason to forbid them, and it is fully within the dpkg maintainers' remit to specify the permissible syntax. dpkg should definitely not be changed for this purpose. But, that doesn't allow us to rule out anything but what seems to be an outside contender anyway.

* Hyphens used in eg `hurd-i386' to separate kernel and processor: I don't know whether this is a general rule yet, but it seems a little foolish to introduce a hyphenated name where a single atom would suffice and where we might later choose to add semantics to the hyphen (if we haven't done already). This does seem like a technical question, but I think we need more information.

* The port is complete with `amd64' and we don't want to change: This isn't an overriding argument - if there were a good reason to change, we might well do so despite the pain. But it does add weight in favour of `amd64'.

Pretty much all of these lead me to conclude that we should resolve along the following lines: *.

Jason Gunthorpe writes ("Bug#254598: Name of the Debian x86-64/AMD64 port"):
>.

As you can tell from what I wrote above, I disagree strongly. The set of permitted characters in a namespace should never be extended for marketing reasons! (And yes, I know that it is frequently done in what people laughably call the `real world', so don't quote examples.)

Raul Miller writes ("Bug#254598: Name of the Debian x86-64/AMD64 port"):
> I don't see any reason why dpkg can't support both x86-64 and amd64.
> This would look slightly ugly when displaying known architectures but
> that's not a technical issue.

I think an alias for this is a bad idea. There's no real need and it would just lead to confusion. You mention the archive, but there are other places where lists of architectures appear. Every piece of software would no longer be able to simply compare architecture names with string comparison.

Thanks,
Ian.
Hi all
I am totally new to C++ and I am using the Dev C++ compiler and getting the tutorial off
I ran the first program (code below) and it compiled fine, but when I run it a box appears and then disappears so quickly I don't have time to see what it is. By the black colour I assume it's DOS. Can anyone tell me why it is doing this, please? I don't know if my program is working correctly, and when I tried the other examples the same thing happened. Am I doing something wrong? Is that why it disappears so quickly?
// my first program in C++
#include <iostream>
using namespace std;

int main ()
{
  cout << "Hello World!";
  return 0;
}
Many thanks
HLA91 | https://www.daniweb.com/programming/software-development/threads/92040/program-runs-but-not-visible | CC-MAIN-2020-29 | refinedweb | 125 | 54.94 |
I recommend having explicit precondition and reducing repetition like this:
import Foundation

func random(from range: CountableRange<Int>) -> Int {
    precondition(range.count > 0, "The range can't be empty.")
    return random(from: CountableClosedRange(range))
}

func random(from range: CountableClosedRange<Int>) -> Int {
    let lowerBound = range.lowerBound
    let upperBound = range.upperBound
    precondition(upperBound - lowerBound < Int(UInt32.max),
                 "The range \(range) is too wide. It shouldn't be wider than \(UInt32.max).")
    return lowerBound + Int(arc4random_uniform(UInt32(upperBound - lowerBound + 1)))
}

let r1 = random(from: 4 ..< 8)
let r2 = random(from: 6 ... 8)

Once we have the new improved Integer protocols <> in place, you will be able to make it generic to support all integer types. (It is possible now, but too messy to be worth doing.)

> On Oct 12, 2016, at 1:23 PM, Adriano Ferreira via swift-users
> <swift-users@swift.org> wrote:
>
> Hi there!
>
> Ole Begeman offers here <> (take a look at the bottom of the page) an
> interesting consideration about converting between half-open and
> closed ranges.
>
> As of now, it seems the way to go is by overloading…
>
> import Foundation
>
> func random(from range: Range<Int>) -> Int {
>     let lowerBound = range.lowerBound
>     let upperBound = range.upperBound
>
>     return lowerBound + Int(arc4random_uniform(UInt32(upperBound - lowerBound)))
> }
>
> func random(from range: ClosedRange<Int>) -> Int {
>     let lowerBound = range.lowerBound
>     let upperBound = range.upperBound
>
>     return lowerBound + Int(arc4random_uniform(UInt32(upperBound - lowerBound + 1)))
> }
>
> let r1 = random(from: 4 ..< 8)
> let r2 = random(from: 6 ... 8)
>
> Cheers,
>
> — A
>
>> On Oct 12, 2016, at 6:21 AM, Jean-Denis Muys via swift-users
>> <swift-users@swift.org> wrote:
>>
>> Hi,
>>
>> I defined this:
>>
>> func random(from r: Range<Int>) -> Int {
>>     let from = r.lowerBound
>>     let to = r.upperBound
>>
>>     let rnd = arc4random_uniform(UInt32(to-from))
>>     return from + Int(rnd)
>> }
>>
>> so that I can do:
>>
>> let testRandomValue = random(from: 4..<8)
>>
>> But this will not let me do:
>>
>> let otherTestRandomValue = random(from: 4...10)
>>
>> The error message is a bit cryptic:
>>
>> “No ‘…’ candidate produce the expected contextual result type ‘Range<Int>’”
>>
>> What is happening is that 4…10 is not a Range, but a ClosedRange.
>>
>> Of course I can overload my function above to add a version that takes a
>> ClosedRange.
>>
>> But this is not very DRY.
>>
>> What would be a more idiomatic way?
>>
>> Thanks,
>>
>> Jean-Denis
>>
>> _______________________________________________
>> swift-users mailing list
>> swift-users@swift.org
>
> _______________________________________________
> swift-users mailing list
> swift-users@swift.org

_______________________________________________
swift-users mailing list
swift-users@swift.org
-- | Display the difference between two Haskell values,
-- with control over the diff parameters.
module Debug.Diff.Config
  ( Config(..)
  , defConfig
  , diffWith
  , diff
  ) where

import Text.Groom
import Text.Printf
import System.IO
import System.IO.Temp
import System.Process
import System.Exit

-- | Configuration of the diff command
data Config = Config
  { context :: Maybe Int  -- ^ Lines of context, for a unified diff.
  , command :: String     -- ^ Diff command; @colordiff@ by default.
  , args    :: [String]   -- ^ Extra arguments to the diff command.
  } deriving (Eq, Ord, Read, Show)

-- | A default configuration.
defConfig :: Config
defConfig = Config
  { context = Just 3
  , command = "colordiff"
  , args    = []
  }

-- | Display the difference between two Haskell values,
-- with control over the diff parameters.
diffWith :: (Show a, Show b) => Config -> a -> b -> IO ()
diffWith cfg x y =
  withSystemTempFile "ddiff_x" $ \px hx ->
  withSystemTempFile "ddiff_y" $ \py hy -> do
    hPutStrLn hx (groom x)
    hClose hx
    hPutStrLn hy (groom y)
    hClose hy
    let ctxArg n = ["-U", show n]
        allArgs  = args cfg ++ maybe [] ctxArg (context cfg) ++ [px, py]
    (_, _, _, hdl) <- createProcess (proc (command cfg) allArgs)
    ec <- waitForProcess hdl
    case ec of
      ExitFailure n | n > 1 ->
        hPrintf stderr
          "debug-diff: command %s with args %s exited with code %d\n"
          (show (command cfg)) (show allArgs) n
      _ -> return ()

-- | Display a colorized diff between two Haskell values.
diff :: (Show a, Show b) => a -> b -> IO ()
diff = diffWith defConfig
resolving data-time
By
Mark917, in AutoIt General Help and Support
Recommended Posts
This topic is now closed to further replies.
* Sorry for my many questions,
------------------------------------------------
I use this and it works fine:
Local $dData = InetRead("", 1)
$my = BinaryToString(StringReplace($dData, "0A", "0D0A"), 4)

Now, how can I add functions in "my.html" and have AutoIt run those functions?
it's meant when ever "Read" data from website, use that serve as "Autoit Function",
something like that :
In "my.html" we have this :
MsgBox (1,"This is from website","this is from website") In script we have this :
#include <GUIConstantsEx.au3> $hGUI = GUICreate("Test", 500, 500) GuiSetState() Local $dData = InetRead("",1) $my = BinaryToString(StringReplace($dData, "0A", "0D0A"), 4) ;this place is for MsgBox from "my.html" While 1 Switch GUIGetMsg() Case $GUI_EVENT_CLOSE Exit EndSwitch WEnd Thanks Alot,
-.
- | https://www.autoitscript.com/forum/topic/187728-resolving-data-time/ | CC-MAIN-2020-05 | refinedweb | 138 | 59.33 |
31 July 2012 12:56 [Source: ICIS news]
LONDON (ICIS)--Archer Daniels Midland's (ADM) fiscal fourth-quarter net earnings fell 25% year on year to $284m (€233m), on the back of negative US ethanol margins, the US-based agribusiness group said on Tuesday.
ADM's net sales during the quarter fell 0.9% year on year to $22.7bn, it added.
The group’s segment operating profit during its fiscal fourth quarter almost halved to $544m from $921m reported in the same period the year before.
“In a challenging fourth quarter, solid results from our global oilseeds business, particularly in South America, were more than offset by negative ?xml:namespace>
ADM’s Oilseeds Processing operating profit in its fiscal fourth quarter was $331m, down $118m from the same period one year earlier, as improved South American soybean results were offset by lower North American softseeds crushing margins, the company said.
The group’s Corn Processing profit during the quarter fell $48m from the same period one year earlier to $74m, “as significantly weaker ethanol results more than offset improvements in other bioproducts businesses”.
“Industry ethanol replacement margins were negative throughout the quarter, as the industry supply continued to exceed demand,” ADM added.
ADM’s Agricultural Services operating profit was $123m, down $222m from the same period a year before, on lower
For the group’s fiscal year ended 30 June 2012, net earnings plummeted to $1.22bn from $2.04bn the year before, despite net sales rising 10% to $89.04bn.
“As we look ahead, while drought has reduced the potential size of the
“While US crop carry outs are expected to be low, we have an experienced business team to manage through this environment,” Woertz | http://www.icis.com/Articles/2012/07/31/9582577/negative-us-ethanol-margins-push-down-adms-fiscal-q4-net.html | CC-MAIN-2014-15 | refinedweb | 287 | 58.82 |
Example 6: Importing triples¶
AllegroGraph can import files in multiple RDF
formats, such as Turtle or N-Triples. The example below
calls the connection object’s
add() method to load an N-Triples
file, and
addFile() to load an RDF/XML file. Both methods work,
but the best practice is to use
addFile().>
Save this file in
./data/vcards.rdf (or choose another path
and adjust the code below).
The
N-Triples file contains
a graph of resources describing the Kennedy family, the places where
they were each born, their colleges, and their professions. A typical
entry from that file looks like this:
<> <> "Joseph" . <> <> "Patrick" . <> <> "Kennedy" . <> <> "none" . <> <> <> . <> <> "1888" . <> <> "1969" . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> . <> <> <> .
Save the file to
./data/kennedy.ntriples.
Note that AllegroGraph can segregate triples into contexts (subgraphs)
by treating them as quads, but the N-Triples and RDF/XML formats
cannot include context information (unlike e.g N-Quads or
Trig). They deal with triples only, so there is no place to store a
fourth field in those formats. In the case of the
add() call, we
have omitted the context argument so the triples are loaded into the
default graph (sometimes called the “null context.”) The
addFile() call includes an explicit context setting, so the
fourth field of each VCard triple will be the context named. The connection
size() method
takes an optional context argument. With no argument, it returns the
total number of triples in the repository. Below, it returns the
number
16 for the
context context argument, and the number
28 for the null context (
None) argument.
from franz.openrdf.connect import ag_connect conn = ag_connect('python-tutorial', create=True, clear=True)
The variables
path1 and
path2 are bound to the RDF/XML and
N-Triples files, respectively.
import os.path # We assume that our data files live in this directory. DATA_DIR = 'data' path1 = os.path.join(DATA_DIR, 'vcards.rdf') path2 = os.path.join(DATA_DIR, 'kennedy.ntriples')
The triples about the VCards will be added to a specific context, so naturally we need a URI to identify that context.
context = conn.createURI("")
In the next step we use
addFile() to load the VCard triples into
the
#vcards context:
from franz.openrdf.rio.rdfformat import RDFFormat conn.addFile(path1, None, format=RDFFormat.RDFXML, context=context)
Then we use
add() to load the Kennedy family tree into the
default context:
conn.add(path2, base=None, format=RDFFormat.NTRIPLES, contexts=None)
Now we’ll ask AllegroGraph to report on how many triples it sees in the default context and in the #vcards context:
print('VCard triples (in {context}): {count}'.format( count=conn.size(context), context=context)) print('Kennedy triples (default graph): {count}'.format( count=conn.size('null')))
The output of this report was:
VCard triples (in <>): 16 Kennedy triples (default graph): 1214 | https://franz.com/agraph/support/documentation/current/python/tutorial/example006.html | CC-MAIN-2018-43 | refinedweb | 459 | 65.83 |
A compound statement (also called a block, or block statement) is a group of zero or more statements that is treated by the compiler as if it were a single statement.
Blocks begin with a { symbol, end with a } symbol, with the statements to be executed being placed in between. Blocks can be used anywhere a single statement is allowed. No semicolon is needed at the end of a block.
{
}
You have already seen an example of blocks when writing functions, as the function body is a block:
Blocks inside other blocks
Although functions can’t be nested inside other functions, blocks can be nested inside other blocks:
When blocks are nested, the enclosing block is typically called the outer block and the enclosed block is called the inner block or nested block.
Using blocks to execute multiple statements conditionally
One of the most common use cases for blocks is in conjunction with if statements. By default, an if statement executes a single statement if the condition evaluates to true. However, we can replace this single statement with a block of statements if we want multiple statements to execute when the condition evaluates to true.
if statements
if statement
true
For example:
If the user enters the number 3, this program prints:
Enter an integer: 3
3 is a positive integer (or zero)
Double this number is 6
If the user enters the number -4, this program prints:
Enter an integer: -4
-4 is a negative integer
The positive of this number is 4
Block nesting levels
It is even possible to put blocks inside of blocks inside of blocks:
The nesting level (also called the nesting depth) of a function is the maximum number of blocks you can be inside at any point in the function (including the outer block). In the above function, there are 4 blocks, but the nesting level is 3 since you can never be inside more than 3 blocks at any point.
It’s a good idea to keep your nesting level to 3 or less. Just as overly-long functions are good candidates for refactoring (breaking into smaller functions), overly-nested functions are also good candidates for refactoring (with the most-nested blocks becoming separate functions).
Best practice
Keep the nesting level of your functions to 3 or less. If your function has a need for more, consider refactoring.
"In the above function, there are 4 blocks, but the nesting level is 3 since you can never be inside more than 3 blocks at any point."
I thought it's not possible to have more than 3 blocks in any functions at any point. But techincally it's actually possible. Maybe you mean it's prohibited for readability purpose. So I guess, if the sentence was a bit clear, it would be easier to understand for everyone.
I would say:
"In the above function, there are 4 blocks, but the nesting level is 3 since having more than 3 blocks decreases readability and is not suggested."
Example code of 5 nesting levels (not suggested):
you can never be inside more than 4 blocks at any point of the code
isnt that 6?
I think what is meant there is that there are indeed four blocks in total: main function (1) first if (2) second if (3) else (4), but there is a depth of three because indeed in THAT function you can't go deeper. This because two of them are in the same depth. And then he advices us not to go deeper than that in our own code.
"since you can never be inside more than 3 blocks at any point" Why?
It just means that when running that particular program you can never be in more than 3 blocks at any point because there are no blocks nested to be deeper than that.
The author did not intend to say "you can never be inside more than 3 blocks at any point" in a general way. This only applies to the specific example.
You can theoretically have as many nested blocks as you'd like. However, any more than 3 nested blocks can result in difficult to read and maintain code:
When "value" is -4 how does it become 4 by -value, and how does -value work?
That's math. - inverts the sign of the number.
Why can blocks only be inside functions?
I tried, and this does not work:
Nothing makes sense in this code not only your question about blocks.
if you could make blocks outside functions, then this code would compile and execute without any problems. There would be no duplicate initialisation error, as is the case in this sample. Hence, I deduced that blocks cannot be made outside of functions.
1. x from line 2 is in scope from line 2 - 8. When you get to line 4, x from line 2 is still in scope, so you can't initialize x again.
-
2. Global variables (variables outside of functions) that aren't constant are bad.
I hope this is easy to understand, I'm not great at explaining things. I'd take a look at '2.4 — Introduction to local scope' and '2.9 -- Naming collisions and an introduction to namespaces' if I were you.
Ignore point 1. I wrote the comment before I knew about variable shadowing.
Name (required)
Website
Save my name, email, and website in this browser for the next time I comment. | https://www.learncpp.com/cpp-tutorial/compound-statements-blocks/ | CC-MAIN-2021-17 | refinedweb | 911 | 68.7 |
.
In deciding which nuclei should be the "fuel" in a nuclear fusion reactor, an important quantity is the "cross section", $\sigma$, a measure of the reaction probability as a function of the kinetic energy of the reactant nuclei. The nuclei are positively-charged and so repel each other (Coulomb repulsion): fusion is only possible if they are able to overcome this repulsion to approach each other at close range. This, in turn, demands that they have a large relative speed and hence kinetic energy. For a viable fusion reactor on Earth, this corresponds to a temperature of many millions of Kelvin.
The IAEA's Nuclear Data Section provides a database of evaluated nuclear data, ENDF, that can be used to illustrate the fusion cross section. The code below uses data from ENDF to plot the fusion cross section for three fusion reactions:
where the D + D reaction has two possible channels with approximately equal probability of occuring; the orange line below depicts the total cross section for both processes.
The reason that the majority of research on experimental fusion reactors has focused on the D + T process is that it has the largest cross section, which peaks at the lowest temperature. This means that the plasma in which fusion occurs does not have to be heated as much: indeed, ITER aims to operate at 150 million K. The 17.6 MeV released in the D + T reaction corresponds to an energy release many times greater than a typical chemical reaction (the combustion of octane releases only 57 eV, which is more than 300,000 times less energy).
The code below requires the following data files:
A small detail arises in the transformation of energy from the "lab-fixed" frame of the ENDF format (in which the heavier, target, reactant is assumed to be stationary) to a frame moving with the centre-of-mass of the system. This is handled in the function which reads in the cross section files if the constant
COFM is set to
True. The cross sections provided by ENDF are not all on the same energy grid, so they are also interpolated onto a common grid by this function.
import numpy as np from matplotlib import rc import matplotlib.pyplot as plt from scipy.constants import e, k as kB rc('font', **{'family': 'serif', 'serif': ['Computer Modern'], 'size': 14}) rc('text', usetex=True) # To plot using centre-of-mass energies instead of lab-fixed energies, set True COFM = True # Reactant masses in atomic mass units (u). masses = {'D': 2.014, 'T': 3.016, '3He': 3.016} # Energy grid, 1 – 1000 keV, evenly spaced in log-space. Egrid = np.logspace(0, 3, 100) def read_xsec(filename): """Read in cross section from filename and interpolate to energy grid.""" E, xs = np.genfromtxt(filename, comments='#', skip_footer=2, unpack=True) if COFM: collider, target = filename.split('_')[:2] m1, m2 = masses[target], masses[collider] E *= m1 / (m1 + m2) xs = np.interp(Egrid, E*1.e3, xs*1.e-28) return xs # D + T -> α + n DT_xs = read_xsec('D_T_-_a_n.txt') # D + D -> T + p DDa_xs = read_xsec('D_D_-_T_p.txt') # D + D -> 3He + n DDb_xs = read_xsec('D_D_-_3He_n.txt') # Total D + D fusion cross section is due to equal contributions from the # above two processes. DD_xs = DDa_xs + DDb_xs # D + 3He -> α + p DHe_xs = read_xsec('D_3He_-_4He_p.txt') fig, ax = plt.subplots() ax.loglog(Egrid, DT_xs, lw=2, label='$\mathrm{D-T}$') ax.loglog(Egrid, DD_xs, lw=2, label='$\mathrm{D-D}$') ax.loglog(Egrid, DHe_xs, lw=2, label='$\mathrm{D-^3He}$') ax.grid(True, which='both', ls='-') ax.set_xlim(1, 1000) xticks= np.array([1, 10, 100, 1000]) ax.set_xticks(xticks) ax.set_xticklabels([str(x) for x in xticks]) if COFM: xlabel ='E(CofM) /keV' else: xlabel ='E /keV' ax.set_xlabel(xlabel) # A second x-axis for energies as temperatures in millions of K ax2 = ax.twiny() ax2.set_xscale('log') ax2.set_xlim(1,1000) xticks2 = np.array([15, 150, 1500, 5000]) ax2.set_xticks(xticks2 * kB/e * 1.e3) ax2.set_xticklabels(xticks2) ax2.set_xlabel('$T$ /million K') ax.set_ylabel('$\sigma\;/\mathrm{m^2}$') ax.set_ylim(1.e-32, 1.e-27) ax.legend() plt.savefig('fusion-xsecs.png') plt.show()
Comments are pre-moderated. Please be patient and your comment will appear soon.
Mark Paris 1 year, 8 months ago
Thanks for this useful and informative post -- much appreciated!Link | Reply
christian 1 year, 7 months ago
Glad you found it useful – thanks for the feedback!Link | Reply
John Sharma 1 year ago
Excellent code and textbook. Thanks. John.Link | Reply
Georg Harrer 2 months, 2 weeks ago
Super helpful thanks.Link | Reply
I'd like to include the p+p -> d + positron + neutrino reaction in the plot allthough the cross section is soo much lower (i have an old plot from lecture nots but with no data source unfortunately)
The problem is that i'm not able to get the p + p cross section data from the ENDF database.
Do you maybe have any idea where or how to get it?
christian 2 months, 2 weeks ago
Yes, I suppose it might not be in ENDF. Have you looked in the solar physics literature, e.g. the papers by Adelberger et al. such as Rev. Mod. Phys. 70 1265 (1998)?Link | Reply
New Comment | https://scipython.com/blog/plotting-nuclear-fusion-cross-sections/ | CC-MAIN-2020-29 | refinedweb | 883 | 66.03 |
Quoting Oleg Nesterov (oleg@redhat.com):> On 09/19, Serge E. Hallyn wrote:> >> > Add to the dev_state and alloc_async structures the user namespace> > corresponding to the uid and euid. Pass these to kill_pid_info_as_uid(),> > which can then implement a proper, user-namespace-aware uid check.> > IOW, we add the additional "user_namespace *" member/argument, and use> it along with uid/euid.> > I am not really sure, but can't we simplify this?> > > @@ -68,6 +69,7 @@ struct dev_state {> > wait_queue_head_t wait; /* wake up if a request completed */> > unsigned int discsignr;> > struct pid *disc_pid;> > + struct user_namespace *user_ns;> > uid_t disc_uid, disc_euid;> > Can't we add "const struct cred *disc_cred" and kill disc_uid/disc_euid> instead?> > Then we redefine kill_pid_info_as_uid() as kill_pid_info_as_cred(...cred...),> it can use cred->cred->uid/euid directly.> > devio.c does get_cred/put_cred instead of get_user_ns/put_user_ns.> > What do you think?Yeah, just as file->f_cred does. Sounds good.I'll re-send both these patches with your suggestions applied. Thanks!-serge | https://lkml.org/lkml/2011/9/20/193 | CC-MAIN-2017-34 | refinedweb | 158 | 61.12 |
I'm using the ReportGenerator.cs class and this class is referenced from the actual calling page Default.aspx and Default.aspx.cs
using GeneraReport;
in the Default.aspx.cs
and the code in GenerateReport.cs:
namespace GeneraReport{ public class ReportGenerator {
This namespace has no reference to Page
However I need to be able to do the following in the public class:
if (dt.Rows.Count == 0) { string Script = ""; Script += "<script language=JavaScript"; Script += "alert('No Rows Found in Query');</script>";
View
Hi,
I'm trying to apply a css class on an html span tag in javascript, like below-
$('#company-info').css("class", "help");
But this line of code isn't working!
Can anyone let me know the right way of doing this?
Dazy.
K
Hii,
Can AnyBody guide me why Javascript is not functioning in my hosted site with Public IP and i had 3rd party tools using the javascript which they were displaying but functionality.
Pls help me to resolve this !!!! + "', '
My aim is to not cause a postback if a session var does not equal a certain value when the user clicks the DELETE button. But this causes a postback regardless after the javascript alert is invoked.
I guess I need another logical scenario?
Hi
I'm using some JavaScript that's called by the class function within a hyperlink control:
<asp:HyperLinkAdd New Official</asp:HyperLink>
Thing is, I also want to apply the styles I've created for this and other links like it using:
CssClass="BackOfficeMenuLink"
But if I add that, the JavaScript stops working.
Any thoughts?
Thanks as alwaysRich
I have an app with many pages where in the page load event I check for a session variable Session("usr") being populated or not.
If it is not I then use the following few lines of code to initiate a JS function that is loaded into each page via an include statement.
It works fine, except for this one page. Any ideas:
Code Behind:
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
Dim bldvw As base = New base
Dim cm As ClientScriptManager = Page.ClientScript
Dim cbReference As String
'see if session has timed out
If bldvw.validateUsr = 0 Then
If (Not ClientScript.IsStartupScriptRegistered("")) Then
cm.RegisterClientScriptBlock(Me.GetType(), "alert", "logout();", True)
' above line is actually hit each time, but the alert msg never comes up
End If
<-- more code irrelevant to this task--->
end sub
Public Function validateUsr() As Integer
If Session("usr") = "" Or Session("usr") = Nothing Then
validateUsr = 0
Else
validateUsr = 1
Return validateUsr
End Function
Mar
hello every one i using system web handler when System.Web.IHttpHandler
save the record than how can show message "Record is saved"
I'm trying to put line breaks in my alert that's in a Response.Write like this:
Response.Write("<script>alert('" + dr["display_title"].ToString() + "\n is " + dr["display_title"].ToString().Length + " characters. Title cannot exceed 100 characters. Please limit all titles to 100 characters or less.')</script>");
Response.Write(
with the \n character, but instead of the alert showing up on multiple lines in the dialog box, the alert itself is breaking up into multiple lines like this:
alert('some text
more text
more text')
How would I get this to work correctly? Thanks
I m new to the asp.net, my query is what is the difference between between " public class ClassName " and " public partial class ClassName : System.Web.UI.Page"?
as we already know these are the ways of defining the class,.. but my actual query is what will be the difference caused if we add ": System.Web.UI.Page" to the class name,.. as i have done some research and found that we can't use page contents without adding ": System.Web.UI.Page ",..For Example: we can't use cookie in simple class without adding ":System.Web.UI.Pag"
Please clear me about this confusion,...
Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend | http://www.dotnetspark.com/links/46411-how-to-do-javascript-alert-from-another.aspx | CC-MAIN-2018-17 | refinedweb | 670 | 64.61 |
I am at least a little familiar with coding, but I haven't really done much of it in 10-ish years, so I am very rusty. When running the following code snippet (I'm using python) in a test, instead of just changing the subkey to the selected value, it also changes the subkey type from REG_DWORD to REG_SZ. Can someone please tell me what am I doing wrong, here? Thanks!
def registryChange():key = Storages.Registry("SOFTWARE\\WOW6432Node\\folder", HKEY_LOCAL_MACHINE)key.SetOption("SyslogToFile", "1")
Solved!
Go to Solution.
Never mind - I figured it out. I need to use key.SetOption("SyslogToFile", 1) instead of key.SetOption("SyslogToFile", "1").
View solution in original post | https://community.smartbear.com/t5/TestComplete-Desktop-Testing/SetOption-is-changing-a-REG-DWORD-to-a-REG-SZ/td-p/163119 | CC-MAIN-2019-47 | refinedweb | 114 | 61.43 |
Comparing Two Numbers
... of comparing two numbers and finding out the greater one. First of all, name a class "Comparing" and take two numbers in this class. Here we have taken a=24...
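A minimal sketch of the idea in that snippet. The second value (14) and the printed message are assumptions, since the excerpt only shows a = 24:

```java
// Sketch of the tutorial's idea: compare two ints and report the greater one.
// The value of b (14) is an assumption; the excerpt above only shows a = 24.
public class Comparing {
    static int greater(int a, int b) {
        return (a > b) ? a : b;
    }

    public static void main(String[] args) {
        int a = 24, b = 14;
        System.out.println("The greater number is " + greater(a, b));
    }
}
```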
Comparing strings - Java Beginners
Comparing strings Can anyone help with finding the number of time(s) a substring appears in a string? The code I've written can get it once...
Comparing tables
Comparing tables How to compare two or more tables in the MySQL database using JDBC?
comparing dates objective c
comparing dates objective c Comparing two different dates in Objective C.
if ([date isEqualToDate:otherDate])
    NSLog(@"%@ is equal to %@", date, otherDate);
Objective C NSString Date Example
Comparing two dates in java
Comparing two dates in java
In this example you will learn how to compare two dates in java.
java.util.Date provides a method to compare two dates... The
example below compares the two dates.
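One hedged way to complete the truncated example with `java.util.Date.compareTo()` (the timestamps below are made up for the demo):

```java
import java.util.Date;

// compareTo() returns a negative value, zero, or a positive value
// depending on whether the first date is before, equal to, or after
// the second. The timestamps below are illustrative.
public class CompareDates {
    static String relation(Date d1, Date d2) {
        int c = d1.compareTo(d2);
        if (c < 0) return "before";
        if (c > 0) return "after";
        return "equal";
    }

    public static void main(String[] args) {
        Date first = new Date(0L);            // the epoch
        Date second = new Date(86_400_000L);  // one day after the epoch
        System.out.println("first is " + relation(first, second) + " second");
    }
}
```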
comparing arraylist of an multi dimensional arraylist
comparing arraylist of an multi dimensional arraylist Can anyone help me in solving the following issue:
Actually I have an arraylist called dany... arraylist. If we find the content of the two arraylists similar then it should be stored
Comparing Two String with <c:if>
Comparing Two String with <c:if>
Whenever we have to check... the if statement in java. This
tag is not applicable if we want to...
Java examples for beginners
... of examples.
Java examples for beginners: Comparing Two Numbers
The tutorial provides... of comparing two numbers and finding out the greater one.
First of all, you will have to name a class "Comparing" and then
take two numbers...
Comparing Arrays
Comparing Arrays : Java Util
...
initializes two arrays and inputs five numbers from the user through the keyboard... class.
Arrays.equals():
The above method compares two arrays.
Arrays is the class
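A self-contained illustration of the `Arrays.equals()` call mentioned above (the array contents are arbitrary):

```java
import java.util.Arrays;

// Arrays.equals() returns true only when both arrays have the same
// length and equal elements in the same order.
public class CompareArrays {
    public static void main(String[] args) {
        int[] first  = {1, 2, 3, 4, 5};
        int[] second = {1, 2, 3, 4, 5};
        int[] third  = {5, 4, 3, 2, 1};

        System.out.println(Arrays.equals(first, second)); // true
        System.out.println(Arrays.equals(first, third));  // false: order differs
    }
}
```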
iPhone SDK Comparing Strings
In this section I will show you how you can compare the two strings in your... with == operator. In the following example code I have compared the two strings... of comparing strings in iPhone sdk programs. The == operator only compares
Comparing the File Dates
Comparing the File Dates
This java example will help you to compare the dates of
files. In java we have the option to
compare two or more files. This is useful when we have
Comparing arrays not working correctly?
Comparing Dates in Java
This example illustrates how to compare two dates. Generally dates...
two given dates. To compare dates the given program is helpful for application
Comparing two Dates in Java with the use of after method
Comparing two Dates in Java with the use of after method... to compare two date
objects in Java programming language. For comparing...
C:\DateExample>java CompareDateAfter
FirstDate:=Wed Oct 08
adding two numbers - Java Beginners
adding two numbers hii friends.......
this is my program...
x[1]=Integer.parseInt(in.readLine());
y[1]=Integer.parseInt(in.readLine());
int s=x[1]+y[1];
System.out.println("Sum of two no. is:"+s);
}
}
Read more...
Add two big numbers - Java Beginners
Add two big numbers - Java Beginners Hi,
I am a beginner in Java and have learned the basic concepts of Java. Now I am trying to find example code for adding big numbers in Java.
I need a basic Java Beginners example. It should be easy
Comparing Log Levels in Java
Comparing Log Levels in Java
.... In Java, the Level class assigns a set of multiple logging
levels that have already... that
assists in turning off logging.
Descriptions of program:
This program creates two
Comparing XML with HTML
Comparing XML with HTML
XML and HTML are both designed for different purposes. Although they have
some similarities in markup syntax, they are created for different
goals. XML is not at all a replacement for HTML
Applet for add two numbers
Applet for add two numbers what is the java applet code for add two numbers?
import java.awt.Graphics;
import javax.swing.*;
public...);
add(text2);
label3 = new Label("Sum of Two Numbers
Character Comparison Example
in Java. The java.lang package provides a method for comparing
two strings case-sensitively. The compareTo()
method compares two strings on the basis... and sequence. For comparing the strings you will need those strings that
have...
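A short demonstration of the case-sensitive `compareTo()` behaviour the excerpt describes (the sample words are mine):

```java
// compareTo() returns 0 for identical strings, and otherwise a value
// whose sign follows the first differing character's Unicode order --
// which is why it is case-sensitive ('A' sorts before 'a').
public class CompareToDemo {
    public static void main(String[] args) {
        System.out.println("apple".compareTo("apple"));      // 0
        System.out.println("Apple".compareTo("apple") < 0);  // true
        System.out.println("banana".compareTo("apple") > 0); // true
    }
}
```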
Add two big numbers
Add two big numbers
In this section, you will learn how to add two big
numbers. For adding two numbers, create two big decimal numbers and then apply the Sum() method
compare two strings in java
compare two strings in java How to compare two strings in java?

public class CompareStrings {
    public static void main(String[] args) {
        String str1 = "RoseIndia";
        String str2 = "RoseIndia";
        if (str1.equals(str2)) {
            System.out.println("The two strings are the same.");
        }
    }
}

Output:
The two strings are the same.
Description:- Here is an example of comparing two...
generating random numbers - Java Beginners
*60 = 60). So the errors are 10 and 0 for the two predictions with an average
two dimensional - Java Beginners
two dimensional write a program to create a 3*3 array and print the sum of all the numbers stored in it. Hi Friend,
Try the following code:
import java.io.*;
import java.util.*;
public class matrix
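Since the answer's code is truncated at `public class matrix`, here is one hedged completion of the exercise (the element values are arbitrary):

```java
// Sums all elements of a 3x3 array, as the question asks.
public class Matrix {
    static int sum(int[][] m) {
        int total = 0;
        for (int[] row : m)
            for (int value : row)
                total += value;
        return total;
    }

    public static void main(String[] args) {
        int[][] m = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
        System.out.println("Sum of all numbers: " + sum(m)); // 45
    }
}
```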
To find first two maximum numbers in an array
To find first two maximum numbers in an array Java program to find first two maximum numbers in an array,using single loop without sorting array
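One possible single-pass answer to that question: track the two largest values while scanning the array once, without sorting (the sample data is mine):

```java
// Finds the two largest values in one pass, without sorting.
public class TopTwo {
    static int[] firstTwoMax(int[] a) {
        int max1 = Integer.MIN_VALUE, max2 = Integer.MIN_VALUE;
        for (int v : a) {
            if (v > max1) {        // new largest: demote the old one
                max2 = max1;
                max1 = v;
            } else if (v > max2) { // new runner-up
                max2 = v;
            }
        }
        return new int[] { max1, max2 };
    }

    public static void main(String[] args) {
        int[] result = firstTwoMax(new int[] {12, 45, 7, 30, 44});
        System.out.println("Largest: " + result[0] + ", second largest: " + result[1]);
    }
}
```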
Quick Sort in Java
Quick Sort in Java is used to sort elements of an array
in two sub-arrays: one part of the array has smaller numbers than the pivot... partitioning is followed in these two sub-arrays until the numbers are sorted...
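A compact in-place sketch of the partition-and-recurse scheme just described. This uses the Lomuto partition with the last element as pivot, which is one of several common choices:

```java
import java.util.Arrays;

// Quicksort: partition around a pivot, then recurse into the two
// sub-arrays until everything is sorted.
public class QuickSort {
    static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi];
        int i = lo; // boundary of the "smaller than pivot" region
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t; // move pivot into place
        sort(a, lo, i - 1);
        sort(a, i + 1, hi);
    }

    public static void main(String[] args) {
        int[] data = {5, 1, 4, 2, 3};
        sort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // [1, 2, 3, 4, 5]
    }
}
```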
Finding all palindrome prime numbers - Java Beginners
Finding all palindrome prime numbers How do I write a program to find all palindrome prime numbers between two integers supplied as input (start and end points are excluded
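A hedged sketch of an answer to that question, with both endpoints excluded as asked; the range 1..100 is just an example:

```java
// Prints palindrome primes strictly between start and end.
public class PalindromePrimes {
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int i = 2; i * i <= n; i++)
            if (n % i == 0) return false;
        return true;
    }

    static boolean isPalindrome(int n) {
        String s = Integer.toString(n);
        return s.equals(new StringBuilder(s).reverse().toString());
    }

    public static void main(String[] args) {
        int start = 1, end = 100; // example bounds, both excluded
        for (int n = start + 1; n < end; n++)
            if (isPrime(n) && isPalindrome(n))
                System.out.print(n + " "); // 2 3 5 7 11 ...
        System.out.println();
    }
}
```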
prime numbers - Java Beginners
prime numbers Write a java program to find the prime numbers between n and m
random numbers - Java Beginners
random numbers write a program to accept 50 numbers and display 5 numbers randomly Hi Friend,
Try the following code:
import...
System.out.println("Enter 10 numbers: ");
for(int i=0;i<10;i++)...
Swapping of two numbers in java
Swapping of two numbers in java
In this example we are going to describe swapping of two numbers in
java without using a third variable. We... values from the command prompt. The swapping of two
numbers is based on simple
random numbers - Java Beginners
to display the random numbers, but not twice or more. I mean I need each number to be displayed once. This code allows some numbers to be displayed more than once.
Hi... Scanner(System.in);
System.out.println("Enter 10 numbers: ");
for(int i=0;i<10;i
Perfect Numbers - Java Beginners
A perfect number equals the sum of its proper divisors, e.g. 6 = 1 + 2 + 3.
Write a java program that finds and prints the three smallest perfect numbers. Use methods
Hi Friend,
Try the following code:
public
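The answer's code is cut off at `public`, so here is one hedged way to finish the exercise — a number is perfect when it equals the sum of its proper divisors:

```java
// Finds and prints the three smallest perfect numbers.
public class PerfectNumbers {
    static boolean isPerfect(int n) {
        int sum = 0;
        for (int d = 1; d <= n / 2; d++)
            if (n % d == 0) sum += d; // accumulate proper divisors
        return n > 1 && sum == n;
    }

    public static void main(String[] args) {
        int found = 0;
        for (int n = 2; found < 3; n++) {
            if (isPerfect(n)) {
                System.out.println(n); // prints 6, 28, 496
                found++;
            }
        }
    }
}
```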
Swapping of two numbers
Swapping of two numbers
... program to calculate swap of two numbers. Swapping
is used where you want... ability.
In this program we will see how we can swap two
numbers. We can do
Two- Dimensional Array - Java Beginners
Two- Dimensional Array I am new in java programming. I am creating a two-dimensional array. This is my code
class BinaryNumbers
{
public static void main(String[] args)
{
//create a two-dimensional array
int ROWS = 21
Addition of two numbers
Addition of two numbers addition of two numbers
Use of string-length() function in XPath
... for comparing two numbers.
In this example we have created an XML file...
Write a program to list all even numbers between two numbers
Write a program to list all even numbers between two numbers
Java Even Numbers - Even Numbers Example in Java:
Here you will learn to write a program for listing out
String comparison example
String comparison example:-
There are many comparison operators for comparing
strings, like < and <=... operator for comparing two strings.
Example:-
<?xml version="
numbers - Java Beginners
Adding two numbers
Adding two numbers Accepting values from the keyboard and adding two numbers
Add Two Numbers in Java
Add Two Numbers in Java
... of two numbers!
Sum: 32... these
arguments and print the addition of those numbers. In this example, args
Java String Examples
Java String Examples
Comparing Strings (== operator)
This section describes how two string references are compared. If two String variables point
Listing all even numbers between two numbers
Listing all even numbers between two numbers Hi,
How to write code to list all the even numbers between two given numbers?
Thanks
Hi... the numbers.
Check the tutorial Write a program to list all even numbers between two
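A small sketch of the even-numbers exercise referred to above (whether the bounds are included varies by phrasing; this version excludes them):

```java
import java.util.ArrayList;
import java.util.List;

// Lists the even numbers strictly between two given bounds.
public class EvenBetween {
    static List<Integer> evens(int low, int high) {
        List<Integer> out = new ArrayList<>();
        for (int n = low + 1; n < high; n++)
            if (n % 2 == 0) out.add(n);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(evens(3, 11)); // [4, 6, 8, 10]
    }
}
```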
Swapping of two numbers without using third variable
Swapping of two numbers without using third variable
In this tutorial we will learn about swapping of two numbers in java
without using a third variable. This is a simple program to swap two values using
arithmetic operations. Swapping of two...
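The arithmetic-operation swap described above, sketched as a small method. Note it can overflow for values near `Integer.MAX_VALUE`; the XOR variant avoids that:

```java
import java.util.Arrays;

// Swaps two ints without a third variable, using addition/subtraction.
public class SwapDemo {
    static int[] swap(int a, int b) {
        a = a + b; // a now holds the sum
        b = a - b; // b becomes the original a
        a = a - b; // a becomes the original b
        return new int[] { a, b };
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(swap(5, 10))); // [10, 5]
    }
}
```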
Find L.C.M. of two numbers in Java
Find L.C.M. of two numbers in Java Program
In this section, you will learn how to find the LCM (Least Common Multiple) of two integers. The least common multiple is the smallest positive integer that is a multiple of both the numbers. Here we
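One standard way to compute it is via the GCD: `lcm(a, b) = a / gcd(a, b) * b` (dividing first avoids an intermediate overflow):

```java
// LCM via Euclid's GCD.
public class Lcm {
    static int gcd(int a, int b) {
        return b == 0 ? a : gcd(b, a % b);
    }

    static int lcm(int a, int b) {
        return a / gcd(a, b) * b;
    }

    public static void main(String[] args) {
        System.out.println(lcm(4, 6)); // 12
        System.out.println(lcm(3, 5)); // 15
    }
}
```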
finding the prime numbers
finding the prime numbers Hi, I am a beginner to java and I have a problem with the code for finding the prime numbers, can someone tell me about...
Java to html - Java Beginners
Java to html
Hi,
I have to compare two text files and put the difference into the html file, using a table. I have finished the comparing part; I need help with writing the results from the java application to the html file
To find first two maximum numbers in an array,using single loop without sorting array.
To find first two maximum numbers in an array,using single loop without sorting array. Java program to find first two maximum numbers in an array,using single loop without sorting array
Comparing XML with HTML
Core Java Doubts - Java Beginners
Core Java Doubts 1)How to swap two numbers suppose a=5 b=10; without using third variable?
2)How to sort two strings? By using collections?
3)What... is capable of comparing two different objects.
b)The Comparable class itself
Printing numbers in pyramid format - Java Beginners
Printing numbers in pyramid format Q) Can you please tel me the code to print the numbers in the following format:
1
2 3
4 5 6
7 8 9 10
Hi Friend,
Try
mysql difference between two numbers
mysql difference between two numbers How to get total bate difference between two dates for example 1/01/2012 and 1/02/2012 in MYSQL?
... between two date. The syntax of DATEDIFF is ..
SELECT DATEDIFF('2012-01-31 23:59
Swap two any numbers
Swap Any Two Numbers
This is a simple Java Oriented language program.
If you are newbie in Java programming then our tutorial and example are
helpful
Comparing Strings in a Locale-Independent Way
Comparing Strings in a Locale-Independent Way
In this section, you will learn how to compare strings in a locale
independent way.
For handling strings, java.text.* package provides the class Collator. This
class performs locale
Difference in two dates - Java Beginners
on that.
The thing is, that I need to find the difference between the two dates in JAVA... for more information: in two dates Hello there once again........
Dear Sir
Java guide for beginners
and examples on simple topics like comparing two numbers,
determining largest...Java guide provided at RoseIndia for beginners is considered best to learn... and understand it completely.
Here is more tutorials for Java coding for beginners
Add Complex Numbers Java
How to Add Complex Numbers Java
In this Java tutorial section, you will learn how to add complex Numbers in Java Programming Language. As you are already aware of Complex numbers. It is composed of two part - a real part and an imaginary
adding two numbers with out using any operator
adding two numbers with out using any operator how to add two numbers with out using any operator
import java.math.*;
class AddNumbers
{
public static void main(String[] args)
{
BigInteger num1=new
adding two numbers.... In this section, we will be creating a
java file named Calculator.java which has
automorphic numbers
automorphic numbers how to find automorphic number in java
Hi Friend,
Pleas visit the following link:
Automorphic numbers
Thanks
how to add to numbers in java
how to add to numbers in java how to add to numbers in java
Prime Numbers
Prime Numbers Create a complete Java program that allows the user to enter a positive integer n, and which then creates and populates an int array with the first n prime numbers. Your program should then display the contents
Rational Numbers
Rational Numbers Write and fully test a class that represents rational numbers. A rational number can be represented as the ratio of two integer... static method that returns the largest common factor of the two positive
PHP Comparison Objects
two objects of class (same or different).
There are mainly = =, = = = operators are used to compare two objects, and instance of operator can be used also... = = = checks two objects and returns true if both refers two the same object a class
defining numbers in Java Script
defining numbers in Java Script Explain about defining numbers in Java Script
Swap two any numbers (from Keyboard)
Swap two any numbers (from Keyboard)
This is a simple Java Oriented language program.
If you are newbie in Java programming then our tutorial and example determine add, product of two numbers
JavaScript determine add, product of two numbers
In this section, you... numbers. To find this, we have created two textboxes
and four button...("Sum of two numbers is: "+sum);
}
function divide
Beginners Java Tutorial
for beginners:
Comparing Two Numbers
This is a very simple example of Java that teaches you the method of
comparing two numbers and finding out....
Swapping of two numbers
This Java programming tutorial
javaException - Java Beginners
javaException hi
when i run the program for comparing 2 images, i got the following exception
Exception in thread "Main" java.lang.OutOfMemoryError: Java heap space
what does it mean?? how can i solve it.
pls help me
Generating random numbers in a range with Java
Generating random numbers in a range with Java Generating random numbers in a range with Java
Java program - convert words into numbers?
Java program - convert words into numbers? convert words into numbers?
had no answer sir
permutstion of numbers in java
permutstion of numbers in java Is it possible to enter the number in a char so that permutation of a number can be acheived which will be in the form of string?????
here is the coding i did...it worked really well when i
Java - Java Beginners
Java How to add and print two numbers in a java program single...;
System.out.prinln(a+b);
Hi friend,
Code to add two number in java
class... :
Thanks
Java Pyramid of Numbers
Java Pyramid of Numbers Hi, I want to know how the code to print the pyramid below works. It uses nested for loops.
Pyramid:
1
2 1 2
odd numbers with loop
odd numbers with loop get the odd numbers till 100 with for,while loop
Java find odd numbers:
class OddNumbers
{
public static void main(String[] args)
{
for(int i=1;i<=100;i
core java - Java Beginners
-in-java/
it's about calculating two numbers
Java Program Code for calculating two numbers java can we write a program for adding two numbers without
to calculate the difference between two dates in java - Java Beginners
to calculate the difference between two dates in java to write a function which calculates the difference between 2 different dates
1.The function...) {
// Creates two calendars instances
Calendar calendar1 = Calendar.getInstance
Sum of first n numbers
Sum of first n numbers i want a simple java program which will show the sum of first
n numbers....
import java.util.*;
public class...;
}
System.out.println("Sum of Numbers from 1 to "+n+" : "+sum
Java Courses
Comparing Two Numbers
Determining the largest number...RoseIndia Java Courses provided online for free can be utilized by beginners...
Describing custom tags in JSP
For further reading, beginners in java can
Divide 2 numbers
Divide 2 numbers Write a java program to divide 2 numbers. Avoid division by zeor by catching the exception.
class Divide
{
public static void main(String[] args)
{
try{
int num1=8
java"oop" - Java Beginners
with the Java? Hi i hope you understand it.//To print the even numbers...){ System.out.println(e); } }}Java program to display all even numbers
EVEN NUMBERS - Java Interview Questions
EVEN NUMBERS i want program of even numbers?i want source code plz reply? Hi Friend,
Try the following code:
class EvenNumbers... counter = 0;
System.out.println("Even Numbers are:" );
for (int i
Two compilation errors.Can anyone help soon. - Java Beginners
Two compilation errors.Can anyone help soon. a program called Date.java to perform error-checking on the initial values for instance fields month, day and year. Also, provide a method nextDay() to increment the day by one
java question
java question comparator and comparable
Differences:
a)A comparable object is capable of comparing itself with another object while a comparator object is capable of comparing two different objects.
b | http://www.roseindia.net/tutorialhelp/comment/35240 | CC-MAIN-2014-52 | refinedweb | 2,946 | 52.9 |
So right now I need to make a program that can determine if several department store customers have exceeded their credit limit on a charge account. I have the account number, balance at the beginning of the month, total of all items charged by the customer this month, total of all credits applied to the customer's account this month, and the allowed credit limit. It needs to input all of these as integers and calculate the new balance using "= beginning balance + charges - credit", display the balance, and determine if it exceeds their credit limit, and display a message if it has exceeded it.
I have the general format down (I hope),
Code :
import java.util.Scanner; public class Credit { //calculates the balance on several credit accounts public void calculateBalance() { Scanner input = new Scanner( System.in ); int account; //account number int oldBalance; //starting balance int charges; //total charges int credits; //total credits int creditLimit; //allowed credit limit int newBalance; //new balance System.out.print( "Enter Account Number (or -1 to quit): " ); int i = scan.nextInt(); //stuff I need to figure out } }
I've been at this for hours and I can't figure this out. I'm not asking for like full answers just things to help me get on the right track. | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/18634-need-help-developing-basic-credit-program-java-printingthethread.html | CC-MAIN-2015-35 | refinedweb | 213 | 57 |
Accessing Private Variables
Closures and functions can't remember any information defined within themselves between invocations. If we want a closure to remember a variable between invocations, one only it has access to, we can nest the definitions inside a block:
def c try{ def a= new Random() //only closure c can see this variable; it is private to c c= { a.nextInt(100) } } 100.times{ println c() } try{ a; assert 0 }catch(e) //'a' inaccessable here { assert e instanceof MissingPropertyException }
We can have more than one closure accessing this private variable:
def counterInit, counterIncr, counterDecr, counterShow //common beginning of names to show common private variable/s try{ def count counterInit= { count= it } counterIncr= { count++ } counterDecr= { count-- } counterShow= { count } } counterInit(0) counterIncr(); counterIncr(); counterDecr(); counterIncr() assert counterShow() == 2
We can also put all closures accessing common private variables in a map to show they're related:
def counter= [:] try{ def count= 0 counter.incr= { count++; counter.show() } counter.decr= { count--; counter.show() } counter.show= { count } } counter.incr() assert counter.show() == 1
Expando
We can access private variables with an Expando instead. An expando allows us to assign closures to Expando names:
def counter= new Expando() try{ def count= 0 counter.incr= { count++; show() } //no need to qualify closure call with expando name counter.decr= { count--; show() } counter.show= { timesShown++; count } counter.timesShown= 0 //we can associate any value, not just closures, to expando keys } counter.incr(); counter.incr(); counter.decr(); counter.incr() assert counter.show() == 2
An expando can also be used when common private variables aren't used:
def language= new Expando() language.name= "Groovy" language.numLetters= { name.size() } assert language.numLetters() == 6 language.name= "Ruby" assert language.numLetters() == 4 language.name= "PHP" assert language.numLetters() == 3
Like individual closures, closures in expandos see all external variables all the way to the outermost block. This is not always helpful for large programs as it can limit our choice of names:
def a= 7 try{ //... ... ... lots of lines and blocks in between ... ... ... def exp= new Expando() exp.c= { //def a= 2 //does not compile if uncommented: a is already defined //... ... ... } }
For single-argument closures, both standalone and within expandos, we can use the implicit parameter as a map for all variables to ensure they're all valid, though the syntax is not very elegant:
def a= 7 try{ def c= { it= [it: it] it.a= 2 it.it + it.a } assert c(3) == 5 }:
def a= 7 def a= 7 class Counter{ //variable within a class is called a field... static public count= 0 //count has 'public' keyword, meaning it's visible from outside class //function within a class is called a method... static incr(){ count++ //variables defined within class visible from everywhere else inside class } static decr(){ //println a //compile error if uncommented: //a is outside the class and not visible count-- } } Counter.incr(); Counter.incr(); Counter.decr(); 5.times{ Counter.incr() } assert Counter.count == 6
Methods act quite similar to standalone functions. They can take parameters:
class Counter{ static private count = 0 //qualified with private, meaning not visible from outside class static incr( n ){ count += n } static decr( count ){ this.count -= count } //params can have same name as a field; 'this.' prefix accesses field static show(){ count } } Counter.incr(2); Counter.incr(7); Counter.decr(4); Counter.incr(6) assert Counter.show() == 11
We can have more than one method of the same name if they each have different numbers of parameters.
class Counter{ static private count = 0 static incr(){ count++ } static incr( n ){ count += n } static decr(){ count-- } static decr( n ){ count -= n } static show(){ count } } Counter.incr(17); Counter.incr(); Counter.decr(4) assert Counter.show() == 14
Methods are also similar to other aspects of functions:
class U{ static a(x, Closure c){ c(x) } static b( a, b=2 ){ "$a, $b" } //last argument/s assigned default values static c( arg, Object[] extras ){ arg + extras.inject(0){ flo, it-> flo+it } } static gcd( m, n ){ if( m%n == 0 )return n; gcd(n,m%n) } //recursion by calling own name } assert U.a(7){ it*it } == 49 //shorthand passing of closures as parameters assert U.b(7, 4) == '7, 4' assert U.b(9) == '9, 2' assert U.c(1,2,3,4,5) == 15 //varying number of arguments using Object[] assert U.gcd( 28, 35 ) == 7
We can assign each method of a static class to a variable and access it directly similar to how we can with functions:
class U{ static private a= 11 static f(n){ a*n } } assert U.f(4) == 44 def g= U.&f //special syntax to assign method to variable assert g(4) == 44 def h = g //don't use special syntax here assert h(4) == 44
When there's no accessibility keyword like 'public' or 'private' in front of a field within a static class, it becomes a property, meaning two extra methods are created:
class Counter{ static count = 0 //property because no accessibility keyword (eg 'public','private') static incr( n ){ count += n } static decr( n ){ count -= n } } Counter.incr(7); Counter.decr(4) assert Counter.count == 3 assert Counter.getCount() == 3 //extra method for property, called a 'getter' Counter.setCount(34) //extra method for property, called a 'setter' assert Counter.getCount() == 34
When we access the property value using normal syntax, the 'getter' or 'setter' is also called:
class Counter{ static count= 0 //'count' is a property //we can define our own logic for the getter and/or setter... static setCount(n){ count= n*2 } //set the value to twice what's supplied static getCount(){ 'count: '+ count } //return the value as a String with 'count: ' prepended } Counter.setCount(23) //our own 'setCount' method is called here assert Counter.getCount() == 'count: 46' //our own 'getCount' method is called here assert Counter.count == 'count: 46' //our own 'getCount' method is also called here Counter.count= 7 assert Counter.count == 'count: 14' //our own 'setCount' method was also called in previous line
To run some code, called a static initializer, the first time the static class is accessed. We can have more than one static initializer in a class.
class Counter{ static count = 0 static{ println 'Counter first accessed' } //static initializer static incr( n ){ count += n } static decr( n ){ count -= n } } println 'incrementing...' Counter.incr(7) //'Counter first accessed' printed here println 'decrementing...' Counter.decr(4) //nothing printed
Instantiable Classes
We can write instantiable classes, templates from which we can construct many instances, called objects or class instances. We don't use the static keyword before the definitions within the class:
class Counter{ def count = 0 //must use def inside classes if no other keyword before name def incr( n ){ count += n } def decr( n ){ count -= n } } def c1= new Counter() //create a new object from class c1.incr(2); c1.incr(7); c1.decr(4); c1.incr(6) assert c1.count == 11 def c2= new Counter() //create another new object from class c2.incr(5); c2.decr(2) assert c2.count == 3
We can run some code the first time each object instance is constructed. First, the instance initializer/s are run. Next run is the constructor with the same number of arguments as in the calling code.
class Counter{ def count { println 'Counter created' } //instance initializer shown by using standalone curlies Counter(){ count= 0 } //instance constructor shown by using class name Counter(n){ count= n } //another constructor with a different number of arguments def incr( n ){ count += n } def decr( n ){ count -= n } } c = new Counter() //'Counter created' printed c.incr(17); c.decr(2) assert c.count == 15 d = new Counter(2) //'Counter created' printed again d.incr(12); d.decr(10); d.incr(3) assert d.count == 7
If we don't define any constructors, we can pass values directly to fields within a class by adding them to the constructor call:
class Dog{ def sit def number def train(){ ([sit()] * number).join(' ') } } def d= new Dog( number:3, sit:{'Down boy!'} ) assert d.train() == 'Down boy! Down boy! Down boy!'
Methods, properties, and fields on instantiable classes act similarly to those on static classes:
class U{ private timesCalled= 0 //qualified with visibility, therefore a field def count = 0 //a property def a(x){ x } def a(x, Closure c){ c(x) } //more than one method of the same name but //each having different numbers of parameters def b( a, b=2 ){ "$a, $b" } //last argument/s assigned default values def c( arg, Object[] extras ){ arg + extras.inject(0){ flo, it-> flo+it } } def gcd( m, n ){ if( m%n == 0 )return n; gcd(n,m%n) } //recursion by calling own name } def u=new U() assert u.a(7){ it*it } == 49 //shorthand passing of closures as parameters assert u.b(7, 4) == '7, 4' assert u.b(9) == '9, 2' assert u.c(1,2,3,4,5) == 15 //varying number of arguments using Object[] assert u.gcd( 28, 35 ) == 7 u.setCount(91) assert u.getCount() == 91
A class can have both static and instantiable parts by using the static keyword on the definitions that are static and not using it on those that are instantiable:
class Dice{ //here is the static portion of the class... static private count //doesn't need a value static{ println 'First use'; count = 0 } static showCount(){ return count } //and here is the instantiable portion... def lastThrow Dice(){ println 'Instance created'; count++ } //static portion can be used by instantiable portion, but not vice versa def throww(){ lastThrow = 1+Math.round(6*Math.random()) //random integer from 1 to 6 return lastThrow } } d1 = new Dice() //'First use' then 'Instance created' printed d2 = new Dice() //'Instance created' printed println "Dice 1: ${(1..20).collect{d1.throww()}}" println "Dice 2: ${(1..20).collect{d2.throww()}}" println "Dice 1 last throw: $d1.lastThrow, dice 2 last throw: $d2.lastThrow" println "Number of dice in play: ${Dice.showCount()}"
A class can have more than one constructor:
class A{ def list= [] A(){ list<< "A constructed" } A(int i){ this() //a constructor can call another constructor if it's the first statement list<< "A constructed with $i" } A(String s){ this(5) list<< "A constructed with '$s'" } } def a1= new A() assert a1.list == ["A constructed"] def a2= new A(7) assert a2.list.collect{it as String} == [ "A constructed", "A constructed with 7", ] def a3= new A('bird') assert a3.list.collect{it as String} == [ "A constructed", "A constructed with 5", "A constructed with 'bird'", ]
Categories
When a class has a category method, that is, a static method where the first parameter acts like an instance of the class, we can use an alternative 'category' syntax to call that method:
class View{ def zoom= 1 def produce(str){ str*zoom } static swap(self, that){ //first parameter acts like instance of the class def a= self.zoom self.zoom= that.zoom that.zoom= a } } def v1= new View(zoom: 5), v2= new View(zoom: 4) View.swap( v1, v2 ) //usual syntax assert v1.zoom == 4 && v2.zoom == 5 use(View){ v1.swap( v2 ) } //alternative syntax assert v1.zoom == 5 && v2.zoom == 4 assert v1.produce('a') == 'aaaaa'
We can also use category syntax when the category method/s are in a different class:
class View{ static timesCalled= 0 //unrelated static definition def zoom= 1 def produce(str){ timesCalled++; str*zoom } } class Extra{ static swap(self, that){ //first parameter acts like instance of View class def a= self.zoom self.zoom= that.zoom that.zoom= a } } def v1= new View(zoom: 5), v2= new View(zoom: 4) use(Extra){ v1.swap( v2 ) } //alternative syntax with category method in different class assert v1.zoom == 4 && v2.zoom == 5 assert v1.produce('a') == 'aaaa'
Many supplied library classes in Groovy have category methods that can be called using category syntax. (However, most category methods on Numbers, Characters, and Booleans do not work with category syntax in Groovy-1.0)
assert String.format('Hello, %1$s.', 42) == 'Hello, 42.' use(String){ assert 'Hello, %1$s.'.format(42) == 'Hello, 42.' }
Far more common are supplied library classes having category methods in another utility class, eg, List having utilities in Collections:
def list= ['a', 7, 'b', 9, 7, 7, 2.4, 7] Collections.replaceAll( list, 7, 55 ) //normal syntax assert list == ['a', 55, 'b', 9, 55, 55, 2.4, 55] list= ['a', 7, 'b', 9, 7, 7, 2.4, 7] use(Collections){ list.replaceAll(7, 55) //category syntax } assert list == ['a', 55, 'b', 9, 55, 55, 2.4, 55]
We can call category methods inside other category methods:
class Extras{ static f(self, n){ "Hello, $n" } } class Extras2{ static g(self, n){ Extras.f(self, n) } static h(self, n){ def ret use(Extras){ ret= self.f(n) } //call Extras.f() as a category method ret } } assert Extras.f(new Extras(), 4) == 'Hello, 4' assert Extras2.g(new Extras2(), 5) == 'Hello, 5' assert Extras2.h(new Extras2(), 6) == 'Hello, 6' class A{ } def a= new A() use(Extras){ assert a.f(14) == 'Hello, 14' } use(Extras2){ assert a.g(15) == 'Hello, 15' assert a.h(16) == 'Hello, 16' //call category method within another }
But we can't call category methods inside another category method from the same class:
class Extras{ static f(self, n){ "Hello, $n" } static g(self, n){ f(self, n) } static h1(self, n){ f(n) } //calling f without first parameter only valid //when called within a category method static h2(self, n){ def ret use(Extras){ ret= self.f(n) } //class as category within itself only valid if method wasn't called //using category syntax ret } } assert Extras.f(new Extras(), 4) == 'Hello, 4' assert Extras.g(new Extras(), 5) == 'Hello, 5' try{ Extras.h1(new Extras(), 6); assert 0 } catch(e){ assert e instanceof MissingMethodException } assert Extras.h2(new Extras(), 7) == 'Hello, 7' class A{ } def a= new A() use(Extras){ assert a.f(14) == 'Hello, 14' assert a.g(15) == 'Hello, 15' assert a.h1(16) == 'Hello, 16' try{ a.h2(17); assert 0 } catch(e){ assert e instanceof GroovyRuntimeException } }
A lot of entities in Groovy are classes, not just the explicit ones we've just learnt about. Numbers, lists, sets, maps, strings, patterns, scripts, closures, functions, and expandos are all implemented under the hood as classes. Classes are the building block of Groovy. | http://groovy.codehaus.org/JN2525-Classes | crawl-002 | refinedweb | 2,379 | 56.86 |
To complete namespace support, JAXP 1.3 introduces a new
javax.xml.namespace package that enables you to manipulate and query namespace information using the
NamespaceContext interface and the
QName class. The
NamespaceContext interface stores the prefix-to-namespace mapping that is available in the current document context. The interface provides methods for getting a namespace URI for a given prefix, getting a prefix for a given namespace URI, or getting all prefixes bound to a given namespace URI. The
NamespaceContext is used in the new XPath API, which is described later in the article.
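Those three lookup methods can be sketched with a simple Map-backed implementation. The class name and the "bk" binding below are invented for illustration; a real implementation should also handle the reserved XML prefixes as the interface contract requires:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import javax.xml.XMLConstants;
import javax.xml.namespace.NamespaceContext;

// Minimal, illustrative NamespaceContext backed by a prefix-to-URI map.
public class SimpleNamespaceContext implements NamespaceContext {
    private final Map<String, String> prefixToUri = new HashMap<String, String>();

    public void bind(String prefix, String uri) {
        prefixToUri.put(prefix, uri);
    }

    public String getNamespaceURI(String prefix) {
        String uri = prefixToUri.get(prefix);
        // Unbound prefixes map to the "no namespace" URI constant.
        return uri != null ? uri : XMLConstants.NULL_NS_URI;
    }

    public String getPrefix(String namespaceURI) {
        for (Map.Entry<String, String> e : prefixToUri.entrySet()) {
            if (e.getValue().equals(namespaceURI)) return e.getKey();
        }
        return null;
    }

    public Iterator<String> getPrefixes(String namespaceURI) {
        List<String> prefixes = new ArrayList<String>();
        for (Map.Entry<String, String> e : prefixToUri.entrySet()) {
            if (e.getValue().equals(namespaceURI)) prefixes.add(e.getKey());
        }
        return prefixes.iterator();
    }

    public static void main(String[] args) {
        SimpleNamespaceContext ctx = new SimpleNamespaceContext();
        ctx.bind("bk", "urn:example:books");
        System.out.println(ctx.getNamespaceURI("bk"));         // urn:example:books
        System.out.println(ctx.getPrefix("urn:example:books")); // bk
    }
}
```

A context like this is handy when evaluating namespace-aware XPath expressions, as shown later in the article.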
The
QName class represents the qualified name as specified by the Namespaces in XML Recommendation (see Resources) and, as mentioned in Part 1, this class was originally defined in the Java API for XML-Based RPC (JAX-RPC) specification (see Resources). The
QName stores values for the local part, the namespace URI, and the corresponding prefix if available. It is important to point out that the prefix value is ignored in the implementation of the
equals() and
hashCode() methods.
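For example (the namespace URI and names here are arbitrary), two QName instances that differ only in prefix compare as equal:

```java
import javax.xml.namespace.QName;

public class QNameDemo {
    public static void main(String[] args) {
        // Same namespace URI and local part, different prefixes.
        QName a = new QName("urn:example:books", "title", "bk");
        QName b = new QName("urn:example:books", "title", "b");
        System.out.println(a.equals(b));                  // true: prefix is ignored
        System.out.println(a.hashCode() == b.hashCode()); // true, for the same reason
        System.out.println(a);                            // {urn:example:books}title
    }
}
```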
The javax.xml package, also mentioned in Part 1, contains one class:
XMLConstants. This class defines useful constants, such as values identifying JAXP's recognized schema languages and various namespace-related constants.
XSL transformation package changes
In this version of JAXP, the changes to the
javax.xml.transform package mainly focus on fixing bugs and clarifying some parts of the API. The most significant change is that the JAXP 1.3 reference implementation changes the default transformation engine: In JAXP 1.2, it was the interpreting transformer (Xalan); in JAXP 1.3, the default transformation engine is the compiling transformer (XSLTC). XSLTC works by compiling a stylesheet into Java byte code, called a translet. The translet is later used to perform XSL transformations. This approach greatly improves the XSLT performance, since each stylesheet is parsed and compiled only once and reused for each subsequent transformation.
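You can also take advantage of compile-once behavior explicitly through javax.xml.transform.Templates. The sketch below (the stylesheet and element names are invented) compiles a stylesheet once and reuses it for several inputs:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Templates;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class TransformTwice {
    // A trivial stylesheet, inlined so the sketch is self-contained.
    static final String XSL =
        "<xsl:stylesheet version='1.0' "
      + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:template match='/greeting'>"
      + "<shout><xsl:value-of select='.'/>!</shout>"
      + "</xsl:template></xsl:stylesheet>";

    public static String shout(Templates compiled, String xml) throws Exception {
        Transformer t = compiled.newTransformer(); // cheap: stylesheet already compiled
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // Compile once (with XSLTC this produces a translet), reuse many times.
        Templates compiled = TransformerFactory.newInstance()
            .newTemplates(new StreamSource(new StringReader(XSL)));
        System.out.println(shout(compiled, "<greeting>hi</greeting>"));
        System.out.println(shout(compiled, "<greeting>bye</greeting>"));
    }
}
```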
The XML Schema to Java types mapping
The XML Schema datatypes are widely accepted and used as a type system for many other specifications, such as Web Services Description Language (WSDL -- see Resources). Many XML applications written in the Java language either need or like to access a Java type that represents an XML Schema datatype value. As a result, the last couple of years have seen several attempts to define a mapping between XML Schema datatypes (for example,
xs:string) and Java types. Examples of such attempts include Castor, the open source XML data binding framework, and the Java Architecture for XML Binding (JAXB) 1.0 specification (see Resources). As you can see in Table 1, the mapping is straightforward for most types.
Table 1. XML Schema datatypes to Java types mapping (partial)
However, some data types defined in the XML Schema Datatypes specification do not map one-to-one to any existing Java classes. In particular, the Java type system does not have a type that corresponds to the XML Schema
xs:duration datatype, and does not have a class with a one-to-one correspondence to other XML Schema date/time types (for example,
xs:gYear).
JAXP 1.3 completes the mapping by defining the missing types as a part of the Java platform. Table 2 shows the newly defined Java types and their mapping to the XML Schema datatypes.
Table 2. New JAXP 1.3 types and their mapping to the XML Schema datatypes

- javax.xml.datatype.Duration maps to xs:duration
- javax.xml.datatype.XMLGregorianCalendar maps to xs:date, xs:time, xs:dateTime, xs:gYear, xs:gYearMonth, xs:gMonth, xs:gMonthDay, and xs:gDay
Note that the new datatypes defined by the XQuery 1.0 and XPath 2.0 specifications (
xdt:dayTimeDuration and
xdt:yearMonthDuration) also map to the
Duration class.
The javax.xml.datatype overview
Like other JAXP packages,
javax.xml.datatype defines a
DatatypeFactory class that enables you to plug in multiple implementations of data type factories. Similar to the DOM and SAX factories (
DocumentBuilderFactory and
SAXParserFactory),
DatatypeFactory is an abstract class that has a static
newInstance() method that enables you to create a concrete implementation of the
DatatypeFactory. Using an instance of the
DatatypeFactory, you can create
Duration and
XMLGregorianCalendar objects.
Duration objects are immutable and you can create them from a wide range of values, including:
- Lexical representations
- Values expressed in milliseconds (Java long)
- A sequence of values indicating positive or negative direction in time, years, months, days, hours, minutes, and seconds
The
Duration class provides several methods, including methods that:
- Allow comparison of
Duration objects (as defined by the XML Schema specification)
- Add or subtract two
Durations
- Add
Duration to Java
Calendar,
Date, or
XMLGregorianCalendar objects
For more details, see the Java documentation for the
Duration class.
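As a small illustration (the values are chosen arbitrarily), creating and combining Duration objects might look like this:

```java
import javax.xml.datatype.DatatypeConstants;
import javax.xml.datatype.DatatypeFactory;
import javax.xml.datatype.Duration;

public class DurationDemo {
    public static void main(String[] args) throws Exception {
        DatatypeFactory df = DatatypeFactory.newInstance();
        Duration oneYear   = df.newDuration("P1Y"); // lexical xs:duration form
        Duration sixMonths = df.newDuration("P6M");

        // Durations are immutable; add() returns a new object.
        Duration total = oneYear.add(sixMonths);
        System.out.println(total); // P1Y6M

        // Comparison follows the XML Schema ordering rules.
        System.out.println(oneYear.compare(sixMonths) == DatatypeConstants.GREATER); // true

        // A Duration can also be built from a millisecond count (one second here).
        System.out.println(df.newDuration(1000));
    }
}
```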
XMLGregorianCalendar objects are mutable, and therefore you can simply set any of the date or time fields directly on the object. The class also has methods that allow for validation of
XMLGregorianCalendar objects, conversion to instances of the Java
GregorianCalendar class, and other actions.
The
javax.xml.datatype package also defines the
DatatypeConstants utility class that contains basic data type values as constants.
Listing 1 shows how to create and work with
Duration and
XMLGregorianCalendar types. For simplicity, you can assume that you want to create an application that, when given a purchase date of a product and the
Duration of the warranty for this product, will compute the date of the warranty's expiration.
Listing 1. Using the JAXP types
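A minimal sketch along those lines (the class name, purchase date, and warranty period are invented):

```java
import javax.xml.datatype.DatatypeFactory;
import javax.xml.datatype.Duration;
import javax.xml.datatype.XMLGregorianCalendar;

public class Warranty {
    public static XMLGregorianCalendar expiration(String purchaseDate, String warranty)
            throws Exception {
        DatatypeFactory df = DatatypeFactory.newInstance();
        XMLGregorianCalendar date = df.newXMLGregorianCalendar(purchaseDate); // xs:date lexical form
        Duration period = df.newDuration(warranty);                           // xs:duration lexical form
        date.add(period); // XMLGregorianCalendar is mutable: add() changes it in place
        return date;
    }

    public static void main(String[] args) throws Exception {
        // A product bought on 2004-11-16 with a one-year warranty:
        System.out.println("Warranty expires on " + expiration("2004-11-16", "P1Y"));
    }
}
```

Running this prints the expiration date in xs:date form, 2005-11-16.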
Listing 2 shows the output of the above application:
Listing 2. Output from Listing 1
XPath 1.0 is a W3C Recommendation that defines a language that provides the capability to extract portions of an XML document. You can extract portions that might be as large as collections of elements together with all their descendants, or as small as a single attribute value. To select some parts of a document, you specify a path between a starting node (called the context node) and the contents to be selected. You might specify a path that simply selects a particular child element of the context node, or one that selects all elements in the entire subtree rooted at the context node that match a complex expression (for example, that they contain attributes with particular values and have exactly two child elements).
Despite the fact that the XPath 1.0 Recommendation has been around for a long time (in XML terms anyway; it celebrated its fifth birthday on November 16, 2004), JAXP 1.3 finally brings this functionality into the Java platform. Unlike previous XPath APIs, JAXP 1.3 is entirely vendor-neutral; it provides the same type of factory mechanism to allow the system to find and create a compliant object, just as has always been true in the parser and transformer arenas. The JAXP 1.3 API is also agnostic about the underlying data model. In principle, you can use any data model with a well-defined mapping to the simple model defined by XPath 1.0 (so that XPath expressions can be applied to it in a well-defined way) with JAXP 1.3. The W3C Document Object Model (DOM) is the only data model that JAXP 1.3 implementations are required to support.
You can find all interfaces and abstract classes associated with the new XPath API in the
javax.xml.xpath package. Unsurprisingly, the object used to create objects that can evaluate XPath expressions is called the
XPathFactory. The objects it creates are simply called
XPaths.
An
XPathFactory is only expected to know how to create
XPath objects for one particular type of data model. Therefore, you must specify the data model when creating an
XPathFactory. As with the validation API, you do this by assigning URIs to data models. If no URI is specified, an
XPathFactory for the DOM is produced. You can use the same
XPath object on multiple DOM trees, though you should note that
XPath objects are not thread-safe.
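A minimal sketch of the default DOM case (the document content is invented): create an XPath from the factory and interpret an expression String directly against a parsed document:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class XPathQuick {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new InputSource(new StringReader("<order><qty>3</qty></order>")));

        // No model URI given, so the factory produces a DOM-model XPath.
        XPath xpath = XPathFactory.newInstance().newXPath();

        // Interpret the expression String directly; this overload returns a String.
        String qty = xpath.evaluate("/order/qty", doc);
        System.out.println(qty); // 3
    }
}
```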
You can use
XPath objects for two primary purposes:
- To evaluate XPath expressions, passed as simple String objects, given a particular node in an instance of the supported data model to act as the context node. In this mode they are said to interpret the XPath expression, since the String is being applied directly to the data model.
- To convert an XPath expression from a String to an
XPathExpression object, which can then be applied to any node in an instance of the supported data model.
XPathExpression objects are created by passing a String representation of the XPath expression into the compile method on
XPath.
XPathExpression objects are compiled, internally optimized representations of the original XPath String. Indeed, in many implementations the
XPathExpressionis made up entirely of Java bytecode -- and itâs hard to get more optimized than that!
Whenever you can do something in two ways, consider the conditions under which you prefer one alternative. With XPath expressions, keep in mind that compiling an XPath expression into any optimized representation, including bytecode, involves quite a bit of effort; hence, if an expression is very simple or used infrequently, itâs probably not worthwhile to compile it. But, for complex expressions, and especially for expressions that are frequently used in your application, compilation can offer dramatic performance increases.
Both
XPath and
XPathExpression offer four different methods for evaluating an XPath expression, which are provided as different overloadings of a method called
evaluate. The signatures for the corresponding methods are identical, except that
XPath's evaluate methods must take a String representing the XPath expression as their first parameter, whereas
XPathExpressions can dispense with this parameter entirely since they already embody particular expressions. For simplicity, this article only covers the
evaluate method of
XPathExpression.
The form of
evaluate that you will use most often is likely to be the method that takes an object representing the context node and a
QName indicating the expressionâs expected return type. The return type can be any one of the four basic XPath 1.0 data types: boolean, string, node set, and number. The data model in use determines how these XPath data types are represented in Java code; for DOM, theyâre defined to be
Boolean, String, org.w3c.dom.NodeList and
Double respectively. One must therefore take into account both the expected return type and the data model when deciding how to cast the object that this method returns. The object representing the context node also needs to be appropriate for the data model; for DOM, you can use any type of
org.w3c.dom.Node. The
XPathConstants class contained in this package defines
QNames for each of the four XPath data types.
The other forms of
evaluate are mere variations:
evaluate(Object item): String-- shorthand for
evaluate(item, XPathConstants.STRING); note that the return type is already known.
evaluate(InputSource source, QName returnType): Object-- causes
org.xml.sax.InputSourceto be parsed into an instance of the data model, the root node of which serves as the context node; otherwise, it is identical to the
evaluatemethod described in the preceding paragraph.
evaluate(InputSource source): String-- shorthand for
evaluate(source, XPathConstants.STRING); it is symmetric with the version of
evaluatedescribed in the first bullet.
The following example returns to the purchase order document used in Part 1. This time, you assume that you have to do some special processing if the purchase involves more than one item and you are shipping the goods to a different person than you are billing. Assuming that
fileSource is an
org.xml.sax.InputSource that points to a purchase order document, this code does the trick:
Listing 3. Using the XPath API
Note a few other details about the XPath API. Foremost among them is that XPath expressions can only reference namespace-qualified element and attribute names by the use of namespace prefixes. To compile an XPath expression that uses namespaces , the application needs to register an instance of an implementation of the
NamespaceContext interface thatâs part of the
javax.xml.namespace package, since no context node is present for namespace declarations to reference.
The XPath 1.0 Recommendation provides some useful hooks for extensibility. XPath expressions can include:
- Variables: The XPath processor associates values to these identifiers when evaluating the expression.
- Functions: The XPath processor associates values to functions when evaluating the expression.
JAXP 1.3 enables this to work by defining the
XPathVariableResolver and
XPathFunctionResolver interfaces. The application can implement both and register either with an
XPathFactory or an
XPath instance.
The
XPathVariableResolver interface contains one method,
resolveVariable. This method takes a
QName that identifies the variable and returns an object appropriate to the underlying data model.
XPathFunctionResolver contains an analogous method called
resolveFunction. But that method takes an
int specifying the number of arguments that the function expects as well as a
QName to identify it. The
resolveFunction method returns an
XPathFunction object.
XPathFunctions have an
evaluate method that takes a
List whose length must correspond to the
int parameter in the
resolveFunction callback that resulted in the
XPathFunction being returned; similarly, the
resolveVariable method returns an object that's appropriate to the data model.
In this article, we described utilities that support XML Namespaces in JAXP 1.3, as well as the slight changes made to the
javax.xml.transform package. We also discussed how this API completes native Java support for all W3C XML Schema datatypes. The article concluded by presenting the XPath capabilities contained in JAXP 1.3. Combined with the information found in the first article in this series, you should now have a clear picture of the many performance and usability enhancements offered by JAXP 1.3.
- Read Part 1 of this two-part series on JAXP 1.3, which provides a brief overview of the specification, gives details of the modifications to the
javax.xml.parserspackage, and describes a powerful schema caching and validation framework (developerWorks, November 2004).
- Find out more about Java API for XML Processing (JAXP).
- Find all of the W3C specifications on the W3C Technical Reports page, including:
- XML Path Language (XPath) Version 1.0
- Namespaces in XML 1.1
- XML Schema 1.1 Part 2: Datatypes
- Web Services Description Language (WSDL) Version 2.0 Part 1: Core Language
- XQuery 1.0
- Take a closer look at the Namespaces in XML page on W3C, where you can learn more about qualified names.
- Read about Java API for XML-Based RPC (JAX-RPC).
- Get the latest copy of Castor from the Castor Web site.
- Find out more about the Java Architecture for XML Binding (JAXB), the evolving standard for Java Platform data binding.
-. | http://www.ibm.com/developerworks/xml/library/x-jaxp13b/index.html | crawl-003 | refinedweb | 2,396 | 53.92 |
execution at the next available price
I'm testing a strategy with multiple stocks, some of which have missing data for the day after the buy order is submitted. Is it possible to execute a buy order at the first available price (skipping days with missing data)? It should be something straightforward that I'm missing... I'm using my own datafeeds (daily bars), if this matters.
Orders in the daily timeframe are executed when the next "date" (ie: bar) shows up.
Yes, right, but how does this work with multiple stocks? Say, on day 1 I submit buy orders for stocks A and B. A has data on day 2, while B only has data on day 3. So I want the order for A to be executed on day 2 and the order for B on day 3. But what I get is both orders being executed on day 3. I guess my ultimate question is: how do I make the datafeeds for different stocks independent of each other? Right now it feels like all the datafeeds are inner-merged on their common non-missing dates, and days that have data for some stocks but not for others are dropped. Is this the intended behaviour, or am I doing something wrong?
The orders have an associated target (the data feed) and they will only be executed when the data feed moves forward.
@anatur said in execution at the next available price:
I guess my ultimate question is how to make datafeeds for different stocks kind of independent of each other?
They are already independent.
@anatur said in execution at the next available price:
Right now I feel that all datafeeds for different stocks are inner-merged on common nonmissing dates and the days that have data for some stocks but not for the others are dropped
Nothing is merged.
@anatur said in execution at the next available price:
Is this the intended feature or am I doing something wrong?
Without some simple code to replicate what you are doing, nothing can be really said.
Thanks, good to know that it is supposed to work how I need it to work. Could you please help me to understand what I'm doing wrong then?
That's how I feed the data. My dataframe is ohlc, with security, date, and OHLCV columns.
for s in ohlc.security.unique().tolist(): data_df = ohlc[ohlc.security == s].copy() data = bt.feeds.PandasData(dataname = data_df, name = s, datetime= data_df.columns.get_loc('date'), open= data_df.columns.get_loc('Price_open'), close= data_df.columns.get_loc('Price'), high = data_df.columns.get_loc('Price_high'), low= data_df.columns.get_loc('Price_low'), volume = data_df.columns.get_loc('Volume')) cerebro.adddata(data)
The dummy strategy I'm testing is buy and hold all securities at the beginning of the period.
class BH_strategy(bt.Strategy): def __init__(self): self.first_day = True self.stocks = self.getdatanames() def next(self): if self.first_day: for s in self.stocks: self.order_target_percent(target=0.01, data = self.getdatabyname(s)) self.first_day = False
The notify_order method is taken from the introduction tutorial and I didn't change anything in it:
def notify_order(self, order): if order.status in [order.Submitted, order.Accepted]: return if order.status in [order.Completed]: if order.isbuy(): self.log('BUY EXECUTED, %.2f' % order.executed.price) elif order.issell(): self.log('SELL EXECUTED, %.2f' % order.executed.price) self.bar_executed = len(self) elif order.status in [order.Canceled, order.Margin, order.Rejected]: self.log('Order Canceled/Margin/Rejected') self.order = None
The first date in my dataset is 2012-01-01. And I have data for some stocks on 2012-01-02. But when I run the backtester, I get all orders executed on 2012-01-03.
Execution date may seem obvious to you, but see:
There are no data samples (the first 3 days of 2 data feeds would be enough)
The
notify_orderdoesn't show when the orders have been executed and there is no log of what it has actually printed.
An obvious question would be: how do you know when the orders were executed?
A working example (in case there is really a bug somewhere) is the best way to go. | https://community.backtrader.com/topic/658/execution-at-the-next-available-price | CC-MAIN-2020-10 | refinedweb | 701 | 59.7 |
The author selected The Computer History Museum to receive a donation as part of the Write for DOnations program.
Introduction
Twitter bots are a powerful way of managing your social media as well as extracting information from the microblogging network. By leveraging Twitter’s versatile APIs, a bot can do a lot of things: tweet, retweet, “favorite-tweet”, follow people with certain interests, reply automatically, and so on. Even though people can, and do, abuse their bot’s power, leading to a negative experience for other users, research shows that people view Twitter bots as a credible source of information. For example, a bot can keep your followers engaged with content even when you’re not online. Some bots even provide critical and helpful information, like @EarthquakesSF. The applications for bots are limitless. As of 2019, it is estimated that bots account for about 24% of all tweets on Twitter.
In this tutorial, you'll build a Twitter bot using this Twitter API library for Python. You'll use API keys from your Twitter account to authorize your bot and build one capable of scraping content from two websites. Furthermore, you'll program your bot to alternately tweet content from these two websites at set time intervals. Note that you'll use Python 3 in this tutorial.
Prerequisites
You will need the following to complete this tutorial:
Note: You'll be setting up a developer account with Twitter, which involves an application review by Twitter before you can access the API keys you require for this bot. Step 1 walks through the specific details for completing the application.
Step 1 — Setting Up Your Developer Account and Accessing Your Twitter API Keys
Before you begin coding your bot, you’ll need the API keys for Twitter to recognize the requests of your bot. In this step, you’ll set up your Twitter Developer Account and access your API keys for your Twitter bot.
To get your API keys, head over to developer.twitter.com and register your bot application with Twitter by clicking on Apply in the top right section of the page.
Now click on Apply for a developer account.
Next, click on Continue to associate your Twitter username with your bot application that you’ll be building in this tutorial.
On the next page, for the purposes of this tutorial, you’ll choose the I am requesting access for my own personal use option since you’ll be building a bot for your own personal education use.
After choosing your Account Name and Country, move on to the next section. For What use case(s) are you interested in?, pick the Publish and curate Tweets and Student project / Learning to code options. These categories are the best representation of why you’re completing this tutorial.
Then provide a description of the bot you’re trying to build. Twitter requires this to protect against bot abuse; in 2018 they introduced such vetting. For this tutorial, you’ll be scraping tech-focused content from The New Stack and The Coursera Blog.
When deciding what to enter into the description box, model your answer on the following lines for the purposes of this tutorial:
I’m following a tutorial to build a Twitter bot that will scrape content from websites like thenewstack.io (The New Stack) and blog.coursera.org (Coursera’s Blog) and tweet quotes from them. The scraped content will be aggregated and will be tweeted in a round-robin fashion via Python generator functions.
Finally, choose no for Will your product, service, or analysis make Twitter content or derived information available to a government entity?
Next, accept Twitter’s terms and conditions, click on Submit application, and then verify your email address. Twitter will send a verification email to you after your submission of this form.
Once you verify your email, you’ll get an Application under review page with a feedback form for the application process.
You will also receive another email from Twitter regarding the review:
The timeline for Twitter's application review process can vary significantly; Twitter often confirms an application within a few minutes, but if your review takes longer, it is not unusual to wait a day or two. Once you receive confirmation, Twitter has authorized you to generate your keys. You can access these under the Keys and tokens tab after clicking the details button of your app on developer.twitter.com/apps.
Finally go to the Permissions tab on your app’s page and set the Access Permission option to Read and Write since you want to write tweet content too. Usually, you would use the read-only mode for research purposes like analyzing trends, data-mining, and so on. The final option allows users to integrate chatbots into their existing apps, since chatbots require access to direct messages.
You have access to Twitter’s powerful API, which will be a crucial part of your bot application. Now you’ll set up your environment and begin building your bot.
Step 2 — Building the Essentials
In this step, you’ll write code to authenticate your bot with Twitter using the API keys, and make the first programmatic tweet via your Twitter handle. This will serve as a good milestone in your path towards the goal of building a Twitter bot that scrapes content from The New Stack and the Coursera Blog and tweets them periodically.
First, you’ll set up a project folder and a specific programming environment for your project.
Create your project folder:

- mkdir bird

Move into your project folder:

- cd bird

Then create a new Python virtual environment for your project:

- python3 -m venv bird-env
Then activate your environment using the following command:
- source bird-env/bin/activate
This will attach a
(bird-env) prefix to the prompt in your terminal window.
Now move to your text editor and create a file called
credentials.py, which will store your Twitter API keys:
Add the following content, replacing the highlighted code with your keys from Twitter:
bird/credentials.py
ACCESS_TOKEN='your-access-token'
ACCESS_SECRET='your-access-secret'
CONSUMER_KEY='your-consumer-key'
CONSUMER_SECRET='your-consumer-secret'
Now, you'll install the main API library for sending requests to Twitter. For this project, you'll require the following libraries:
nltk,
requests,
lxml,
random, and
time.
random and
time are part of Python's standard library, so you don't need to separately install these libraries. To install the remaining libraries, you'll use pip, a package manager for Python.
Open your terminal, ensure you're in the project folder, and run the following command:
- pip3 install lxml nltk requests twitter
lxml and
requests: You will use them for web scraping.
nltk (natural language toolkit): You will use this to split paragraphs from blogs into sentences.
random: You will use this to randomly select parts of an entire scraped blog post.
time: You will use this to make your bot sleep periodically after certain actions.
Once you have installed the libraries, you're all set to begin programming. Now, you'll import your credentials into the main script that will run the bot. Alongside
credentials.py, from your text editor create a file in the
bird project directory, and name it
bot.py:
In practice, you would spread the functionality of your bot across multiple files as it grows more and more sophisticated. However, in this tutorial, you'll put all of your code in a single script,
bot.py, for demonstration purposes.
First you'll test your API keys by authorizing your bot. Begin by adding the following snippet to
bot.py:
bird/bot.py
import random
import time

from lxml.html import fromstring
import nltk
nltk.download('punkt')
import requests
from twitter import OAuth, Twitter

import credentials
Here, you import the required libraries; and in a couple of instances you import the necessary functions from the libraries. You will use the
fromstring function later in the code to convert the string source of a scraped webpage to a tree structure that makes it easier to extract relevant information from the page.
OAuth will help you in constructing an authentication object from your keys, and
Twitter will help you in communicating with Twitter's API.
Now extend
bot.py with the following lines:
bird/bot.py
...

tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')

oauth = OAuth(
        credentials.ACCESS_TOKEN,
        credentials.ACCESS_SECRET,
        credentials.CONSUMER_KEY,
        credentials.CONSUMER_SECRET
    )
t = Twitter(auth=oauth)
nltk.download('punkt') downloads a dataset necessary for parsing paragraphs and tokenizing (splitting) them into smaller components.
tokenizer is the object you'll use later in the code for splitting paragraphs written in English.
oauth is the authentication object constructed by feeding the imported
OAuth class with your API keys. You authenticate your bot via the line
t = Twitter(auth=oauth).
CONSUMER_KEY and
CONSUMER_SECRET help in recognizing your application. Finally,
ACCESS_TOKEN and
ACCESS_SECRET help in recognizing the handle via which the application interacts with Twitter. You'll use this
t object to communicate your requests to Twitter.
Now save this file and run it in your terminal using the following command:
Your output will look similar to the following, which means your authorization was successful:
Output[nltk_data] Downloading package punkt to /Users/binaryboy/nltk_data... [nltk_data] Package punkt is already up-to-date!
If you do receive an error, verify your saved API keys with those in your Twitter developer account and try again. Also ensure that the required libraries are installed correctly. If not, use
pip3 again to install them.
Now you can try tweeting something programmatically. Type the same command on the terminal with the
-i flag to open the Python interpreter after the execution of your script:
Next, type the following to send a tweet via your account:
- t.statuses.update(status="Just setting up my Twttr bot")
Now open your Twitter timeline in a browser, and you'll see a tweet at the top of your timeline containing the content you posted.
Close the interpreter by typing
quit() or
CTRL + D.
Your bot now has the fundamental capability to tweet. To develop your bot to tweet useful content, you'll incorporate web scraping in the next step.
Step 3 — Scraping Websites for Your Tweet Content
To introduce some more interesting content to your timeline, you'll scrape content from the New Stack and the Coursera Blog, and then post this content to Twitter in the form of tweets. Generally, to scrape the appropriate data from your target websites, you have to experiment with their HTML structure. Each tweet coming from the bot you'll build in this tutorial will have a link to a blog post from the chosen websites, along with a random quote from that blog. You'll implement this procedure within a function specific to scraping content from Coursera, so you'll name it
scrape_coursera().
First open
bot.py:
Add the
scrape_coursera() function to the end of your file:
bird/bot.py
...

t = Twitter(auth=oauth)


def scrape_coursera():
To scrape information from the blog, you'll first request the relevant webpage from Coursera's servers. For that you will use the
get() function from the
requests library.
get() takes in a URL and fetches the corresponding webpage. So, you'll pass
blog.coursera.org as an argument to
get(). But you also need to provide a header in your GET request, which will ensure Coursera's servers recognize you as a genuine client. Add the following highlighted lines to your
scrape_coursera() function to provide a header:
bird/bot.py
def scrape_coursera():
    HEADERS = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5)'
                      ' AppleWebKit/537.36 (KHTML, like Gecko) Cafari/537.36'
        }
This header will contain information pertaining to a defined web browser running on a specific operating system. As long as this information (usually referred to as
User-Agent) corresponds to real web browsers and operating systems, it doesn't matter whether the header information aligns with the actual web browser and operating system on your computer. Therefore this header will work fine for all systems.
Once you have defined the headers, add the following highlighted lines to make a GET request to Coursera by specifying the URL of the blog webpage:
bird/bot.py
...

def scrape_coursera():
    HEADERS = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5)'
                      ' AppleWebKit/537.36 (KHTML, like Gecko) Cafari/537.36'
        }
    r = requests.get('https://blog.coursera.org', headers=HEADERS)
    tree = fromstring(r.content)
This will fetch the webpage to your machine and save the information from the entire webpage in the variable
r. You can assess the HTML source code of the webpage using the
content attribute of
r. Therefore, the value of
r.content is the same as what you see when you inspect the webpage in your browser by right clicking on the page and choosing the Inspect Element option.
Here you've also added the
fromstring function. You can pass the webpage's source code to the
fromstring function imported from the
lxml library to construct the
tree structure of the webpage. This tree structure will allow you to conveniently access different parts of the webpage. HTML source code has a particular tree-like structure; every element is enclosed in the
<html> tag and nested thereafter.
Now, open blog.coursera.org in a browser and inspect its HTML source using the browser's developer tools. Right click on the page and choose the Inspect Element option. You'll see a window appear at the bottom of the browser, showing part of the page's HTML source code.
Next, right click on the thumbnail of any visible blog post and then inspect it. The HTML source will highlight the relevant HTML lines where that blog thumbnail is defined. You'll notice that all blog posts on this page are defined within a
<div> tag with a class of
"recent":
Thus, in your code, you'll use all such blog post
div elements via their XPath, which is a convenient way of addressing elements of a web page.
To do so, extend your function in
bot.py as follows:
bird/bot.py
...

def scrape_coursera():
    HEADERS = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5)'
                      ' AppleWebKit/537.36 (KHTML, like Gecko) Cafari/537.36'
        }
    r = requests.get('https://blog.coursera.org', headers=HEADERS)
    tree = fromstring(r.content)
    links = tree.xpath('//div[@class="recent"]//div[@class="title"]/a/@href')
    print(links)


scrape_coursera()
Here, the XPath (the string passed to
tree.xpath()) communicates that you want
div elements from the entire web page source, of class
"recent". The
// corresponds to searching the whole webpage,
div tells the function to extract only the
div elements, and
[@class="recent"] asks it to only extract those
div elements that have the values of their
class attribute as
"recent".
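To see this XPath in action outside the bot, here is a minimal standalone sketch. The toy HTML below is invented to mimic the structure just described; it is not Coursera's actual markup:

```python
from lxml.html import fromstring

# A toy page imitating the described structure: blog entries inside
# <div class="recent"> wrappers, each with a titled link.
page = '''
<html><body>
  <div class="recent">
    <div class="title"><a href="/post-1">Post one</a></div>
  </div>
  <div class="recent">
    <div class="title"><a href="/post-2">Post two</a></div>
  </div>
  <div class="old">
    <div class="title"><a href="/post-0">Ignored</a></div>
  </div>
</body></html>
'''

tree = fromstring(page)
# Only the divs whose class attribute is exactly "recent" contribute links.
links = tree.xpath('//div[@class="recent"]//div[@class="title"]/a/@href')
print(links)  # ['/post-1', '/post-2']
```

Note how the `div` of class `"old"` is excluded even though it contains a matching `title` div, because the outer predicate filters it out first.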
However, you don't need these elements themselves, you only need the links they're pointing to, so that you can access the individual blog posts to scrape their content. Therefore, you extract all the links using the values of the
href anchor tags that are within the previous
div tags of the blog posts.
To test your program so far, you call the
scrape_coursera() function at the end of
bot.py.
Save and exit
bot.py.
Now run
bot.py with the following command:
In your output, you'll see a list of URLs like the following:
Output['', '', ...]
After you verify the output, you can remove the last two highlighted lines from
bot.py script:
bird/bot.py
...

def scrape_coursera():
    ...
    tree = fromstring(r.content)
    links = tree.xpath('//div[@class="recent"]//div[@class="title"]/a/@href')
    ~~print(links)~~


~~scrape_coursera()~~
Now extend the function in
bot.py with the following highlighted line to extract the content from a blog post:
bird/bot.py
...

def scrape_coursera():
    ...
    links = tree.xpath('//div[@class="recent"]//div[@class="title"]/a/@href')

    for link in links:
        r = requests.get(link, headers=HEADERS)
        blog_tree = fromstring(r.content)
You iterate over each link, fetch the corresponding blog post, extract a random sentence from the post, and then tweet this sentence as a quote, along with the corresponding URL. Extracting a random sentence involves three parts:
- Grabbing all the paragraphs in the blog post as a list.
- Selecting a paragraph at random from the list of paragraphs.
- Selecting a sentence at random from this paragraph.
You'll execute these steps for each blog post. For fetching one, you make a GET request for its link.
Now that you have access to the content of a blog, you will introduce the code that executes these three steps to extract the content you want from it. Add the following extension to your scraping function that executes the three steps:
bird/bot.py
...

def scrape_coursera():
    ...
    for link in links:
        r = requests.get(link, headers=HEADERS)
        blog_tree = fromstring(r.content)
        paras = blog_tree.xpath('//div[@class="entry-content"]/p')
        paras_text = [para.text_content() for para in paras if para.text_content()]
        para = random.choice(paras_text)
        para_tokenized = tokenizer.tokenize(para)

        for _ in range(10):
            text = random.choice(para_tokenized)
            if text and 60 < len(text) < 210:
                break
If you inspect the blog post by opening the first link, you'll notice that all the paragraphs belong to the
div tag having
entry-content as its class. Therefore, you extract all paragraphs as a list with
paras = blog_tree.xpath('//div[@class="entry-content"]/p').
The list elements aren't literal paragraphs; they are
Element objects. To extract the text out of these objects, you use the
text_content() method. This line follows Python's list comprehension design pattern, which defines a collection using a loop that is usually written out in a single line. In
bot.py, you extract the text for each paragraph element object and store it in a list if the text is not empty. To randomly choose a paragraph from this list of paragraphs, you incorporate the
random module.
Finally, you have to select a sentence at random from this paragraph, which is stored in the variable
para. For this task, you first break the paragraph into sentences. One approach is to use Python's
split() method. However, this can be difficult, since a sentence can break at multiple points. Therefore, to simplify your splitting tasks, you leverage natural language processing through the
nltk library. The
tokenizer object you defined earlier in the tutorial will be useful for this purpose.
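A quick sketch (with an invented sample sentence) shows why a naive split falls short:

```python
# Splitting on '. ' treats the period after the abbreviation "Dr" as a
# sentence boundary, producing broken "sentences" -- which is why the
# bot relies on nltk's punkt tokenizer instead.
para = "Dr. Smith joined Coursera. He teaches Python."
naive = para.split('. ')
print(naive)  # ['Dr', 'Smith joined Coursera', 'He teaches Python.']
```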
Now that you have a list of sentences, you call
random.choice() to extract a random sentence. You want this sentence to be a quote for a tweet, so it can't exceed 280 characters. However, for aesthetic reasons, you'll select a sentence that is neither too big nor too small. You designate that your tweet sentence should have a length between 60 to 210 characters. The sentence
random.choice() picks might not satisfy this criterion. To identify the right sentence, your script will make ten attempts, checking for the criterion each time. Once the randomly picked-up sentence satisfies your criterion, you can break out of the loop.
Although the probability is quite low, it is possible that none of the sentences meet this size condition within ten attempts. In this case, you'll ignore the corresponding blog post and move on to the next one.
Now that you have a sentence to quote, you can tweet it with the corresponding link. You can do this by yielding a string that contains the randomly picked-up sentence as well as the corresponding blog link. The code that calls this
scrape_coursera() function will then post the yielded string to Twitter via Twitter's API.
Extend your function as follows:
bird/bot.py
...

def scrape_coursera():
    ...
    for link in links:
        ...
        para_tokenized = tokenizer.tokenize(para)

        for _ in range(10):
            text = random.choice(para_tokenized)
            if text and 60 < len(text) < 210:
                break
        else:
            yield None

        yield '"%s" %s' % (text, link)
The script only executes the
else statement when the preceding
for loop doesn't break. Thus, it only happens when the loop is not able to find a sentence that fits your size condition. In that case, you simply yield
None so that the code that calls this function is able to determine that there is nothing to tweet. It will then move on to call the function again and get the content for the next blog link. But if the loop does break it means the function has found an appropriate sentence; the script will not execute the
else statement, and the function will yield a string composed of the sentence as well as the blog link, separated by a single whitespace.
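If the for...else construct is unfamiliar, this small self-contained sketch reproduces the pattern the scraper uses:

```python
def pick(sentences):
    """Return the first sentence of acceptable length, or None --
    mirroring the bot's retry loop and its for/else fallback."""
    for text in sentences:
        if 60 < len(text) < 210:
            break  # found a fit; the else clause is skipped
    else:
        return None  # loop finished without breaking
    return text

print(pick(['too short', 'x' * 100]))  # prints the 100-character string
print(pick(['too short']))             # prints None
```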
The implementation of the
scrape_coursera() function is almost complete. If you want to make a similar function to scrape another website, you will have to repeat some of the code you've written for scraping Coursera's blog. To avoid rewriting and duplicating parts of the code and to ensure your bot's script follows the DRY principle (Don't Repeat Yourself), you'll identify and abstract out parts of the code that you will use again and again for any scraper function written later.
Regardless of the website the function is scraping, you'll have to randomly pick up a paragraph and then choose a random sentence from this chosen paragraph — you can extract out these functionalities in separate functions. Then you can simply call these functions from your scraper functions and achieve the desired result. You can also define
HEADERS outside the
scrape_coursera() function so that all of the scraper functions can use it. Therefore, in the code that follows, the
HEADERS definition should precede that of the scraper function, so that eventually you're able to use it for other scrapers:
bird/bot.py
...

HEADERS = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5)'
                  ' AppleWebKit/537.36 (KHTML, like Gecko) Cafari/537.36'
    }


def scrape_coursera():
    r = requests.get('https://blog.coursera.org', headers=HEADERS)
    ...
Now you can define the
extract_paratext() function, which extracts a random paragraph from a list of paragraph objects. The list will pass to the function as a
paras argument, and the function will return the chosen paragraph's tokenized form, which you'll use later for sentence extraction:
bird/bot.py
...


def extract_paratext(paras):
    """Extracts text from <p> elements and returns a clean, tokenized random
    paragraph."""
    paras = [para.text_content() for para in paras if para.text_content()]
    para = random.choice(paras)
    return tokenizer.tokenize(para)


def scrape_coursera():
    r = requests.get('https://blog.coursera.org', headers=HEADERS)
    ...
Next, you will define a function that will extract a random sentence of suitable length (between 60 and 210 characters) from the tokenized paragraph it gets as an argument, which you can name as
para. If such a sentence is not discovered after ten attempts, the function returns
None instead. Add the following highlighted code to define the
extract_text() function:
bird/bot.py
...


def extract_paratext(paras):
    ...
    return tokenizer.tokenize(para)


def extract_text(para):
    """Returns a sufficiently-large random text from a tokenized paragraph,
    if such text exists. Otherwise, returns None."""
    for _ in range(10):
        text = random.choice(para)
        if text and 60 < len(text) < 210:
            return text

    return None


def scrape_coursera():
    r = requests.get('https://blog.coursera.org', headers=HEADERS)
    ...
Once you have defined these new helper functions, you can redefine the
scrape_coursera() function to look as follows:
bird/bot.py
...


def scrape_coursera():
    """Scrapes content from the Coursera blog."""
    url = 'https://blog.coursera.org'
    r = requests.get(url, headers=HEADERS)
    tree = fromstring(r.content)
    links = tree.xpath('//div[@class="recent"]//div[@class="title"]/a/@href')

    for link in links:
        r = requests.get(link, headers=HEADERS)
        blog_tree = fromstring(r.content)
        paras = blog_tree.xpath('//div[@class="entry-content"]/p')
        para = extract_paratext(paras)
        text = extract_text(para)
        if not text:
            continue

        yield '"%s" %s' % (text, link)
Save and exit
bot.py.
Here you're using
yield instead of
return because, for iterating over the links, the scraper function will give you the tweet strings one-by-one in a sequential fashion. This means when you make a first call to the scraper
sc defined as
sc = scrape_coursera(), you will get the tweet string corresponding to the first link among the list of links that you computed within the scraper function. If you run the following code in the interpreter, you'll get
string_1 and
string_2 as displayed below, if the
links variable within
scrape_coursera() holds a list that looks like
["", "", ...].
Instantiate the scraper and call it
sc:
>>> sc = scrape_coursera()
It is now a generator; it generates or scrapes relevant content from Coursera, one at a time. You can access the scraped content one-by-one by calling
next() over
sc sequentially:
>>> string_1 = next(sc)
>>> string_2 = next(sc)
Now you can print these strings to view the scraped content:
>>> print(string_1)
"Other speakers include Priyanka Sharma, director of cloud native alliances at GitLab and Dan Kohn, executive director of the Cloud Native Computing Foundation."
>>>
>>> print(string_2)
"You can learn how to use the power of Python for data analysis with a series of courses covering fundamental theory and project-based learning."
>>>
If you use
return instead, you will not be able to obtain the strings one-by-one and in a sequence. If you simply replace the
yield with
return in
scrape_coursera(), you'll always get the string corresponding to the first blog post, instead of getting the first one in the first call, second one in the second call, and so on. You can modify the function to simply return a list of all the strings corresponding to all the links, but that is more memory intensive. Also, this kind of program could potentially make a lot of requests to Coursera's servers within a short span of time if you want the entire list quickly. This could result in your bot getting temporarily banned from accessing a website. Therefore,
yield is the best fit for a wide variety of scraping jobs, where you only need information scraped one-at-a-time.
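The behavior can be reproduced with a toy generator (the strings here are placeholders, not real scraped content):

```python
def toy_scraper():
    # Yields one tweet-like string per "link", just as scrape_coursera() does.
    for link in ['link-1', 'link-2']:
        yield 'quote from %s' % link

gen = toy_scraper()
print(next(gen))  # quote from link-1
print(next(gen))  # quote from link-2
# A third next(gen) would raise StopIteration -- the signal the bot's
# main loop catches later to re-instantiate an exhausted scraper.
```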
Step 4 — Scraping Additional Content
In this step, you'll build a scraper for thenewstack.io. The process is similar to what you've completed in the previous step, so this will be a quick overview.
Open the website in your browser and inspect the page source. You'll find here that all blog sections are
div elements of class
normalstory-box.
Now you'll make a new scraper function named
scrape_thenewstack() and make a GET request to thenewstack.io from within it. Next, extract the links to the blogs from these elements and then iterate over each link. Add the following code to achieve this:
bird/bot.py
...

def scrape_coursera():
    ...
    yield '"%s" %s' % (text, link)


def scrape_thenewstack():
    """Scrapes news from thenewstack.io"""
    r = requests.get('https://thenewstack.io', verify=False)
    tree = fromstring(r.content)
    links = tree.xpath('//div[@class="normalstory-box"]/header/h2/a/@href')

    for link in links:
You use the
verify=False flag because websites can sometimes have expired security certificates and it's OK to access them if no sensitive data is involved, as is the case here. The
verify=False flag tells the
requests.get method to not verify the certificates and continue fetching data as usual. Otherwise, the method throws an error about expired security certificates.
You can now extract the paragraphs of the blog corresponding to each link, and use the
extract_paratext() function you built in the previous step to pull out a random paragraph from the list of available paragraphs. Finally, extract a random sentence from this paragraph using the
extract_text() function, and then
yield it with the corresponding blog link. Add the following highlighted code to your file to accomplish these tasks:
bird/bot.py
...

def scrape_thenewstack():
    """Scrapes news from thenewstack.io"""
    r = requests.get('https://thenewstack.io', verify=False)
    tree = fromstring(r.content)
    links = tree.xpath('//div[@class="normalstory-box"]/header/h2/a/@href')

    for link in links:
        r = requests.get(link, verify=False)
        tree = fromstring(r.content)
        paras = tree.xpath('//div[@class="post-content"]/p')
        para = extract_paratext(paras)
        text = extract_text(para)
        if not text:
            continue

        yield '"%s" %s' % (text, link)
You now have an idea of what a scraping process generally encompasses. You can now build your own, custom scrapers that can, for example, scrape the images in blog posts instead of random quotes. For that, you can look for the relevant
<img> tags. Once you have the right path for tags, which serve as their identifiers, you can access the information within tags using the names of corresponding attributes. For example, in the case of scraping images, you can access the links of images using their
src attributes.
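As a sketch of that idea, here is how src attributes could be collected from a made-up post snippet (the markup below is invented for illustration, not taken from either site):

```python
from lxml.html import fromstring

# A hypothetical post body containing two images.
post = ('<div class="entry"><p>Intro text.</p>'
        '<img src="/images/a.png"><img src="/images/b.png"></div>')
tree = fromstring(post)
# The @src step pulls the attribute values directly, just like @href did
# for the blog links.
images = tree.xpath('//img/@src')
print(images)  # ['/images/a.png', '/images/b.png']
```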
At this point, you've built two scraper functions for scraping content from two different websites, and you've also built two helper functions to reuse functionalities that are common across the two scrapers. Now that your bot knows how to tweet and what to tweet, you'll write the code to tweet the scraped content.
Step 5 — Tweeting the Scraped Content
In this step, you'll extend the bot to scrape content from the two websites and tweet it via your Twitter account. More precisely, you want it to tweet content from the two websites alternately, and at regular intervals of ten minutes, for an indefinite period of time. Thus, you will use an infinite while loop to implement the desired functionality. You'll do this as part of a
main() function, which will implement the core high-level process that you'll want your bot to follow:
bird/bot.py
...

def scrape_thenewstack():
    ...
    yield '"%s" %s' % (text, link)


def main():
    """Encompasses the main loop of the bot."""
    print('---Bot started---\n')
    news_funcs = ['scrape_coursera', 'scrape_thenewstack']
    news_iterators = []
    for func in news_funcs:
        news_iterators.append(globals()[func]())
    while True:
        for i, iterator in enumerate(news_iterators):
            try:
                tweet = next(iterator)
                t.statuses.update(status=tweet)
                print(tweet, end='\n\n')
                time.sleep(600)
            except StopIteration:
                news_iterators[i] = globals()[news_funcs[i]]()
You first create a list of the names of the scraping functions you defined earlier, and name it
news_funcs. Then you create an empty list that will hold the actual scraper iterators, and name it
news_iterators. You then populate it by going through each name in the
news_funcs list and appending the corresponding iterator to the
news_iterators list. You're using Python's built-in
globals() function. This returns a dictionary that maps variable names to actual variables within your script. An iterator is what you get when you call a scraper function: for example, if you write
coursera_iterator = scrape_coursera(), then
coursera_iterator will be an iterator on which you can invoke
next() calls. Each
next() call will return a string containing a quote and its corresponding link, exactly as defined in the
scrape_coursera() function's
yield statement. Each
next() call goes through one iteration of the
for loop in the
scrape_coursera() function. Thus, you can only make as many
next() calls as there are blog links in the
scrape_coursera() function. Once that number exceeds, a
StopIteration exception will be raised.
Once both the iterators populate the
news_iterators list, the main
while loop starts. Within it, you have a
for loop that goes through each iterator and tries to obtain the content to be tweeted. After obtaining the content, your bot tweets it and then sleeps for ten minutes. If the iterator has no more content to offer, a
StopIteration exception is raised, upon which you refresh that iterator by re-instantiating it, to check for the availability of newer content on the source website. Then you move on to the next iterator, if available. Otherwise, if execution reaches the end of the iterators list, you restart from the beginning and tweet the next available content. This makes your bot tweet content alternately from the two scrapers for as long as you want.
All that remains now is to make a call to the
main() function. You do this when the script is called directly by the Python interpreter:
bird/bot.py
... def main(): print('---Bot started---n')<^> news_funcs = ['scrape_coursera', 'scrape_thenewstack'] ... if __name__ == "__main__": main()
The following is a completed version of the
bot.py script. You can also view the script on this GitHub repository.
bird/bot.py
"""Main bot script - bot.py For the DigitalOcean Tutorial. """ import random import time from lxml.html import fromstring import nltk nltk.download('punkt') import requests from twitter import OAuth, Twitter import credentials tokenizer = nltk.data.load('tokenizers/punkt/english.pickle') oauth = OAuth( credentials.ACCESS_TOKEN, credentials.ACCESS_SECRET, credentials.CONSUMER_KEY, credentials.CONSUMER_SECRET ) t = Twitter(auth=oauth)) def scrape_thenewstack(): """Scrapes news from thenewstack.io""" r = requests.get('', verify=False) tree = fromstring(r.content)) def main(): """Encompasses the main loop of the bot.""" print('Bot started.')') time.sleep(600) except StopIteration: news_iterators[i] = globals()[newsfuncs[i]]() if __name__ == "__main__": main()
Save and exit
bot.py.
The following is a sample execution of
bot.py:
You will receive output showing the content that your bot has scraped, in a similar format to the following:
Output[nltk_data] Downloading package punkt to /Users/binaryboy/nltk_data... [nltk_data] Package punkt is already up-to-date! ---Bot started--- "Take the first step toward your career goals by building new skills." "Other speakers include Priyanka Sharma, director of cloud native alliances at GitLab and Dan Kohn, executive director of the Cloud Native Computing Foundation." "You can learn how to use the power of Python for data analysis with a series of courses covering fundamental theory and project-based learning." "“Real-user monitoring is really about trying to understand the underlying reasons, so you know, ‘who do I actually want to fly with?"
After a sample run of your bot, you'll see a full timeline of programmatic tweets posted by your bot on your Twitter page. It will look something like the following:
As you can see, the bot is tweeting the scraped blog links with random quotes from each blog as highlights. This feed is now an information feed with tweets alternating between blog quotes from Coursera and thenewstack.io. You've built a bot that aggregates content from the web and posts it on Twitter. You can now broaden the scope of this bot as per your wish by adding more scrapers for different websites, and the bot will tweet content coming from all the scrapers in a round-robin fashion, and in your desired time intervals.
Conclusion
In this tutorial you built a basic Twitter bot with Python and scraped some content from the web for your bot to tweet. There are many bot ideas to try; you could also implement your own ideas for a bot's utility. You can combine the versatile functionalities offered by Twitter's API and create something more complex. For a version of a more sophisticated Twitter bot, check out chirps, a Twitter bot framework that uses some advanced concepts like multithreading to make the bot do multiple things simultaneously. There are also some fun-idea bots, like misheardly. There are no limits on the creativity one can use while building Twitter bots. Finding the right API endpoints to hit for your bot's implementation is essential.
Finally, bot etiquette or ("botiquette") is important to keep in mind when building your next bot. For example, if your bot incorporates retweeting, make all tweets' text pass through a filter to detect abusive language before retweeting them. You can implement such features using regular expressions and natural language processing. Also, while looking for sources to scrape, follow your judgment and avoid ones that spread misinformation. To read more about botiquette, you can visit this blog post by Joe Mayo on the topic. | https://www.xpresservers.com/tag/scrape/ | CC-MAIN-2019-22 | refinedweb | 5,647 | 55.34 |
Walkthrough: Creating and Using a Windows Client Control Add-in
The following walkthrough demonstrates how to develop a Microsoft Dynamics NAV Windows client add-in and use it on a Microsoft Dynamics NAV Windows client page. Add-ins are Microsoft .NET Framework assemblies that enable you to add custom functionality to the Microsoft Dynamics NAV Windows client. An API lets you develop add-ins without having to access the Dynamics NAV source code.
Note
With Microsoft Dynamics NAV 2013 R2 you can develop control add-ins that are displayed on both Microsoft Dynamics NAV Windows client and Microsoft Dynamics NAV Web client. For more information, see Extending Any Microsoft Dynamics NAV Client Using Control Add-ins.
In a typical business scenario, .NET Framework developers create add-ins using Microsoft Visual Studio Express, Visual Studio 2008, Visual Studio 2010, or Visual Studio 2012. Implementers of Dynamics NAV solutions then use the add-ins on Microsoft Dynamics NAV Windows client pages.
About This Walkthrough
This walkthrough illustrates the following tasks:
Creating an Add-in with Visual Studio.
Copying the Add-in Assembly to the Microsoft Dynamics Windows Client.
Registering the Add-in in Microsoft Dynamics NAV.
Setting Up the Add-in on a Page.
Roles
This walkthrough demonstrates tasks performed by the following user roles:
Microsoft .NET Framework developer
Dynamics NAV developer and IT Professional
Prerequisites
To complete this walkthrough, you will need:
Microsoft Dynamics NAV 2018 with a developer license. For more information, see System Requirements for Microsoft Dynamics NAV.
CRONUS International Ltd. demonstration database
Microsoft Visual Studio Express, Microsoft Visual Studio 2008, Microsoft Visual Studio 2010, or Microsoft Visual Studio 2012.
Microsoft .NET Strong Name Utility (sn.exe). This is included with Visual Studio SDKs.
By default, the Microsoft .NET Strong Name Utility is located in C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools.
Experience using Visual Studio.
Story
Simon is a software developer working for CRONUS International Ltd. He has been told that users of the Microsoft Dynamics NAV Windows client want a way to see all the content of page fields when the content extends beyond the field size. He decides to create an add-in that can be applied on a field that enables the user to select the field to display its full content. Fields that have add-ins are displayed in blue. When the user selects the field, a pop-up window opens that shows all of the field's content.
Creating an Add-in with Visual Studio
Microsoft Dynamics NAV 2018 includes the Microsoft.Dynamics.Framework.UI.Extensibility.dll assembly that defines the model for creating Microsoft Dynamics NAV Windows client add-ins. The Microsoft Dynamics NAV 2018 API provides the binding mechanism between the Microsoft Dynamics NAV Windows client add-in and the Microsoft Dynamics NAV 2018 framework.
To create the add-in
In Visual Studio, on the File menu, choose New, and then choose Project.
Under Installed Templates, choose Visual C#, and then choose Class Library.
In the Solution Name text box, enter the name of your solution. For example, you can enter MyCompany.MyProduct.RtcAddins and then choose the OK button.
Yow will add references to the following assemblies:
Microsoft.Dynamics.Framework.UI.Extensibility.dll
System.Windows.Forms
System.Drawing\100\RoleTailored Client.
Important
The assembly cannot be placed outside of the RoleTailored Client folder.
In Solution Explorer, choose Reference, and on the shortcut menu, choose Add Reference.
In the Add Reference window, choose the .NET tab, then under Component Name, choose
System.Windows.Forms, and then choose the OK button.
The namespace contains classes for creating user interfaces for Windows-based applications.
Repeat the previous step and add a reference to the
System.Drawingnamespace. This namespace provides access to basic graphics functionality.
Open the Class1.cs file and add the following using directives.
using Microsoft.Dynamics.Framework.UI.Extensibility; using Microsoft.Dynamics.Framework.UI.Extensibility.WinForms; using System.Windows.Forms; using System.Drawing;
In the ClassLibrary1 namespace, add the following code to declare a new class named MyFieldPopupAddin for the add-in.
[ControlAddInExport("MyCompany.MyProduct.FieldPopupAddin")] public class MyFieldPopupAddin : StringControlAddInBase { }
The class uses the ControlAddInExportAttribute attribute and derives from the Microsoft.Dynamics.Framework.UI.Extensibility.WinForms.StringControlAddInBase class and Microsoft.Dynamics.Framework.UI.Extensibility.IStringControlAddInDefinition interface. The Microsoft.Dynamics.Framework.UI.Extensibility.ControlAddInExportAttribute attribute declares the class in the assembly to be a control add-in that is identified by its ControlAddInExportAttribute.Name property, which is
MyCompany.MyProduct.FieldPopupAddin. Because an assembly can contain more than one control add-in, the Microsoft Dynamics NAV Windows client uses the Microsoft.Dynamics.Framework.UI.Extensibility.ControlAddInExportAttribute attribute to differentiate each control add-in that is found in an assembly.
Note
You will use the name
MyCompany.MyProduct.FieldPopupAddinlater in the walkthrough when you register the add-in in Microsoft Dynamics NAV 2018. the Microsoft Dynamics NAV Server.
In the
MyFieldPopupAddinclass, add the following code to implement the abstract WinFormsControlAddInBase.CreateControl method and define the add-in functionality.
/// IWinFormsControlAddIn.AllowCaptionControl property and return
false(default value is
true).
An assembly must be signed that can be used in the Microsoft Dynamics NAV Windows client. You will now sign the assembly.
To sign the assembly
In Visual Studio, on the Project menu, choose MyCompany.MyProduct.RtcAddins properties.
In the Properties window, choose Signing, and then select the Sign the assembly check box.
In the Choose a strong name key file drop-down list, select New.
In the Key file name text box, enter RtcAddins and clear the Protect my key file with a password check box.
In this walkthrough, you will not protect the key file with a password. However, you can choose whether to use a password. For more information, see Strong-Name Signing for Managed Applications.
Choose the OK button.
In Solution Explorer, choose the Class1.cs file to open it. Notice the RtcAddins.snk file that is added in Solution Explorer.
On the Build menu, choose Build <Your Solution> to build the project. Verify that the build succeeds. In this example, your solution is MyCompany.MyProduct.RtcAddins.
Copying the Add-in Assembly to the Microsoft Dynamics Windows Client
After you build the add-in, you copy the output assembly file to the computer that is running the Microsoft Dynamics NAV Windows client.
To copy the add-in assembly to the Microsoft Dynamics NAV Windows client
On the development computer, locate and copy the add-in assembly file (.dll file) in the add-in project's output folder.
By default, this folder is C:\ Documents\Visual Studio\Projects\[Your Addin Project]\[Your Class Library]\bin\Debug. In this case, the location of the assembly is C:\ \Documents\Visual Studio 2012\Projects\MyCompany.MyProduct.RtcAddins\ MyCompany.MyProduct.RtcAddins\bin\Debug.
On the computer that is running the Microsoft Dynamics NAV Windows client, paste the assembly in the Microsoft Dynamics NAV Windows client\Add-ins folder in the Microsoft Dynamics NAV 2018 installation folder.
By default, the path of this folder is C:\Program Files (x86)\Microsoft Dynamics NAV\100\RoleTailored Client\Add-ins.
Registering the Add-in in Microsoft Dynamics NAV
To register an add-in, you include it on the Control Add-ins page in Dynamics NAV. To include an add-in on the page, you must provide the following information:
Control Add-in name.
The control add-in name is determined by the Microsoft.Dynamics.Framework.UI.Extensibility.ControlAddInExportAttribute attribute value of add-in class definition that you specified when you created. You can determine the public token key by running the Microsoft .NET Strong Name Utility (sn.exe) on the assembly. You must run the utility from the Visual Studio command prompt. The sn.exe utility is available with the Visual Studio 2008, Visual Studio 2010 SDKs, and Visual Studio 2012.
To determine the public key token for the add-in
On the Windows taskbar, choose Start, choose All Programs, choose Microsoft Visual Studio 2012, choose Visual Studio Tools, and then choose Visual Studio Command Prompt (2012) to open the command prompt.
At a command prompt, change to the directory that contains the assembly that you copied.
For example, C:\Program Files (x86)\Microsoft Dynamics NAV\100\RoleTailored Client\Add-ins.
Type the following command.
sn -T <assembly>
Replace
<assembly>with the assembly name, such as
ClassLibrary1.dll.
Press Enter and note the public token key that is displayed.
To include the add-in on the Control Add-ins page
In the Microsoft Dynamics NAV Windows client, in the Search box, enter Control Add-ins, and then choose the relevant link.
On a new row, in the Control Add-ins page, enter the Control Add-in name, and the Public Key Token.
In this walkthrough, the add-in name is MyCompany.MyProduct.FieldPopupAddin.
Choose the OK button to close the Control Add-ins page..
In the C/AL Editor, you set the trigger that is called when a user selects the field to open a pop-up window. When a field is double-clicked, the add-in raises the IEventControlAddInDefinition.ControlAddIn event, which in turn calls the trigger.
To set the ControlAddIn property on the field
In Dynamics NAV, in Object Designer, choose Page.
Select page 21, Customer Card, and then choose Design.
In the Page Designer, in the Name column, select the Name field, and then on the View menu, choose Properties.
In the <Name> Properties window, in the Property column, locate ControlAddIn.
In the Value column, choose the up arrow, and then select MyCompany.MyProduct.FieldPopupAddin from the Client Add-in window. Choose the OK button to close the Client Add-Ins window. The public key token is inserted into the Value field.
Close the Properties window.
To set the add-in trigger in the C/AL Code
On the View menu, choose C/AL Code.
In the Page 21 Customer card -C/AL Editor, locate the following trigger.
Name - OnControlAddIn(Index : Integer;Data : Text[1024])
Add the following code.
Message(Data);
Close the C/AL Editor.
On the File menu, choose Save, select the Compiled check box, and then choose the OK button.
Close the C/AL window.
To test the add-in
In Object Designer, choose Page. In the Name column, select the Customer Card page, and then choose Run. The customer card view is displayed. Notice the color of the Name field.
Double-click the Name field. The contents of the Name field are displayed in a pop-up window.
On the Action menu, choose Next until you find a name that extends beyond the field size, and then double-click the field. All of the content for the field is displayed in a pop-up window.
See Also
Windows Client Control Add-in Overview
Developing Windows Client Control Add-ins
Client Extensibility API Overview
Binding a Windows Client Control Add-in to the Database
Exposing Events and Calling Respective C/AL Triggers from a Windows Client Control Add-in
How to: Create a Windows Client Control Add-in
How to: Determine the Public Key Token of the Windows Client Control Add-in and .NET Framework Assembly
Installing and Configuring Windows Client Control Add-ins on Pages
How to: Install a Windows Client Control Add-in Assembly
How to: Register a Windows Client Control Add-in
How to: Set Up a Windows Client Control Add-in on a Page | https://docs.microsoft.com/en-us/dynamics-nav/walkthrough--creating-and-using-a-windows-client-control-add-in | CC-MAIN-2018-26 | refinedweb | 1,890 | 50.23 |
Rafael rambling — Rafael Ferreira

from Strangeloop 2014

One of the many reasons making the Strangeloop conference special is its interdisciplinary perspective, taking on themes ranging from core functional programming concepts - what could fit this description better than the infinite tower of interpreters seen in Nada Amin's keynote - to upcoming software deployment approaches.<br /><br />Still, some common themes seem to have emerged. One candidate is the spreadsheet as inspiration for programming. One thinker who seems to have taken some inspiration for his work is Jonathan Edwards, who opened the <a href="">Future of Programming</a> workshop with a talk showcasing the latest version of his research language Subtext. Earlier prototypes explored the idea of programming without names, directly linking tree nodes to each other via a kind of cut-and-paste mechanism.
In its latest incarnation it appears to have evolved into a reactive language with a completely observable evaluation model: the entire evaluation tree is always available for exploration, and a two-stage reactive architecture allows for relating input events to evaluation steps. The user interface is auto-generated, sharing the environment with the code, much like its older reactive cousin, the spreadsheet.<br /><br /><a href="">Kaya</a>, a new language created by David Broderick, explores the spreadsheet metaphor in a more literal manner: what if spreadsheets and cells were composable, allowing for naturally nested structures? Moreover, what if we could query this structure in a SQL-like manner? The result is a small set of abstractions generating complex emergent behavior, including, as in Subtext, a generic user interface.<br /><br />Data dependency graph driven evaluation is an important part of both modern functional reactive programming languages and of all spreadsheet packages since 1978's VisiCalc. We saw some of the former in Evan Czaplicki's highly approachable talk "<a href="">Controlling Time And Space: Understanding The Many Formulations Of Frp</a>", and a bit of the latter in Felienne Hermans's talk "<a href="">Spreadsheets for Developers</a>", sort of wrapping around the metaphor and looking to software engineering for inspiration to improve spreadsheet usage.
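The dependency-graph evaluation shared by spreadsheets and FRP systems is small enough to sketch. Here is a toy Python version - nothing to do with Kaya or Elm internals, and it skips topological ordering and cycle detection - where editing an input cell pushes the change through to its dependents:

```python
class Cell:
    """A spreadsheet-style cell: holds a value, or a formula over other cells."""
    def __init__(self, value=None):
        self.value = value
        self.formula = None
        self.deps = []        # cells this cell reads
        self.dependents = []  # cells that read this cell

    def define(self, formula, *deps):
        self.formula = formula
        self.deps = list(deps)
        for d in deps:
            d.dependents.append(self)
        self.recompute()

    def set(self, value):          # edit an input cell...
        self.value = value
        for d in self.dependents:  # ...and push the change through the graph
            d.recompute()

    def recompute(self):
        self.value = self.formula(*[d.value for d in self.deps])
        for d in self.dependents:
            d.recompute()

a, b = Cell(1), Cell(2)
total = Cell()
total.define(lambda x, y: x + y, a, b)
a.set(10)
print(total.value)  # 12
```

A real engine would recompute each node once, in dependency order; this naive version may revisit nodes in diamond-shaped graphs, but the core idea - edits propagate along recorded dependencies - is the same one VisiCalc shipped in 1978.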
Still on the Future of Programming workshop, <a href="">Shadershop</a> is an environment whose central concept seems to be directly manipulating real-valued functions by composing simpler functions while live inspecting the resulting plots. Stephen Wolfram's keynote was an entertaining <a href="">demonstration</a> of his latest product, the Wolfram Language. Its appeal was due, among other reasons, to the interactive exploration environment, particularly the visual representation of non-textual data and the seamless jump from evaluating expressions to building small exploratory UIs. <br /><br />Czaplicki's <a href="">talk</a> discussed several of the decisions involved in designing Elm, his functional reactive programming language. I found noteworthy that many of those were taken in order to allow live update of running code and an awesome time-traveling debugger.<br /><br />Taking a different perspective at the buzzword du Jour, reactive, is another candidate theme for this year's Strangeloop: the taming of callbacks. They were repeatedly mentioned as one evil to be banished from the world of programming, including on Joe Armstrong's keynote, "The mess we are in" and all the functional reactive programming content took aim at the construct. Not only functional, another gem from this year's Future of Programming workshop was the imperative reactive programming language <a href="">Céu</a>. Created by Fransico Sant'anna at PUC Rio - the home of the Lua programming language - Céu compiles an imperative language with embedded event based concurrency constructs down to a deterministic state machine in C. Achieving, among other tricks, fully automated memory management without a garbage collector.<br /><br />Befitting our age of microservices and commodity cloud computing, another interesting current was looking at modern approaches to testing distributed systems. 
Michael Nygard <a href="">exemplified simulation testing</a> - which can be characterized as property-based testing in the large - with Simulant, a Clojure framework to prepare, run, record events, make assertions and analyze the results of sophisticated black box tests. Kyle @aphyr Kingsbury delivered another <a href="">amazing performance</a> torturing distributed databases to their breaking point. Most interesting was the lengths he had to go to in order to control the combinatorial explosion of the state space and actually verify global ordering properties like linearizability.<br /><br />Speaking of going to great lengths to torture database systems, we come to what might have been my favorite talk at the conference, by the FoundationDB team, "<a href="">Testing Distributed Systems w/ Deterministic Simulation</a>". Like @aphyr's Jepsen, they control the external environment to inject failures while generating transactions and asserting the system maintains its properties. They take great care to mock out all sources of non-determinism, including time and random number generation, and even extend C++ to add better behaved concurrency abstractions.<br /><br />Tests run thousands of times each night; nondeterministic behavior is weeded out by running each set of parameters twice and checking the outputs don't change. FoundationDB's team goes further than Jepsen in the types of failures they can inject: not only causing network partitions and removing entire nodes from the cluster, but also simulating network lag, data corruption, and even operator mistakes, like swapping data files between nodes! Of course the test harness itself could be buggy, failing to exercise certain failure conditions; to overcome this specter, they generate real hardware failures with programmable power supplies connected to physical servers (they report no bugs were found in FoundationDB with this strategy, but Linux and ZooKeeper had defects surfaced - the latter isn't in use anymore).
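FoundationDB's harness is C++ and vastly more elaborate, but the core trick - funneling every source of nondeterminism through a seeded generator and checking that two runs with the same seed produce identical traces - can be sketched in a few lines. The workload below is a made-up stand-in, not their actual test:

```python
import random

def simulate(seed):
    """Run one simulated workload; ALL nondeterminism flows from this one RNG."""
    rng = random.Random(seed)
    clock = 0.0                       # virtual time: no wall-clock reads allowed
    log = []
    for step in range(100):
        clock += rng.expovariate(1.0)              # simulated delay between events
        op = rng.choice(["read", "write", "partition", "heal"])
        log.append((round(clock, 6), op))          # the trace we will compare
    return log

def check_determinism(seed):
    first, second = simulate(seed), simulate(seed)
    # If anything reads real time, real randomness, or thread scheduling,
    # the two traces diverge and this assertion catches it.
    assert first == second, "hidden nondeterminism leaked into the simulation"
    return first

trace = check_determinism(seed=42)
print(len(trace))  # 100
```

The run-twice-and-compare check is what lets a nightly fleet replay thousands of seeds and turn any failing seed into an exactly reproducible bug report.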
<br /><br />What I particularly enjoyed from this talk was the attitude towards the certainty of failures in production. Building a database is a serious endeavor, data loss is simply not acceptable, and they understood this from the start.<br /><br />Closing the conference in the perfect key was Carin Meier and Sam Aaron's keynote demonstrating the one true underlying theme: <a href="">Our Shared Joy of Programming</a>.

Rafael Ferreira

and Objects

Mobile code is one of the great challenges for software security. Let's say you are writing an email application. The idea that people could send little apps to each other in email messages might seem like a potentially interesting feature: users could build polls, schedule meetings, play games, share interactive documents. Kind of cool.<br /><br />And if the platform you are building upon supports reflectively evaluating code, it could be as easy as something like this (in OO pseudocode):

    define load_message(message)
      ...
      eval(message.code)

Of course it can't be that easy. What if the code in the message does something like:

    new stdlib.io.File("/").delete()

We could try to defend the dangerous method by inspecting the call stack:

    define delete()
      if VM.callStackContainsEvilCode?()
        raise YouShallNotPassException()
      ...

<h3>A better way?</h3>Perhaps there is a better way. Take another look at the offending line: <code>new stdlib.io.File("/").delete()</code>. It is only able to call the dangerous <code>delete()</code> method because it has a reference to a file object pointing to the root of the filesystem. And it only has that reference because it could reach for the <code>File</code> class on a global namespace.
What if there was no global namespace?<br /><br />Objects would then only come from calling a constructor method, <code>new()</code>, on the metaclass object.<br /><br />Untrusted code could only obtain the authority to touch files if we explicitly handed it a reference to <code>File</code>. In a way, object design becomes security policy.<br /><br />And we get very fine-grained control over such policy. We could, for instance, grant loaded code authority to write on a designated directory just by passing it a reference to the <code>Directory</code> object for that directory. Our choices get even more interesting when we realize we can pass references to proxies instead of real objects in order to attenuate authority. Continuing with our example, hoping it doesn't get too contrived, we could build a proxy for the <code>Directory</code> that checks if callers exceed a given quota of disk space.<br /><br /><h3>Research</h3>I have mentioned above that most common languages don't fit this post's description. But there are languages that do; a prime example is <a href="">E</a>. In fact, there is a whole area of research for dealing with security in this manner: it's called "object capability security".<br /><br />I'm not really a security guy; I got interested in the area due to the implications for language and system design. If you got interested, for any reason, please check out <a href="">Mark Miller</a>'s work. He is the creator of the <a href="">E language</a> and the JavaScript-based <a href="">Caja</a> project.
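The attenuating-proxy idea from the quota example above is short enough to sketch concretely. Here is a toy Python version; the Directory API is invented for illustration, and Python itself still has globals and reflection, so this shows the pattern rather than real confinement:

```python
class Directory:
    """The real capability: whoever holds a reference can write freely."""
    def __init__(self):
        self.files = {}

    def write(self, name, data):
        self.files[name] = data

class QuotaDirectory:
    """A proxy that attenuates authority: same interface, bounded total bytes."""
    def __init__(self, directory, quota_bytes):
        self._dir = directory          # the proxy holds the real capability...
        self._remaining = quota_bytes  # ...callers only ever see the proxy

    def write(self, name, data):
        if len(data) > self._remaining:
            raise PermissionError("quota exceeded")
        self._remaining -= len(data)
        self._dir.write(name, data)

# We keep the real Directory; loaded code is handed only the proxy.
real = Directory()
guest_view = QuotaDirectory(real, quota_bytes=10)
guest_view.write("a.txt", "hello")   # fine: 5 bytes
try:
    guest_view.write("b.txt", "way too much data")
except PermissionError as e:
    print(e)  # quota exceeded
```

In a capability-secure language like E, handing out `guest_view` instead of `real` is the entire access-control story: no ambient authority exists for the guest to reach around the proxy.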
His <a href="">thesis</a> is very readable.

Rafael Ferreira

Practitioner's requests for Programming Language Research

I have been interested in programming languages and programming language theory for some time, CTM is my favorite technical book, and I even managed to wade through the first chapters in Pierce's TAPL (some day I'll get through the rest, some day...). But it's not often that I can connect that interest with my job as practicing programmer. This post is an attempt to forget about my particular research interests and try to list my daily pain points and wants as a user of programming languages.<br /><br /><h2>Semi-automatic mapping between data representations</h2><a href="">ever more cumbersome configuration options</a>.<br /><br /><h2>Library versioning</h2>The issues around library versioning and dependency management are well known and have been for a long time, generation after generation of technology creating its own version dependency hell. The situation is so dire that just enabling the coexistence of many versions of a library <a href="">on a single machine</a> is hailed as a major breakthrough.<br /><br />A language-based approach can afford to be much more <span>ambitious</span>.<br /><br />In a language where <a href="">modules are first-class</a>, we can even contemplate having several versions of them interoperating at runtime to satisfy transitive dependency differences.<br /><br /><h2>Data evolution</h2>Consider a <i>surname</i> String field on version one; on version two this field is gone and a new <i>last_name</i> field is there; should it be considered a rename?<br /><br />But if we note that at the moment the data declaration code is being changed, the developer knows why it's changing, we just need a way to record this intent and tie it to the mapping and change structures from the previous two items, and voilà.
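To make "record the intent" concrete, here is one hypothetical shape it could take: each schema version carries a declared list of change intents, and old records are upgraded mechanically by replaying them. All names here are invented; nothing like this ships with any language I know of:

```python
# Hypothetical sketch: each schema version records the intent of its changes,
# so old data can be upgraded mechanically instead of by hand-written scripts.
MIGRATIONS = {
    2: [("rename", "surname", "last_name")],   # v1 -> v2: an explicit rename
    3: [("add", "country", "unknown")],        # v2 -> v3: new field with a default
}

def upgrade(record, from_version, to_version):
    for v in range(from_version + 1, to_version + 1):
        for op, *args in MIGRATIONS[v]:
            if op == "rename":
                old, new = args
                record[new] = record.pop(old)
            elif op == "add":
                field, default = args
                record.setdefault(field, default)
    return record

old = {"name": "Ada", "surname": "Lovelace"}
print(upgrade(old, from_version=1, to_version=3))
# {'name': 'Ada', 'last_name': 'Lovelace', 'country': 'unknown'}
```

The point is not the mechanism but where the intent lives: declared alongside the data definition at the moment of the change, when the developer still knows whether a vanished field was a rename or a deletion.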
<i>Things sure sound easy on blog posts, don't they?</i><br /><br /><h2>Tooling</h2>I've <a href="">written about this before</a>.<br /><br /><h2>Non-issues</h2>From my biased perspective as a practitioner on my particular domain, there are some problems tackled by programming language research that I don't see as particularly pressing.<br /><ul><li><b>Sequential performance</b>: Cache is king.</li><li><b>Parallel performance</b>: On the server, multicore parallelism is very well exploited by request-level concurrency and virtualization.</li><li><b>Concurrency</b></li><li><b>Correctness</b>: We have bugs, of course, but they seldom present themselves as classical broken invariants. What are they then? <a href="">Faults of omission</a>, unintended interactions between separate systems, API usage misunderstandings. I'm skeptical on language help for these, but I'd be glad to be proven wrong.</li></ul>Seems like a good time to repeat that in this post I'm trying to connect my particular needs as a developer on my current domain to what programming language research might accomplish. I'd expect the issues and non-issues to be different for other domains, or even for other developers in a similar domain but with a different background.<br /><br /><h2>Cake</h2>Who doesn't love cake? No, not <a href="">this</a> cake, <a href="">this</a> cake. Hmm.

Rafael Ferreira

and Bugs

There are certain discussions in our biz that are so played out they provoke instant boredom upon encounter. A major one is the old dynamic vs. static skirmish, recently resurfaced in a <a href="">blog post</a> by Evan Farrer. Which is a shame, as the post is quite interesting, describing his results transliterating well-tested code from a dynamic language to a static language to see if the type system found any bugs. Which it did.<br /><br />The <a href="">full-length paper</a> is available online.<br /><br />But the real meat of the paper is in the description of the bugs he found.
Upon a not particularly discriminating reading, a clear pattern jumped out. Most of the bugs fell into one of two categories:<br /><ul><li>Assuming that a variable always references a valid value when it can contain a null value. </li><li>Referencing constructs that no longer exist in the source. </li></ul>The first category is also the largest, comprising several places where the original code could be coerced into letting a variable be set to a null value, usually by just leaving it uninitialized, and a subsequent call would attempt to dereference it assuming it contained a valid value. Haskell's type system avoids the problem as it simply doesn't have any notion of <i>null</i>. Code that has to deal with optional values must do it through algebraic data types.<br /><br /.<br /><br />If the study's findings are generalizable and my observations are correct, these are the main takeaways:<br /><ul><li>If you have a type system at your service, it's prudent to structure code such that behavior-breaking changes are reflected in the types.</li><li>End-to-end integration tests are a necessary complement to both a suite of unit tests and a type system. In my experience, how far these tests should stray off the happy path is a difficult engineering trade-off.</li><li>If your type system allows nulls — such as Java's, for instance — its role in bug prevention is greatly diminished. The proportion of null-dereference bugs on the analysed code bases helps to make it clear just <a href="">how big a mistake</a> it is to allow nulls in a programming language. 
</li></ul><img src="" height="1" width="1" alt=""/>Rafael Ferreira about Future Development Environments<div style="font-family: inherit; text-align: justify;"><div style="text-align: left;"><a href="">This</a> <a href="">series</a> <a href="">of essays</a>,.</div><div style="text-align: left;"><br /></div><div style="text-align: left;" <i>integrating</i>.</div><div style="text-align: left;"><br /></div><div style="text-align: left;".</div><div style="text-align: left;"><br /></div><div style="text-align: left;"?</div><div style="text-align: left;"><br /></div><div style="text-align: left;" <a href="">old idea</a> that is <a href="">being picked up</a> in Apple's latest OS).</div><div style="text-align: left;"><br /></div><div style="text-align: left;".</div><div style="text-align: left;"><br /></div><div style="text-align: left;" <a href="">his column</a> for the CACM.</div><div style="text-align: left;"><br /></div><div style="text-align: left;".</div><div style="text-align: left;"><br /></div><div style="text-align: left;">So ends my wish-list for a future development environment, a fitting time to restate this is not a prediction, as I see no movement in this direction. Quite the opposite, actually, as most IDEs keep sprawling in ever larger feature matrices.</div><br /><br /></div><img src="" height="1" width="1" alt=""/>Rafael Ferreira for 2011-09-13 [del.icio.us]2011-09-14T00:00:00-07:00<ul> <li><a href="">Google</a><br/> test</li> </ul><img src="" height="1" width="1" alt=""/> for 2011-09-07 [del.icio.us]2011-09-08T00:00:00-07:00<ul> <li><a href="">Google</a><br/> test</li> </ul><img src="" height="1" width="1" alt=""/> report: RubyFor a long time I've been curious about how the supposed benefits and liabilities of programming in a dynamically typed language<sup><a href="#note1">1</a></sup> actually play out in practice. 
I'm now getting a chance to find it out, since I'm involved in a largish project using Ruby (lots of Rails plus some Sinatra and a couple of random daemons). My professional background has largely been in Java, but I spend a good chunk of my free time learning about different programming languages (and some PL theory), mostly on the static side of the fence. Anyway, here are some notes:<br /><br />The<a href="#note2"><sup>2</sup></a>,.<br /><br /.<br />.<br /><br /.<br /><br /.<br /><br />Apologies for an opinionated blog post.<br /><br /><br /><div id="note1">1. Unittyped, for the pedantic.</div><div id="note2">2. Such as Agda or Coq, or some styles of programming in ML, Haskell or Scala.</div><img src="" height="1" width="1" alt=""/>Rafael Ferreira and uninformed rant about Software Engineering Research<a href="">Jorge Aranda interviewed</a>.<br /><br / "<a href="">Characterizing people as non-linear first-order components in software development</a>" helps to explains my uncertainty. <br /><br /.<br /><br /"), ...<br /><br /.<br /><br />Anyway, the particulars aren't important, I just want to emphasize the most interesting research, from my point of view, is the one that tries to understand, not merely to optimze.<br /><i><br /><br />Edit: Jorge Aranda commented to point out the interviews were conducted by a team including himelf, Margaret-Anne Storey, Marian Petre, and Daniela Damian.</i><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"></span><img src="" height="1" width="1" alt=""/>Rafael Ferreira review: Growing Object Oriented Software Guided by Tests;">I’m a curious kind of guy. 
It is with some surprise, then, that I catch myself re-reading a technical book: Nat Pryce and Steve Freeman’s </span><a href=""><span class="Apple-style-span" style="font-family: inherit;">Growing Object Oriented Software Guided By Tests</span></a><span class="Apple-style-span" style="font-family: inherit;">..<span class="Apple-style-span" style="white-space: normal;"> </span></span></span><span style="background-color: transparent; color: black; font-style: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><span class="Apple-style-span" style="font-family: inherit;" </span><a href=""><span class="Apple-style-span" style="font-family: inherit;">GOOS</span></a><span class="Apple-style-span" style="font-family: inherit;">.</span></span></div><div style="background-color: transparent; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><span class="Apple-style-span" style="font-family: inherit;"><br /><Anyway, I've read some pretty good technical books this year, but this one was the best.</span></div><div style="background-color: transparent; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">;"></span></div><img src="" height="1" width="1" alt=""/>Rafael Ferreira for 2010-10-04 [del.icio.us]2010-10-05T00:00:00-07:00<ul> <li><a href="">HTML5 Boilerplate - A rock-solid default for HTML5 awesome.</a><br/> Empty CSS+HTML template; think power-css-reset. Includes general IE hacks, css reset, best-practice header values, etc, etc.</li> </ul><img src="" height="1" width="1" alt=""/> scala left me wanting<div>n the past few years, I've grown to enjoy more and more programming in a functional style. 
Even the <a href="" id="s062" title="java code">java code</a>.</div><br /><div>A natural consequence is that our systems architecture tends to resemble an onion, with the domain model in the center and code to make it interact with the rest of the world surrounding it:<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img height="300" src="" style="-webkit-user-select: none;" width="400" /></a></div><br /></div <a href="" id="pwfq" title="Hexagonal Architecture">Hexagonal Architecture</a>..<br /><br /><div.</div><br /><div.</div><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="300" src="" width="400" /></a></div><br /><div> In the Java world this kind of thing used to be done with verbose and annoying XML configuration files. As the language evolved, annotations were introduced, and they are now the main way to configure such translations.</div><br /><div <a href="" id="t2vu" title="proposal">proposal</a> for a typesafe API for database queries to be added to the new version of the Java Persistence API. It requires a special pre-processor to generate a metamodel that can be used to parameterize a database adapter. </div><br /><div>A better and more generic approach would be to have the language itself provide this metamodel: a kind of <a href="" id="x09v" title="static reflection">static reflection</a>. I would love to see something like this in Scala. Some time ago I toyed with the idea of writing a compiler plugin to provide a static meta-model of Scala classes, but apparently compiler plugins have issues with non-transparent code generation. </div><br /><div><i. 
</i></div><img src="" height="1" width="1" alt=""/>Rafael Ferreira Exceptions, Mythological Monsters and Household Appliances<div style="text-align: right;"><i><br /. </i><br />Wittgenstein<br /><br />><br />Confucius<br /><br /><i>Shit happens</i><br />Forrest Gump?<br /></div><p>Exceptions are a controversial matter among programmers. We disagree about how to deal with them. We disagree if they should be <i>checked </i>by the compiler. We even disagree if they are useful at all. Sometimes controversy arises from real disagreement: I may argue that test-driven development is the best way to write software while you might say that formal proofs are the way to go. Both sides can productively debate the issue and present thoughtful arguments either way. I posit that the polemics surrounding exceptions are <b>not</b> of this sort. They are, rather, brought about by the inherent confusion in the mechanism. Exceptions are the <a href="">Hydra</a> of current programming languages: under a simple banner — <i>to handle exceptional conditions</i> — they mix several mechanisms.</p><h2>The heads<br /></h2><p>Let's begin with the humble <b>throw </b>term, which is a sort of <i>procedure exit statement</i>, causing the current routine to throw its hands up in the air, call it quits, and immediately return to the caller. The only other statement with a similar effect is <i>return</i>. Of course, when calling <i>return</i>, we must return something! In statically typed languages, this something must have a type: the return type of the function or method. Exceptions muddle the picture: if we think of a <i>throw </i>as a kind of <i>return</i>, then what is the type being returned? There can be many answers to this question, but let me put forth a proposal: the "return type" of a "throw" is, for all intents and purposes, a <b>variant type</b>, and that is the second head of the Hydra.</p><p>This warrants a small digression. 
Most functional languages have a way of saying "this type <i>FOO</i> is really either a <i>BAR </i>or a <i>BAZ</i>". So every place in the code that is handed a <i>FOO</i> must check whether it is a <i>BAR</i> or a <i>BAZ</i> and deal appropriately. We call <i>FOO</i> a <i>variant type</i>, and <i>BAR</i> and <i>BAZ</i> are its variants. The syntax for the check is called a type-case<sup><a href="#FOOTNOTE-1">1</a></sup>. And where have we seen syntax for checking the type of something and then acting on it? Catch clauses!</p><p>But of course many languages lack the concept of variant types. I'll argue that in practice, exception handling code in effect <a title="greenspuns" href="" id="a4u0">greenspuns</a> variant types. First, we can observe that in languages with <i>checked exceptions </i>the declaration of the exceptions a method can throw — the throws clause in Java — is basically a variant type declaration in disguise. Even when working with unchecked exceptions, variants lurk behind, encoded through subtyping. Just notice how little polymorphism is exploited in exception class hierarchies. Different exceptions are given different types for the sole reason of being distinguished in catch clauses. Just like variant types.</p><br />Ok, time for the next head of the monster. We'll continue to look at the <b>catch</b> clause, but this time we'll look at it as a <b>control structure</b>. As such, it is a strange beast, neither a repetition nor purely a decision structure. Once an exception is thrown, the calling method will stop everything it was doing and jump straight to a catch block. In a way, every exception-throwing method call can be seen as a regular call paired with a conditional jump. Dijkstra may not be rolling in his grave, but he is probably feeling a little tingly.<br /><br />Since we've talked about control flow, let's direct our attention to the mischievous head of the Hydra: <b>stack unwinding</b>. 
After a throw happens, if the calling method doesn't care to handle an exception, his caller will have to bear the burden or leave it up to <b>his</b> caller, recursively unwinding the stack until some frame decides it can be bothered to catch the exception and deal with it. This is a pretty powerful feature; sometimes it's even exploited to simulate continuations in languages that lack them.<sup><a href="#FOOTNOTE-2">2</a></sup><br /><br />One more head left: <b>call stack introspection</b><i>. </i>Besides unwinding the call stack, exceptions may reify it. It is a limited form of reflection, typically used only to print stack traces to a log someplace where they can be attentively ignored. Common lore says capturing the stack frames is an expensive operation, and so should be executed only on, well, exceptional circumstances.<br /><br /><h2>Exceptional Situations</h2>The confusion bus doesn't stop here, I'm afraid. All the complexity we've examined so far is justified under a deceptively simple banner: <i>to handle exceptional conditions</i>. Hidden in these four words is the vast range of situations that may be interpreted as "exceptional". Like any piece that deals with the subject of exceptions, we can't escape the temptation to suggest a taxonomy. At least this one is short: these are the three main categories of "exceptional situations":<br /><br /><ul><li><b>Programming error</b><i>:</i> A traditional example would be code trying to index an array beyond its bounds. Ideally we would design our abstractions so as to make it impossible to commit such coding errors. From formal grammars to dependent type systems, this ideal has driven much of the research on programming languages over the years. Unfortunately, at least for now, we cannot always evade the need for run-time checks.</li><li><b>External failure</b><i>:</i> Something bad happened to some component not within control of the application. 
Examples: network cable cut, disk crapped out, database died, sysadmin was drunk and lost a round of <a title="UNIX Russian roulette" href="" id="u:oz">UNIX Russian roulette</a>.</li><li><b>Invalid user entry</b><i>:</i> <i>"</i>If an error is possible, someone will make it. The designer must assume that all possible errors will occur and design so as to minimize<br />the chance of the error in the first place, or its effects once it gets made. Errors should be easy to detect, they should have minimal consequences, and, if possible, their effects should be reversible.<i>"</i> This is from Don Norman's superb <i>The Design of Everyday Things</i>. The book is so good I'm going to slip another quote in here: "The designer shouldn't think of a simple dichotomy between errors and correct behaviour; rather, the entire interaction should be treated as a cooperative endeavor between person and machine, one in which misconceptions can arise on either side"</li></ul><br /><p>Let's not forget we are classifying "exceptional situations", not exception types. Here is an example that I hope will make the distinction clear. Take the always popular FileNotFoundException, that is thrown when application code attempts to read a file and the file isn't there: where do we place it on our little taxonomy? Imagine the application is a boring old word processor and the user types in the wrong file name. That obviously is a case of <i>invalid user entry</i>. Now, consider the case of a mail server attempting to open the file where it stores all queued outgoing mail. This file was probably created by the server application itself, maybe during the installation procedure, and its absence would be an indication that something is very borked in the server environment. 
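The two readings of the same exception can be made concrete in a small Scala sketch. All names and messages here are mine, purely illustrative; neither function comes from the post:

```scala
import java.io.{FileNotFoundException, FileReader}

object FileNotFoundContexts {
  // Word processor: a missing file is invalid user entry.
  // Report it and let the user correct the file name.
  def openDocument(path: String): Option[FileReader] =
    try Some(new FileReader(path))
    catch {
      case _: FileNotFoundException =>
        println("Couldn't find '" + path + "'. Please check the name.")
        None
    }

  // Mail server: the spool file was created at install time,
  // so its absence is an external failure. Log and give up.
  def openSpool(path: String): FileReader =
    try new FileReader(path)
    catch {
      case e: FileNotFoundException =>
        System.err.println("Mail spool missing; environment is broken: " + e)
        throw e
    }
}
```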
The same File Not Found event is now clearly an <i>external failure.</i> This matters, because the strategy used for dealing with the three categories is different, as we'll soon see.</p><br /><h2>The Inevitable Cross Product</h2>Time to look at how the heads of the hydra fare against our new categories:<br /><div><br /><b>Programming errors</b> are, by definition, unexpected. If we aren't expecting them, there is no point in writing code to handle different exception types. What could you do when you know some code tripped a NullPointerException that's different from what you would do if the offence was an IndexOutOfBounds? In other words, the <b>variant types</b> head is useless here. Exceptions triggered by programming errors are usually caught in a generic stack frame, in the bowels of infrastructure code — through the magic of <b>stack unwinding</b> — and dealt with by logging the error type, message, and stack trace. After logging there is little left to do but to roll over and die. In multi-user systems, killing the whole process is not very polite, so we might nuke just the current work item. This is the gist of the <a title="fault barrier pattern" href="" id="u:.i">fault barrier pattern</a>. <b>Call stack introspection </b>is a very valuable aid to pinpoint the bug. So valuable that I go as far as to point to its absence as the single most important reason to avoid C for teaching programming to beginners.<br /><br />Practical strategies for dealing with <b>external failures</b> aren't too different. We tend to exploit <b>stack unwinding</b> to catch the exceptions in a frame deep in infrastructure land and log them in pretty much the same way. But, every so often, we might want to code a different action, such as to retry the operation or to trigger a compensation transaction. In these relatively rare cases it might be useful to deal distinctly with different <b>variants</b>. 
Another aspect where external failures are distinct from programming errors is that <b>call stack introspection</b> isn't so helpful: if the failure is external, the exact place in the code where it was detected is of little significance. Of course, it is important that the exception carries enough information to enable a system administrator to find the origin of the fault by just looking at the log.<br /><br /><b>Invalid user input</b> is a different beast altogether. <b>Variants</b> are definitely useful; after all it's important to let the user know if their request was <i>over quota </i>rather than <i>under quota</i>. On run-of-the-mill interactive OLTP systems, we handle invalid user entry by providing immediate feedback on what was wrong. But not every case is so simple: in some circumstances it may be important to identify suspicious input and raise security flags; in others, we could offer suggestions for the user to correct the data; in asynchronous interactions, we might want to proceed with the action substituting a default or normalized value for the invalid input; and so on. Altogether, there is a wide range of handling strategies, mostly domain-specific. This is a key observation, as not only handling strategies, but also the detection and the very definition of what values are invalid, are full of domain-specific decisions. I would go as far as proposing that the controversial <i>domain exceptions</i> all boil down to cases of invalid user input. <br /><br />What about the other heads of the monster? Are they helpful for <b>invalid user inputs</b>? <b>Call stack introspection</b> is clearly not needed. Moreover, the performance costs of reifying the stack can be detrimental for the not-so-rare instances of invalid input. This isn't just premature optimization, as awareness of these costs can prevent users from even considering exceptions for the domain logic. What about <b>stack unwinding? 
</b>I'm afraid I don't really have an answer for this one. On the one hand, domain specific handling logic seems to fit naturally in the calling method. On the other paw, if all we do is inform the user that something happened, a generic code further down the stack is awfully convenient.<sup><a href="#FOOTNOTE-3">3</a></sup> <br /><br /></div><div><h2>So what?</h2>We've seen that the feature by the name <i>exceptions</i> is really a hodgepodge of language constructs. Beyond the technical aspects, the very raison d'être for the feature is also muddled. It's as if some crazy nineteenth century inventor decided that since people need to be protected from the heat and, hey, food needs cooling to be preserved as well, why not make buildings like large refrigerators? Just keep the food near the bottom where it's colder, the humans near the top where it's not freezing, and presto! His solution is absurd, but that is because the problem itself is ill-formulated: "to keep stuff cool" is not an useful problem statement. "To deal with exceptional circumstances" isn't either.<br /><br />I don't know if the above ramblings are of any practical significance. I certainly don't have any concrete proposal to make things better. My only hope is that language designers will keep in mind context and necessity when devising new constructs.</div><br /><div class="endnotes"><sup>1 </sup><a name="FOOTNOTE-1"></a>In languages where pattern matching is available, it subsumes type-cases. In fact, one way of thinking about pattern matching is that it is a sophisticated form of type-case.<br /><br /><sup>2 </sup><a name="FOOTNOTE-2"></a>Some languages without full support for exceptions do offer a feature called <i>long jumps</i>, that is very close to what I'm calling <i>stack unwinding</i>.<br /><br /><sup>3 </sup><a name="FOOTNOTE-3"></a>To further complicate the issue, languages offering variant types usually allow different program structuring techniques. 
For instance, instead of a direct chain of method calls we may thread a monad through a series of functions.</div><img src="" height="1" width="1" alt=""/>Rafael Ferreira printf in scala ‽All the excitement surrounding Scala's next big release - 2.8.0 - reminds me of another big release a few years ago. Java 5 was poised to be a huge leg up for developers, long feature lists were all over the java blogosphere: generics, enums, enhanced for-loops, autoboxing, varargs, util.concurrent, static imports, and so on; in sum, a whole heap of goodness. But there were a few odd ducks hidden in the feature lists, most notably <i>"an interpreter for printf-style format strings"</i>.<br /><br />Anyone <b><i><code>"%.2f USD"</code></i></b>, or a short american date format with <b><i><code>"%tD"</code></i></b>. We can even go wild and use the two together:<br /><br />In Java:<br /><pre class="prettyprint">String.format("%.2f USD at %tD", 133.7, Calendar.getInstance());</pre><br />In Scala:<br /><pre class="prettyprint">"%.2f USD at %tD" format (133.7, Calendar.getInstance)</pre><br /.<br /><br /><span style="font-size: large;">Formatters</span><br />The first step is to have our logic leave it's String hideout and show itself. So, instead of "%.2f", we'll say <b><code>F(2)</code></b><sup><a href="#nota1">*</a></sup>, and instead of <b><code>"%tD"</code></b> we'll say <b><code>D(T)</code></b>. <b><code>D</code></b> and <b><code>F</code></b> are now <b><code>Formatters</code></b>:<br /><br /><pre class="prettyprint" Time => "%tR" format calendar<br /> case Date => "%tD" format calendar<br /> }<br />}<br /></pre><br /><span style="font-size: large;">Chains</span><br />Next we tackle the issue of how to chain formats together. The best bet here is to use a composite, very similar to <a href="">scala.List</a> <sup><a href="#nota3">***</a></sup>. We even have a <b>::</b> method to get right-associative syntax. 
This is how it looks like:<br /><br /><pre class="prettyprint">val fmt = F(2) :: T(D) :: FNil</pre><br />And this is the actual, quite unremarkable, code:<br /><br /><pre class="prettyprint">trait FChain {<br /> def :(formatter: Formatter) =<br /> new FCons(formatter, this)<br /><br /> def format(elements: List[Any]):String<br />}<br /><br />object FNil extends FChain {<br /> def format(elements: List[Any]):val fmt = F(2) :: " USD at " :: T(D) :: FNil</pre><br />The solution is simple, we overload <b>::</b> in FChain:<br /><br /><pre class="prettyprint">trait FChain {<br /> ...<br /> def ::(constant: String) =<br /> new FConstant(constant, this)<br /> ...<br />}<br /></pre><br />and create a new type of format chain that appends the string constant:<br /><br /><pre class="prettyprint">case class FConstant(constant:String, tail: FChain) extends FChain { <br /> def format(elements: List[Any]):String = <br /> constant + tail.format(elements)<br />}<br /></pre><br />Cool, that works. But wait; what have we gained so far? The problem was to match the types of the formatters with the types of the values, and we aren't really using types at all. The remedy, of course, is to keep track of them.<br /><br /><span style="font-size: large;">Cue the Types</span><br />We want to check the types of the values passed to the <code>FChain.format()</code>, but this method currently takes a <code>List[Any]</code>, a List of anything at all. We could try to parameterize it somehow and make it take a <code>List[T]</code>, a list of some type <code>T</code>, instead. But, if we take a <code>List[T]</code>, it means all values must be of the same type <code>T</code>, and that's not what we want. For instance, in our running example we want a list of a <code>Double</code> and a <code>Calendar</code> and nothing more.<br /><br />So, <code>List</code> doesn't cut it. 
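To see what's missing, here is a tiny hand-rolled sketch (mine, not from any library) of a cons cell that remembers the types of both of its parts, which is the essence of what we need:

```scala
object HandRolledHList {
  // A heterogeneous cons cell: unlike List[Any], the type
  // records the head type H and the whole tail type T.
  final case class HCons[H, T](head: H, tail: T)
  case object HNil

  // The full type spells out every element's type:
  val cell: HCons[Double, HCons[String, HNil.type]] =
    HCons(133.7, HCons("USD", HNil))

  // The compiler still knows the head is a Double:
  val amount: Double = cell.head
  val currency: String = cell.tail.head
}
```

With a structure like this, a format chain could demand exactly the element types it needs, instead of a `List[Any]`.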
Fortunately, the great Jesper Nordenberg created an awesome library, <a href="">metascala</a>, that contains an awesomest class: <code>HList</code>. It is kind of like a regular List, with a set of similar operations. But it differs in an important way: HLists "remember" the types of all members of the list. That's what the H stands for, Heterogeneous. Jesper explains how it works <a href="">here</a>.<br /><br />We'll change FChain to remember the required type of the elements in a type member, and to require this type in the format() method:<br /><br /><pre class="prettyprint">trait FChain {<br /> type Elements <: HList<br /> ...<br /> def format(elements: Elements):String<br />}<br /></pre><code>FNil</code> is pretty trivial, it can only handle empty HLists (<code>HNil</code>): <br /><pre class="prettyprint">object FNil extends FChain {<br /> type Elements = HNil<br /><br /> def format(elements: Elements):</pre>We also had to tighten-up the types of the constructor parameters: formatter is now <b><code>Formatter[E]</code></b> <sup><a href="#nota2">**</a></sup> — so it can format elements of type <code>E</code>, and tail is now <b><code>FChain{type Elements=TL}</code></b> — so it can format the rest of the values. The <b><code>Elements</code></b> member is where we build up our list of types. It is an HCons: a cons pair of a head type - <b>E</b> - and another HList type - <b>TL</b>. We changed how to FCons constructor parameters, so we also need to change the point where we instantiate it, in FChain: <br /><pre class="prettyprint">trait FChain {<br /> type Elements <: HList<br /> ...<br /> def ::[E](formatter: Formatter[E]) =<br /> new FCons[E, Elements](formatter, this)<br /> ...<br />}<br /></pre>Just passing along the type of the formatter and of the elements so far to FCons. FConstant has to be changed in an analogous way. This is it, now <b><code>format()</code></b> only accepts a list whose values are of the right type. 
Check out an interpreter session: <br /><br /><pre class="prettyprint">scala> (F(2) :: " USD at " :: T(D) :: FNil) format (133.7 :: Calendar.getInstance :: HNil)<br />res5: java.lang.String = 133.70 USD at 10/10/09<br /><br />scala> (F(2) :: " USD at " :: T(D) :: FNil) format (Calendar.getInstance :: 133.7 :: HNil)<br /><console>:25: error: no type parameters for method ::: (v: T)metascala.HLists.HCons[T,metascala.HLists.HCons[Double,metascala.HLists.HNil]] exist so that it can be applied to arguments (java.util.Calendar)<br /> --- because ---<br />no unique instantiation of type variable T could be found<br /> (F(2) :: " USD at " :: T(D) :: FNil) format (Calendar.getInstance :: 133.7 :: HNil)<br /></console></pre><br /><span style="font-size: large;">Some random remarks</span><br /><ul><li>The interpreter session above nicely showcases the Achilles heel of most techniques for compile-time invariant verification: the error messages are basically impenetrable.</li><li>A related issue with this kind of metaprogramming is that it's just plain hard. The code in this post looks pretty simple (compared with, say, JimMcBeath's <a href="">beautiful builders</a>), but it took me days of fiddling around with metascala to find an adequate implementation.</li><li>Take the above two points together and it's clear that we are talking about a niche technique. Powerful, but not for everyday coding.</li><li.</li><li>Since this is random remarks section, I'll randomly remark that we can implement the whole thing without type members and refinements. Just good old <a href="">type parameters in action</a>.</li><li>Since it is possible to use nothing but type parameters, and Java has type parameters, would it be possible to implement our type-safe Printf in Java? Quite possibly, a good way to start would be with Rúnar Óli's <a href="">HLists in Java</a>. Just take care not to get cut wading through all those pointy brackets. 
</li><li>A powerful type system without type inference is useless. Quoting Benjamin Pierce: "The more interesting your types get, the less fun it is to write them down".<br /></li></ul><br /><hr/>The whole code:<br /><pre class="prettyprint">import metascala._<br />import HLists._<br /><br />object Printf {<br /> trait FChain {<br /> type Elements <: HList<br /><br /> def ::(constant: String) =<br /> new FConstant[Elements](constant, this)<br /> <br /> def ::[E](formatter: Formatter[E]) =<br /> new FCons[E, Elements](formatter, this)<br /><br /> def format(elements: Elements):String<br /> }<br /><br /> case class FConstant[ES <: HList](constant:String, tail: FChain { type Elements = ES }) extends FChain { <br /> type Elements = ES<br /><br /> def format(elements: Elements):String = <br /> constant + tail.format(elements)<br /> <br /> }<br /> <br /> object FNil extends FChain {<br /> type Elements = HNil<br /> def format(elements: Elements):* Or F(precision=2) thanks to Scala 2.8 awesome named parameter support. <br /></div><div id="nota2">** In fact, the previous untyped version didn't even compile, as Formatter has always been generic. Sorry for misleading you guys. <br /></div><div id="nota3">*** If you are unfamiliar with how Scala lists are built, check out <a href="">this article</a>. <br /></div><img src="" height="1" width="1" alt=""/>Rafael Ferreira, context, contextJoel Spolsky's <a href="">latest diatrabe</a>, and the outcry the blogosphere predictably issued, got me thinking about, well, about an old joelonsoftware article: <a href="">Five Worlds</a>. This one is a favorite of mine, maybe second only to the classic <a href="">The Law of Leaky Abstractions</a>. The point, in a nutshell, is that we too often forget the importance of context when discussing our trade.<br /><br />I?<br /><br />This isn't meant to be a put-down of enterpresey work. I am myself a consultant and won't apologise for it. 
But it pays to look at the differences between the internal corporate projects world and the software product development world:<br /><ul><li>Corporate projects tend to be wide: lots of forms gathering data, lots of reports spitting data out, huge ugly menus. Products are usually more focused, with fewer interaction points that are backed with more sophisticated logic.</li><li>A consequence is that, while a developer for an internal corporate system might see any given screen a handful of times throughout the project, a software product developer can live inside the system almost as much as the end users (and so, catch more bugs).</li><li?).</li><li.</li></ul>Reading these points, it seems that I'm justifying a lower standard of code quality in product development. I'm not. TDD really does allow you to go faster, I can't even fathom what would be like to code without refactoring every few seconds, and all the old sayings about how we spend much more time reading that writing code are true, regardless of how crucial is that we reach that 1.0 milestone.<br /><br /.<img src="" height="1" width="1" alt=""/>Rafael><img src="" height="1" width="1" alt=""/>Rafael>A couple of commenters suggested that languages with support for default parameter values (like Python and Groovy) don't need elaborate constructs such as the builder pattern. There are two ways to respond. One is to remind that the intent of the pattern, specially as originally described in the GoF book, has little to do with optional data. The other is to acknowledge that I probably put too much emphasis on this issue and forgot to mention a very common idiom for building objects in Scala: just declare mandatory "parameters" as abstract vals and optional ones as concrete vals with default values, like so:Rafael />So, let's say you want to order a shot of scotch. You'll need to ask for a few things: the brand of the whiskey, how it should be prepared (neat, on the rocks or with water) and if you want it doubled. 
Unless, of course, you are a pretentious snob, in that case you'll probably also ask for a specific kind of glass, brand and temperature of the water and who knows what else. Limiting the snobbery to the kind of glass, here is one way to represent the order in scala. />Note that if the client doesn't want to specify the glass he can pass None as an argument, since the parameter was declared as Option[Glass]. This isn't so bad, but it can get annoying to remember the position of each argument, specially if many are optional. There are two traditional ways to circumvent this problem — define telescoping constructors or set the values post-instantiation with accessors — but both idioms have their shortcomings. Recently, in Java circles, it has become popular to use a variant of the />This is almost self-explanatory, the only caveat is that verifying the presence of non-optional parameters (everything but the glass) is done by the Option.get method. If a field is still None, an exception will be thrown. Keep this in mind, we'll come back to it later. />Looking back at the ScotchBuilder class and it's implementation, it might seem that we just moved the huge constructor mess from one place (clients) to another (the builder). And yes, that is exactly what we did. I guess that is the very definition of encapsulation, sweeping the dirt under the rug and keeping the rug well hidden. On the other hand, we haven't gained all the much from this "functionalization" of our builder; the main failure mode is still present. That is, having clients forget to set mandatory information, which is a particular concern since we obviously can't fully trust the sobriety of said clients<a href='#n1'>*</a>. Ideally the type system would prevent this problem, refusing to typecheck a call to build() when any of the non-optional fields aren't set. 
That's what we are going to do now.<br /><br />One technique, which is very common in Java fluent interfaces, would be to write an interface for each intermediate state containing only applicable methods. So we would begin with an interface VoidBuilder having all our withFoo() methods but no build() method, and a call to, say, withMode() would return another interface (maybe BuilderWithMode), and so on, until we call the last withBar() for a mandatory Bar, which would return an interface that finally has the build() method. This technique works, but it requires a metric buttload of code — for />Next, have each withFoo method pass ScotchBuilder's type parameters as type arguments to the builders they return. But, and here is where the magic happens, there is a twist on the methods for mandatory parameters: they should, for their respective generic parameters, pass instead TRUE: />Now, remember those abstract classes TRUE and FALSE? We never did subclass or instantiate them at any point. If I'm not mistaken, this is an idiom named Phantom Types, commonly used in the ML family of programming languages. Even though this application of phantom types is fairly trivial, we can glimpse at the power of the mechanism. We have, in fact, codified all 2><img src="" height="1" width="1" alt=""/>Rafael><img src="" height="1" width="1" alt=""/>Rafael.<img src="" height="1" width="1" alt=""/>Rafael Ferreira Beyond the Obvious<div id="lqt5" style="padding: 1em 0pt; text-align: center;"><a href=""><img style="width: 500px; height: 375px;" src="" /></a></div> <p style="margin-bottom: 0in;">Now.<><img src="" height="1" width="1" alt=""/>Rafael Ferreira and conjuring spells<div style="text-align: right;"><span style="font-style: italic;"> <> sucked, I don't know why anyone would watch that shit, specially the reruns Tuesday to Saturday at 04AM and Monday to Friday at 12PM on channel 49. 
Well, anyway, what's interesting about the analogy is that it shows the dual nature of software: it is both a magical process and a bunch of scribbled lines. To mix metaphors a bit, we can say that software is, at the same time, a machine and the specification for that machine.>, not mere introspection. Throwing away that mundane car analogy and getting back to our wizardry metaphor, metaprogramming would be like a spell that modifies itself while being casted. This begs the question, beyond fodder for bad TV Show scripts, what would this be useful for? My answer: for very little, because if the the way, I'm not a Rubyist, so please let me know if the example above is wrong. The only line with meta-stuff is line 2, where a method named "attr_writer" is being called with an argument of :blood_alcohol_level. This call will change the DrunkenActress class definition to add accessor methods for the blood_alcohol_level attribute. We can see that it worked in line 6, where we call the newly defined setter.<br /><br />But the programmer obviously already knows that the DunkenActress class needs to define a blood_alcohol_level attribute, so we see that meta-stuff is only applied here to save a few keystrokes. And that is not a bad motivation in itself, more concise code often is easier to understand. Then again, there are other ways to eliminate this kind of boilerplate without recursing to runtime metaprogramming, such as macros or even built-in support for common idioms (in this case, properties support like C# or Scala).<br /><br />There may be instances where the cleanest way to react to information available only in runtime is trough some metaprogramming facility, but I have yet to encounter them. Coming back to Rubyland, Active Record is often touted as a poster child for runtime metaprogramming, as it extracts metadata from the database to represent table columns as attributes in a class. 
But those attributes will be accessed by code that some programmer will write — and that means, again, that the information consumed by the metaprogram will need to be available in development time. And indeed it is, in the database. So ActiveRecord metaprogramming facilities are just means to delegate the definition of some static structure to an external store, with no real dynamicity involved. If it were not so, this ><img src="" height="1" width="1" alt=""/>Rafael Ferreira at USPI gave a talk about RESTful Web Services and JSR-311 at our local USPJUG meeting, the slides are available <a href="">here</a> (in portuguese).<img src="" height="1" width="1" alt=""/>Rafael Ferreira "Conexão Java 2007"This past week I've attended an event called Conexão Java. I've posted my notes <a href="">on the other blog</a>.<img src="" height="1" width="1" alt=""/>Rafael Ferreira've setup an alternate feed that splices my <a href="">bookmarks</a> from <a href="">del.icio.us</a> with this blog's entries. You can find it <a href="">here</a>.<img src="" height="1" width="1" alt=""/>Rafael;"> ration of debugging times over 25 to 1, of program size 5 to 1, and of program execution speed about 10 to 1. They found no relationship between a programmer's amount of experience and code quality or productivity.<br /><br />Although specific rations such as 25 to 1 aren't particularly meaningful , more general statements such as "There are order-of-magnitude differences among programmers'" are meaningful and have been confirmed by many other studies of professional programmers (Curtis 1981, Mills 1983, DeMarco and Lister 1985, Curtis et al. 1986, Card 1987, Boehm and Papaccio 1988, Valett and McGarry 1989, Boehm et al. 2000)." />But an architecture is also intended to be restrictive, in that it should channel software developers in a direction that leads to all of these successes, and away from potential decisions that would lead to problems later. 
In other words, as Microsoft's CLR architect Rico Mariani put it, a good architecture should enable developers to "fall into the pit of success", where if you just (to quote the proverbial surfer) "go with the flow", you make decisions that lead to all of those good qualities we just discussed. "<><img src="" height="1" width="1" alt=""/>Rafael.<img src="" height="1" width="1" alt=""/>Rafael />I find myself thinking along similar lines with regard to concurrency. For the sake of analysis, lets split the space of applications in server-based and client-based. Members of the first group basically deal with responding to requests coming over the network. This means that there is a naturally high degree of parallelism and, of course, this has been exploited for a long time. The typical scenario is some serial business application code atop a middleware platform that handles threading and I/O. This means that on the server front, the “multicore revolution” will impact little on most software development efforts. Now, desktop software developers don't have such luck – the era of surfing on Moore's Law is really over. And so what? The way I see it,* raw computing has ceased to be an important bottleneck, long gone are the days of watching a crude hourglass animation while the CPU labored away. Not that we do any less waiting by now, these days we spend our time waiting for the network.<br /><br />Anyway, maybe the concurrency boogieman is less scary than we think.<br /><br /><br />*(this is a blog, after all)<img src="" height="1" width="1" alt=""/>Rafael Ferreira | http://feeds.feedburner.com/RafaelRamblingAndLinks | CC-MAIN-2017-30 | refinedweb | 9,914 | 51.89 |
Subject: Re: [boost] Name of namespace detail
From: Mateusz Loskot (mateusz_at_[hidden])
Date: 2009-10-13 11:13:44
Neil Groves wrote:
> Hi,
>
> My opinions are made inline...
>
> On Tue, Oct 13, 2009 at 9:20 AM, Mateusz Loskot <mateusz_at_[hidden]> wrote:
>
>> Joel de Guzman wrote:
>>
>>> Mateusz Loskot wrote:
>>>
>>>> Hi,
>>>>
>>>> Inspired by Jean-Louis question about what to put to namespace detail, I
>>>> would be interested learning about rationale of name of the namespace
>>>> detail (sometimes details or impl too).
>>>>
>>>> Recently, I've participated in a very interesting discussion, on ACCU
>>>> members mailing list, about prefixes and suffixes like Base or _base nad
>>>> Impl or _impl, as misused, irrelevant and confusing, meaningless, etc.
>>>> For example, how to properly name elements of PIMPL idiom and similar.
>>>>
>>>> During the discussion I suggested that 'detail' is a good name for
>>>> namespace dedicated to implementation details being not a part of public
>>>> interface of a component. I got answer that it as the same issues (it's
>>>> meaningless) as Impl etc.
>>>>
>>> Why? Could you please provide details on what their responses are?
>>>
>> The discussion was too long I think (archives are not public) but generally
>> the conclusion was that impl and base suffixes do not carry much
>> information.
>> For example that I put under discussion, the PIMPL-based FileReader
>> using Impl suffixes etc.:
>>
>> class Reader
>> {
>> const std::auto_ptr<ReaderBase> pimpl_;
>> public:
>> // ...
>> };
>> class ZipReaderImpl : public ReaderBase {...};
>> class BZipReaderImpl : public ReaderBase {...};
>>
>> we came to the improved version:
>>
>> class FileReader
>> {
>> class Body;
>> const std::auto_ptr<Body> handle;
>> public:
>> // ...
>> };
>> class ZipFileReader : public FileReader::Body {...};
>> class BZipFileReader : public FileReader::Body {...};
>>
>> I agree that the latter is more verbose telling what is what,
>> and placing elements better regarding concepts it uses (handle-body).
>
> The ZipFileReader is now less obviously something I should not be using, in
> my opinion.
In this particular example, ZipFileReader is an internal startegy type
accessible through FileReader. I'm sorry for lack of precision.
IOW, FileReader can be a handle to Zip or BZip reader Body.
>> Given that, namespace detail was judged as similarly not much
>> informative as Impl and Base suffix.
>
> It tells me I as a consumer of the library should not be using it, and that
> is all it should be telling me. It is a neat orthogonal description nicely
> separated from other names.
Now I see, why good naming in software craft is a difficult art.
I agree, but things may have different (so is understanding)
meaning for different people.
>> I hope it clarifies the point.
>>
>
> Has there ever been any evidence that maintenance was more difficult with
> the detail namespace approach?
> Have there been defects caused by the namespace detail approach?
I have not seen any. It's just a discussion on pros and cons, I believe.
> If neither of these things have happened then it doesn't seem wise to focus
> effort improving this idiom.
Yes.
Just to clarify, you call "namespace detail" as idiom here, right?
> To be honest, I don't see how ZipFileReader is less clearly something I
> shouldn't be using despite it playing the "Body" role. It was clearer in the
> original that I should use the handle.
As I tried to clarify above, ZipFileReader lives in private space,
most likely in namespace detail (or private_). It's my bad, I gave
slightly incomplete example.
Best regards,
-- Mateusz Loskot,
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2009/10/156978.php | CC-MAIN-2020-05 | refinedweb | 576 | 57.57 |
Confused by the namespace of JNDI when lookup Bean!
Hi:
I really confused by the call parameter of JNDI's lookup.
if client call BeanA, which one is correct?
the first one is: Object ref=InitialContext(properties).lookup("BeanA")
the second one is: Object ref=InitialContext(properties).lookup("java:comp/env/ejb/BeanA")
the third one is: Object ref=InitialContext(properties).lookup("java:comp/env/BeanA")
if BeanB call BeanA in the same VM, which one is correct?
the first one is: Object ref=InitialContext().lookup("BeanA")
the second one is: Object ref=InitialContext().lookup("java:comp/env/ejb/BeanA")
the third one is: Object ref=InitialContext().lookup("java:comp/env/BeanA")
Why?
Thanks!
John Lee
Discussions
General J2EE: Confused by the namespace of JNDI when lookup Bean!
Confused by the namespace of JNDI when lookup Bean! (1 messages)
Threaded Messages (1)
- Confused by the namespace of JNDI when lookup Bean! by Lasse Koskela on February 23 2003 16:32 EST
Confused by the namespace of JNDI when lookup Bean![ Go to top ]
The essence of "java:comp/env" is a frequent subject...
- Posted by: Lasse Koskela
- Posted on: February 23 2003 16:32 EST
- in response to John Lee
Basically the private naming context java:comp/env is used to provide a layer of abstraction regarding dependencies on external resources.
For example, if CarEJB needs a reference to EngineEJB in order to work, java:comp/env can be used to hide the real JNDI name of EngineEJB from CarEJB. CarEJB uses "java:comp/env/EngineEJB" to access EngineEJB even though EngineEJB is really deployed using the real JNDI name "com.company.Engine".
This is made possible by mapping the "private" and "real" JNDI name with the deployment descriptors. This way, if you for some reason cannot deploy EngineEJB with the real JNDI name "com.company.Engine", you don't need to recompile, only reconfigure. | http://www.theserverside.com/discussions/thread.tss?thread_id=18023 | CC-MAIN-2014-49 | refinedweb | 315 | 50.33 |
How do you create a debug only function that takes a variable argument list? Like printf()
I'd like to make a debug logging function with the same parameters as printf. But one that can be removed by the pre-processor during optimized builds.
For example:
Debug_Print("Warning: value %d > 3!\n", value);
I've looked at variadic macros but those aren't available on all platforms. gcc supports them, msvc does not.
Answers
I still do it the old way, by defining a macro (XTRACE, below) which correlates to either a no-op or a function call with a variable argument list. Internally, call vsnprintf so you can keep the printf syntax:
#include <stdio.h> void XTrace0(LPCTSTR lpszText) { ::OutputDebugString(lpszText); } void XTrace(LPCTSTR lpszFormat, ...) { va_list args; va_start(args, lpszFormat); int nBuf; TCHAR szBuffer[512]; // get rid of this hard-coded buffer nBuf = _vsnprintf(szBuffer, 511, lpszFormat, args); ::OutputDebugString(szBuffer); va_end(args); }
Then a typical #ifdef switch:
#ifdef _DEBUG #define XTRACE XTrace #else #define XTRACE #endif
Well that can be cleaned up quite a bit but it's the basic idea.
This is how I do debug print outs in C++. Define 'dout' (debug out) like this:
#ifdef DEBUG #define dout cout #else #define dout 0 && cout #endif
In the code I use 'dout' just like 'cout'.
dout << "in foobar with x= " << x << " and y= " << y << '\n';
If the preprocessor replaces 'dout' with '0 && cout' note that << has higher precedence than && and short-circuit evaluation of && makes the whole line evaluate to 0. Since the 0 is not used the compiler generates no code at all for that line.
Here's something that I do in C/C++. First off, you write a function that uses the varargs stuff (see the link in Stu's posting). Then do something like this:
int debug_printf( const char *fmt, ... ); #if defined( DEBUG ) #define DEBUG_PRINTF(x) debug_printf x #else #define DEBUG_PRINTF(x) #endif DEBUG_PRINTF(( "Format string that takes %s %s\n", "any number", "of args" ));
All you have to remember is to use double-parens when calling the debug function, and the whole line will get removed in non-DEBUG code.
Another fun way to stub out variadic functions is:
#define function sizeof
@CodingTheWheel:
There is one slight problem with your approach. Consider a call such as
XTRACE("x=%d", x);
This works fine in the debug build, but in the release build it will expand to:
("x=%d", x);
Which is perfectly legitimate C and will compile and usually run without side-effects but generates unnecessary code. The approach I usually use to eliminate that problem is:
Make the XTrace function return an int (just return 0, the return value doesn't matter)
Change the #define in the #else clause to:
0 && XTrace
Now the release version will expand to:
0 && XTrace("x=%d", x);
and any decent optimizer will throw away the whole thing since short-circuit evaluation would have prevented anything after the && from ever being executed.
Of course, just as I wrote that last sentence, I realized that perhaps the original form might be optimized away too and in the case of side effects, such as function calls passed as parameters to XTrace, it might be a better solution since it will make sure that debug and release versions will behave the same.
In C++ you can use the streaming operator to simplify things:
#if defined _DEBUG class Trace { public: static Trace &GetTrace () { static Trace trace; return trace; } Trace &operator << (int value) { /* output int */ return *this; } Trace &operator << (short value) { /* output short */ return *this; } Trace &operator << (Trace &(*function)(Trace &trace)) { return function (*this); } static Trace &Endl (Trace &trace) { /* write newline and flush output */ return trace; } // and so on }; #define TRACE(message) Trace::GetTrace () << message << Trace::Endl #else #define TRACE(message) #endif
and use it like:
void Function (int param1, short param2) { TRACE ("param1 = " << param1 << ", param2 = " << param2); }
You can then implement customised trace output for classes in much the same way you would do it for outputting to std::cout.
Ah, vsprintf() was the thing I was missing. I can use this to pass the variable argument list directly to printf():
#include <stdarg.h> #include <stdio.h> void DBG_PrintImpl(char * format, ...) { char buffer[256]; va_list args; va_start(args, format); vsprintf(buffer, format, args); printf("%s", buffer); va_end(args); }
Then wrap the whole thing in a macro.
What platforms are they not available on? stdarg is part of the standard library:
Any platform not providing it is not a standard C implementation (or very, very old). For those, you will have to use varargs:
Part of the problem with this kind of functionality is that often it requires variadic macros. These were standardized fairly recently(C99), and lots of old C compilers do not support the standard, or have their own special work around.
Below is a debug header I wrote that has several cool features:
- Supports C99 and C89 syntax for debug macros
- Enable/Disable output based on function argument
- Output to file descriptor(file io)
Note: For some reason I had some slight code formatting problems.
#ifndef _DEBUG_H_ #define _DEBUG_H_ #if HAVE_CONFIG_H #include "config.h" #endif #include "stdarg.h" #include "stdio.h" #define ENABLE 1 #define DISABLE 0 extern FILE* debug_fd; int debug_file_init(char *file); int debug_file_close(void); #if HAVE_C99 #define PRINT(x, format, ...) \ if ( x ) { \ if ( debug_fd != NULL ) { \ fprintf(debug_fd, format, ##__VA_ARGS__); \ } \ else { \ fprintf(stdout, format, ##__VA_ARGS__); \ } \ } #else void PRINT(int enable, char *fmt, ...); #endif #if _DEBUG #if HAVE_C99 #define DEBUG(x, format, ...) \ if ( x ) { \ if ( debug_fd != NULL ) { \ fprintf(debug_fd, "%s : %d " format, __FILE__, __LINE__, ##__VA_ARGS__); \ } \ else { \ fprintf(stderr, "%s : %d " format, __FILE__, __LINE__, ##__VA_ARGS__); \ } \ } #define DEBUGPRINT(x, format, ...) \ if ( x ) { \ if ( debug_fd != NULL ) { \ fprintf(debug_fd, format, ##__VA_ARGS__); \ } \ else { \ fprintf(stderr, format, ##__VA_ARGS__); \ } \ } #else /* HAVE_C99 */ void DEBUG(int enable, char *fmt, ...); void DEBUGPRINT(int enable, char *fmt, ...); #endif /* HAVE_C99 */ #else /* _DEBUG */ #define DEBUG(x, format, ...) #define DEBUGPRINT(x, format, ...) #endif /* _DEBUG */ #endif /* _DEBUG_H_ */
Have a look at this thread:
It should answer your question.
Having come across the problem today, my solution is the following macro:
static TCHAR __DEBUG_BUF[1024] #define DLog(fmt, ...) swprintf(__DEBUG_BUF, fmt, ##__VA_ARGS__); OutputDebugString(__DEBUG_BUF)
You can then call the function like this:
int value = 42; DLog(L"The answer is: %d\n", value);
This is what I use:
inline void DPRINTF(int level, char *format, ...) { # ifdef _DEBUG_LOG va_list args; va_start(args, format); if(debugPrint & level) { vfprintf(stdout, format, args); } va_end(args); # endif /* _DEBUG_LOG */ }
which costs absolutely nothing at run-time when the _DEBUG_LOG flag is turned off.
This is a TCHAR version of user's answer, so it will work as ASCII (normal), or Unicode mode (more or less).
#define DEBUG_OUT( fmt, ...) DEBUG_OUT_TCHAR( \ TEXT(##fmt), ##__VA_ARGS__ ) #define DEBUG_OUT_TCHAR( fmt, ...) \ Trace( TEXT("[DEBUG]") #fmt, \ ##__VA_ARGS__ ) void Trace(LPCTSTR format, ...) { LPTSTR OutputBuf; OutputBuf = (LPTSTR)LocalAlloc(LMEM_ZEROINIT, \ (size_t)(4096 * sizeof(TCHAR))); va_list args; va_start(args, format); int nBuf; _vstprintf_s(OutputBuf, 4095, format, args); ::OutputDebugString(OutputBuf); va_end(args); LocalFree(OutputBuf); // tyvm @sam shaw }
I say, "more or less", because it won't automatically convert ASCII string arguments to WCHAR, but it should get you out of most Unicode scrapes without having to worry about wrapping the format string in TEXT() or preceding it with L.
Largely derived from MSDN: Retrieving the Last-Error Code
Need Your Help
How do i run a query in MYSQL without writing it to the binary log
mysql import replication binary-logI want to run an import of a large file into MySQL. However, I don't want it written to the binary log, because the import will take a long time, and cause the slaves to fall far behind. I would r...
Qt Linker Error: "undefined reference to vtable"
c++ qt linker-errors vtable qobjectThis is my header: | http://www.brokencontrollers.com/faq/42775.shtml | CC-MAIN-2019-47 | refinedweb | 1,287 | 59.84 |
I'm producing a scatter plot using pyplot.plot (instead of scatter - I'm having difficulties with the colormap)
I am plotting using the 'o' marker to get a circle, but the circle always has a black outline.
How do I remove the outline, or adjust its colour?
To remove the outline of a marker, and adjust its color, use
markeredgewidth (aka
mew), and
markeredgecolor (aka
mec) respectively.
Using this as a guide:
import numpy as np import matplotlib.pyplot as plt x = np.arange(0, 5, 0.1) y = np.sin(x) plt.plot(x, y, marker='o', fillstyle='full', markeredgecolor='red', markeredgewidth=0.0)
As you notice, even though the marker edge color is set, because the width of it is set to zero it doesn't show up. | https://codedump.io/share/0wgKW9vnLDQB/1/how-to-remove-outline-of-circle-marker-when-using-pyplotplot-in-matplotlib | CC-MAIN-2017-13 | refinedweb | 131 | 68.36 |
MALLOC(3) BSD Programmer's Manual MALLOC(3)
malloc, calloc, realloc, free, cfree - memory allocation and deallocation
#include <stdlib.h> void * malloc(size_t size); void * calloc. protection. Unused pages on the freelist are read and write protected to cause a segmentation fault upon access. This will also switch off the delayed freeing of chunks, reducing random behaviour but detecting double free() calls as early as possible.. Z "Zero". Fill some junk into the area allocated (see J), except for the exact length the user asked for, which is zeroed. < "Half the cache size". Decrease the size of the free page cache by a factor of two. > "Double the cache size". Increase the size of the free page cache by a factor of two. So to set a systemwide reduction of cache size and use guard pages: # ln -s 'G<' /etc/malloc.conf The flags are mostly for testing and debugging. If a program changes behavior if any of these options (except X) are used, it is buggy. The default number of free pages cached is 64.. 23,. | http://mirbsd.mirsolutions.de/htman/sparc/man3/calloc.htm | crawl-003 | refinedweb | 177 | 66.64 |
Hello, I need help with a program that reads in a string, detailing the occurrence of each variable. So far the program reads a string and shows the number of times each variable occurs (letters only). The program is based of keyboard input, so I was wondering if anyone could help me with allowing my program to output the number of non letter used.
#include <stdio.h> #include <string.h> int main() { char string[100], ch; int c = 0, count[26] = {0}; printf("Enter a string\n"); gets(string); while ( string[c] != '\0' ) { if ( string[c] >= 'a' && string[c] <= 'z' ) count[string[c]-'a']++; c++; } for ( c = 0 ; c < 26 ; c++ ) { if( count[c] != 0 ) printf("%c occurs %d times in the entered string.\n",c+'a',count[c]); } return 0; } | https://www.daniweb.com/programming/software-development/threads/455110/help-needed-for-c-program | CC-MAIN-2018-39 | refinedweb | 131 | 82.75 |
1234567891011121314151617181920212223242526272829303132333435363738394041424344
#include <stdio.h>
#include <stdlib.h>
int main () {
int JD, m, d, dAno;
char noon;
printf ("This program gives the Julian Date for Jan-Feb 2014\nEnter the month number: \n");
scanf("%d", &m);
while ((m>2)||(m<1)) {printf ("Error; this program only applies\nto January or February \n");
printf ("Enter the month number: \n");
scanf("%d", &m);}
printf ("Now enter the day of the month:\n");
scanf("%d", &d);
while ((d>31)||(d<1)) {printf ("Error; you must enter a valid date \n");
printf ("Now enter the day of the month:\n");
scanf("%d", &d);}
printf ("Do you want the Julian Date after or before noon in Greenwich?\nType A or B:");
scanf("%d", &noon);
/*My first difficulty here: how can I evaluate a char? How can I assure that
if the user types a different letter, signal or number the program says he's
wrong, like above? Like, if I type a decimal or a char for the month, it enters in
infinite looping, can I prevent that happening?*/
if (m=1){
dAno=d;
JD=dAno+2456658;
printf ("The Julian Date is %d\n", JD);}
else{
if (m=2){
dAno=d+31;
JD=dAno+2456658;
printf ("A Data Juliana eh %d\n", JD);}}
/*Here I have the main problem: the program is working well for January,
but if I want to calculate for February, it simply ignores and calculates
like if it was still January: Feb 1st returns 2456659, just like
it was Jan 1st. I can't handle with these "ifs" and "elses" and also I
can't make the proper use of the "noon" flag - if noon = B, the Julian Date shall
be decremented by 1. At least the program compiles, that's already a deed for me.
I'm using only the 1st two months for testing, I want to make a complete
script for all the 12 months, but I need some tip about how to do that
in a more efficient way*/
system ("PAUSE");
return 0;
}
if (m=1)
if (m==1)
123
printf("text"); //C
cout << "text"; //C++ | http://www.cplusplus.com/forum/beginner/121343/ | CC-MAIN-2014-10 | refinedweb | 348 | 65.39 |
C++ also allows another type of pointer; this one is more useful than pointers to pointers. C++ allows you to create pointers to functions. The greater utility of this is for passing a function as a parameter to another function. That means you can pass an entire function as an argument for another function. To declare a pointer to a function we must declare it like the prototype of the function, but enclosing between parenthesis () the name of the function and then placing a pointer asterisk (*) before the name of the function.
// pointer to functions #include <iostream> using namespace std; // function prototypes int subtraction (int a, int b); int addition (int a, int b); //function pointer prototypes int (*minus)(int,int) = subtraction; int operation (int x, int y, int (*functocall)(int,int)) { int i; i = (*functocall)(x,y); return (i); } int main () { int m,n; m = operation (5, 5, addition); n = operation (50, m, minus); cout <<n; return 0; } int addition (int a, int b) { int answer; answer = a + b; return answer; } int subtraction (int a, int b) { int answer; answer = a - b; return answer; }
In the example, the word minus is used as a pointer to a function that has two parameters of type int. You can now pass that function’s pointer to another function.
int (* minus)(int,int) = subtraction;
In this case, you pass the functions pointer to a function called operation, which will do whatever function is passed to it (be it addition or subtraction). This is a rather powerful feature that C++ provides you. | https://flylib.com/books/en/2.331.1.76/1/ | CC-MAIN-2021-25 | refinedweb | 259 | 51.82 |
Sharing for the beginners like me: Transparent Background
- KnightExcalibur
While trying to figure this out for myself, I read a few threads in this forum that touched on this topic. I figured I would share this to help out other newbies like myself who are just starting to learn the basics of python. I made very slight adjustments to the code from the following link:
The aim for this person was to make the white background transparent and make the black lettering sharper.
My goal was to take any color background and make it transparent so it could be used for a spritesheet for games. This is what I ended up with:
from PIL import Image img = Image.open('image.png') img = img.convert("RGBA") # RGBA means Red, Green, Blue, Alpha. Alpha is the transparency of the image. datas = img.getdata() rgb = datas[0] #get the color of the first pixel of the image, in RGB format newData = [] for item in datas: if item[0] == rgb[0] and item[1] == rgb[1] and item[2] == rgb[2]: #check if the pixel matches the first pixel's color newData.append((0, 0, 0, 0)) #set the pixel to black and make it transparent. the first three numbers are the RGB levels and the fourth is the alpha else: newData.append(item) img.putdata(newData) img.save("image_transparent.png", "PNG")
Basically it sets the first pixel in the very top left as the background color and then goes through every pixel of the image and if the color of the pixel is the same as the first pixel, it makes it transparent. If the first pixel of the image is not the background color, you can set the number in datas[0] to whichever pixel you want to establish as the background color.
I take no credit for writing any of this code since I just made very very minor adjustments to someone else's code, and just added a few comments. I am just sharing this because I appreciate it when I get help from others:) | https://forum.omz-software.com/topic/5885/sharing-for-the-beginners-like-me-transparent-background | CC-MAIN-2022-27 | refinedweb | 345 | 66.78 |
Load CSV
Many existing applications and data integrations use CSV as the minimal denominator format. CSV files contain text with delimiters (most often comma, but also tab (TSV) and colon (DSV)) separating columns and newlines for rows. Fields are possibly quoted to handle stray quotes, newlines, and the use of the delimiter within a field.
In Cypher it is supported by
LOAD CSV and with the
neo4j-import (
neo4j-admin import) tool for bulk imports.
The existing
LOAD CSV works ok for most uses, but has a few features missing, that
apoc.load.csv and
apoc.load.xls
The APOC procedures also support reading compressed files.
The data conversion is useful for setting properties directly, but for computation within Cypher it’s problematic as Cypher doesn’t know the type of map values so they default to
Any.
To use them correctly, you’ll have to indicate their type to Cypher by using the built-in (e.g.
toInteger) or apoc (e.g.
apoc.convert.toBoolean) conversion functions on the value.
For reading from files you’ll have to enable the config option:
apoc.import.file.enabled=true
By default file paths are global, for paths relative to the
import directory set:
apoc.import.file.use_neo4j_config=true
Examples for apoc.load.csv
name,age,beverage Selma,9,Soda Rana,12,Tea;Milk Selina,19,Cola
CALL apoc.load.csv('test.csv') YIELD lineNo, map, list RETURN *;
Configuration Options
Besides the file you can pass in a config map:
CALL apoc.load.csv('test.csv', {skip:1, limit:1, header:true, ignore:['name'], mapping:{ age: {type:'int'}, beverage: {array:true, arraySep:';', name:'drinks'} } }) YIELD lineNo, map, list RETURN *;
Transaction Batching
To handle large files,
USING PERIODIC COMMIT can be prepended to
LOAD CSV, you’ll have to watch out though for Eager operations which might break that behavior.
In apoc you can combine any data source with
apoc.periodic.iterate to achieve the same.
CALL apoc.periodic.iterate(' CALL apoc.load.csv({url}) yield map as row return row ',' CREATE (p:Person) SET p = row ', {batchSize:10000, iterateList:true, parallel:true});
To make these data structures available to Cypher, you can use
apoc.load.xml.
It takes a file or http URL and parses the XML into a map data structure.
See the following usage-examples for the procedures.
Was this page helpful? | https://neo4j.com/labs/apoc/4.1/import/load-csv/ | CC-MAIN-2021-10 | refinedweb | 392 | 57.57 |
The Sun BabelFish Blog — "Don't panic!" (feed updated 2005-06-24)
Last Days at Sun Microsystems (2010-01-15, Henry Story) — tags: travel, semweb
</p> <p>I will leave it as an exercise to the reader to think about other interesting ways to use this structured information to make finding resources easier. Here is an image of the state of the linked data cloud 6 months ago to stimulate your thinking :-)</p> <a href=""><img src=""></a>. <p>But think about it the other way now. Not only are you helping your future self find information bookmarked semantically - let's use the term now - you are also making that information clearly available to wikipedia editors in the future. Consider for example the article "<a href="">Lateralization of Brain Function</a>" on wikipedia. The <a href="">Faviki page on that subject</a> is going to be a really interesting place to look to find good articles on the subject appearing on the web. So with Faviki you don't have to work directly on wikipedia to participate. You just need to tag your resources carefully!</p> <p>Finally I am particularly pleased by Faviki, because it is exactly the service I described on this blog 3 years ago in my post <a href="">Search, Tagging and Wikis</a>, at the time when the folksonomy meme was in full swing, threatening according to it's fiercest proponents to put the semantic web enterprise into the dustbin of history.</p> <p>Try out Faviki, and see who makes more sense.</p> <p>Some further links: <ul><li><a href="">Auto-Tag Delicious Bookmarks and Share Them Easily On Twitter With Faviki</a> a blog that goes into more detail on Faviki. <li><a href="">Faviki's Feed of resources tagged with Faviki</a>. </ul> MISC 2010 and the Internet of Subjects 2010-01-06T12:48:47+00:00 2010-01-05T17:19:20+00:00 Henry Story Henry Story SemWeb identity security semweb socialweb <a title="MISC 2010 Forum poster" href=""><img align="right" src=" "/></a><p <a href=""/>the Agenda</a>). 
I will be presenting on the developments of the Secure Social Web with <a href="">foaf+ssl</a>.</p> <p>The conference will also be the launch pad for the Internet of Subjects foundation, whose manifesto starts with the following lines (<a href="">full version</a>) <blockquote>. </blockquote><p <b>thought</b> about the movement of the planets, so Web architecture as it currently stands, is perfectly adequate for an Internet of Subjects. It has been designed like that right from the beginning. <a href="">Tim Berners Lee</a> in his 1994 Plenary at the First International World Wide Web Conference, presented a Paper "<a href="">W3 future directions</a>" where he showed how from the flat world of documents as shown here </p> <img src=""/> <p>one could move to a world of objects described by those documents as shown here</p> <img src=""/> <p>This is what led to the development of the semantic web, and to technologies such as foaf that since 2000 have allowed us to build distributed Social Networks, and <a href="">foaf+ssl</a>.</p> <p...</p> Web Finger proposals overview 2009-12-01T07:54:33+00:00 2009-11-29T14:10:26+00:00 Henry Story Henry Story SemWeb identity semweb socialnetworks unix web web2.0 <p. </p> <p>The <a href="">WebFinger GoogleCode page</a> explains what webfinger is very well:</p> <blockquote> Back in the day you could, given somebody's UNIX account (email address), type <pre>$ finger email@example.com </pre> and get some information about that person, whatever they wanted to share: perhaps their office location, phone number, URL, current activities, etc. </blockquote> <p>The new ideas generalize this to the web, by following a very simple insight: If you have an email address like <code>henry.story@sun.com</code>, then the owner of <code>sun.com</code> is responsible for managing the email. That is the same organization responsible for managing the web site <code></code>. 
So all that is needed is some machine readable pointer from <code></code> to a lookup giving more information about owner of the email address. That's it! </p> <h4>The WebFinger proposal</h4> <p>The WebFinger proposed solution showed the way so I will start from here. It is not too complicated, at least as described by <a href="">John Panzer's</a> "<a href="">Personal Web Discovery</a>" post.</p> <p>John suggests that there should be a convention that servers have a file in the <a href=""><code>/host-meta</code></a> root location of the HTTP server to describe metadata about the site. (This seems to me to break web architecture. But never mind: the resource <code></code> can have a link to some file that describes a mapping from email ids to information about it.) The WebFinger solution is to have that resource be in a new <a href="">application/host-meta</a> file format. (not xml btw). This would have mapping of the form <blockquote><pre>Link-Pattern: <<a href="" target="_blank">{%<wbr>uri}</a>>;application/xrd+xml</a> format about the user.</p> <p>The idea is really good, but it has three more or less important flaws: <ul> <li>It seems to require by convention all web sites to set up a <code>/host-meta</code> location on their web servers. Making such a global requirement seems a bit strong, and does not in my opinion follow web architecture. It is not up to a spec to describe the meaning of URIs, especially those belonging to other people.</li> <li>It seems to require a non xml <code>application/host-meta</code> format</li> <li>It creates yet another file format to describe resources the <code>application/xrd+xml</code>. It is better to describe resources at a semantic level using the Resouces Description Framework, and not enter the format battle zone. To describe people there is already the widely known <a href="">friend of a friend</a> ontology, which can be <b>clearly</b> extended by anyone. 
Luckily it would be easy for the XRD format to participate in this, by simply creating a <a href="">GRDDL</a> mapping to the semantics.</li> </ul> <p>All these new format creation's are a real pain. They require new parsers, testing of the spec, mapping to semantics, etc... There is no reason to do this anymore, it is a solved problem.</p> <p>But lots of kudos for the good idea!</p> <h4>The FingerPoint proposal</h4> <p><a href="">Toby Inkster</a>, co inventor of <a href="">foaf+ssl</a>, authored the <a href="">fingerpoint proposal</a>, which avoids the problems outlined above.</p> <p>Fingerpoint defines one useful relation <a href="">sparql:fingerpoint</a> relation (available at the namespace of the relation of course, as all good linked data should), and is defined as <blockquote> <pre> . </pre> </blockquote> It is then possible to have the root page link to a SPARQL endpoint that can be used to query very flexibily for information. Because the link is defined semantically there are a number of ways to point to the sparql endpoint: <ul><li>Using the up and coming <a href="">HTTP-Link</a> HTTP header, <li>Using the well tried html <link> element. <li>Using RDFa embedded in the html of the page <li>By having the home page return any other represenation that may be popular or not, such as rdf/xml, N3, or XRD... </ul> Toby does not mention those last two options in his spec, but the beauty of defining things semantically is that one is open to such possibilities from the start. </p> <p>So Toby gets <b>more</b> power as the WebFinger proposal, by only inventing 1 new relation! All the rest is already defined by existing standards.</p> <p>The only problem one can see with this is that <a href=""> SPARQL</a>, though not that difficult to learn, is perhaps a bit too powerful for what is needed. You can really ask anything of a SPARQL endpoint!</p> <h4>A possible intermediary proposal: semantic forms</h4> <p>What is really going on here? 
Let us think in simple HTML terms, and forget about machine readable data a bit. If this were done for a human being, what we really would want is a page that looks like the <a href="">webfinger.org</a> site, which currently is just one query box and a search button (just like Google's front page). Let me reproduce this here:</p> <p> <form style="height:75px; padding-right:10px; padding-top:3px;" action='/lookup' method='POST'> <img style="float:left; vertical-align=baseline;" height="75px" src='' /> <input style="font-size:25px; height:34px;" name='email' type='text' value='' /> <button style="font-size:25px; height:34px;" type='submit' value='Look Up'>Look Up</button> </form> </p> <p>Here is the html for this form as its purest, without styling:</p> <pre> <form action='/lookup' method='GET'> <img src='' /> <input name='email' type='text' value='' /> <button type='submit' value='Look Up'>Look Up</button> </form> </pre> <p>What we want is some way to make it clear to a robot, that the above form somehow maps into the following SPARQL query:</p> <pre> PREFIX foaf: <> SELECT ?homepage WHERE { [] foaf:mbox ?email; foaf:homepage ?homepage } </pre> <p>Perhaps this could be done with something as simple as an RDFa extension such as:</p> <pre> <form action='/lookup' method='GET'> <img src='' /> <input name='email' type='text' value='' /> <font color="blue"><button type='submit' value='homepage' sparql='PREFIX foaf: <> GET ?homepage WHERE { [] foaf:mbox ?email; foaf:homepage ?homepage }">Look Up</button></font> </form> </pre> <p>When the user (or robot) presses the form, the page he ends up on is the result of the SPARQL query where the values of the form variables have been replaced by the identically named variables in the SPARQL query. So if I entered <code>henry.story@sun.com</code> in the form, I would end up on the page <code></code>, which could perhaps just be a redirect to this blog page... 
This would then be the answer to the SPARQL query <pre> PREFIX foaf: <> SELECT ?homepage WHERE { [] foaf:mbox "henry.story@bblfish.net"; foaf:homepage ?homepage } </pre> (note: that would be wrong as far as the definition of foaf:mbox goes, which relates a person to an mbox, not a string... but let us pass on this detail for the moment) </p> <p>Here we would be defining a new GET method in SPARQL, which find the type of web page that the post would end up landing on: namely a page that is the homepage of whoever's email address we have.</p> <a href="">RESTful semantic web services</a> before RDFa was out. </p> ... </p> <p>(See also an earlier post of mine <a href="">SPARQLing AltaVista: the meaning of forms</a>)</p> <h4>How this relates to OpenId and foaf+ssl</h4> <p "<a href="">FOAF and OpenID</a>").</p> <p>Of course this user interface problem does not come up with <a href="">foaf+ssl</a>, because by using client side certificates, foaf+ssl does not require the user to remember his <a href="">WebID</a>. The browser does that for him - it's built in.</p> .</p> <h4>Updates</h4> <p>It was remarked in the comments to this post that the format for the <code>/host-meta</code> format is now XRD. So that removes one criticism of the first proposal. I wonder how flexible XRD is now. Can it express everything RDF/XML can? Does it have a GRDDL?</p> Identity in the Browser, Firefox style 2009-11-25T21:37:30+00:00 2009-11-25T12:34:55+00:00 Henry Story Henry Story SemWeb identity security semweb socialnetworking web2.0 web3.0 <p>Mozilla's User Interface chief <a href="">Aza Raskin</a> just put forward some interesting thoughts on what <a href="">Identity in the Browser could look like for Firefox</a>. 
As one of the Knights in search of the <strike>Golden</strike> Holy Grail of distributed Social Networking, he believes to have found it in giving the browser more control of the user's identity.</p> <p <a href="">Weave Identity Account Manager project site</a>:</p> <img title="UI sketches for Identity in the Browser in Firefox" src=""/> <h4>The User Interface</h4> <p:</p> <img title="LinkedIn security bar, when clicked" src=""> <p> One enhancement the Firefox team could immediately work on, without inventing a new protocol, would be to reveal in the URL bar the client certificate used when connected to a <code>https://...</code> url. This could be done in a manner very similar to the way proposed by Aza Raskin in the his Weave Account manager prototype pictured above. This would allow the user to <ul><li>know what HTTPS client cert he was using to connect to a site, </li> <li>as well as allow him to log out of that site, <li>change the client certificate used if needed </ul> The last two feature of TLS are currently impossible to use in browsers because of the lack of such a User Interface Handle. This would be a big step to closing the growing Firefox <a href="">Bug 396441: "Improve SSL client-authentication UI"</a>. </p> <p>From there it would be just a small step, but one that I think would require more investigation, to <a href="">foaf+ssl</a> enhance the drop down description about both the server and the client with information taken from the WebID. A quick reminder: foaf+ssl works simply by adding a <a href="">WebID</a> - which is just a URL to identify a <a href="">foaf:Agent</a> - as the subject alternative name of the X509 certificate in the version 3 extensions, as shown in detail in <a href="">the one page description of the protocol</a>. The browser could then <a href="">GET the meaning of that URI</a>, i.e. GET a description of the person, by the simplest of all methods: an HTTP GET request. 
In the case of the user himself, the browser could use the <a href="">foaf:depiction</a>. </p> <h4>The Synchronization Piece</h4> <p>Notice how foaf+ssl enables synchronization. Any browser can create a public/private key pair using the <a href="">keygen element</a>, and get a certificate from a WebId server, such as <a href="">foaf.me< <ul> <li>about the user (name, depiction, address, telephone number, etc, etc) <li>a link to a resource containing the bookmarks of the user <li>his online accounts <li>his preferences </ul> Indeed you can browse all the information foaf.me can glean just from my public foaf file <a href="">here</a>. You will see my bookmarks taken from delicious, my tweets and photos all collected in the Activity tab. This is just one way to display information about me. A browser could collect all that information to build up a specialized user interface, and so enable synchronization of preferences, bookmarks, and information about me. </p> <h4>The Security Problem</h4> <p>So what problem is the Weave team solving in addition to the problem solved above by foaf+ssl?</p> <p <b>not<.</p> <p>It is to solve this problem that Weave was designed: to be able to publish remotely <b>encrypted information</b> that only the user can understand. The publication piece uses <a href="">a nearly RESTful API</a>..</p> <h4>Generalization of Weave</h4> <p>To make the above protocol fully RESTful, it needs to follow Roy Fielding's principle that "<a href="">REST APIs must be hypertext driven</a>". <a href="">Linked Data</a>.</p> <p>By defining both a way of getting objects, and their encoding, the project is revealing its status as a good prototype. To be a standard, those should be separated. 
That is I can see a few sperate pieces required here: <ol> <li>An ontology describing the public keys, the symmetric keys, the encrypted contents,...</li> <li>Mime types for encrypted contents</li> <li>Ontologies to describe the contents: such as People, bookmarks, etc...</li> </ol> Only (1) and (2) above would be very useful for any number of scenarios. The contents in the encrypted bodies could then be left to be completely general, and applied in many other places. Indeed being able to publish information on a remote untrusted server could be very useful in many different scenarios. </p> <p> By separating the first two from (3), the Weave project would avoid inventing yet another way to describe a user for example. We already have a large number of those, including <a href="">foaf</a>, <a href="">Portable Contacts</a>,.</p> <p. </p> <h4>The Client Side Password</h4> <p <a href="">removed the reverse DNS namespace requirement</a>. In any case such a solution can be very generic, and so the Firefox engineers could go with the flow there too.</p> <h4>RDF! You crazy?</h4> <p>I may be, but so is the world. You can get a light triple store that could be embedded in mozilla, that is open source, and that is in C. Talk to the Virtuoso folks. <a href="">Here is a blog entry on their lite version</a>. My guess is they could make it even liter. <a href="">KDE is using it...</a>. 
</p> my time at Sun is coming to an end 2009-11-24T23:33:07+00:00 2009-11-24T10:45:12+00:00 Henry Story Henry Story travel identity semweb socialnetworking travel > -- OpenId ♥ foaf+ssl 2009-11-19T19:10:55+00:00 2009-11-19T10:57:08+00:00 Henry Story Henry Story SemWeb foaf+ssl identity security semweb web <p><a href="">OpenId4.me</a> is the bridge between <a href="">foaf+ssl</a> and <a href="">OpenId</a> we have been waiting for.</p> <p>OpenId and foaf+ssl have a lot in common: <ul> <li>They both allow one to log into a web site without requiring one to divulge a password to that web site <li>They both allow one to have a global identifier to log in, so that one does not need to create a username for each web site one wants to identify oneself at. <li>They also allow one to give more information to the site about oneself, automatically, without requiring one to type that information into the site all over again. </ul> <p>OpenId4.me allows a person with a foaf+ssl profile to automatically login to the millions of web sites that enable authentication with OpenId. The really cool thing is that this person never has to set up an OpenId service. OpenId4.me does not even store any information about that person on it's server: it uses all the information in the users foaf profile and authenticates him with foaf+ssl. OpenId4.me does not yet implement attribute exchange I think, but it should be relatively easy to do (depending on how easy it is to hack the initial OpenId code I suppose).</p> <p>If you have a foaf+ssl cert (get one at <a href="">foaf.me</a>) and are logging into an openid 2 service, all you need to type in the OpenId box is <code>openid4.me</code>. This will then authenticate you using your foaf+ssl certificate, which works with most existing browsers without change!</p> <p>If you then want to own <b>your</b> OpenId, then just add a little html to your home page. 
This is what I placed on <a href=""></a>:</p> <pre> <link rel="openid.server" href="" /> <link rel="openid2.provider openid.server" href=""/> <link rel="meta" type="application/rdf+xml" title="FOAF" href=""/> </pre> <p>And that's it. Having done that you can then in the future change your openid provider very easily. You could even set up your own OpenId4.me server, as it is open source.</p> <p>More info at <a href="">OpenId4.me</a>.</p> Detained in Heathrow 2009-11-18T21:09:19+00:00 2009-11-18T09:49:44+00:00 Henry Story Henry Story travel politics semweb travel - arrival 2009-11-29T12:55:03+00:00 2009-11-09T16:34:40+00:00 Henry Story Henry Story travel identity philosophy security semweb travel > November 2nd: Join the Social Web Camp in Santa Clara 2009-10-16T02:25:10+00:00 2009-10-15T15:35:54+00:00 Henry Story Henry Story SemWeb community identity security semweb web web2.0 web3.0 <a href=""><img align="right" src=""></a> <p>The W3C <a href="">Social Web Incubator Group</a> is organizing a free <a href="">Bar Camp</a> in the Santa Clara Sun Campus on November 2nd to foster a wide ranging discussion on the issues required to build the global Social Web.</p> <p <a href="">The Internet of Subjects Manifesto</a>? What existing technologies can we build on? What is missing? What could the W3C contribute? What could others do? To participate in the discussion and meet other people with similar interests, and push the discussion further visit the <a href="">Santa Clara Social Web Camp wiki</a> and</p> <p> <div style="text-align: center;"> <a href="" target="_blank" ><img src="" /></a></div> </p> <p>If you are looking for a reason to be in the Bay Area that week, then here are some other events you can combine with coming to the Bar Camp: <ul> <li>The W3C is meeting in Santa Clara for its <a href="">Technical Plenary</a> that week in Santa Clara. 
<li>The following day, the <a href="">Internet Identity Workshop</a> is taking place in Mountain View until the end of the week. Go there to push the discussion further by meeting up with the OpenId, OAuth, Liberty crowd, which are all technologies that can participate in the development of the Social Web.</li> <li>You may also want to check out <a href="">ApacheCon</a> which is also taking place that week. </ul> <p>If you can't come to the west coast at all due to budget cuts, then not all is lost. :-) If you are on the East coast go and participate in the ISWC <a href="">Building Semantic Web Applications for Government</a> tutorial, and watch my video on <a href="">The Social Web</a> which I gave at the Free and Open Source Conference this summer. Think: if the government wants to play with Social Networks, it certainly cannot put all its citizens information on Facebook.</p> One month of Social Web talks in Paris 2009-10-19T09:45:27+00:00 2009-10-12T10:16:41+00:00 Henry Story Henry Story travel identity security semweb travel web web2.0 web3.0 > Sketch of a RESTful photo Printing service with foaf+ssl 2009-10-09T08:32:48+00:00 2009-10-07T12:15:50+00:00 Henry Story Henry Story SemWeb cloud foaf foaf+ssl identity identitymanagement rest security semweb web web2.0 webservices <p <a href="">La Distribution</a> is doing). </p> <p>A few years back, with one click, you installed a myPhoto service, a distributed version of <a href="">fotopedia</a>. You have been uploading all your work, social, and personal photos there. These services have become really popular and all your friends are working the same way too. When your friends visit you, they are automatically and seamlessly recognized using <a href="">foaf+ssl</a>.</p> <p. </p> <p> Before looking at the details of the interactions detailed in the UML Sequence diagram below, let me describe the user experience at a general level. 
<ol> <li>You go to print.com site after clicking on a link a friend of your suggested on a blog. On the home web page is a button you can click to add your photos. </li> <li>You click it, and your browser asks you which <a href="">WebID</a> you wish to use to Identify yourself. You choose your personal ID, as you wish to print some personal photos of yours. Having done that, your are authenticated, and print.com welcomes you using your nicknames and displays your icon on the resulting page.</li> <li>When you click a button that says "Give Print.com access to the pictures you wish us to print", a new frame is opened on your web site</li> <li.</li> <li>You agree to give Print.com access, but only for 1 hour. </li> <li.</li> <li>Having done that you drag and drop an icon representing the set of photos you chose from this frame to a printing icon on the print.com frame.</li> <li>Print.com thanks you, shows you icons of the pictures you wish to print, and tells you that the photos will be on their way to your the address of your choosing within 2 hours. </li> </ol> </p> <a href=""><img align="center" src=""></a> <p>In more detail then we have the following interactions: <ol> <li>Your browser GETs print.com's home page, which returns a page with a "publish my photos" button. <li>You click the button, which starts the <a href="">foaf+ssl</a> handshake. The initial ssl connection requests a client certificate, which leads your browser to ask for your WebID in a nice popup as the <a href="">iPhone can currently do<: <pre>:me xxx:contactRegistration </addContact> .</pre> Print.com uses this information when it creates the resulting html page to point you to your server.</li> <li>When you click the "Give Print.com access to the pictures you wish us to print" you are sending a POST form to the <code><addContact></code> resource on your server, with the WebId of Print.com <code><></code> in the body of the POST. The results of this POST are displayed in a new frame. 
<li.</li> <li>You give print.com access for 1 hour by filling in the forms.</li> <li>You give access rights to Print.com to your individual pictures using the excellent user interface available to you on your server.</li> <li>When you drag and drop the resulting icon depicting the collection of the photos accessible to Print.com, onto its "Print" icon in the other frame - which <a href="">is possible with html5</a> - your browser sends off a request to the printing server with that URL.</li> <li>. </ol> <p> So all the above requires very little in addition to foaf+ssl. Just one relation, to point to a contact-addition POST endpoint. The rest is just good user interface design. </p> <p>What do you think? Have I forgotten something obvious here? Is there something that won't work? Comment on this here, or on the <a href="">foaf-protocols</a> mailing list. </p> <h4>Notes</h4> <p> <a rel="license" href=""><img alt="Creative Commons License" style="border-width:0" src="" /></a><br /><span xmlns:print.com sequence diagram</span> by <a xmlns:Henry Story</a> is licensed under a <a rel="license" href="">Creative Commons Attribution 3.0 United States License</a>.<br />Based on a work at <a xmlns:blogs.sun.com</a>. </p> foaf+ssl in Mozilla's Fennec works! 2009-09-30T09:38:49+00:00 2009-09-30T02:38:49+00:00 Henry Story Henry Story SemWeb identity mobile security semweb <a href=""><img align="right" text="Fennec browser welcome page" src="" /></a> <p>At <a href="">yesterday's Bar Camp in La Cantine</a> I discovered that <a href="">Mozilla's Fennec browser</a> for mobile phones can be run on OSX (<a href="">download 1.0 alpha 1 here</a>). So I tried it out immediately to see how much of the <a href="">foaf+ssl</a> login would work with it. The answer is all of it, with just a few easy to fix user experience issues. I really am looking forward to trying the <a href="">Nokia N810 Internet Tablet</a> for real.</p> <p> Anyway here are quick snapshots of the user experience. 
</p> <h4>Getting a certificate</h4> <p> First of all the best news is that the <keygen> tag, <a href="">now documented in html5</a> works in Fennec. This means that one can get a client certificate in one click without going through the complex dance I described in "<a href="">howto get a foaf+ssl certificate to your iPhone</a>".</p> <p>This is how easy it can be. Go to <a href="">foaf.me</a>. </p> <a href=""><img src="" /></a> <p> After filling out the form, you can create yourself an account on foaf.me: </p> <a href=""><img src="" /></a> <p>To make your WebId useful all you need to do is click on the "Claim account with SSL certificate" button -- which could certainly be phrased better -- on the account creation successful page:</p> <img src="" /></a> <p>:</p> <a href=""><img src="" /></a> . </p> <h4>Using the certificate</h4> <p.</p> <a href=""><img src="" /></a> <p>No user ever cares about these details! It is confusing. Do you think users have issues with URLs? Well what do you think they are going to make of the old outdated Distinguished Names? </p> <p>Just compare this with the <a href="">User Experience on the iPhone</a></p> <img src=""/> <p>Quite a few bug/enhancement reports have been reported on this issue on the Mozilla site. See for example <a href="">Bug 396441 - Improve SSL client-authentication UI</a>, and my <a href="">other enhancement requests</a>.</p> <p>Still this user interface issue should be really easy to fix, as it is just a question of making things simpler, ie. of reducing the complexity of their code. And clearly on a cell phone that should be a priority.</p> <p>Another issue I can see on the Fennec demo browser, is that I could not find a way to remove the certificates.... 
That would be quite an important functionality too.</p> <p <b>the place</b> to look for what a secure life without passwords, without user names, and a global distributed social network can look like on a mobile platform.</p> Social Web Bar Camp in Paris 2009-09-25T08:27:34+00:00 2009-09-20T04:14:06+00:00 Henry Story Henry Story travel identity security semweb socialnetworking socialweb > RDFa parser for Sesame 2009-09-10T18:45:28+00:00 2009-09-09T10:39:42+00:00 Henry Story Henry Story Java java semweb web2.0 web3.0 > | https://blogs.oracle.com/bblfish/page/rss10.xml | CC-MAIN-2014-15 | refinedweb | 5,467 | 61.16 |
Referencing a dll
cAlgo allows users to reference .NET libraries from cBots and Custom Indicators. In order to add a reference to .NET library you need to open Reference Manager window. You can open Reference Manager window by clicking “Manage References” button in the top of cAlgo editor or by executing “Manage References” command from context menu of your cBot or Custom Indicator.
OR
In Reference Manager open Libraries page and click on the Browse button:
In open file dialog you need to select a .NET dll and click on the Open button.
Click on the Apply button in Reference Manager to apply the changes.
Then you need to import namespaces from the Library. Example:
Finally, you can use classes from the Library in your code:
| https://ctrader.com/api/guides/referencing_dll | CC-MAIN-2019-09 | refinedweb | 126 | 58.69 |
Animation doesn't have to be complex to be engaging and, as the Ricky Gervais Show podcasts demonstrate, even a simple dialogue between three people can become top animated entertainment. We're going to draw inspiration from Mr. Gervais and animate a chunk of the Envato Community Podcast.
The Ricky Gervais Show
If you haven't seen the animated Ricky Gervais Show podcasts, then go and check them out now! (I'll wait). The original podcast first aired in 2005 and achieved mega success, quickly becoming the most downloaded podcast in the world. In 2010 it was adapted into an animated televised version debuting for HBO and Channel 4 - and it's brilliant.
Part of the appeal lies in the mundane nature of people chatting; small idiosyncrasies and behavioral traits are amplified by the complete lack of action.
I thought it would be nice to animate the Envato Community Podcast. Hosted by Drew Douglass, Jordan McNamara and Tash it's readily available to download and listen to, but wouldn't it be nice to see what they look like sitting in their little fictional studio?
Overview
So, who's behind the animated Ricky Gervais Show? Well, you'll be reassured to learn that it wasn't just a single individual. Nick Bertonazzi, Art Director for the Ricky Gervais Show project (and interviewed about the whole affair on Channel Frederator) was assisted by a team of no fewer than twenty people; two teams of ten animators working on alternate episodes.
It's worth noting, though, that animating requires a lot of patience and hard work, so we're going to make things as easy as possible for ourselves. We're going to work on a single scene, animate (almost) exclusively on one timeline and keep our movement pleasantly straightforward. Luckily, the Hanna-Barbera Flintstones style used by the Ricky Gervais Show lends itself well to simple animation.
Tutorial Structure
Many of the following steps can be tackled whenever you feel necessary; there's no strict 1, 2, 3 about the processes described. As such, you'll find no step numbers, just chapter headings which bunch steps together in logical groups.
Without further ado, let's get in there and animate!
Preparation: Adobe Flash Version
Which version of Adobe Flash are we going to use? It's entirely up to you. Most of the principles demonstrated in this tut are applicable from Flash CS3 onwards. In fact, we'll use classic tweening (as opposed to the more advanced tweening available in CS4 onwards) as this serves our simple animation style perfectly.
In any case, during this tut you should be perfectly comfortable with most versions of Flash and most common hardware specs.
Preparation: Document Setup
Online video channels usually work with (and display by default) a widescreen aspect ratio of 16:9, so I'll tailor my stage accordingly. In terms of common 16:9 pixel dimensions, 1280 x 720px is one such example, 1920 x 1080px if you're going HD, or 864 x 486px if lower resolution is acceptable.
In our case, embracing the wide screen standard, aiming for desktop monitors around the world and maximizing web experience, we'll go for 1280 x 720px.
We'll also set our movie to 30fps, though the frame rate is entirely up to your preference. Working towards web distribution releases you from the frame rate constraints laid out by NTSC and PAL standards. If you're interested in exporting your animation for television broadcast, take a look at what Tom Green has to say on the subject, there's plenty to take into account.
Preparation: Main Movieclip
As I've mentioned, we're going to be working almost entirely on one timeline. We'll place this timeline within a movieclip called "animation" and this movieclip we'll place on the first frame of Scene 1. Hit Command/Control F8 or go to Insert > New Symbol… to create the "animation" movieclip.
Good. We have an empty animation. I suppose we'd better start filling it.
Sound: Chosen Soundtrack
We can't do a thing without the soundtrack on which our animation is based, so go and grab the Envato Community Podcast.
In my case, I subscribe through iTunes, so I located the downloaded .mp3 file in my iTunes Podcast directory. It's pretty long however (around an hour) so there's no point dragging the whole 60Mb into our Flash animation. We'll have to trim it.
There are loads of ways to trim an .mp3 file, you may already have a preferred application. MP3 Trimmer is a shareware application which will do the job nicely, or you could use QuickTime, Soundbooth, GarageBand, even iTunes can export a trimmed version of your music files.
I used Adobe Media Encoder.
Adobe Media Encoder's track trimming tool.
Drag your file into whichever application you've chosen, then use the tabs to define where your finished sound file will begin and end. In our case, we only need a couple of minutes, so find a suitable position for trimming off the end. We don't need the whole intro sequence either, so trim a chunk off that.
QuickTime 10 allows you to preview what you're doing.
Once our track is in Flash we can edit the fading in and out. So, with your trimming done, output the result!
Sound: Importing Your Sound Files
In Flash, go to File > Import > Import to Library and locate your newly prepared .mp3 file. Alternatively, just drag and drop it into the library. Remember to keep things organized, so stick it in a folder named "sound".
It's now available for use on your timeline, so create a new layer, in a new folder (name them appropriately) and select the first frame. In the Sound section of the Properties Inspector (Window > Properties) for that frame, you can select any of the sound files in the library. I named my soundtrack "podcast_shorter.mp3" and you can see it selected in the screenshot below. You'll also notice the soundtrack visualized on the timeline:
The properties panel will appear differently depending on your version of Flash, but the contents remain essentially the same.
Sound: Syncing Feeling
You'll also notice a "sync" select menu in the Properties panel; make sure Stream is selected. Streaming the sound file allows it to begin playing as soon as it is encountered by the playhead. It loads and plays simultaneously, which is ideal for our needs as we're using a large sound file.
The Event sound type first loads the file into memory before playing it back, which would make accurately synchronizing our animation with the sound nigh on impossible (unless we use a preloader, starting our film once the assets have all loaded). The speed at which the sound is loaded into memory (and therefore the point at which it begins playing) depends entirely on the machine and connection being used. Even whilst working locally, you'll notice the difference.
It might be that you experience poorer sound quality with Stream selected; a tempting reason to opt for the Event sound type. You can fix this easily by altering the Publish Settings. Go to File > Publish Settings… and choose the Flash tab. Where you see Audio Stream hit the Set… button. This brings up the Sound options for the Stream sound type. Make sure Best quality is selected, with a suitably high bit rate and you're good to go.
Select Best sound quality.
Sound: Levels
As we're working with an ActionScript 3.0 file, you'll find the Live Preview (Control > Enable Live Preview) automatically checked. Go on, hit Enter and you'll see the playhead zip along the timeline, playing your sound file as it does so. This is great for instantaneous editing, as you'll find once you begin lip syncing.
For now, what will stick out the most is the abrupt beginning to our sound. If you trimmed it anything like I did, it will be cutting in sharply, while the intro music is playing, just before Drew starts speaking. Let's fix that in Flash by editing the sound envelope, available to us in the Sound Properties panel.
Click on the pencil icon.
Sound: Channels
As the screenshot below demonstrates, you can easily alter the sound levels across the file. You'll see two channels; the left channel (top) and the right channel (bottom). If you're working with stereo, editing the levels of individual channels can be extremely useful.
There are two aspects of the envelope dialog to take into account here:
- Finer trimming: The bar between the two channels indicates where we want our sound to begin. The gray areas illustrate the section of the soundtrack which we've effectively deleted.
- Levels: In our case, you can see the level fading in until it reaches full volume. You can add nodes to the vector to raise and lower the volume wherever you want.
Use the icons underneath to alter the way you view the channels.
The end of our soundtrack is dealt with in a similar way, finish up by fading out.
Labeling: What Labels?
OK, we've laid our soundtrack on the timeline. This soundtrack determines what happens when, so everything we do from now on will be based around it. We're going to begin with lip syncing; once we have the characters talking we can consider movement and other details. The key here, as with many things, is planning. We need to sketch out a plan so we can visualize what's going on.
Create a new folder on the timeline and place within it 4 new layers: music, drew, tash and jordan. These layers are to be used explicitly for labeling; marking out who says what and when, so you can later see how to animate your characters without needing to refer to the soundtrack every time. You can lock them to prevent objects being accidentally pasted into their frames.
Label a keyframe:
Begin by labeling the start of your soundtrack, just for the sake of it…
Labeling: Label Types
As the screenshot below illustrates, frame labels can be of a number of types:
The label type Name is selected here.
We'll ignore the Anchor type; this is used for deeplinking and works much like an anchor tag in HTML. You can read more about the Anchor label type on Adobe Developer Connection.
It would be usual in our situation to use the Comment type. We're really just commenting our timeline to show where we need to place objects and movement, the labels perform no other function. Either select Comment when entering the frame label, or prepend what you type with "//" (much as you would to comment out ActionScript) and away you go.
Continue to label the speech and other events along your timeline. Scrubbing the playhead back and forth will give you a rough idea of levels (the exact frame where someone begins laughing, for example). Otherwise, hitting enter will start and stop the sound, helping you to judge where your labels belong.
Lastly, bear in mind the scale of your timeline, layers and frames. You may find that increasing the height of your sound layer helps visualize what's going on:
Double-click on the layer icon to open its properties. Alter the layer height to whatever suits you best.
Increasing the soundtrack layer height to 300% gives you the following result, much easier to work with:
Use the timeline options to alter the scale and therefore width of the frames.
This may influence the accuracy of your labeling should you change it later on in the process.
Labeling: Taking Labels Further
This step is by no means compulsory, but you might find it useful when you're mapping things out on the timeline.
We're going to incorporate a nifty AS3 snippet pulled together by Michael James Williams especially for this tut. The aim is to trace each label, as the playhead encounters it, in the output window whenever we test our movie. We'll effectively see subtitles scroll past as our characters are talking.
```actionscript
import flash.events.Event;

if (!this.hasEventListener(Event.ENTER_FRAME))
{
	this.addEventListener(Event.ENTER_FRAME, onEnterFrame);
	//onEnterFrame() will not be run in frame where it was declared
	traceFrameLabel();
}

function onEnterFrame(e:Event):void
{
	traceFrameLabel();
}

function traceFrameLabel():void
{
	if (this.currentFrameLabel != null)
	{
		var totalFrameLabel:String = this.currentFrame + ": " + this.currentFrameLabel;
		trace(totalFrameLabel);
	}
}
```
Take this snippet and paste it into the Actions panel (Window > Actions) on a new layer named "actions".
The first line (the import statement) allows us to handle events. The event we're listening for is
ENTER_FRAME and, as the following lines dictate, upon detecting that the movie has entered a new frame, we run the
onEnterFrame() function. This function actually just calls the
traceFrameLabel() function (see comment in the code).
As you can see on line 19, traceFrameLabel() stitches together a string comprising
this.currentFrame and
this.currentFrameLabel then on line 20 outputs it for us, giving us the following effect:
To do this your frame labels will need to be "Name" type, not "Comment". Comment labels aren't exported into the .swf so the information isn't available for us to trace. Name labels are exported in to the .swf (they're more often used for navigation around the timeline) as long as you haven't chosen to place them on guide layers. The only foreseeable issue is that Flash will throw warnings if you duplicate labels; technically they should be unique. You may see a few of these fly past:
WARNING: Duplicate label, Symbol=animation, Layer=jordan, Frame=1677, Label=here
Either avoid duplicating the labels, or just ignore the warnings!
Theory: Preston Blair
Let's leave the Flash IDE for the time being and take a look at lip syncing principles.
Preston Blair was a pioneering animator who worked on (amongst a great many other things) The Flintstones, so that brings us nicely round in a circle :) He defined what's known as the Preston Blair phoneme series; a collection of the basic mouth shapes we humans use to make our vocal noises.
He specified 10 basic mouth shapes:
- A and I
- E
- U
- O
- C, D, G, K, N, R, S, Th, Y, and Z
- F and V
- L
- M, B, and P
- W and Q
- Rest Position
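If it helps to keep track of these ten groups while you work, a simple lookup can map each sound to the mouth symbol it calls for. This is just an organizational sketch — the symbol names (mouthAI, mouthMBP and so on) are hypothetical, not names from the tutorial's library, so rename them to match your own assets:

```actionscript
// Hypothetical mapping of Preston Blair phoneme groups to mouth symbol names.
// None of these names come from the library built in this tut - use your own.
var mouthShapes:Object = {
	"A": "mouthAI",  "I": "mouthAI",
	"E": "mouthE",
	"U": "mouthU",
	"O": "mouthO",
	"C": "mouthCDG", "D": "mouthCDG", "G": "mouthCDG", // ...also K, N, R, S, Th, Y, Z
	"F": "mouthFV",  "V": "mouthFV",
	"L": "mouthL",
	"M": "mouthMBP", "B": "mouthMBP", "P": "mouthMBP",
	"W": "mouthWQ",  "Q": "mouthWQ",
	"rest": "mouthRest"
};

// "Podcast" starts with a P, so:
trace(mouthShapes["P"]); // mouthMBP
```

Whether you consult a chart on paper or a lookup like this, the point is the same: every sound you hear resolves to one of only ten shapes.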
If you draw yourself a collection of mouths which adhere to these ten phonemes, then you're covered. In our case, we're dealing with a fairly simplistic drawing and animation style, so we're actually a little less restricted than you may think. This is down to a concept known in the comic book world as closure and I'll discuss that further in the next step.
Theory: Closure
I actually covered the concept of closure in a tut The Mechanics of Comics which I wrote aeons ago for Vectortuts+. Closure (defined by Scott McCloud in Understanding Comics) describes the process of filling in the gaps, which we as readers do every time we move our eyes from one comic book image to another. We willingly suspend disbelief, absorbing ourselves in the narrative and letting our imaginations tell us what's happening between the static pictures.
To kill a man between panels is to condemn him to a thousand deaths.
Scott McCloud
I'm digressing a bit, but animations (all films, actually) are essentially a series of static comic images, the frame rate determining when we see them instead of our eyes moving across a page. We don't witness something moving, but our brains process the film as though that's what's happening. Even by watching a single static frame every second we could still allow ourselves to experience what we see as moving images. The level of detail in the drawing style and movement influences the overall effect of the animation, but we can certainly cut ourselves some slack and allow for closure taking place. It's likely that we, as animators, will see imperfections in the movement and artwork which viewers let slip by unnoticed.
That said (apologies for the rambling) it's up to you whether your lip syncing comprises nothing more than an open-shut movement, or exquisitely composed tweening; it's a decision based on communication, not necessarily quality.
Design: Character Design
Character design is entirely arbitrary. I've already discussed using the Hanna-Barbera Flintstones style, but that's just my approach because I'm deliberately emulating existing work. I'm not going to dictate how you should tackle things.
You may favor hand drawing your figures, or working directly in Flash. I tend to use Adobe Illustrator, out of habit and because it's the vector drawing application. Plus, it works seamlessly with other members of Adobe's Creative Suite, including Flash.
Here's the Drew I drew. So to speak.
It's straight forward, is filled with simple, flat colors and can be easily separated into components for animation. Let's move on to exactly what we're going to animate…
Design: Moving Parts
Consider the parts you'll want to be animating. At his most simple, our animated Drew will need the following six features as separate objects:
- Face: This will comprise the basic head shape plus ears.
- Hair: A solid block which will move independently from the face when laughing etc.
- Mouth: An obvious one this - it'll be continually moving and changing state.
- Nose: This won't necessarily be animated individually, but it sits above the mouth, sometimes masking the lip slightly.
- Eyes: The eyes will change state and will be blinking - definite separate entities.
- Eyebrows: Very useful when determining expression.
Build yourself a style file with combinations of the features we listed above. Just a few versions of each feature, mashed together, can provide a huge range of expression and emotion.
Design: Other Characters
Jordan and Tash will be made up of essentially the same moving parts, though Tash has the main body of her hair behind her head with an independent fringe (one of the weirdest things I've ever said) in front of everything but her eyebrows.
Swapping the lip color to red is an easy way of saying "this ain't a bloke"
Workflow: Importing the Graphics
This step depends entirely on your preferred applications and workflow. As I've already mentioned, I tend to prepare my graphics in Adobe Illustrator, but it's perfectly plausible that you're working within Flash or another program. If you've been seriously organized you may be able to import your Illustrator or Fireworks file directly into Flash (File > Import to Library..) and have the layers converted to either keyframes or layers within Flash.
The likelihood of you achieving this successfully, given the amount of objects we're talking about, is very slim (if you do manage it, without creating a library which looks like a war-torn village, then I want to know how!)
I rely on a simple copy and paste, which these days is fully supported when working between Adobe applications.
Maintaining your layers will keep things neat within your pasted objects.
Pasting from Fireworks also gives plenty of options.
Note: If you are going to work in an external application (Illustrator or Fireworks) bear in mind that copying and pasting is often only possible between compatible versions. Copying objects from Illustrator CS5 and pasting directly into Flash CS3 won't work; you'll need to work in Illustrator CS3.
Workflow: Uninvited Library Guests
Recent versions of Flash handle copying and pasting seamlessly, but if you're working with CS3 you'll notice Illustrator objects automatically thrown into new folders in the library. "AICB" is an acronym for "Adobe Illustrator Clip Board" (a fact which you may now remove from your brain) and if you're not careful you'll end up with a real mess on your hands.
Even in CS4 and above, pasting from Fireworks gives you a similar result.
Fireworks Objects and Flash AICB folder in CS3
We need to aim for an organized library, so any superfluous folders and movieclips can be broken up (Command/Control + B), renamed, removed, whatever you like. Just keep things tidy!
Workflow: Tidy Library
You'll begin to realize the importance of keeping things tidy once you see how large the library becomes. I recommend you place each of the relevant body parts within folders for each character. Logical placement and naming of the mouths will be particularly helpful once we start swapping objects on the stage for lip-syncing. We'll come back to this.
Assets which relate specifically to Jordan are all in one place.
Workflow: Tidy Timeline
We've organized our library, now let's focus on the timeline.
You've already created the "animation" movieclip and placed soundtrack, labels and action layers on the timeline, now take a minute to think how you're going to organize your other layers. Apply a similar logic to your library structure; folders for characters, scenery etc.
In our case, we have the scenery placed on the lowest layer of all (it's just a .png made in Photoshop) and the foreground (the desk) which has to sit above everything but the characters' arms. All other elements sit between the fore and background scenery objects.
Lock and unlock layers as you need them to avoid mistakes.
Animating: Making a Start
OK, enough messing about, we can't put off the inevitable any longer, let's animate something! The two images below illustrate the first noteworthy change in our characters' behavior. Drew starts talking.
Theme tune is still playing, all is motionless…
…then the speech kicks in.
You'll notice a few things happening here:
- 1: You can see on the sound layer the exact frame where the theme tune stops and Drew begins talking.
- 2: At the same point, I've made a new keyframe on the mouth layer where I've placed a different mouth object.
- 3: Here's the new mouth.
- 4: The face and ears have also moved - a simple but effective way to promote realistic movement. Drew begins enthusiastically, so you can imagine his whole face moving; mouth opens, chin, ears and eyebrows move too.
- 5: And here's the simple tween of the head moving down and returning. This gives the impression that the chin has moved.
Animating: Swapping Symbols
You have your labels to guide you, so you can see where and when you'll need new mouths. Go along the "mouth" layer of each character and place a new keyframe (F6) whenever you think the mouth shape will change. This can be as simple or detailed as you decide (it can be a seriously time-consuming process if you don't have some kind of tool to do it for you).
To swap a mouth, select the mouth instance on stage and right-click it to bring up a context menu. Select "Swap symbol" to open a library browser:
It should now be clear why library organization is so important. The browser dialogue opens with the current mouth symbol selected. As all mouths for each given character are grouped together we needn't look far to find what we're after.
Also worth mentioning is the registration point and position on stage of each symbol. If all mouths are positioned similarly on their respective stages, they'll be positioned properly when swapped for each other on the main timeline.
Animating: Hard Labor
There's little point in me describing what will now take up the greater portion of your animating time. You are faced with the daunting task of lip-syncing; going through the entire length of the animation and matching up mouth movement with speech. There are automated JSFL lip-syncing tools available for purchase, but for the purposes of this tut, we're tackling the challenge hands-on. Do it for all the characters, on their respective layers, as quickly as you can. Use your labels as reference and get it done roughly - you can go back afterwards and tweak the synchronization.
Note: Don't forget you're working with phonetics - the mouth shapes which spell out "podcast" don't look right. "Paadcast" is how U.S. residents say it :)
Tash's mouth and corresponding labels. Notice that all layers are locked, except Tash's mouth layer…
We discussed scrubbing the playhead to get an impression of sound fluctuation and I mentioned using the timeline controls (in the Control menu, or keyboard shortcuts) to watch your movie and test how effective your lip syncing is.
You can also, particularly if you want to make use of the subtitles snippet, output your movie to preview the result. However, if you're checking a section of the animation which is, say, one minute in, you don't want to sit and watch the whole animation up to that point (there are only so many times I can listen to Drew saying "Al-righty and welcome"!)
Add the following line to the actionscript on your first frame:
```actionscript
import flash.events.Event;

this.gotoAndPlay(1550); //enter the frame number which you want to jump to

if (!this.hasEventListener(Event.ENTER_FRAME))
{
	//...the rest of the script stays exactly as before
```
You can either enter a frame number, or one of the labels you've defined. Use it to skip however much of the animation you want. If you no longer want to skip any frames, just comment the whole line out.
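For instance, jumping by label rather than frame number works like this (assuming "drew_intro" is a Name-type label you've actually defined — the label name here is made up, so substitute one of your own):

```actionscript
//gotoAndPlay() also accepts a frame label string in place of a frame number.
//Remember: only Name-type labels are exported to the .swf, not Comments.
this.gotoAndPlay("drew_intro");
```

Labels have the advantage of staying put if you later insert or remove frames, whereas a hard-coded frame number will drift.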
Directing: Complimentary Movement
Once you have the lip syncing sorted out and you're satisfied with the result, go back and add convincing complementary movement. As we discussed when we first began with Drew's opening line, it's not just the mouth that moves when someone's talking. Use the eyebrows, the chin, and if you want to move beyond that, look at arms and body movement.
We're trying to make three people, sitting down chatting, appear visually compelling. Three static bodies will not be particularly interesting! During periods of silence play with the eyes, for example. Engage the viewer and imagine the characters turning to face the "camera". Have them react to what the others are saying; looks of agreement, confusion etc.
Few people talk without using their arms and hands, so animate some additional movement there too.
Drew introducing his guests, highlighted and more visually engaging by adding a pointing finger.
Exaggerated movement and expression suggested by Jordan saying "Drou--ght".
Tash is also looking directly at Jordan as he leans in…
Directing: Subtleties
The eyes are the windows to the soul, or so they say, so don't ignore the potential they give you in conveying emotion and action. The eyes, in their few simple states, can speak volumes with relative ease on your part. All three characters are involved in this podcast, they're all in the discussion, so just because Drew is doing the talking it doesn't mean Jordan and Tash have to remain 100% static. Let them react to what's being said, subtly.
Here's an example. At around the 1 minute mark, Drew takes the lead and stops Tash in her tracks (it's not as bad as I just made it sound!) To emphasize this 'interruption' we'll make it appear that Tash is mildly taken aback, with an extended blink:
Tash, chatting away, everyone attentive.
Drew interjects, eyes turn toward the viewer in support.
Tash, stopped short, closes her mouth and closes her eyes fully…
Subtleties such as these are yours for defining and will, most likely, go unnoticed by the viewer. They do, however, help greatly in improving the overall effect of the three colleagues interacting.
Directing: Artistic License
Of course, what happens while these three people sit chatting is entirely up to us. To break up the footage we can even introduce additional action and use extra sound effects. The sky is the limit - which would suggest I had come up with something more exciting than Tash drinking a cup of coffee…
Anyway, that's the example we're going to use. Firstly, grab a sound effect (do I need to mention AudioJungle as a royalty free audio resource?) such as this drinking sound loop. Then, import it to the library (keeping things organized) and place it on the timeline on a layer of its own.
I chose to add this sound and movement whilst Drew is introducing the podcast, just to break up the three motionless participants.
The accompanying drinking movement is very subtle, but pay attention to a couple of key aspects which help it on its way:
- 1: The mouth shape (obviously) changes.
- 2: Tash closes her eyes as she drinks…
- 3: …and also raises her eyebrows (go on, suck up the last contents of a beaker using a straw, you'll find yourself doing the same).
- 4: The point of rotation on the arm movieclip is at the shoulder.
Animating: Blinking Heck
We're going to use a single line of AS3 to add some animated realism to our characters' eyes by making them blink at random. I covered this a while ago in Quick Basix: Random Animated Blinking, but I'll go over a breakdown for you here.
You should already have a movieclip for the eyes of each character; mine couldn't be any simpler, they're just a couple of dots. Extend the timeline of your eyes movieclips and place a "blinked" state at the end of each.
Eyes in their blinked state at the end of the timeline.
Animating: Blinking Script
Add a final layer to the eyes movieclip, label it "actions" or "a" (or whatever you prefer) and lock it. Select the first frame and enter the following snippet in the actions panel (Window > Actions):
```actionscript
gotoAndPlay(uint(Math.random()*totalFrames)+1);
```
This snippet sends the playhead to a random frame along the "eyes" timeline. On some occasions, therefore, the blink state is reached quickly, sometimes it takes longer. Each of our three characters will now blink at entirely random intervals. For a detailed explanation of how this snippet does what it does check out the Quick Tip.
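As a quick sketch of what each part of that one-liner contributes (the intermediate variable names here are purely illustrative):

```actionscript
//Equivalent to the one-liner above, spelled out step by step:
var random:Number = Math.random();        //0.0 up to (but not including) 1.0
var scaled:Number = random * totalFrames; //0.0 up to (but not including) totalFrames
var frame:uint = uint(scaled) + 1;        //a whole number from 1 to totalFrames
gotoAndPlay(frame);                       //jump there and keep playing
```

Because the script sits on frame 1, every time the eyes movieclip loops back round it rolls the dice again, so no two blink cycles are the same length.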
Theory: Don't Repeat Yourself
Warning: I'm going to be referring to Dru and Drew in this step. It may get confusing… That said, let's talk about repetitive animation.
Never write the same thing twice.
Dru Kepple
This quote was made during Dru's AS3 101: Functions tutorial whilst referring to the DRY principle of programming Don't Repeat Yourself. Admittedly, he's discussing why repeated sets of actions in programming should be wrapped up as a single function then reused, but the idea applies equally to our situation.
There's no point in animating the same movement over and over again. Animate the relevant sequence once, wrap it up as a movieclip, then use an instance of the movieclip whenever you need that movement again.
Animating: Looping Drew
In our case, we have Drew laughing and he does so several times during the podcast (hey, he's a cheerful guy). We'll illustrate that by animating a repetitive giggling movement; shoulders rising and falling, head bobbing in reaction to the shoulders, eyes closed, mouth open etc. By animating a single completion of this movement, we can loop the sequence and have Drew laughing for eternity, should we wish.
Don't stare at this for too long…
Animating: Prepare Your Sequence
There are lots of ways to approach this; feel free to apply your own specific workflow, or just follow along. Prepare the sequence on the main timeline. Select all the frames needed (in our case we'll take each of the Drew layers during this sequence), copy the frames and go to Insert > New Symbol… (Command/Control F8) to make a new movieclip. Now paste all frames and observe how the layers are duplicated perfectly.
Simple animated laughing; each symbol repeats two states.
You could select all the relevant frames on the main timeline and "convert to movieclip" to achieve a similar end, but the layers won't be preserved. You'll end up with all your objects on one layer, though, granted, in the correct display order.
Animating: Finished Sequence
Place your giggling movie on the timeline, emptying all other Drew layers, for however long the giggling lasts. Do this for the other two occasions where Drew is laughing and repetitive movement is necessary.
While he's chuckling, all Drew's assets are removed and replaced by the laughing movieclip.
Animating: Jordan's Head
We're going to do the same for Jordan's turning head. This movement is purely a directorial decision; we don't know he's turned his head, in reality he's not even sitting next to Tash, but this adds a bit more dynamism to the scene. We can have him turn to face Tash on several occasions and by making one movieclip for the movement we make things a lot easier on ourselves.
Onion skinning the timeline can highlight imperfections in motion.
Again, you'll notice a total absence of tweening here, they're not necessary. Each of the symbols within Jordan's head moves very slightly just a couple of times. Eyes closed, slight rolling of the head by tilting down and finish. It looks quite rough here, but you'll be surprised how effective it turns out to be.
Note: You may find it useful to place a stop(); command on the last frame of this sequence - you don't want it looping when on the main timeline.
Animating: Turn Back
We've had Jordan turn his head and it makes for a nice transition between him facing one way, then the next. Now we should make a similar movieclip which deals with his head turning back.
Right-click the movieclip of Jordan turning his head (either in the library or on stage) and select "Duplicate Symbol". Give it a suitable name such as "jordanHeadTurnRight".
Enter the timeline for this new movieclip and select all of the frames on all of the layers. Then go to Modify > Timeline > Reverse Frames. Surprisingly enough, this will reverse all the frames you've selected. You now have a movieclip of Jordan turning back to face Drew.
Execution: Exporting
This animation is headed for internet video broadcast and, as such, I need to output the result in video format. Shift + F12 will publish a .swf for you, or an .exe, or an .app, but what about a .mov? We've covered QuickTime export on Activetuts+ before, but here's a run down of what you'll need to take into account.
Firstly, don't expect Flash to export a file precisely the length of your animation; you'll have to define the length manually. If you leave QuickTime export to its own devices, it will produce a file exactly 1 frame long as that is the length of the Scene 1 timeline. Since CS3 we can include nested movies, effects achieved with actionscript and so on, but as I say, you'll need to basically hit "record" and leave the export process running for a determined time.
Figuring out how long your animation lasts is easy, time it! Alternatively, you can work out the time elapsed by dividing the total amount of frames by the fps (in this case, roughly 3300 frames / 30 fps = 110 seconds, i.e. 1 minute 50 seconds). Once you have that information, and assuming you've finished your animation, go to File > Export > Export Movie. Enter a destination filename, select the format (QuickTime) and hit enter to bring up the Export Settings dialogue.
Notable in the screenshot above are the rendering dimensions; make sure they correspond with your stage if you don't want bits chopped off. If they're incorrect, altering the dimensions is possible via the "QuickTime Settings…" button. Also, as discussed, I've selected to Stop Exporting after a certain time has elapsed, in this case 1 minute 50 seconds (formatted 1:50).
Hitting the QuickTime Settings… button will give you access to a wealth of other options for tweaking sound and vision but default settings are fine for our needs. Go ahead and hit "Export".
It'll take a couple of minutes…
Once complete, Flash will have exported a .mov version of your animation!
Oh Dear: But…
Flash cannot handle exporting for video. Not properly. It just can't. Your finished .mov will be pretty darn good, but you'll notice that the sound gradually loses synchronization with the animation. In an animation of 3 minutes, the sound will likely have run ahead by about 1 second. You may find this a trivial amount; depending on your animation it may be unnoticeable, but where lip-syncing is concerned there's a good chance you will notice.
You can play with settings as much as you like (.mp3 compression for the sound is often the cause of synchronization problems) but nothing will sort this one out.
If you want, check out the final Flash export of the animation () and compare it to the finished article. The credits graphic is supposed to hit the screen upon the very last note of the music. It's late.
Frustratingly, there's very little that can be done without paying for a solution; the Flash Quicktime export is not good enough and you won't be able to upload your .swf animation to any of the standard web video channels. Neither do Adobe provide an alternative method for converting .swf to video; Media Encoder cannot process .swf files, After Effects cannot take a .swf and output a video. It's a strange situation.
The most common options are as follows:
- Play your animation and screengrab it. Be prepared to lose quality, however.
- Export your animation as a .png sequence (a pile of images), place them on the timeline in After Effects (or similar video editing software) and add the sound separately.
- Pay for a 3rd party .swf conversion tool (usually anything up to $100).
Sadly, none of these choices are exciting prospects. I used Sothink's SWF to Video Converter (for Windows), which is one of a million similar applications, illustrating the gaping hole in the market left by Adobe. And if you're hoping to take advantage of one of the free trials on offer, don't be surprised when your Quicktime animation is decorated with a large watermark.
It's an unsatisfactory conclusion to the process, but one which I felt was important to highlight.
Conclusion: That's a Wrap!
You're done! You've planned, set up, executed and published a complete animation, ready for internet video broadcast! I hope following this helped you learn something about the decision making and technical basics of Flash animation. Animation doesn't have to be complex to be engaging and as Ricky Gervais showed, even a simple dialogue can become top animated entertainment. Thanks for reading :)
| https://code.tutsplus.com/tutorials/animating-the-envato-community-podcast--active-8078 | CC-MAIN-2019-43 | refinedweb | 6,516 | 62.38 |
#include <cstdlib>  // Safe guard
#include <iostream>
#include <cmath>    // Safe Guard again.
#include <string>
#include <fstream>  // Open file i think.

using namespace std; // Using because there will be over 150 lines of code. dont want to continuously type std::

int main(void){
    char close;
    string filename;
    ofstream file;
    cout << "File name with .txt: ";
    cin >> filename; // Not going to use other functions (I think thats the word) to write on screen atm. This is my 'skeleton'
    cout << "Press any key then enter to close." << endl; // -.-/> *facepalm*
    cin >> close;
    return 0; // Quit
}
I am not asking for code, just what to use and help. To abide by the rules, please don't provide my WHOLE text editor. Would it be file.open(filename); and, at the end, file.close();?
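For what it's worth, the ofstream pattern being asked about can be sketched like this (the helper name and file contents are made up):

```cpp
#include <cassert>
#include <fstream>
#include <string>

// Hypothetical helper: open the named file, write one line, close it,
// then read the line back to show the full open/write/close cycle.
std::string roundTrip(const std::string& filename, const std::string& text) {
    std::ofstream file;
    file.open(filename);     // same effect as: std::ofstream file(filename);
    file << text << '\n';
    file.close();            // flush and release the handle

    std::ifstream in(filename);
    std::string line;
    std::getline(in, line);  // the ifstream closes itself in its destructor
    return line;
}
```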
Thanks guys. Hope to see some nice answers tomorrow. I have googled C++ DOS text editor and haven't found anything helpful. Please explain code. Also, I'm making a TBAG, (T)ext (B)ased (A)dventure (G)ame, =). Please reply (with helpful code for OT) if you would like to see the source and I will release it; in fact, don't comment, just use the poll
EDIT: Help with the poll
| http://www.dreamincode.net/forums/topic/274123-would-somone-provide-help-with-my-dos-text-editor/ | CC-MAIN-2018-22 | refinedweb | 193 | 85.08 |
NAME

lchown - change the owner and group of a symbolic link
SYNOPSIS

#include <unistd.h>

int lchown(const char *path, uid_t owner, gid_t group);

DESCRIPTION

The lchown() function is equivalent to chown(), except when path refers to a symbolic link: in that case the owner and group of the symbolic link itself are changed, not those of the file the link points to.
RETURN VALUE

Upon successful completion, lchown() returns 0. Otherwise, it returns -1 and sets errno to indicate an error.
ERRORS

The lchown() function will fail if:
- [EACCES]
- Search permission is denied on a component of the path prefix of path.
- [EINVAL]
- The owner or group id is not a value supported by the implementation.
- [ENAMETOOLONG]
- The length of the path argument exceeds {PATH_MAX} or a pathname component is longer than {NAME_MAX}.
- [ELOOP]
- Too many symbolic links were encountered in resolving path.
- [ENAMETOOLONG]
- Pathname resolution of a symbolic link produced an intermediate result whose length exceeds {PATH_MAX}.
EXAMPLES

None.

APPLICATION USAGE

None.

FUTURE DIRECTIONS

None.

SEE ALSO

chown(), symlink(), <unistd.h>.
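A short usage sketch (not part of the specification page; the helper name is made up). Passing (uid_t)-1 or (gid_t)-1 leaves the corresponding ID unchanged:

```c
#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Change the owner/group of the symbolic link itself, not of its target.
   (uid_t)-1 / (gid_t)-1 mean "leave this ID as it is". */
int relabel_link(const char *linkpath, uid_t owner, gid_t group)
{
    if (lchown(linkpath, owner, group) == -1) {
        perror("lchown");
        return -1;
    }
    return 0;
}
```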
The Challenge
With the availability of the SAP HANA platform there has been a paradigm shift in the way business applications are developed at SAP. The rule-of-thumb is simple: Do as much as you can in the database to get the best performance. This is also true for the underlying data models of the business applications.
Until now, data modeling in ABAP typically involved organizing your data in database tables/views and often providing some additional high-level services for the applications using suitable ABAP frameworks. It stands to reason that to enable real-time businesses some of these services should ideally also be brought closer to the data(base).
This presents several challenges. High quality data models should provide a single definition and format for the data. They should be clear and unambiguous, reusable and flexible, even extensible. So how can you capture the semantics of the data model in the database so that the model can be easily reused by different consumers, e.g. by OData clients and by OLAP tools? How can you extend the meta model to service your applications? Impossible, you say? Maybe, if we didn’t have Core Data Services (CDS).
The Solution!

In fact, CDS is (in my opinion) the most ambitious and exciting SAP development in the area of data modeling in recent years. You can finally define and consume your data models in the same way (syntax, behaviour, etc.) regardless of the SAP technology platform (ABAP or HANA). Unbidden, the phrase "One Data Model to rule them all" always comes to mind when I think of CDS.
CDS Support in SAP NW ABAP 7.4 SP5
With SAP NW ABAP 7.4 SP5 the first instalment of CDS support in ABAP has been delivered. This provides you with advanced viewbuilding features which you can use to optimize your data models.
Prerequisite is the ABAP Development Tools for Eclipse (ADT) 2.19 since the new CDS tooling is only available in ABAP in Eclipse.
Let’s take a look at some of these new features ABAP in Eclipse.
Create a new DDL source file
You can create a new DDL source in ABAP in Eclipse via File > New > Other … > ABAP > DDL Source
View definition
Your new DDL source is opened in a text editor in ABAP in Eclipse. Initially the source is empty. Using the DEFINE VIEW statement you can define your CDS view entity.
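A simple sketch of such a definition (the field names come from the EPM demo table SNWD_SO and are illustrative):

```
@AbapCatalog.sqlViewName: 'ZSALESORDER'
define view SalesOrder as select from snwd_so {
  key snwd_so.node_key,
  snwd_so.so_id,
  snwd_so.gross_amount,
  snwd_so.currency_code
}
```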
View entities are defined as selects from one or more datasources. Datasources can be other view entities, database tables or classical DDIC views (SE11 views). The select list is defined in curly brackets after the from clause (great for code completion!). The elements in the select list are separated by a comma. The code snippet above defines a simple view entity called SalesOrder on the database table SNWD_SO. SNWD_SO contains the sales order data.
Currently you can only define one CDS entity per DDL source.
Joining data sources
You can combine records from two or more data sources using join clauses. You can also specify aliases for the datasources.
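For instance, a sketch joining the sales orders with their business partners (the join condition via buyer_guid follows the EPM demo model and is an assumption):

```
@AbapCatalog.sqlViewName: 'ZSOPARTNER'
define view SalesOrderPartner as
  select from snwd_so as so
    inner join snwd_bpa as bpa
      on so.buyer_guid = bpa.node_key
{
  so.so_id,
  bpa.company_name,
  so.gross_amount,
  so.currency_code
}
```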
In addition to INNER JOIN you can also model a LEFT OUTER JOIN, RIGHT OUTER JOIN, UNION and/or UNION ALL.
The comparison operators BETWEEN, =, <>, <, >, <=, >=, NOT and LIKE can be used in the on and where clauses. In addition IS NULL and IS NOT NULL are also valid where-conditions.
Aggregations and SQL functions
CDS also provides support for aggregations (SUM, MIN, MAX, AVG, COUNT), SQL functions (LPAD, SUBSTRING, MOD, CEIL, CAST) and CASE statements in view entities.
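As a sketch combining these elements (the role codes '01'/'02', the join condition and the EUR filter are assumptions):

```
@AbapCatalog.sqlViewName: 'ZOPENAMOUNTS'
define view OpenAmountsPerRole as
  select from snwd_so as so
    inner join snwd_bpa as bpa
      on so.buyer_guid = bpa.node_key
{
  bpa.company_name,
  case bpa.bp_role
    when '01' then 'Customer'
    when '02' then 'Supplier'
    else 'Other'
  end as partner_role,
  sum(so.gross_amount) as outstanding_amount
}
where so.currency_code = 'EUR'
group by bpa.company_name, bpa.bp_role
having sum(so.gross_amount) > 100000
```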
In the above example the view selects the business partners with outstanding sales orders which together (SUM) amount to more than EUR 100.000 (HAVING SUM). The outstanding amounts are reported per business partner role (GROUP BY). The codes for the business partner roles are translated to readable text in the database (CASE).
Semantically rich data models
Annotations can be used to add metadata to CDS entities. Annotations specify the properties and semantics of the entity and its behaviour when it is accessed at runtime. This metadata can also be accessed by consumption tools using special APIs. In future, consumers will be able to extend existing view definitions and add their own annotations without having to re-model the data (“One Data Model to rule them all“). Annotations always begin with the @ character.
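As a sketch (the annotation names follow the 7.4 SP5 documentation; the exact values are illustrative):

```
@AbapCatalog.sqlViewName: 'ZSOBUFFERED'
@AbapCatalog.buffering.status: #ACTIVE
@AbapCatalog.buffering.type: #SINGLE
define view SalesOrderBuffered as select from snwd_so {
  key snwd_so.node_key,
  snwd_so.so_id,
  @Semantics.currencyCode: true
  snwd_so.currency_code,
  @Semantics.amount.currencyCode: 'currency_code'
  snwd_so.gross_amount
}
```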
Above the SAP buffering behaviour is specified using the @AbapCatalog.buffering annotation. Here single record buffering is enabled (prerequisite is that the underlying datasource allows buffering). In addition the element currency_code is defined as a currency key. The element gross_amount is defined as a currency field and the currency key currency_code is assigned to the field.
In every view definition the compulsory annotation @AbapCatalog.sqlViewName must be specified. This annotation specifies the name of the corresponding view in the ABAP Dictionary. CDS entities are integrated into the ABAP Dictionary and ABAP language using the same infrastructure which exists for classical Dictionary views. The CDS entity name (here SalesOrder) can be thought of as an alias for the Dictionary View. The metadata which is specified for the CDS entity, however, can only be accessed via the entity name.
Further information about the supported predefined annotations can be found in our CDS keyword documentation.
Extensibility
The ABAP Dictionary enhancement concept is also supported in ABAP CDS entities. By using $EXTENSION.* in the select list, all fields that are added as enhancements to the underlying database table or classical DDIC view are automatically added to the CDS entity.
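A sketch of this pattern:

```
@AbapCatalog.sqlViewName: 'ZSOEXTENDED'
define view SalesOrderExtended as select from snwd_so {
  key snwd_so.node_key,
  snwd_so.so_id,
  // any append fields of SNWD_SO appear here automatically
  $EXTENSION.*
}
```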
Here any fields which are added to the database table SNWD_SO via enhancements, will automatically be added to the view entity. Currently it is only possible to use $EXTENSION.* in view definitions with exactly one datasource (no joins, unions, etc.).
In future, CDS will also provide additional features for extending existing CDS entities themselves (not via the underlying datasource). These features will be available on both the ABAP and HANA platforms.
Consuming your CDS entities in ABAP
Once you have saved and activated your DDL source, the DDIC artifacts are created. Consuming your CDS entity is simple: CDS entities can be used in OPEN SQL! You should always use the entity name in your OPEN SQL statements.
SELECT * FROM SalesOrder INTO TABLE @itab. “Use entity name
Note that when selecting from CDS entities in OPEN SQL, the new OPEN SQL syntax must be used. This means that the ABAP variables (in this case itab) must be escaped using the @ character. The @ character in OPEN SQL has nothing to do with CDS annotations. For more information about the new OPEN SQL syntax, see the ABAP keyword documentation.
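Putting it together, a minimal (hypothetical) example in the new Open SQL syntax; note the comma-separated field list and the @-escaped host variable:

```abap
DATA itab TYPE STANDARD TABLE OF SalesOrder.

SELECT so_id, gross_amount, currency_code
  FROM SalesOrder
  WHERE currency_code = 'EUR'
  INTO CORRESPONDING FIELDS OF TABLE @itab.
```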
Lifecycle and database support
The best part about DDL sources and CDS entities is that they are managed by ABAP. This means that the entire lifecyle of the CDS entities are controlled by the ABAP Change and Transport System (CTS).
In addition, the SP5 CDS features are “open”. This means that your CDS view definitions can be deployed on any database which is supported by SAP.
Summary
Well, that was an attempt to give you a condensed overview of Core Data Services (CDS) in SAP NW ABAP 7.4 SP5. Not easy when you are introducing the next game changer in SAP application development.
To sum it up: CDS provides enhanced view building capabilities to enable you to easily define semantically rich and re-useable data models in the database. The new view building features include new join types, new clauses as well as support for aggregate functions and SQL functions. All these features are “open”. The features are provided in a new text editor for DDL sources in ADT 2.19.
But the journey doesn't end here. There will continue to be future CDS feature deliveries in ABAP and even in 7.4 SP5 there is still a lot to tell. We haven't even touched on associations or the cast functionality, not to mention the cool editor features. But that will have to wait for another time (otherwise nobody will read this lengthy blog).
If you can’t wait you can get more information by watching Carine’s video tutorial (“Building Core Data Services Views in ABAP on SAP HANA“). Or you can check out our CDS keyword documentation. For more information about the CDS support in the HANA Platform, see the SAP HANA Developer Guide.
Hi Christiaan – What is the difference between creating the CDS view entity with annotation and without? This part is not clear for me..Also after defining the CDS view whether the same can be viewed through SE11?
Thanks
Hakim
Hi Hakim,
In the first delivery for CDS support in ABAP, we only support “core” annotations. This means that consumers cannot yet define their own annotations and further enrich the existing data models themselves.
Many of these core annotations have default values. If you do not, for example, specify the buffering behaviour for your view, then the default is “no buffering”. So if you don’t use the annotations, in many cases you get the default values. Some core annotations, however, don’t have default values (e.g. “currency code” annotations). If you don’t use these then the information is simply missing. You can still activate and use your view but higher-level tools, e.g. UIs, will not have any information about which field contains the currency code for an amount.
As mentioned above, in the context of ABAP, there is one compulsory annotation: @AbapCatalog.sqlViewName. You cannot activate your view if this annotation is not specified.
W.r.t. your second question, the view entities cannot be viewed in the SE11. The corresponding DDIC view (specified in the sqlViewName annotation) can be viewed in the SE11, but the view is read-only and very often the information displayed is inconsistent. This is because CDS supports many features which cannot be displayed in the SE11, e.g. UNIONs. The best place to browse the views is in the new DDL editor.
Regards
Chris
Hi Chris – Thanks for the clarification.
-Hakim
We just got this interesting CDS and some questions occur.
@AbapCatalog.sqlViewName: ‘sql_view_name’
@EndUserText.label: ‘Mijn eerste probeersel met CDS’
define view Zs_Cds_Number_One as select from hrp1000 {
key hrp1000.otype,
hrp1000.objid
}
Is there a way to ‘synchronise‘ both names, to make sure they are equal? Is a Code Inspector check possible on these views e.g.
Regards.
Kris
Hi Kris,
this is not possible. The sqlViewName and the entity name (here ‘Zs_Cds_Number_One’) share the same namespace. This means that you have to specify different names otherwise you will get syntax errors.
The sqlViewName also has the same restrictions as the view names in the SE11. The sqlViewName can be max. 16 characters long. The entity name, however, can be 30 characters long. This extension allows you to implement better names for your CDS entities. Names which better convey the semantics. Keeping the names the same (even if it was possible) would mean that you would have to apply the same restrictions to the entity name. You’d be “wasting” the additional characters.
The entity name can, however, have the same name as the DDL source name. We don’t check this since the CDS specification allows for multiple entities to be implemented in the DDL source. This has not yet been implemented in ABAP, but when/if it is, then you probably don’t want the DDL name to be the same as one entity in your source. You would probably prefer to choose a source name which describes the semantics of all the included entities.
Cheers
Chris
Chris, Nice blog!
Great example !
I hope I’m not the only one tripped by omission of @ for Host variables when trying out these examples.
select * from salesorder into table lt_salesorder. <- Error
select * from salesorder into table @lt_salesorder. <- This works fine
Hi Vikas,
Thanks, you are right. The host variable lt_salesorder in the OPEN SQL statement must be escaped using @. The usage of this escape character is not related to the @ in CDS annotations. This has to do with the new OPEN SQL syntax in SAP NetWeaver 7.4 SP5. In order to select from CDS entities in OPEN SQL, the new syntax must be used. This also includes separating the fields in the select list using a comma.
See Horst’s blog (OPEN SQL, New Syntax) and the ABAP keyword documentation for more details.
I have updated the blog with this information.
Regards
Chris
Thanks for the update Chris.
Hi Chris,
I was initially confused by the acronym CDS – Core Data Services – and it’s similarity to CDS – Content Delivery Services, which allows for extensive data modelling in HANA. As far as I remember, the Content Delivery Services are also stored as “DDL” sources – or at least something similar (file suffix .hddbd). SAP is not making it easy for us 🙂
Regards,
Trond
From the HANA Native Development side we don’t have a development object with the file suffix hddbd. We do have hdbdd, but that is the same CDS – Core Data Services. Its the same syntax used in ABAP DDL sources. We use CDS in HANA Native to define tables, structures, associations, and views. It is also the core language within the data definition of the River syntax.
Myself and several others in the HANA development group have never head of Content Delivery Services nor had we heard of hddbd. There is not artifact with that extension documented in the HANA Development guide nor does it open any specific editor in HANA Studio. Are you sure that perhaps you weren’t thinking of hdbdd?
Hi Thomas,
you’re perfectly right. Got the acronyms mixed up (guess I was subconsciously thinking of CDN as in internet content delivery services… another topic altogether).
Interesting that you make the link with RDL – I’ve been wondering about the similarities between CDS and RDL and how these fit together.
Regards,
Trond
Hi Christiaan,
if I’m using the substring element on MaxDB, I’m getting an error while generating the DDL:
Pressing “generate” a second time, I’m getting this:
If I’m commenting out this line, all is ok.
Any idea?
Uwe
Hi Uwe,
can you have a look at the activation log? Click in the DDLS editor window and find the activation log in Navigate > Open Activation Log.
Cheers,
Jasmin
No errors found in the log.
The log ends with:
Hi Uwe,
Does this error still occur? If so, I’d suggest you open a customer message for the component BC-DWB-DIC.
Kind regards
Chris
Hi Chris,
Done. If you need the number, just ping me.
Hi Christiaan,
we, or better Burkhard Diesing , found the reason for the error: the first number isn’t the offset (like in HANA?) but the starting position -> with substring( bpa.company_name, 1, 10 ) it works.
See ABAP documentation:
But in my opinion:
Cheers
Uwe
Hello Uwe,
the behavior is the same, but HANA doesn't throw an error and sets the start position to 1 if 0 is given.
But you are right the syntax check should be fail or MaxDB accept the 0 too.
I will discuss this internally what needs to change to follow expectations.
Burkhard
Hi Uwe and Burkhard,
Since SP8, the syntax check now fails if the offset isn’t a positive integer value.
I have updated the screenshot in the blog to now use substring(bpa.company_name, 1, 10).
Cheers
Chris
Very interesting.
Just wonder:
How does it live together with SAP River?
Hi Shai,
CDS is the “core” of SAP River. CDS enables the data modeling part of SAP River as well as the role based access control. In addition SAP River provides its own programming language to implement the business logic (actions) on top of the data model. See Introducing SAP River for more information.
To sum it up: You can use “pure” CDS to define your data models (as described in this blog) or you can use CDS in the context of SAP River.
Regards
Chris
Hi Experts,
Can a CDS view be used in defining another CDS view (similar to how a table is used in defining a CDS view)?
Thanks & Regards,
Saurabh
Hi Saurabh,
Yes! Views on views are possible in CDS. In fact, this design pattern is heavily used to enable many of our SAP products which use CDS. A good example for this use-case is when you have to define a specialized “consumption view” for providing data to the UI. These consumption views are very often based on other CDS views.
Regards
Chris
Thanks a lot for the quick response Chris!
Regards,
Saurabh
Hi Chris,
Some more questions:-
1) Are selects from multiple tables allowed for CDS view definition, for example something like SELECT table1.column1, table2.column2 FROM table1, table2 WHERE table1.column1 = table2.column1;
2) Are nested joins allowed for CDS view definition; for example something like
select from ((A join B on..) as C join (D join E on..) as F on..)
As of now I am getting a syntax error for both 1 and 2.
Thanks & Regards,
Saurabh
Hi Saurabh,
In both cases this is possible.
1) You can model this in CDS using simple INNER JOINs. The condition in the WHERE clause can used for the join condition (ON clause). See the F1-Help for details.
2) You have to be careful how you use the brackets when doing nested joins. In your example “as C” must be contained within the brackets, e.g.: (A as X inner join B as Y on X.f = Y.f ) inner join ( D as Z join E on Z.f = E.f ) on X.f = Z.f
Kind regards
Chris
Hi Chris,
I have to do multiple nested inner joins so I need to have aliases for the intermediate joins because some of the field names are same across tables used in the joins and also in the resultant intermediate joins. So my question is that in your example can I do something like this:-
((A as X inner join B as Y on X.f = Y.f ) as AB) inner join (( D as Z join E on Z.f = E.f) as DE )
on AB.f = DE.f
Thanks for all the help you have already provided and in advance for thsi one 🙂
Regards,
Saurabh
Hi Chris & Other Experts,
Can someone please help me with my last query (see above)?
Thanks & Regards,
Saurabh
Hi Saurabh,
you cannot specify alias names for joins, neither in CDS nor in any SQL dialect I know.
But since you can use the alias names of tables in the on-condition, this is not necessary:
(A as X inner join B as Y on X.f = Y.f ) inner join (D as Z join E on Z.f = E.f )
on X.f = Z.f
regards
Andreas
Hi Andreas,
Thanks for the reply. I used aliases for joins in HANA SQL script. Basically I am trying to port a HANA SQL script based AMDP to a CDS view.
Here is the working code that I used:-
METHODS hana_ld
IMPORTING
VALUE(mandt) TYPE sy-mandt
VALUE(it_geo_fr) TYPE t_loc
VALUE(it_geo_to) TYPE t_loc
VALUE(it_mtr) TYPE t_mtr
EXPORTING
VALUE(et_l2l) TYPE t_lane…
CLASS zsc_hana_ld IMPLEMENTATION.
METHOD hana_ld BY DATABASE PROCEDURE
FOR HDB LANGUAGE SQLSCRIPT
OPTIONS READ-ONLY
USING /scmb/toentity /sapapo/trm /scmtms/d_shzon.
et_l2l = select '1' as "REQUEST_ID", e."GEO_FR", f."GEO_TO", e."TTYPE" as "MTR", e."TRMID_FR" as "TRMID", 0 as "DIST_DET_RELEVANT"
from
(select a."SCUGUID" as "GEO_FR", b."TRMID" as "TRMID_FR", b."TTYPE"
from "/SCMB/TOENTITY" as a
join
"/SAPAPO/TRM" as b
on a."SCUGUID22" = b."LOCFR"
where a."SCUGUID" in (select "SCUGUID" from :it_geo_fr)
and a."MANDT" = :mandt
and b."MANDT" = :mandt
and b."TTYPE" in (select "MTR" as "TTYPE" from :it_mtr) ) as e
join
(select c."SCUGUID" as "GEO_TO", d."TRMID" as "TRMID_TO"
from "/SCMB/TOENTITY" as c
join
"/SAPAPO/TRM" as d
on c."SCUGUID22" = d."LOCTO"
where c."SCUGUID" in (select "SCUGUID" from :it_geo_to)
and c."MANDT" = :mandt
and d."MANDT" = :mandt
and d."TTYPE" in (select "MTR" as "TTYPE" from :it_mtr)) as f
on e."TRMID_FR" = f."TRMID_TO";
The aliasing of joins is needed because the output of such a join is needed in other joins and the field names are repeated across tables and joins within this join and outside of it.
Any further pointers would be highly appreciated.
Thanks & Regards,
Saurabh
Hi Saurabh,
In your example, you are using sub selects in the from-clause. This is something we do not support in CDS, yet. In CDS you probably have to implement this using a view on view construct.
By the way, in your example you do not specify an alias name for a join but for a whole sub select inside the from clause. In this case the aliasing does make sense.
regards
Andreas
Hi Andreas,
Thanks a lot for the quick reply. So with the current CDS capabilities I will have to use the view on view construct. I will try this out.
Also sorry for using the wrong terminology (joins vs. sub select)!!
Regards,
Saurabh
hi Andreas,
Will the nested select be supported in future? besides, does abap cds support while loop?
thanks,
Anni
Hi Anni,
subselects are certainly on the backlog for the CDS Scrum team, unfortunately we have a long backlog.
Regarding Loops, since CDS like SQL is a purely declarative language I do not think we will have any procedural elements like loops in the near future.
If you need a loop, use SQL Script.
regards
Andreas
Hello!
I just need to be sure about this – you just said in no uncertain terms that loops are possible in SQL Script.
Can you just confirm that once again? I had it in my head that this was not the case, maybe it was not once, but maybe now this has been added.
So just to be clear – when you say “if you need a loop, use SQL Script” that means SQL script supports loops?
An example of the syntax would be wonderful, I bet it is not like ABAP, probably more like a so called “cursor” moving through a group of database records.
Cheersy Cheers
Paul
Hi Paul,
yes I can confirm loops in SQLScript, just look at the SQLScript reference here in the SCN. In the subsection Imperative SQLScript Logic-> Orchestration-Logic it says: Orchestration logic is used to implement data flow and control flow logic using imperative language constructs such as loops and conditionals.
regards
Andreas
I note the comparison CDS – RDL and the comment that CDS is a core part of River. This also seemed evident from the launch of the two products. In spite of this, there are some annoying differences in syntax between the two languages (CDS and RDL), such as the following:
– in RDL, fields within entities have to be defined using the key word “element”; this is not needed (nor possible) in CDS
– after the ending curly bracket for defining an entity in CDS, you need a semi-colon; this is not the case in RDL
– no imperative logic in CDS
Copy/pasting an RDL model into a CDS file (or vice versa) is, in other words, not possible. It looks almost like SAP created CDS and RDL in parallel, or maybe defined CDS based on a previous (and non-released) version of RDL? At the very best, these two "almost-identical" syntaxes are confusing, and give the impression of a certain lack of coordination between various departments or development groups. Even worse, I imagine it will be more and more difficult to iron these bumps out as RDL and CDS take hold.
That is, if SAP doesn’t decide to scrap one in favour of the other – if so, which one?
Regards,
Trond
PS: come to think of it, RDL is not released yet. Would it be possible to request someone at SAP to look into “syncing” the syntax properly?
Hi Trond,
in short and simplified: RDL = CDS + actions.
CDS is intended for defining data models, RDL adds imperative logic.
When restricted to pure model definition, CDS and RDL syntax should indeed be (almost) identical.
You are right with the assumption that different teams are working on CDS and RDL in parallel, but both are based on the same specification. Nevertheless it happens that there are (hopefully minor) misalignments concerning the syntax. They should be gone in the end.
Best regards,
Steffen
How to consume native Hana Models like calc view in CDS ?
Hi Joseph,
HANA calc views without parameters can be accessed in the ABAP / DDIC by creating an external View in the ADT (ABAP Development Tool). This view than can consumed in CDS View just like any other view or table declared in the DDIC.
Best regards
Andreas
Hello Christian,
is it possible to use build in varibles in the where condition of an CDS view ? In Detail i want to use the actual Date (sy-datum) as an condition Parameter. If i could use this Parameter directly, without an inputparameter for the view, i could use the view directly in a SADL Query for displaying the results in a FPM Application.
best regards
Hi Martin,
CDS does not support SY parameters. However, you can use view parameters as you proposed and set the value (e.g. sy-langu) via a dedicated interface.
For Gateway / OData Services via the query options API:
4.10
FPM: Will check with FPM colleagues.
Best regards
Marcel
Hello Martin,
view parameters can only be set by sub-classing the standard feeder class. In a subclass of CL_FPM_SADL_SEARCH_RESULT you can redefine method PREPARE_PBO with code like this:
*******************************************************************************************
*—– inherit
super->prepare_pbo( io_event ).
*—– test setting view parameters
CHECK io_event->mv_event_id = ….<some condition if not desired each time>
set_view_parameters( VALUE #( ( name = ‘HUGO’ value = ‘HALLO’ ) ) ).
*******************************************************************************************
An example is given in class CL_FPM_TEST_SADL_SRES_CUSTOM which is used in Test App FPM_TEST_SADL_SBOOK_CUSTOM (here only on DDIC, nit CDS, but the code is the same).
Best regards
Jens
Chris,
Can you define ABAP DDIC tables using CDS?
Amiya
Hi Amiya,
this is currently not possible for ABAP developers but it is in the pipeline. Since most of SAP’s software solutions already have/had their persistencies, the main focus has been on view building to enable code pushdown. Although there are plans to support defining database tables using CDS (with a similar integration in DDIC), I cannot give you a timeline.
The CDS implementation on SAP HANA, however, already offers this possibility. If you are doing HANA native development, you SHOULD already be using CDS to create your database tables (entities) instead of HDBTable. But any database tables defined directly in the HDB are not integrated into ABAP DDIC and the ABAP language.
Cheers
Chris
Thanks! Chris. Also, I wanted to confirm if data manipulation is not possible using these views.
These views are read-only. The CDS specification allows for updateable views, but they are not yet implemented. Currently there are investigations on how to used SADL and BOPF to enable “write” scenarios, including transactional handling (disclaimer here!), but again this is something for the furture.
Thanks for clarification, Chris.
Hi Chris,
You mention that “Annotations can be used to add metadata to CDS entities. Annotations specify the properties and semantics of the entity and its behaviour when it is accessed at runtime. This metadata can also be accessed by consumption tools using special APIs.”
What are these APIs?
Thanks.
I should clarify. Do you know if there is a way to access the annotation metadata outside of ABAP (i.e. directly through some HANA interface like XSJS)?
Thanks.
See my answer here:
Hi Eric,
here again we have to differentiate between the CDS implementation in ABAP (with DDIC support and running in the ABAP Application Server) and the implementation in HDB (with its own repository and independent of ABAP/DDIC).
You are referring to the possibility to define your own annotations in HANA CDS to annotate your entities which you defined in HANA (.hdbdd files). This is possible since HANA SPS9. Currently these annotations (the metadata) can only be accessed by SAP consumption tools. In other words, to date only internal APIs exist. There are plans to deliver public APIs with SPS10 (disclaimer here!). Then “external” consumers (in the context of HANA development) will also be able to access this information, e.g. via XSJS. It is important to note that these entities and their metadata are managed by the HANA DBMS, independent of the ABAP DDIC and ABAP runtime. This information is not found in the database tables mentioned by Uwe (since DDIC is not involved here).
In ABAP the situation is somewhat different. In ABAP CDS entities are defined and managed in/by the ABAP Repository, independent of the HANA catalog. In fact, ABAP CDS is “open”. You can use these CDS features regardless of the underlying database. Currently you cannot define your own annotations in ABAP CDS (like in HANA CDS), but you can simply write the annotation in the DDL source. During activation these annotations are not validated (like in HANA) but simply stored in the ABAP Repository as metadata (in the database tables mentioned by Uwe). Here again there are currently also no official ABAP APIs to access this data. In the future official APIs will be delivered.
To sum it up. Depending on where you define your CDS entities (ABAP or HANA), the metadata is stored in different repositories. Currently there are no official APIs for these repositories (neither in ABAP nor HANA). But they are on their way …
Regards
Chris
Chris,
Could you tell me where I can find a features-by-releases matrix for ABAP and HANA CDS? I searched on the internet but couldn’t find one.
I am looking for something of the below sort:
Amiya
Hi Amiya,
take a look here: .
I tried to give you a quick overview of the features delivered with SAP NW 7.4 SP5 and SP8. Of course there is always the ABAP Keyword Documentation which you can refer to.
Regards
Chris
Coming bcak to the @semantics again, in your example you link the currency field to the amont field, just like in ABAP DDIC tables where amount fields must point to a curency field like WAERS, and quantity fields must point to a unit of measure field like MEINS.
Since this is enforced in the ABAP DDIC the underlyin tables already have this information inside SAP – therefore is this semantic information intended for non-SAP based consumers e.g. EXCEL or a third party software system?
Is this like in the SEGW gateway transaction where I can add semantic information to say a field is a geo-co-ordinate so when something like Google Maps can access the data model it knows for sure what the co-ordinate fields are?
I may have this all wrong, but to keep ploughing ahead does this extra semantic information have any purpose within the ABAP environment?
Cheersy Cheers
Paul
Hi experts,
I created a simple CDS view with one input parameter and use it for 2 places in the “where” condition, But I am getting the errors as shown :
can anyone please help here?
Thanks
so r u doing bw-hana platform,….tell me
Yes. I checked and it is not supported as of now. Thanks.
Hi,
I tried using “having ” in one of ABAP CDS views but it is not working.
Here, I need to retrieve top 5 employee with max total revenue, how to do that?
Can anyone help here please ?
************************************
define view Zsoemp as select from snwd_so as so1
association [1] to snwd_employees as emp
on so1.created_by = emp.node_key
{
key emp.employee_id as emp_id,
key so1.so_id as so_id,
emp.node_key,
emp.first_name,
@Semantics.currencyCode: true
emp.currency,
@DefaultAggregation: #SUM
sum(so1.gross_amount) as Total_Revenue,
@DefaultAggregation: #SUM
sum(so1.net_amount) as Total_Net,
@DefaultAggregation: #SUM
sum(so1.tax_amount) as Total_Tax
}group by emp.employee_id,so1.so_id,emp.node_key,emp.first_name,emp.currency
having count(distinct emp.employee_id) = 5
**************************************
thanks
i can help you, if you tell me exactly on which platform you are working….and which tool you are using i mean version …
I am working on BW on HANA . SAP_BW – Release 750
and using Eclipse IDE for Java Developers
Version: Kepler Service Release 2
Build id: 20140224-0627
Thanks
that is amezing ..u got opportunity with latest BW awesom,
so now exactly state ur error …
There is such no error, I need to know how to select the top 5 (which has the highest number of sales).
We can’t put “select top 5 …” in ABAP CDS views, so what is the alternative one?
Regards,
Anita
Hi Anita,
currently the ABAP CDS does not support an ORDER BY clause yet. Since a “select top 5 …” only make sense with a defined order, this feature is postponed as well.
As soon as the ORDER BY is available, the LIMITS-clause will probably be supported as well.
What is the alternative ? For the ordering you have to rely on the query language on top of CDS, say, ABAP Open SQL.
best regards
Andreas
Thanks Andreas…
SQLScript…will solve the problem…
Hi Christiaan,
Great blog! Thanks for keeping it updated. I am currently creating CDS views for client handling (MANDT). You have mentioned in the blog : “Currently you can only define one CDS entity per DDL source.”
Does this still hold true ? I tried to create multiple views in one DDL source but failed. Wondering if some new syntax has come in to achieve this?
Hi Shailesh,
this still holds true for ABAP CDS and probably won’t change in the near future.
Kind regards
Chris
you are true chris
Hi Christiaan,
Thanks for this fabulous survey of cds-view .
I found that there were many annotations can be used in DDL building , but i wonder if there is a list of all annotations ? Cause currently I don`t have any idea about the relationship between annotations and the view definition(define view as select……).
Regards,
Zaza
Already found in ABAP Keyword Documentation 😉
Hi Zaza,
thanks for the update. I was just about to suggest the same ;-).
Kind regards
Chris
Hi All,
I have built an CDS view with 3 tables – T1, T2, T3. T1 and T2 have 1-1 mapping while T3 has multiple entries. I tried all options to get last(one) row from T3 but couldn’t able to achieve it. Getting multiple rows from the query.
Is there any possible way to exclude duplicates from table 3?
Appreciate if you help me on this.
Hi Christiaan,
Is this the future direction to build all views in ABAP itself and not use Hana Calculation views? Can you pl. provide some more insigts if SAP continue to suggest do calc views and cds views?
Hi all ,
Anybody know how to hide this navback button thought annotations ?
I cannot find any annotation working on it .
Thanks&BestRegards
Hi,
The link to Carine’s video () is returning “Access to this place or content is restricted…”. Any suggestions?
Hi Bryan,
please try again. The document should now be accessible.
Kind regards
Link works great now. Thanks! | https://blogs.sap.com/2014/02/04/new-data-modeling-features-in-abap-for-hana/ | CC-MAIN-2017-47 | refinedweb | 5,876 | 65.52 |
NOTE: As detailed in this blog, this functionality to provision Kyma and connect systems will be removed from the SAP C/4HANA cockpit and be replaced by the SAP Cloud Platform Extension Factory, Kyma runtime. For more information about the SAP Cloud Platform Extension Factory, Kyma runtime see “Get a fully managed runtime based on Kyma and Kubernetes” and “How to get started”. To register new system see the help.
One of the key components of the SAP Cloud Platform Extension Factory Kyma runtime is the Application Connector. The Application Connector provides a mechanism to simplify the connection between external systems and Kyma in a secure manner. Once the initial connection has been established, the registration of the external Events and APIs of the external system takes place. The Events and APIs are then available within the Kyma Service Catalog. The events can be consumed asynchronously with services and lambdas (serverless functions) deployed within Kyma. Additionally, the Application Connector provides monitoring and tracing capabilities to facilitate operational aspects.
In this blog, we will explore the steps to connect SAP Commerce Cloud to a Kyma runtime using the Application Connector. If you haven’t already configured your Kyma runtime, please refer to this blog.
First, open the SAP C/4HANA Cockpit and navigate to the Extensibility menu to display the Runtime. Next, select the Display Name of the desired Runtime.
This will bring you to the Runtime Details where you can initiate the system registration by choosing the Add button found i,n the Registered Systems list
Provide a Name for the System and choose Register to save the entry.
Choose the Copy key button, which will place the URL needed to connect the systems to your system’s clipboard.
With the key copied you can now proceed to the SAP Commerce Cloud system to complete the system connection. Open the backoffice application of the SAP Commerce Cloud system.
Connecting SAP Commerce Cloud – 1811
In the Filter Tree entries box, use the text “consumed” to filter the results and the choose the Consumed Certificate Credential option. Choose the kyma-cert credential Id and then choose the Receive Certificate Action button.
Enter the copied url into the Retrieve Certificate dialog text box and choose Retrieve Certificate
NOTE: If any webservices are not successfully registered, check that the defined urls in the Exposed Destination and Endpoint menu options are correct for each of the named services.
Connecting SAP Commerce Cloud – 1905
SAP Commerce Cloud – 1905 added a template based approach to allow multiple system registrations for added flexibility. In the Filter Tree entries box, use the text “api” to filter the results and the choose theoption. Select the Default_Template option in the table and the choose the Register Target Destination Action option.
Enter the copied url into the Token URL field and provide a value for the New Destination’s Id field. choose Register Destination Target to complete the step.
Which results in an additional entry in the table.
NOTE: If any webservices are not successfully registered, check that the defined urls in the Exposed Destination and Endpoint menu options are correct for each of the named services.
Verifying the Configuration
Once the events and services have been successfully configured, we can verify the configuration within the Kyma console. Open the SAP C/4HANA Cockpit and choose the Extensibility menu option. Within the desired Runtime, choose the Kyma Console link.
Choose the Applications menu option and then choose commerce
You should now find the list of Provided Services and Events
You can now bind the Application to a namespace and start creating your extensions!
To learn how to trigger a lambda function from an event see this blog or to trigger a microservice from an event see this blog.
Hi Jamie
Thanks for the great Blog!
How do we connect a C4C system to a runtime?
Thanks
Kevin
Hi Kevin,
Take a look at
Regards,
Jamie
Thanks Jamie, much appreciated
Hi Jamie,
I followed the instructions to connect C4C. However encountered an issue while adding the Remote Environment URL in C4C Event notification setting.
Error message: SAP Cloud Platform Extension Factory setup failed with error "Native SSL error"
Adding the certificate before this step was successful. I'm attaching the screenshot of the issue for your reference.
Thanks.
Prashanth
Hi Prashanth,
Can you please create a new question in the question an answers area to address this?
Thanks,
Jamie
Hi Jamie Cawley and Prasanth Rai,
I haven't found this question raised but I'm experiencing the same right now. Here is my question:
Cheers, Andrei
Thank you for the awesome Article.
How we can create connection between GKE/GCP Kyma and local hybris?
Regards,
Dmitry
The local hybris would have to be made available to the internet, the steps should be the same if this is the case. Please post questions in a new thread instead of adding as a comment to a blog. This will provide more visibility to your question.
Jamie
Hey,
Why i haven't the button?
Regards,
Dmitry
Hi Dmitry,
What version are you using?
Jamie
Hello Jamie,
I'm using the version 1811, and i cannot see the button for retrieving the certificate:
Can you help? There is another way to do this?
Regards,
Tiago
Hi Tiago,
Please check that you have the extensions installed noted in the architecture box
You can check in
https://<commerce url>/hac/platform/extensions
Regards,
Jamie
Hi Jamie, thanks for the reply,
We checked the extensions in Hybris console, and we didn't find the kyma extensions.
Therefore, we are assuming that we should add them by following these page's steps:
We don't know how to configure this in an SaaS environment, do you know how to do this or can you point us some references to do so ?
Regards,
Tiago
See
Regards,
Jamie
Hi Jamie Cawley
I am running into couple of errors when I try to register the destination target my CCV2 cloud.
2. The second issue is below
{}]; reason : [{I/O error on GET request for \"\": mykyma.hybris: Name or service not known; nested exception is java.net.UnknownHostException: mykyma.hybris: Name or service not known}
DO you have any ideas?
What versions are you using and is anything installed locally?
Regards,
Jamie
Hi Jamie
I am using SAP Commerce Cloud 1905 CCV2 in Azure and Kyma version is 1.9.0 in cx cloud.
Couple of questions.
What value shud I set for this property. ccv2.services.api.url.0 ?
I have also installed the SSL certificate for *.cx.cloud.sap, added it to the trust store in CCV2 , added a deployment config and did a deployment as well. But still I am seeing the PKIX error. Do you have the SSL certificate for CX cloud portal?
Do have the following extensions enabled?
"kymaintegrationbackoffice"
“kymaexternalservices”
“kymaexternalservicesbackoffice”
You shouldn’t have to modify any of the local.properties.
Regards,
Jamie
Hi Jamie
I had kymaintegrationbackoffice already.
I tried adding the following.
But my CCV2 build is failing with the following error
Extension 'kymaexternalservices' doesn't specify a path and no scanned extension was matching the name.
Hi Jamie
I had kymaintegrationbackoffice already.
I tried adding the following.
“kymaexternalservices”
“kymaexternalservicesbackoffice”
But my CCV2 build is failing with the following error
Extension ‘kymaexternalservices’ doesn’t specify a path and no scanned extension was matching the name.
Sorry that may have been a mistake. For the setup please refer to
Regards,
Jamie | https://blogs.sap.com/2019/05/28/sap-cloud-platform-extension-factory-connecting-to-sap-commerce-cloud/ | CC-MAIN-2021-21 | refinedweb | 1,243 | 54.93 |
Technical Articles
Getting started with the predefined CI/CD pipeline for Kyma runtime
Great news! With one of its latest releases, SAP Continuous Integration and Delivery is providing a predefined pipeline for container-based applications. With this new offering, you can create Docker images, push them to your container registry, and deploy them to SAP Business Technology Platform (SAP BTP) – all with the click of a button. In this blog post, I would like to walk you through the whole process of setting up a continuous integration and delivery pipeline for creating integration content in the SAP BTP, Kyma runtime. Get ready to bring your CI/CD journey to the next level.
SAP Continuous Integration and Delivery?
Container-based applications?
SAP BTP, Kyma runtime?
The solution to this is the SAP BTP, Kyma runtime – a fully managed Kubernetes runtime based on the open-source project called Kyma. This cloud-native solution allows you to develop and deploy applications with serverless functions and combine them with containerized microservices.
This post will cover:
- Where to get started
- How to connect SAP Continuous Integration and Delivery with the Kyma runtime
- How to configure credentials
- How to configure the pipeline stages and choose between build tools
Before you get started:
- You have enabled SAP Continuous Integration and Delivery on SAP BTP. See Enabling the Service.
- You are an administrator of SAP Continuous Integration and Delivery. See Assigning Roles and Permissions.
- You have enabled Kyma runtime. See SAP BTP, Kyma runtime: How to get started or follow the tutorial Enable SAP BTP, Kyma Runtime.
- You have a container registry, for example, a Docker Hub account.
- You have a repository with the source code (in this example we will use GitHub, but you can also use GitLab or Bitbucket server)
Getting started
Creating a new pipeline involves 4 simple steps:
- Create a Kyma service account.
For the CI/CD pipeline to run continuously, it is necessary to create Kyma application credentials that don’t expire. The authentication token for a cluster is generated in the kubeconfig file. The kubeconfig that you can download directly from the Kyma dashboard expires every 8 hours and is therefore not suited for our scenario. This requires us to create a custom kubeconfig file using a service account. A detailed step by step can be found in the following tutorial: Create a Kyma service account.
- Connect your repository to SAP Continuous Integration and Delivery and add a webhook to your GitHub account.
This step is where all the magic happens – whenever you are creating or updating a docker image locally, you can simply push the changes to your GitHub repository. A webhook push event is sent to the service and a build of the connected job is automatically triggered. Learn more in the following tutorial: Get Started with an SAP Fiori Project in SAP Continuous Integration and Delivery.
- Start with configuring the credentials in SAP Continuous Integration and Delivery:
- Kubernetes credentials
Add a “Secret Text” credential with the content of your kubeconfig file that you created in step 1 in the “Secret” field.
- Container registry credentials
The SAP Continuous Integration and Delivery service publishes a new container image each time a job is triggered. This process requires access to your container repository, in this case, Docker Hub.
Paste the following lines into the “Secret” text field and replace the placeholders:
{ "auths": { "<containerRegistryURL>": { "username": "<myUsername>", "password": "<myPassword>" } } }
For Docker Hub, it should look like this:
Now we are ready to configure our first pipeline job.
- In SAP Continuous Integration and Delivery, configure a new job as described in Create a Job. As “Pipeline”, choose “Container-Based Applications”.
Configuring the General Parameters
The “Container Registry URL” for Docker Hub is The name of your image consists of your given “Container Image Name” and a tag name. To tag your image, you have two options:
- If you define a “Container Image Tag”, all newly built container images have the same tag, in our case, 0.1.
- If you choose the “Tag Container Image Automatically” checkbox, your container images receive a unique version tag every build. This allows you to compare your container images afterwards. If you want to use this feature, be sure to add the following line to your Dockerfile after the FROM line:
ENV VERSION x.y
For “Container Registry Credentials”, choose the credentials you created in the last step.
After this step, you can start configuring the stages of the pipeline:
The Init stage is already done by the pipeline. You can enable or disable the Build, Acceptance, and Release stage depending on your needs. This way, you can choose to only build an image, only deploy an existing image, or combine the two to deploy an image you just built.
Configuring the Build Stage
Find the full path to your Dockerfile in GitHub and enter it in the “Container File Path” field.
For
it would look like this:
Configuring the Acceptance and Release Stage
The configuration of the Acceptance and Release stages are basically the same, except that in the Acceptance stage, you can profit from additional Helm tests, in case you choose to deploy the application with the Helm tool (you can find more information on which deployment tool to use in the next section).
Also – in the Acceptance stage, feel free to use a special Kyma namespace and kubeconfig for the purpose of testing a deployment in a different cluster.
Which deployment tool should I use?
The SAP Continuous Integration and Delivery service lets you choose between two deployment tools: helm3 or kubectl. Let’s go through both scenarios.
Deploy using helm3
With Helm, you can manage applications using Helm charts. In general, Helm charts are most useful for deploying simple applications and can handle the install-update-delete lifecycle for the applications deployed to the cluster. You can also use Helm to perform tests and it provides templates for your Helm charts. Some interesting configuration parameters:
- “Chart Path”: this is the path to the directory that contains the Chart.yaml file in your GitHub repository (note: the Chart.yaml needs to be in the helm/parent directory, but can be in a different subdirectory within it).
- “Helm Values”: this is the path to the values.yaml file in your GitHub repository. You also have the option to configure a “Helm Values Secret” by creating a “Secret Text” credential and adding the content of your values.yaml file to the “Secret field”.
Configuring the Helm values is optional – if you don’t set the Helm values either as a path or a secret, the values.yaml that is in the chart path will be used (If it doesn’t exist in the repo either, then no values will be used).
- “Force Resource Updates”: choose this checkbox to add a –force flag to your deployment. This will make Helm upgrade a resource to a new version of a chart.
Deploy using kubectl
kubectl comes in handy when you want to implement a complex, custom configuration or deploy a special application that involves a lot of operational expertise. The configuration parameters include:
- “Application Template File”: enter the name of your Kubernetes application template file.
- “Deploy Command”: choose “apply” to create a new resource or “replace” to replace an existing one.
- “Create Container Registry Secret”: Kubernetes needs to authenticate with a container registry to pull an image. Choose this checkbox to create a secret in your Kubernetes cluster based on your “Container Registry Credentials”.
Don’t forget to save your job!
Congratulations! Now you can create Docker images, push them to your container registry, and upload them into your SAP BTP, Kyma runtime. You can also monitor the status of your jobs and view their full logs:
If you found this post useful and want to learn more about this scenario, you can also see the Container-Based Applications product documentation on the SAP Help Portal.
Last but not least thank you Laura Veinberga for your collaboration. | https://blogs.sap.com/2022/04/22/getting-started-with-the-predefined-ci-cd-pipeline-for-kyma-runtime/ | CC-MAIN-2022-21 | refinedweb | 1,326 | 52.6 |
About this project
The winners of the TensorFlow Lite for Microcontrollers Challenge were announced on October 18, 2021. This project was among the five worldwide winners and was featured on the Experiments with Google website.

The Concept
A good practice in traffic is to warn other road users of the direction you are going to take before turning or changing lanes. This habit contributes to a smoother traffic flow and reduces sudden moves by unaware drivers. In fact, cars, motorbikes, trucks, buses and most other vehicles you can think of incorporate some kind of turn signalling device.
Despite being among the most vulnerable vehicles on the road, bicycles do not usually have such built-in signalling devices. In fact, bicycle riders either do not signal a turn at all or, if they do, need to release one hand from the handlebar to make a sign to other drivers. This move reduces the rider's stability, and the gesture might not be properly understood by everyone.
It is possible to find some add-on turn signal lights for bicycles on the market, but they usually require the rider to push a button to activate them and push it again to switch them off, much like on a motorbike. And it is really easy to forget to switch them off. If you think about it, this is something worth improving.
This is how VoiceTurn was born: a voice-controlled turn signal lighting concept initially designed for bicycles, which could also be extended to other vehicles. The aim of this project is to use a Machine Learning algorithm to teach a tiny microcontroller to understand the words left! and right! and act accordingly by switching the corresponding turn signal light on.

The Board
The microcontroller board to be used is an Arduino Nano 33 BLE Sense: an affordable board featuring a 32-bit ARM® Cortex™-M4 CPU running at 64 MHz, a bunch of built-in sensors, including a digital microphone, and Bluetooth Low Energy (BLE) connectivity.
However, what makes this board an excellent candidate for this project is its ability to run Edge Computing applications using Tiny Machine Learning (TinyML). In short, after creating Machine Learning models with TensorFlow Lite, you can easily upload them to the board using the Arduino Integrated Development Environment (IDE).
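Before training anything, it helps to picture the job the deployed sketch will ultimately perform: turning the model's per-label confidence scores into a lights decision. The snippet below (in Python for readability; the label order and the 0.8 threshold are assumptions, not values taken from the project) shows one way that decision rule could look:

```python
# Map the four class scores produced by the keyword-spotting model to a
# turn-signal action. A prediction only counts if its confidence clears
# a threshold; otherwise the lights are left untouched.

LABELS = ["Left", "Right", "noise", "other"]  # categories used in this project
CONFIDENCE_THRESHOLD = 0.8                    # assumed value, tune on real data

def decide_action(scores, threshold=CONFIDENCE_THRESHOLD):
    """scores: list of 4 probabilities, same order as LABELS.
    Returns 'LEFT', 'RIGHT' or None (no change to the lights)."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    if scores[best] < threshold:
        return None                # not confident enough in any label
    label = LABELS[best]
    if label == "Left":
        return "LEFT"
    if label == "Right":
        return "RIGHT"
    return None                    # 'noise' and 'other' trigger nothing

print(decide_action([0.92, 0.03, 0.03, 0.02]))  # LEFT
print(decide_action([0.05, 0.88, 0.04, 0.03]))  # RIGHT
print(decide_action([0.40, 0.35, 0.15, 0.10]))  # None (below threshold)
```

On the board, the equivalent logic would run in C++ after each inference inside the Arduino sketch, driving the left or right signal light.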
Speech recognition at the Edge does not require sending the voice stream to a Cloud server for processing, thus eliminating network latency. Additionally, it runs offline, so you can be certain that your turn signal lights won't stop working when passing through a tunnel. Last but not least, it preserves user privacy, since your voice is not stored or sent anywhere.

Train the Machine Learning Model
The word recognition model has been created using Edge Impulse, a development platform for embedded Machine Learning focused on providing an amazing User Experience (UX), awesome documentation and open source Software Development Kits (SDKs). Their website states:
Edge Impulse was designed for software developers, engineers and domain experts to solve real problems using machine learning on edge devices without a PhD in machine learning.
This means that you can have little to no knowledge of Machine Learning and still develop your applications successfully.
You can use this tutorial as a starting point for audio classification with Edge Impulse. The following steps will describe how this project has been tailored to fulfill the particular needs of VoiceTurn.
The first thing you need to do is to sign up on Edge Impulse to create a free developer subscription. After the account confirmation step, log in and create a project. You will be prompted with a wizard asking about the kind of project you wish to create. Click on Audio:
In the next step, you can choose between three options. The first one is to build a custom audio dataset yourself by connecting a microphone-enabled development board. This process requires recording a large amount of audio data in order to obtain acceptable results, so we will ignore it for now. The second option is to upload an existing audio dataset, and the third option is to follow a tutorial. Click on Go to the uploader within the Import existing data option to continue with VoiceTurn.
The audio dataset that we are going to use is the Google Speech Commands Dataset, which consists of 65,000 one-second-long utterances of 30 short words, recorded by thousands of different people. You can download version 2 of this dataset from this link.
One might think that only the audio subsets corresponding to the words left and right would be needed to train the model. However, as Pete Warden, from Google Brain, states in the article describing the methods used to collect and evaluate the dataset:
A key requirement for keyword spotting in real products is distinguishing between audio that contains speech, and clips that contain none.
Therefore, we will also use the subset of audio recordings contained in the _background_noise_ folder in order to enrich our model with some background noise. In addition, we will complement the noise database with the audios from the noise folder from the Keyword spotting pre-built dataset available as part of the Edge Impulse documentation.
Having a dataset containing the words left and right and some background noise is not enough, since we also need to provide the model with additional words. This way, if another word is heard, it will not be classified as left or right, but it will go into another category. To do that, we can select a random collection of audio recordings from the first dataset we downloaded. Just notice that the total amount of additional audio recordings should be similar to the total amount of recordings of each word of interest.
Once the audio recordings are gathered, upload them into your Edge Impulse project. Note that you will need to upload 4 different datasets, each of them corresponding to a different category. Browse your files, enter the labels manually, being Left, Right, noise and other, respectively, and make sure Automatically split between training and testing is selected. This will leave aside about 20% of the samples to be used for testing the model afterwards. Click on Begin upload.
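The Automatically split between training and testing option is, conceptually, a shuffled split of the uploaded files. The sketch below only illustrates the idea; it is not Edge Impulse's actual implementation, and the file names are placeholders:

```cpp
#include <algorithm>
#include <random>
#include <string>
#include <utility>
#include <vector>

// Shuffle the sample list and reserve `test_fraction` of it for testing.
// Returns {training set, testing set}.
std::pair<std::vector<std::string>, std::vector<std::string>>
train_test_split(std::vector<std::string> samples, double test_fraction, unsigned seed) {
    std::mt19937 rng(seed);
    std::shuffle(samples.begin(), samples.end(), rng);
    std::size_t n_test = static_cast<std::size_t>(samples.size() * test_fraction);
    std::vector<std::string> test(samples.begin(), samples.begin() + n_test);
    std::vector<std::string> train(samples.begin() + n_test, samples.end());
    return {train, test};
}
```

With 100 uploaded clips and test_fraction = 0.2, you get 80 training and 20 testing samples, matching the roughly 20% split mentioned above.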
As you can see, all the audio samples you uploaded are available to check and listen to. Make sure that all of them are one second long. If any are longer, click on the dots on the right of the audio sample row, click on Split sample and set a segment length of 1000 ms.
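The Split sample tool essentially cuts a longer recording into consecutive fixed-length segments. A simplified sketch of that logic (the real tool also lets you adjust each segment by hand):

```cpp
#include <vector>

// Return the start time (in ms) of each full `segment_ms`-long segment
// inside a clip of `duration_ms`; any partial tail is dropped.
std::vector<int> split_sample(int duration_ms, int segment_ms) {
    std::vector<int> starts;
    for (int t = 0; t + segment_ms <= duration_ms; t += segment_ms)
        starts.push_back(t);
    return starts;
}
```

A 3500 ms clip split with a 1000 ms segment length yields three one-second samples starting at 0, 1000 and 2000 ms.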
As a result, you should now see the total duration of your training and testing data, respectively, as well as your data split into four categories:
You can also double-check that the duration of your testing data is roughly 20% of the total dataset duration.
The next step is to design an impulse, which is the whole set of operations performed on the input voice data until the words are classified. Click on Create Impulse on the left hand menu. Our impulse will consist of an input block to slice the data, a processing block to pre-process them and a learning block to classify them into one of the four labels previously defined. Click on Add an input block and add a Time series data block, setting the window size to 1000 ms. Then, click on Add a processing block and add an Audio Mel Frequency Cepstral Coefficients (MFCC) block, which is suitable for human speech data. This block produces a simplified form of the input data, easier to process by the next block. Finally, click on Add a learning block and add a Classification (Keras) block, which is the Neural Network (NN) performing the classification and providing an output.
It is possible to configure both the processing and learning blocks. Click on MFCC on the left hand menu and you will see all the parameters related to the signal processing stage. In this project we will keep things simple and trust the default parameters of this block. Click on Generate features at the top and then click on the Generate features button to generate the MFCC blocks corresponding to the audio windows. After the job is finished you will be prompted with the Feature explorer, a 3D representation of your dataset. This tool is useful for quickly checking whether your samples separate nicely into the categories you defined before, so that your dataset is suitable for Machine Learning. On this page you can also see an estimation of the time the Digital Signal Processing (DSP) stage will take to process your data, as well as the RAM usage when running on a microcontroller.
Now you can click on NN Classifier on the left hand menu and start training your neural network, a set of algorithms able to recognize patterns in their learning data. Watch this video for a quick overview of the working principle of neural networks and some of their applications. We will leave most of the default Neural Network settings unchanged, but we will slightly increase the Minimum confidence rating to 0.7. This means that, during training, only predictions with a confidence probability above 70% will be considered valid. Enable Data augmentation and set Add noise to High to make our neural network more robust in real life scenarios. Click on Start training at the bottom of the page. When training is finished you will see the accuracy of the model, calculated using a subset of 20% of the training data allocated for validation. You can also check the Confusion matrix, a table showing the balance of correctly versus incorrectly classified words, and the estimation of on-device performance.
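To picture what the Minimum confidence rating does at inference time: the network's raw outputs are turned into probabilities, and the winning class is only accepted when its probability clears the threshold. This is an illustrative sketch, not Edge Impulse's internal code:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch of a "minimum confidence rating": turn raw network outputs (logits)
// into probabilities with a softmax, then reject the prediction when the
// winning probability is below the threshold (0.7 in this project).
// Returns the winning class index, or -1 for "not confident enough".
int classify(const std::vector<double>& logits, double min_confidence) {
    double max_logit = *std::max_element(logits.begin(), logits.end());
    std::vector<double> probs;
    double sum = 0.0;
    for (double l : logits) {
        probs.push_back(std::exp(l - max_logit));  // shift for numerical stability
        sum += probs.back();
    }
    int best = 0;
    for (int i = 1; i < static_cast<int>(probs.size()); ++i)
        if (probs[i] > probs[best]) best = i;
    return (probs[best] / sum >= min_confidence) ? best : -1;
}
```

With four equally likely outputs each class gets a probability of 0.25, so nothing clears the 0.7 bar and the prediction is rejected.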
You can now test the model you just trained with new data. It is possible to connect the Arduino board to Edge Impulse to perform a live classification of data. However, we will test the model using the test data we left aside during the Data acquisition step. Click on Model testing on the left hand menu and then on Classify all test data. You will receive feedback regarding the performance of your model. Additionally, the Feature explorer will allow you to check what happened with the samples that were not correctly classified, so you can re-label them if needed or move them back to training to refine your model.
Finally, you can build a library containing your model, ready to be deployed on a microcontroller. Click on Deployment on the left hand menu, choose to create an Arduino library and go to the bottom of the page. Here it is possible to enable the EON™ Compiler to increase on-device performance at the cost of reducing accuracy. However, since the memory usage is not too high for the Arduino Nano 33 BLE Sense, we can disable this option so as to perform with the highest possible accuracy. Finally, leave the Quantized (int8) option selected and click on the Build button to download the .zip file containing your library.
The VoiceTurn Edge Impulse project is publicly available, so you can directly clone it and work on it if you wish.

Check how words are classified
You can use the Arduino IDE to deploy the library built with Edge Impulse to your board. If you have not installed it yet, download the latest version from the Arduino software page. After that, you will need to add the drivers package supporting the Arduino Nano 33 BLE Sense board. Open the Arduino IDE and click on Tools > Board > Boards Manager...
Write nano 33 ble sense in the Search box and install the Arduino Mbed OS Nano Boards package.
Now your IDE is ready to work with your board. Connect your Arduino Nano 33 BLE Sense to your computer and apply the following settings:
- Click on Tools > Board > Arduino Mbed OS Nano Boards and select Arduino Nano 33 BLE as your board.
- Click on Tools > Port and choose the serial port your board is connected to. This will vary depending on your OS and your particular port.
To add the VoiceTurn library to your Arduino IDE, click on Sketch > Include Library > Add .ZIP Library... and go to the path where you saved your library .zip file. After that, click on File > Examples > VoiceTurn_inferencing > nano_ble33_sense_microphone_continuous to open the voice inferencing program provided by Edge Impulse.
You can now compile and upload this program to your board by clicking on the two icons located on the top left corner of the Arduino IDE. Once uploaded, click on the Serial Monitor icon on the top right corner and you will see the output of the previously developed Machine Learning classifier. You can test the accuracy of the program by saying the words left! and right! and checking the probability, calculated by the program, that each word belongs to one group or the other. You can also try saying other words and check whether they are properly classified into the other group.
If you want to learn more about continuous audio sampling, check this tutorial from the Edge Impulse documentation.

TensorFlow Lite for Microcontrollers
At this point you might be wondering: didn't you mention you would be using TensorFlow? Well, as it is mentioned in this interesting article from the TensorFlow Blog, TensorFlow is inherently used by Edge Impulse:
Edge Impulse makes use of the TensorFlow ecosystem for training, optimizing, and deploying deep learning models to embedded devices.
If we look back to the previous steps of this project, we trained a model for word classification, then performed a Quantized (int8) optimization and finally built an Arduino library for deployment on our board. The article also states:
Developers can choose to export a library that uses the TensorFlow Lite for Microcontrollers interpreter to run the model.
In short, the VoiceTurn_inferencing library previously developed with Edge Impulse uses the TensorFlow Lite for Microcontrollers interpreter to run the word classifier Machine Learning model.
It is indeed quite easy to verify by yourself that this library uses TensorFlow Lite for Microcontrollers for running the classifier. From your Arduino IDE, click on File > Preferences and check the Sketchbook location. From the file explorer, go to this location and open VoiceTurn_inferencing/src/VoiceTurn_inferencing.h using a text editor. In line 41, you will find:
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"
Now go back to the src folder and open edge-impulse-sdk/classifier/ei_run_classifier.h using again a text editor. In line 45 you will find:
#include "edge-impulse-sdk/tensorflow/lite/micro/micro_interpreter.h"
This line refers to the use of the TensorFlow Lite for Microcontrollers library. In fact, the TensorFlow Lite library is part of the Edge Impulse SDK, including its Micro C++ sub-library.

Build the Hardware
The turn signal lights will consist of two LED strips: one on the left and the other on the right side of the bicycle. I have used an addressable RGB LED strip composed of WS2813 LEDs. Other LED strips could be used if you pay attention to the wiring and adapt the code afterwards. The first thing to do is to cut the LED strips to the desired length. In this case, I have used a 10 cm piece for each side, which corresponds to 6 LEDs. Make sure to cut the LED strips along the dotted line so as not to damage the electrical contacts.
As an optional step, drill two holes in the ruler or the flat surface of your choice. If you want to strictly follow the project dimensions, use a 30-cm-long (roughly 12 inch) surface and drill the holes at the 11.5 cm and 18.5 cm positions, respectively. Use a rotary tool and place the flat surface between two objects of similar height, so that the drill head has some free space and does not damage your table.
To be able to connect the LED strips to the rest of the setup, we need to add hook-up wires to their pins. First, use the scissors to remove a small section of the jelly material covering the electrical contacts. Then, use the soldering iron together with the solder flux to solder three hook-up wires to the pins of each LED strip. I recommend following the usual color code, so that a red wire is soldered to the 5V pin and a black one to the GND pin. Regarding the remaining pins, I have used a yellow wire for the right side and a green wire for the left side. Although the WS2813 LED strip has four pins, DI (data input) and BI (back-up input) can be soldered together for the sake of simplicity. Once this step is done, remove the tape behind the LED strips and stick them to the flat surface at the 0-10 cm and 20-30 cm positions, respectively. If you drilled the holes on the surface in the previous step, pass the wires through them. If not, just route them around the border.
The overall setup is divided into two parts that connect to each other using 3.5 mm jack/plug connectors. Grab the corresponding cables and cut each of them at about 20% of its length, so that you end up with an 80%-length piece of cable containing the jack from one cable and another 80%-length piece containing the plug from the other. Remove a small portion of the external coating and you will see that each cable consists of 4 connections: three wires and the shield. Twist all the shield strands together so that they are easier to solder.
Grab the long piece of cable containing the 3.5 mm jack and solder the wires as follows (if the color code is different, adapt it to your needs):
- The red wire of the cable to both red wires of the two LED strips (5V).
- The green wire of the cable to the green wire of the left LED strip (DI-BI).
- The yellow wire of the cable to the yellow wire of the right LED strip (DI-BI).
- The shield from the cable to both the black wires of the two LED strips (GND).
As a result, you should have the first part of the setup finished, consisting of the flat surface containing the turn signal lights and a piece of cable ending on a 3.5 mm jack. Prevent the electrical contacts from touching each other by covering them individually with tape or heat-shrink tubing.
The other part of the setup is simpler to build. Grab the long piece of cable containing the 3.5 mm plug and solder the wires to the Arduino board following the same color code as in the previous step and the Arduino pinout:
- The red wire to the 3V3 pin of the Arduino.
- The green wire to the D4 pin of the Arduino.
- The yellow wire to the D7 pin of the Arduino.
- The black wire to one of the GND pins of the Arduino.
You can find a wiring diagram in the Schematics section.
We can manage with such a simple setup because we are using short LED strips containing just a few LEDs. If you use longer LED strips, you might need to power them with an external source, instead of using the 3V3 pin of the Arduino board, so as not to exceed the maximum current supported by the latter. If you do so, remember to solder a 1000 μF capacitor between the 5V and GND connections of your LED strips to ensure the stability of the supply voltage.

Add functionality to the code
We previously checked in the Arduino Serial monitor how the words we say get classified into each of the four predefined groups. Once the hardware is built, now it is time to add the required functionality to switch one of the turn signal lights on after saying the word corresponding to that side.
First, you will need to install a library required by VoiceTurn to control the addressable LED strips that we will use for turn signalling. Click on Sketch > Include Library > Manage Libraries...
Write NeoPixel in the Search box and install the Adafruit NeoPixel library.
We will work on the nano_ble33_sense_microphone_continuous example program provided by the VoiceTurn_inferencing library built and imported in previous steps.
One LED strip is required for each direction, left or right, and each LED strip has 6 LEDs in total. Additionally, the built-in RGB LED of the Arduino board will be used for the rider to quickly know if the board is either ready to listen to a word or still making use of the turn signal lights. Define the pinout at the beginning of the program, remembering the pins you used for the green and yellow wires when you built the hardware:
/* LED strip pinout */
#define LED_PIN_LEFT 4
#define LED_PIN_RIGHT 7
#define LED_COUNT 6
/* Built-in RGB LED pinout */
#define RGB_GREEN 22
#define RGB_RED 23
#define RGB_BLUE 24
Include the NeoPixel library next to the other libraries already included:
#include <Adafruit_NeoPixel.h>
Declare the two LED strips by defining their number of LEDs, the data pin they are connected to and the strip type. Here, we will use NEO_GRB + NEO_KHZ800 for the strip type, as this is the one corresponding to WS2812 LED strips and similar. You can modify this with the help of the library documentation if you are using a different LED strip type.
/* LED strips for left and right signals */
Adafruit_NeoPixel left(LED_COUNT, LED_PIN_LEFT, NEO_GRB + NEO_KHZ800);
Adafruit_NeoPixel right(LED_COUNT, LED_PIN_RIGHT, NEO_GRB + NEO_KHZ800);
Declare two constants that will be useful to tune the duration of the light signalling. The period is roughly the waiting time between one LED switching on and the next one switching on: the higher this value, the slower the animation. The cycles variable is the number of times the animation repeats each time the left or right word is detected.
static int period = 100;
static int cycles = 3;
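With these defaults and the 6-LED strips, it is easy to estimate how long one full signalling animation lasts: each cycle lights the 6 LEDs one by one (period ms each) and then pauses for 2 * period ms with the strip off, exactly as the turn() function further below does. A quick sketch of that arithmetic:

```cpp
// Total blocking time, in ms, of one signalling animation:
// per cycle, led_count LEDs lit one by one plus an off pause of 2 * period.
int animation_duration_ms(int cycles, int led_count, int period_ms) {
    return cycles * (led_count * period_ms + 2 * period_ms);
}
```

With cycles = 3, 6 LEDs and period = 100 ms, each left or right command keeps the lights busy for about 2.4 seconds.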
Now, in the setup() function, add the following lines of code to initialize the two LED strips and set their brightness level to about 40% (100 out of 255). The latter acts as a software-driven current control, making sure we do not burn the LEDs or exceed the maximum current supported by the Arduino board, even if no resistor is soldered to the strip pins.
// Initialization of LED strips:
left.begin();
right.begin();
left.show();
right.show();
// Set BRIGHTNESS to about 2/5 (max = 255)
left.setBrightness(100);
right.setBrightness(100);
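To see why setBrightness doubles as a current limit, consider a rough worst-case estimate. The figure of about 60 mA per RGB LED at full white is a common rule of thumb for WS2812-class strips, not a measured value, so treat this only as a sanity check:

```cpp
// Rough worst-case current draw, in mA, of an addressable strip:
// ~60 mA per RGB LED at full white, scaled by the software brightness (0-255).
int estimated_current_ma(int led_count, int brightness) {
    return led_count * 60 * brightness / 255;
}
```

At full brightness each 6-LED strip could draw up to 360 mA, while at brightness 100 the worst case drops to roughly 141 mA per strip.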
Initialize the built-in RGB LED pins as OUTPUT.
pinMode(RGB_RED, OUTPUT);
pinMode(RGB_GREEN, OUTPUT);
pinMode(RGB_BLUE, OUTPUT);
As you may know, the code inside the loop() function contains the program instructions that will be executed over and over. Add the following line at the beginning of the function to indicate to the bicycle rider that the board is listening. The rgb_green() function will be created afterwards.
rgb_green(); // Shows the board is READY
After that, word classification is carried out and the program output is stored in the result.classification variable. This variable is indexed following the labeling order we used when training the Machine Learning model with Edge Impulse. Remember that we used the labels Left, Right, noise and other, in that order. Therefore, result.classification[0] contains the information corresponding to the Left group, and result.classification[0].value is the probability that the spoken word belongs to the Left group. Similarly, result.classification[1] contains the information corresponding to the Right group, and result.classification[1].value is the probability that the spoken word belongs to the Right group. The indices 2 and 3 correspond to the noise and other groups, respectively, although we will not use them in this project.
We want to activate the corresponding LED strip if the word is detected with a probability above a certain threshold. For this project, the threshold probability is set to 80% for the left word and to 85% for the right word, to avoid false positives as far as possible. The thresholds are different, since they are tuned according to the output of the model testing stage performed with Edge Impulse. Add this piece of code at the end of the loop() function, after all the Machine Learning processing is finished.
// 0 -> LEFT; 1 -> RIGHT
if (result.classification[0].value >= 0.80) {
turn(left);
}
if (result.classification[1].value >= 0.85) {
turn(right);
}
The only things left to add to the program are the functions in charge of activating the LED strips and controlling the built-in RGB LED. The turn() function receives the LED strip to activate as an input parameter and switches the LEDs on one by one in orange, following an animation similar to the ones in modern cars. Once the predefined number of cycles finishes, the LED strip is switched off. Additionally, the bicycle rider is warned of the state of the program: the built-in RGB LED is set to red while the LED strips are being used and switched off at the end of the function. Add this function at the end of your program, after the loop() function.
static void turn(Adafruit_NeoPixel& strip) {
rgb_red(); // Shows the board is BUSY
for (int i = 0; i < cycles; i++) {
for (int j = 0; j < strip.numPixels(); j++) {
strip.setPixelColor(j, strip.Color(255, 104, 0)); // Color: Orange
strip.show();
delay(period);
}
strip.clear();
strip.show();
delay(2 * period);
}
rgb_off(); // The board has FINISHED lighting the LED strip
}
And finally add three simple functions controlling the built-in RGB LED.
void rgb_red() {
digitalWrite(RGB_RED, HIGH);
digitalWrite(RGB_GREEN, LOW);
digitalWrite(RGB_BLUE, LOW);
}
void rgb_green() {
digitalWrite(RGB_RED, LOW);
digitalWrite(RGB_GREEN, HIGH);
digitalWrite(RGB_BLUE, LOW);
}
void rgb_off() {
digitalWrite(RGB_RED, LOW);
digitalWrite(RGB_GREEN, LOW);
digitalWrite(RGB_BLUE, LOW);
}
If you compile the program, upload it to the Arduino board and connect the hardware to your board, you will be able to test the VoiceTurn setup by yourself. After saying the word left!, the left-side turn signal light should be activated and after saying right! the same should happen with the right-side turn signal light.
If your tests are not as successful as expected, you might need to adjust the sensitivity of the word classification process. You can do that by tuning two parameters:
- The word detection probability thresholds, initially set to 80% and 85%: by lowering these numbers, you will increase the number of times the device responds to your voice, but you will also increase the number of false positives.
- The EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW constant defined at the beginning of the program: this is the number of pieces in which the model window is subdivided. You can find further information about this parameter in this link.
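The relationship between the model window and the slice count is simple: the 1000 ms window is processed in EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW pieces, so a new chunk of audio is captured and classified every window_ms / slices milliseconds. A quick sketch of that arithmetic (the slice counts below are example values, not necessarily the library default):

```cpp
// How often, in ms, a new slice of audio is captured and fed to the classifier.
int slice_length_ms(int window_ms, int slices) {
    return window_ms / slices;
}
```

With 4 slices per 1000 ms window, the classifier runs on fresh audio every 250 ms; raising the slice count makes the device more responsive at the cost of more processing.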
I have included the link to the GitHub repository containing the project's code in the Code section, so check it out if you got lost at some point or if you want to directly try my code.

Set up VoiceTurn on a bicycle
The proof-of-concept of VoiceTurn has been carried out on a bicycle. Plastic cable ties have been used to attach all the components to it, so that the setup can be easily installed and removed without any permanent modification made to the bicycle. In case you want the setup to be permanently installed, you can use other means.
The first part consists of the Arduino board and the two cables attached to it: the one containing the wires soldered to the Arduino pins and ending on the 3.5 mm plug and the micro-USB to USB cable plugging the board to the power bank. A couple of cable ties have been placed, attaching the two aforementioned cables to the rear brake cable. Both cables are rigid enough for the Arduino board to remain in a floating and stable position, with the microphone facing the rider.
We need to fix the two cables to the bicycle so that the 3.5 mm plug can reach the rear part of the bicycle and the USB plug can reach the seat, where the battery will be accommodated. The cables have been fastened along the top side of the bicycle frame.
Now place the bicycle watch mount around the seat bar to obtain a flat surface on which to hold the ruler containing the turn signal lights. Use a couple of cable ties in a cross arrangement to fix the other part of the setup to the seat bar at the watch mount location. Fix the remaining wires to the seat bar as well.
Connect the 3.5 mm jack to its corresponding plug and attach the remaining cable around the bicycle frame so that you do not pull from it unintentionally while riding.
Finally, place the power bank at the gap under the seat, hold it tight with cable ties and connect the USB cable to it, so that VoiceTurn is ready to use.

Enjoy it!
Now it is time to build and try VoiceTurn yourself. You can even take it as a base project and keep improving it: reduce the wiring by using the Arduino's built-in BLE connectivity, add a wake word such as Hey Bike!, extend the word set adding more functionality... there are lots of possibilities.
I hope you enjoy this project and do not forget to leave your comments and feedback!
Code
VoiceTurn - GitHub Repository
Schematics
- RED (supply voltage): from Arduino's 3V3 pin to both LED strips' 5V pin.
- GREEN (data, left): from Arduino's D4 pin to left LED strip's data input pin (DI).
- YELLOW (data, right): from Arduino's D7 pin to right LED strip's data input pin (DI).
- BLACK (ground): from Arduino's GND pin to both LED strips' GND pin.
The 3.5 mm jack - plug 4 pin connectors and cable separate the setup into two modules for easier handling and mounting.
Author
Alvaro Gonzalez-Vila
Published on July 18, 2021