1. Roll call.

Present (11/9): AT&T, Mark Jones BEA Systems, David Orchard Canon, Jean-Jacques Moreau IBM, David Fallside IBM, Noah Mendelsohn SAP AG, Volker Wiechers SeeBeyond, Pete Wenzel (scribe), Mark Nottingham Canon, Herve Ruellan IBM, John Ibbotson SAP AG, Gerd Hoelzing Sun Microsystems, Marc Hadley Systinet (IDOOX), Miroslav Simek

Regrets: Ericsson, Nilo Mitra Fujitsu Limited, Kazunori Iwasa Fujitsu Limited, Masahiko Narita Matsushita Electric, Ryuji Inoue Microsoft Corporation, Martin Gudgin Microsoft Corporation, Jeff Schlimmer Oracle, Anish Karmarkar

Absent: DaimlerChrysler R. & Tech, Andreas Riegg DaimlerChrysler R. & Tech, Mario Jeckle IONA Technologies, Eric Newcomer IONA Technologies, Oisin Hurley Macromedia, Glen Daniels Oracle, Jeff Mischkinsky Software AG, Dietmar Gaertner Software AG, Michael Champion

2. Agenda review, and AOB

DavidF: notes that he will be reviewing the WG membership list in light of the new work we are starting and members' attendance and contributions.

3. Approval of June 18 telcon minutes

Minutes approved without objection.

4. Review action items

DavidF: New W3C Process Document has reached REC, but is not yet in force; expect it to be end of June to mid-July. The Chair et al will actively start moving the Scenarios etc docs forward.

5. Status reports

-- Planning for next f2f meeting (July 24 and 25, Sophia Antipolis, France). Registration is now open, see (closes July 17).

DavidF: Some time during the F2F will be allocated to work on errata, if necessary, but the main focus will be to make progress on attachment work.

-- Registration of "application/soap+xml". See update and thread starting at. No report.

6. Attachments (11.30 + 60)

The attachment document we are working with is located at

-- Placeholder for pending items
o Update on Copyright and IP statements
o Attachment requirements, see
o We have had a request from the WSD WG to comment on some draft requirements for a hypothetical XInclude 1.1

No discussion on this topic.
-- SOAP MTOM (Message Transmission Optimization Mechanism), see. Please review, and be prepared to say whether or not this document is ready for publication as a Working Draft. If your answer is "not", be prepared to say what needs to be done in order to publish.

DavidF: Several questions have been posted. Is it ready to publish as a Working Draft? We wouldn't be bound to move forward with this as the only basis for a Rec, but it would help make progress.

Noah: Publishing would be good, gives public visibility to our direction. Should not be viewed as "fully baked", however.

J-J: Agree, it's rough but should be published, even though it may change. Remove some of the end-notes.

MarkJ: Publication is fine. Notes having to do with packaging are not reflected in this yet. Lots of loose ends, but let's put a stake in the ground. Needs to be reconciled with Requirements document. Is it independent from PASWA?

Jacek: Publish ASAP. One reservation may be dealt with after publication. Note that Representation and indicating Media Type of binary data in infoset topics need to be discussed in the document. (These are missing parts of PASWA.)

TonyG: Noticed some typos; emailed two comments. Why doesn't it use SOAP terminology (why "Inclusion Mechanism")?

Noah: Care was taken to represent this as a "feature" with a "binding" that could be implemented as a SOAP "module".

TonyG: Also, how would an intermediary act on OptimizationCandidates? No show-stoppers for WD.

DaveO: In general, almost "good enough", publish it. Investigate relationship between this and Requirements. Capture more about requirements in this document.

DavidF: Propose that we address the small points raised to clean up the draft before publication, to make people more comfortable.

MarkJ: Many requirements are not yet addressed. Make a note that this draft has not been reconciled with Requirements. Alternative is to compare reqs against this.
DaveO: Say that this was guided by reqs, development of which is ongoing. It is not necessary to do a formal req analysis after the fact.

Noah: Good point; don't rewrite reqs to match the implementation.

DaveO: Will propose text stating the relationship to req doc.

MarkJ: Agree with this approach.

Noah: Grammatical error in 2nd paragraph.

DavidF: Email editorial changes to xml-dist-app list, and Herve can make updates next week.

Jacek: Will post text addressing mention of representation and media types.

TonyG: Property name in section 2.3, "aof:OptimizationCandidates", is not a URI. "aof" prefix is not defined.

DavidF: Mappings between prefixes and namespaces should go at beginning of spec, in a "Notational Convention" section like our other docs.

TonyG: Yes, but property name is still not a URI. Suggest appending "OptimizationCandidates" to the URI in section 2.2.

DavidF: Insert editor's note stating that we have not yet defined URIs for the relevant property names.

TonyG: Other grammatical issues; will send a list of these to xml-dist-app. Next issue: if SOAP modules always add headers, how could this be implemented as a module?

Noah: This is an abstract feature. We will mainly be concerned about ensuring that bindings can implement this, but someone could also do it as a module.

TonyG: Last question is about intermediary handling.

Noah: Section 2.4.3 states this is hop-by-hop. Someone could define an intermediary that reoptimizes the message, for example, but this would be written on top of this building block.

TonyG: Suggest moving 2.4.3 first paragraph to the introduction and remove "However" from the 2nd par.

Noah: Suggest leaving it, but adding a general description of this to intro. Will write this text.

DavidF: Hold off on raising substantial issues until after WD publication, but send editorial changes to xml-dist-app for inclusion by editors.

-- PASWA and XQuery datamodel, see. How shall we proceed in light of this analysis?
DavidF: Noah outlined similarities between PASWA and XQuery data models, so perhaps we could align.

Noah: Do we ever want to represent types other than base64Binary? If so, how are they conveyed? Proposed how to formulate PASWA in terms of the XQuery model, and it did not appear to be difficult.

MarkJ: Would not want to rely on end-to-end schema mechanisms. It should be self-describing.

DavidF: This would be a good topic for F2F, but please continue discussion via email.

The WG thanks DavidF for all his effort in getting us to REC.

-- 431, semantics of attachments and intermediaries, (was Q1 in) No discussion on this issue.

-- 432, how does binding determine what to serialise as attachment, (was Q2 in) No discussion on this issue.

End of meeting.

Source: http://www.w3.org/2000/xp/Group/3/06/25-minutes.html
Can anyone here please point me to use cases for moralis/node?

I am not sure what the intended use for it is supposed to be. Plus, I can't use Moralis with TypeScript currently, and that is such a pain, given that import Moralis from 'moralis/node' would throw an error.

Also, is it possible to leverage the serverUrl directly? If so, what will the authentication mechanism be?

The use case I am trying to leverage is one that treats Moralis like a microservice for my Node.js backend, which will enable us to use the fine aspects of Moralis as an integration layer for all things blockchain.
Not working after GDM 3.16.0.1-1
/usr/bin/gdm3setup:512: Warning: The property GtkWidget:margin-left is deprecated and shouldn't be used anymore. It will be removed in a future version.
self.Builder.add_from_file("/usr/share/gdm3setup/ui/gdm3setup.ui")
/usr/bin/gdm3setup:512: Warning: The property GtkWidget:margin-right is deprecated and shouldn't be used anymore. It will be removed in a future version.
self.Builder.add_from_file("/usr/share/gdm3setup/ui/gdm3setup.ui")
/usr/bin/gdm3setup:512: Warning: The property GtkImage:stock is deprecated and shouldn't be used anymore. It will be removed in a future version.
self.Builder.add_from_file("/usr/share/gdm3setup/ui/gdm3setup.ui")
Package Details: gdm3setup 20150813-1
Dependencies (6)
- gdm (gdm-old, gdm-plymouth, mdm-display-manager, mdm-nosystemd)
- gdm3setup-utils>=20150507
- gnome-shell
- python2-dbus
- python2-lxml
- archlinux-artwork (optional) – Set an Archlinux logo
Required by (0)
Sources (1)
Latest Comments
ErkanMDR commented on 2015-04-10 18:27
Not working after GDM 3.16.0.1-1
NanoArch commented on 2013-11-07 17:19
GDM no longer have a wallpaper, but you can replace noise-texture.png in the gnome-shell theme by a wallpaper.
Rasi commented on 2013-11-05 13:16
How do you set a wallpaper with this? I can't see any option to do this...
jtaylor991 commented on 2013-10-26 19:11
Output from makepkg:
==> ERROR: Cannot find the fakeroot binary required for building as non-root user.
==> ERROR: Cannot find the strip binary required for object file stripping.
step-2 commented on 2013-06-23 00:19
as NanoArch suggested , adding the backgrounds to /usr/share/gnome-background-properties.xml fixes the issue , thanks .
NanoArch commented on 2013-06-22 16:51
@ step-2
like for gnome-control-center you need to create an xml file in /usr/share/gnome-background-properties
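For reference, the file NanoArch describes follows the gnome-wp-list format used by gnome-control-center. A minimal sketch (the wallpaper name and path below are made-up examples; substitute your own) could look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE wallpapers SYSTEM "gnome-wp-list.dtd">
<wallpapers>
  <wallpaper deleted="false">
    <name>My Wallpaper</name>
    <filename>/usr/share/backgrounds/my-wallpaper.jpg</filename>
    <options>zoom</options>
  </wallpaper>
</wallpapers>
```

saved as, e.g., /usr/share/gnome-background-properties/my-wallpaper.xml.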
step-2 commented on 2013-06-22 13:04
Hi there,
I can't seem to get my wallpaper to show up in the gdm3setup dialog.
Copying the wallpapers to /usr/share/backgrounds & /usr/share/themes/Adwaita/backgrounds doesn't help.
Any solution?
NanoArch commented on 2013-04-18 18:51
Problem corrected by reinstalling gdm3setup-utils
step-2 commented on 2013-04-11 15:07
doesn't launch under gnome 3.8
Traceback (most recent call last):
File "/usr/bin/gdm3setup", line 1039, in <module>
MainWindow().show()
File "/usr/bin/gdm3setup", line 634, in __init__
proxy = dbus.SystemBus().get_object('apps.nano77.gdm3setup','/apps/nano77/gdm3setup')
apps.nano77.gdm3setup was not provided by any .service files
NanoArch commented on 2012-12-06 17:57
python2-dbus added as a dependency
step-2 commented on 2012-12-05 09:34
cannot get it to launch, all dependencies installed
robvelor commented on 2012-11-25 18:08
I have the same issue as Lantean; I installed python2-lxml and still having this issue.
step-2 commented on 2012-11-14 18:39
Whenever I change the background, it reverts to solid blue.
Anonymous comment on 2012-10-24 03:53
Have the same issue as Lantean.
Installed the python dependency (python2-lxml) manually.
Anonymous comment on 2012-10-23 15:10
gdm3setup-20121011-1 crashes with:
Traceback (most recent call last):
File "/usr/bin/gdm3setup.py", line 911, in <module>
MainWindow().show()
File "/usr/bin/gdm3setup.py", line 532, in __init__
self.get_gdm()
File "/usr/bin/gdm3setup.py", line 622, in get_gdm
self.FALLBACK_LOGO = unquote(get_setting("FALLBACK_LOGO",settings))
File "/usr/bin/gdm3setup.py", line 883, in get_setting
return value
UnboundLocalError: local variable 'value' referenced before assignment
Anonymous comment on 2012-10-11 04:28
I had the same trouble as you donniezazen. In order to get the image to show up properly copy it to /usr/share/backgrounds. Make sure you update the image path(s) in the application to point to the new location of the file.
donniezazen commented on 2012-08-22 04:01
Every time I change my wallpaper it goes back to disabled.
blackout24 commented on 2012-08-14 01:04
Just wanted to say that everything works. It installs fine and I can change my background. Thanks!
billybigrigger commented on 2012-06-30 12:36
i have python2-lxml and gobject installed...still get this error...
[billybigrigger@minion ~]$ /usr/bin/gdm3setup.py
Traceback (most recent call last):
File "/usr/bin/gdm3setup.py", line 10, in <module>
from gi.repository import Gtk
File "/usr/lib/python2.7/site-packages/gi/importer.py", line 76, in load_module
dynamic_module._load()
File "/usr/lib/python2.7/site-packages/gi/module.py", line 224, in _load
overrides_modules = __import__('gi.overrides', fromlist=[self._namespace])
File "/usr/lib/python2.7/site-packages/gi/overrides/Gtk.py", line 1533, in <module>
raise RuntimeError("Gtk couldn't be initialized")
RuntimeError: Gtk couldn't be initialized
rudzha commented on 2012-06-04 10:41
Yup, couldn't start it without python2-lxml
progandy commented on 2012-05-31 20:19
The PKGBUILD is missing the python2 dependencies, e.g. gobject and lxml
step-2 commented on 2012-05-31 09:21
doesn't launch at all
NanoArch commented on 2012-05-01 17:45
The bug in get_gdm.sh is now corrected.
Anonymous comment on 2012-05-01 12:16
as a workaround you can comment out line 484 and 499 in /usr/bin/gdm3setup.py
Worked for me....
Anonymous comment on 2012-04-29 20:50
++
Anonymous comment on 2012-04-29 18:11
i have the same problem as loxley after updating to the last version
mephistopheles commented on 2012-04-28 23:33
loxley
++
loxley commented on 2012-04-28 21:00
Still doesn't work for me either,
Traceback (most recent call last):
File "/usr/bin/gdm3setup.py", line 737, in <module>
MainWindow().show()
File "/usr/bin/gdm3setup.py", line 404, in __init__
self.get_gdm()
File "/usr/bin/gdm3setup.py", line 484, in get_gdm
self.FALLBACK_LOGO = unquote(get_setting("FALLBACK_LOGO",settings))
File "/usr/bin/gdm3setup.py", line 709, in get_setting
return value
UnboundLocalError: local variable 'value' referenced before assignment
unknwn commented on 2012-04-28 17:26
Still doesn't work for me with gnome 3.4.
NanoArch commented on 2012-04-28 13:18
Now compatible with gnome 3.4
Anonymous comment on 2012-04-25 08:04
I confirm too. Maybe the new gnome 3.4?
Anonymous comment on 2012-04-25 07:07
I can confirm that it's not working in gnome-shell 3.4.1, with exactly the same error messages as leprechau.
Anonymous comment on 2012-04-24 14:31
possibly an upstream issue but gdm3setup is no longer working for me with gnome-shell 3.4.1 from extra:
[ahurt@ahurt1 ~]$ gksu gdm3setup.py
Traceback (most recent call last):
File "/usr/bin/gdm3setup.py", line 700, in <module>
MainWindow().show_all()
File "/usr/bin/gdm3setup.py", line 398, in __init__
self.get_gdm()
File "/usr/bin/gdm3setup.py", line 476, in get_gdm
self.SHELL_LOGO = unquote(get_setting("SHELL_LOGO",settings))
File "/usr/bin/gdm3setup.py", line 672, in get_setting
return value
UnboundLocalError: local variable 'value' referenced before assignment
This happens with or without prepending with gksu
ying commented on 2012-01-22 14:25
Please add gksu to the exec command in the desktop starter file.
mstarzyk commented on 2012-01-21 20:53
One file is missing. Please apply this patch:
mstarzyk commented on 2012-01-21 20:48
One file is missing. Please apply this patch:
--- PKGBUILD 2012-01-09 13:24:41.000000000 +0100
+++ PKGBUILD.new 2012-01-21 21:46:54.000000000 +0100
@@ -40,6 +40,7 @@
install --mode=755 -D ${srcdir}/gdm3setup/get_gdm.sh ${pkgdir}/usr/bin/get_gdm.sh
install --mode=755 -D ${srcdir}/gdm3setup/set_gdm.sh ${pkgdir}/usr/bin/set_gdm.sh
install -D ${srcdir}/gdm3setup/gdm3setup.desktop ${pkgdir}/usr/share/applications/gdm3setup.desktop
+ install -D ${srcdir}/gdm3setup/gdm3setup.ui ${pkgdir}/usr/share/gdm3setup/ui/gdm3setup.ui
install -D ${srcdir}/gdm3setup/apps.nano77.gdm3setup.service ${pkgdir}/usr/share/dbus-1/system-services/apps.nano77.gdm3setup.service
install -D ${srcdir}/gdm3setup/apps.nano77.gdm3setup.service ${pkgdir}/usr/share/dbus-1/services/apps.nano77.gdm3setup.service
install -D ${srcdir}/gdm3setup/apps.nano77.gdm3setup.conf ${pkgdir}/etc/dbus-1/system.d/apps.nano77.gdm3setup.conf
Anonymous comment on 2011-11-19 13:12
Cloning 'gdm3setup' to 'gdm3setup-build' is useless. There is no change to the source.
NanoArch commented on 2011-11-14 17:02
I have added an option to change the font for the GTK UI; for the gnome-shell UI you need to edit the theme.
Anonymous comment on 2011-10-27 11:00
How can I change the font of gdm3 ?
caspian commented on 2011-10-03 08:39
The apply button is always greyed out for me. Even when I run the application with gksu or sudo from the terminal...
Anonymous comment on 2011-09-02 13:27
easy to use
Anonymous comment on 2011-08-25 17:32
Works great
robvelor commented on 2011-08-01 23:40
Thanks, works great.
arriagga commented on 2011-06-27 11:35
you should include 'gksu' on Depends.
pezz commented on 2007-01-01 00:57
Not sure if this is a bug in GDM3 in general, but disabling the user list doesn't work.
pezz commented on 2007-01-01 00:56
Not sure if this is a bug in GDM3 in general, but disabling the user list doesn't work. | https://aur.archlinux.org/packages/gdm3setup/?ID=50232&comments=all | CC-MAIN-2016-40 | refinedweb | 1,583 | 51.24 |
TTabCom

This class performs basic tab completion. You should be able to hit [TAB] to complete a partially typed:

      username
      environment variable
      preprocessor directive
      pragma
      filename (with a context-sensitive path)
      public member function or data member (including base classes)
      global variable, function, or class name

Also, something like

      someObject->Func([TAB]
      someObject.Func([TAB]
      someClass::Func([TAB]
      someClass var([TAB]
      new someClass([TAB]

will print a list of prototypes for the indicated method or constructor.

Current limitations and bugs:

 1. you can only use one member access operator at a time. eg, this will work:
       gROOT->GetListOfG[TAB]
    but this will not:
       gROOT->GetListOfGlobals()->Conta[TAB]
 2. nothing is guaranteed to work on windows or VMS (for one thing, /bin/env and /etc/passwd are hardcoded)
 3. CINT shortcut #2 is deliberately not supported. (using "operator.()" instead of "operator->()")
 4. most identifiers (including C++ identifiers, usernames, environment variables, etc) are restricted to this character set: [_a-zA-Z0-9] therefore, you won't be able to complete things like operator new, operator+, etc
 5. ~whatever[TAB] always tries to complete a username. use whitespace (~ whatever[TAB]) if you want to complete a global identifier.
 6. CINT shortcut #3 is not supported when trying to complete the name of a global object. (it is supported when trying to complete a member of a global object)
 7. the list of #pragma's is hardcoded (ie not obtained from the interpreter at runtime) ==> user-defined #pragma's will not be recognized
 8. the system include directories are also hardcoded because i don't know how to get them from the interpreter. fons, maybe they should be #ifdef'd for the different systems?
 9. the TabCom.FileIgnore resource is always applied, even if you are not trying to complete a filename.
10. anything in quotes is assumed to be a filename, so (among other things) you can't complete a quoted class name: eg,
       TClass class1( "TDict[TAB]   // this won't work... looks for a file in pwd starting with TDict
11. the prototypes tend to omit the word "const" a lot. this is a problem with ROOT or CINT.
12. when listing ambiguous matches, only one column is used, even if there are many completions.
13. anonymous objects are not currently identified so, for example,
       root> printf( TString([TAB
    gives an error message instead of listing TString's constructors. (this could be fixed)
14. the routine that adds the "appendage" isn't smart enough to know if it's already there:
       root> TCanvas::Update()   press [TAB] here
                             ^
       root> TCanvas::Update()()
    (this could be fixed)
15. the appendage is only applied if there is exactly 1 match. eg, this
       root> G__at[TAB]
       root> G__ateval
    happens instead of this
       root> G__at[TAB]
       root> G__ateval(
    because there are several overloaded versions of G__ateval(). (this could be fixed)
Clear classes and namespace collections.
Forget all Cpp directives seen so far.
Forget all environment variables seen so far.
Close all files.
Forget all global functions seen so far.
Forget all global variables seen so far.
Forget all pragmas seen so far.
Close system files.
Forget all user seen so far.
Do the class rehash.
Cpp rehashing.
Environment variables rehashing.
Close files.
Reload global functions.
Reload globals.
Reload pragmas.
Reload system include files.
Reload users.
Return the list of classes.
Return the list of CPP directives.
"path" should be initialized with a colon separated list of system directories
Uses "env" (Unix) or "set" (Windows) to get list of environment variables.
Return the list of globals.
Return the list of global functions.
Return the list of pragmas
Return the list of system include files.
reads from "/etc/passwd"
[static utility function] if all the strings in "*pList" have the same ith character, that character is returned. otherwise 0 is returned. any string "s" for which "ExcludedByFignore(s)" is true will be ignored unless All the strings in "*pList" are "ExcludedByFignore()" in addition, the number of strings which were not "ExcludedByFignore()" is returned in "nGoodStrings".
[static utility function] adds a TObjString to "*pList" for each entry found in the system directory "dirName" directories that do not exist are silently ignored.
[static utility function] returns a colon-separated string of directories that CINT will search when you call #include<...> returns empty string on failure.
[static utility function] calls TSystem::GetPathInfo() to see if "fileName" is a system directory.
[static utility function] creates a list containing the full path name for each file in the (colon separated) string "path1" memory is allocated with "new", so whoever calls this function takes responsibility for deleting it.
[static utility function] calling "NoMsg( errorLevel )", sets "gErrorIgnoreLevel" to "errorLevel+1" so that all errors with "level < errorLevel" will be ignored. calling the function with a negative argument (e.g., "NoMsg( -1 )") resets gErrorIgnoreLevel to its previous value.
[private]
[private]
[private]
[private]
[private]
Same as above but does not print the error message.
[private] (does some specific error handling that makes the function unsuitable for general use.) returns a new'd TClass given the name of a variable. user must delete. returns 0 in case of error. if user has operator.() or operator->() backwards, will modify: context, *fpLoc and fBuf. context sensitive behavior.
[private]
Returns the place in the string where to put the \0, starting the search from "start" | http://root.cern.ch/root/html520/TTabCom.html | crawl-003 | refinedweb | 882 | 58.79 |
Migrating from BundleWrap 2.x to 3.x
As per semver, BundleWrap 3.0 breaks compatibility with repositories created for BundleWrap 2.x. This document provides a guide on how to upgrade your repositories to BundleWrap 3.x. Please read the entire document before proceeding.
metadata.py
BundleWrap 2.x simply used all functions in metadata.py whose names don't start with an underscore as metadata processors. This led to awkward imports like "from foo import bar as _bar". BundleWrap 3.x requires a decorator for explicitly designating functions as metadata processors:
@metadata_processor
def myproc(metadata):
    return metadata, DONE
You will have to add @metadata_processor to each metadata processor function. There is no need to import it; it is provided automatically, just like node and repo.
The accepted return values of metadata processors have changed as well. Metadata processors now always have to return a tuple, with the first element being a dictionary of metadata and the remaining elements made up of various options to tell BundleWrap what to do with the dictionary. In most cases, you will want to return the DONE option as in the example above. There is no need to import options; they're always available.
When you previously returned metadata, False from a metadata processor, you will now have to return metadata, RUN_ME_AGAIN. For a more detailed description of the available options, see the documentation.
File and directory ownership defaults
Files, directories, and symlinks now have default values for the ownership and mode attributes. Previously the default was to ignore them. It's very likely that you won't have to do anything here, just be aware.
systemd services enabled by default
Again, just be aware, it's probably what you intended anyway.
Environment variables
The following env vars have been renamed (though the new names have already been available for a while, so chances are you're already using them):
Item.display_keys and Item.display_dicts
If you've written your own items and used the display_keys() or display_dicts() methods or the BLOCK_CONCURRENT attribute, you will have to update them to the new API.
If you are an advanced user of drawRect on your ipop*, you will know that of course drawRect will not actually run until "all processing is finished." setNeedsDisplay flags a view as invalidated and the OS, in a word, waits until all processing is done. This can be infuriating in the common situation where you want to have:
1. a view controller
2. starts some function
3. which incrementally
4. creates a more and more complicated artwork and
5. at each step, you setNeedsDisplay (wrong!)
6. until all the work is done
Of course, when you do that, all that happens is that drawRect is run once only after step 6. What you want is for the ^&£@%$@ view to be refreshed at point 5. This can lead to you smashing your ipops on the floor, scanning Stackoverflow for hours, screaming at the kids more than necessary about the dangers of crossing the street, etc etc. What to do?
Footnotes:
* ipop: i Pad Or Phone !
Solution to the original question..............................................
In a word, you can (A) background the large painting, and call to the foreground for UI updates or (B) arguably controversially there are four 'immediate' methods suggested that do not use a background process. For the result of what works, run the demo program. It has #defines for all five methods.
Truly astounding alternate solution introduced by Tom Swift..................
Tom Swift has explained the amazing idea of quite simply manipulating the run loop. Here's how you trigger the run loop:
[[NSRunLoop currentRunLoop] runMode: NSDefaultRunLoopMode beforeDate: [NSDate date]];
This is a truly amazing piece of engineering. Of course one should be extremely careful when manipulating the run loop and as many pointed out this approach is strictly for experts.
The Bizarre Problem That Arises..............................................
Even though a number of the methods work, they don't actually "work" because there is a bizarre progressive-slow-down artifact you will see clearly in the demo.
Scroll to the 'answer' I pasted in below, showing the console output - you can see how it progressively slows.
Here's the new SO question:
Mysterious "progressive slowing" problem in run loop / drawRect.
Here is V2 of the demo app...
You will see it tests all five methods,
#ifdef TOMSWIFTMETHOD [self setNeedsDisplay]; [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:[NSDate date]]; #endif #ifdef HOTPAW [self setNeedsDisplay]; [CATransaction flush]; #endif #ifdef LLOYDMETHOD [CATransaction begin]; [self setNeedsDisplay]; [CATransaction commit]; #endif #ifdef DDLONG [self setNeedsDisplay]; [[self layer] displayIfNeeded]; #endif #ifdef BACKGROUNDMETHOD // here, the painting is being done in the bg, we have been // called here in the foreground to inval [self setNeedsDisplay]; #endif
You can see for yourself which methods work and which do not.
you can see the bizarre "progressive-slow-down". why does it happen?
you can see with the controversial TOMSWIFT method, there is actually no problem at all with responsiveness. tap for response at any time. (but still the bizarre "progressive-slow-down" problem)
So the overwhelming thing is this weird "progressive-slow-down": on each iteration, for unknown reasons, the time taken for a loop deceases. Note that this applies to both doing it "properly" (background look) or using one of the 'immediate' methods.
Practical solutions ........................
For anyone reading in the future, if you are actually unable to get this to work in production code because of the "mystery progressive slowdown" ... Felz and Void have each presented astounding solutions in the other specific question, hope it helps. | https://blog.csdn.net/dqjyong/article/details/17204991 | CC-MAIN-2019-26 | refinedweb | 570 | 61.06 |
In Emacs, you can perform dissociation based on the current buffer using "M-x dissociated-press RET". See that function's documentation string ("C-h f dissociated-press RET") for details on how to specify k and the use of letter or word statistics.
In practice, most implementations don't do a real Markov chain, but instead do the following:
Diss.
#!/usr/bin/env python2
from random import choice   # was "from whrandom import choice"; whrandom no longer exists
from sys import stdin
from time import sleep

dict = {}

def dissociate(sent):
    """Feed a sentence to the Dissociated Press dictionary."""
    words = sent.split(" ")
    words.append(None)
    for i in xrange(len(words) - 1):
        if dict.has_key(words[i]):
            if dict[words[i]].has_key(words[i+1]):
                dict[words[i]][words[i+1]] += 1
            else:
                dict[words[i]][words[i+1]] = 1
        else:
            dict[words[i]] = { words[i+1]: 1 }

def associate():
    """Create a sentence from the Dissociated Press dictionary."""
    w = choice(dict.keys())
    r = ""
    while w:
        r += w + " "
        p = []
        for k in dict[w].keys():
            p += [k] * dict[w][k]
        w = choice(p)
    return r

if __name__ == '__main__':
    while 1:
        s = stdin.readline()
        if s == "": break
        dissociate(s[:-1])
    print "=== Dissociated Press ==="
    try:
        while 1:
            print associate()
            sleep(1)
    except KeyboardInterrupt:
        print "=== Enough! ==="
This code may be used from the command line or as a Python module. The command-line handler (the last chunk of code, beginning with if __name__ == '__main__') reads one line at a time from standard input, and treats each line as a sentence. When it reaches EOF, it begins printing one dissociated sentence per second.
The dissociate function stores frequency information about successive words in the global dictionary dict. That is to say: Every word in the input text occurs as a key in dict. The value of dict[foo], for some word foo, is itself a dictionary. It stores the words which have occurred immediately after foo in the source text, and the number of times they have done so. The end of a sentence is represented with the null value None.
The associate function creates a new sentence based on the frequency information in dict. It begins a sentence with a random word from the source text. Next, it uses dict to select a word which, in the original text, followed the first word. The probability of each possible word being selected is based on that word's frequency following the first word in the original text. If the "word" selected is the None value, the sentence is complete; otherwise, it picks a new word that has followed the last.
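For readers on Python 3, where has_key, xrange, and the print statement no longer exist, the same frequency-weighted word selection can be restated with the standard library. The function names below are my own, not part of the node's script:

```python
import random
from collections import defaultdict

def train(sentences):
    """Build the word -> {next_word: count} table; None marks sentence end."""
    table = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        words = sent.split()
        for cur, nxt in zip(words, words[1:] + [None]):
            table[cur][nxt] += 1
    return table

def babble(table, rng=random):
    """Emit one sentence, choosing each next word with frequency weighting."""
    word = rng.choice(sorted(table))
    out = []
    while word is not None:
        out.append(word)
        followers = table[word]
        # random.choices picks one key, weighted by how often it followed 'word'
        word = rng.choices(list(followers), weights=list(followers.values()))[0]
    return " ".join(out)
```

train builds the same word-to-successor-counts table the node describes, and babble walks it, weighting each step by observed frequency, until it draws the None end-of-sentence marker.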
Here is a sample source text file. Please note the lack of punctuation; this program isn't smart enough to deal with it appropriately.
all your base are belong to us
everything is a base for soy lesbians
are you all monkeys
monkeys lesbians and soy are good for you
now is the time for all good lesbians to come to the aid of their monkeys
good monkeys belong in a zoo
on everything you can meet a zoo full of lesbians
And here is a sample of the output based on these sentences:
meet a zoo full of their monkeys lesbians and soy are you
a zoo full of their monkeys
on everything is the time for soy are good monkeys lesbians
time for all your base for you can meet a base are belong to us
good monkeys belong to us
base are belong to come to us
now is the time for all your base are you can meet a zoo
is a zoo
now is the time for soy are good lesbians and soy are good monkeys belong to the time for all monkeys
full of lesbians
base for all good monkeys
lesbians and soy lesbians to the time for soy lesbians
08 July 2009 15:14 [Source: ICIS news]
TORONTO (ICIS news)--Celanese expects to stop all production of acetic acid and vinyl acetate monomer (VAM) at its site in Pardies, France, by December, the US-based chemicals major said on Wednesday.
The announcement follows an agreement Celanese reached with the site’s works council, it said.
Celanese first said in January that it was considering closing the site, which it acquired through an earlier takeover.
Closure expenses for Pardies were around $90m-100m (€65m-72m), which Celanese would record in the second half of 2009, it said.
The Pardies site has capacities of 450,000 tonnes/year of acetic acid and 150,000 tonnes/year of VAM.
($1 = €0.72)
I still haven't used turbogears myself, so, in the end, this might be interesting for other users that are starting in the turbogears world themselves ;-)
1st step: getting turbogears
OK: downloaded version 0.8.9, which seems to be the latest release
2nd step: installing it
Running setup.py install did some nice things for me... it downloaded setuptools automatically (as it was a pre-requisite) and then proceeded getting other dependencies (egg files) and installed all those without further problems (yeap, many dependencies there).
Now, I still don't have docutils (which it says will make it more 'fun') nor a database, so, I'll get it before I proceed.
It says to use easy_install to get docutils, but it needs some search to know exactly what it means (turns out to be a script installed in my computer at C:\bin\Python24\Scripts\easy_install.exe)
Turns out it didn't do its job:
[C:\bin]c:\bin\Python24\Scripts\easy_install.exe docutils
Searching for docutils
Reading
Reading
Best match: docutils 0.4
Downloading
Requesting redirect to (randomly selected) 'mesh' mirror
error: No META HTTP-EQUIV="refresh" found in Sourceforge page at
docutils/docutils-0.4.tar.gz?use_mirror=mesh
So, I went on to get it myself and then proceeded to get pysqlite ().
Now that all the TurboGears prerequisites are there, let's see how pydev looks... First, my interpreter configuration has become obsolete, so all those libraries that were added later are not there. To bring it up to date, I have to remove the interpreter and add it again, so that it picks up all the new libraries (version 1.2.0 of pydev and earlier had a 'cache' bug, so you'd need to remove the interpreter, press apply, and only then proceed to add the interpreter again).
Now, just to make sure pysqlite is there, create the following script to see if it works:
from pysqlite2 import dbapi2 as sqlite
con = sqlite.connect("mydb")
print con
Ok, connection there, and a mydb file created, so, everything seems fine.
I decided to go with the 20-minute Flash demo that TurboGears has there... (now that I'm revising this, I'd advise against it and would go with the wiki tutorial, as the Flash version hides too many things). It starts by creating a project with tg_admin quickstart. So, I decided to create a new pydev project from eclipse first, placed it at d:\turbogears, and ran tg_admin quickstart from within that directory to get started.
I called the project "wiki test" and it created all the structure without any problems, and starting it with wikitest-start.py actually started the server without any problems ()
Now, to configure pydev, you have to add the created folder d:\turbogears\wikitest as a source folder for the project (this would give you code-completion and code-analysis for that project). Details on how to do that can be found at
Ok, now on to the database creation: got some things mixed up here because the video didn't specify the dev.cfg changes (but the written one does, so, not such a big problem).
Kid template editing: I'd recommend having WTP (Web Tools Platform) to edit html files... surely makes things much nicer.
Interesting point: I've just now noted that turbogears spawns 2 shells when I run the server, I think that one is the http server and the other some auto-reloader, but this makes some things much harder:
1. If you run it from inside pydev and kill the shell from the eclipse console, the other process will not die (and as you don't have Ctrl+C in that console, you will have to keep killing the other shell manually).
2. You just won't be able to debug it from pydev, as that other shell will be the one that will actually serve the requests (although you can do it with the remote debugger from pydev extensions).
So, after checking its code a little I've seen:
if conf('autoreload.on', defaultOn): ... do the auto-reload stuff. So, you can set autoreload.on=False in the configuration and run it from pydev, so that you can debug it. But this way you'll lose the auto-reloader, so I guess you'll have to decide: the regular pydev debugger without auto-reload, or the pydev extensions remote debugger with the auto-reload (in this case, you could even run it from outside of Eclipse).
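For reference, the change described above is a single line in the project's dev.cfg. The key name is taken from the code snippet just quoted; the surrounding file layout may differ between TurboGears versions:

```ini
# dev.cfg: disable the auto-reloader so the regular pydev debugger works
autoreload.on = False
```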
So, I guess that's it. Pydev is configured up to the point where you can use code-completion, the debugger and code-analysis without any problems.
Cheers,
Fabio
10 comments:
Hello:
I haven't tried the debug function, but it looks great for pydev/web tools editor for TurboGears.
I also wrote something more on my blog.
cheers
Nice howto ! perhaps a pydev & django could be a good shot also ... if you get the chance !
dude, how do you configure eclipse so that the KID files on turbogears will have syntax coloring?
I think you could have WTP installed for the syntax highlighting (there's currently no option for highlighting code in KID for python code).
I have run the turbogears with pydev.
But that can not debug. All of the breakpoints I set are not stop while I operate by browser.
To get WTP working on KID files, you have to go to Preferences - General\Content Types and add *.kid to the Text\HTML entry.
These comments have been invaluable to me as is this whole site. I thank you for your comment.
Can you write a similar post for TurboGears2? I'd like to link to it from here:
Python IDEs with TurboGears2 Support
Thanks,
Robert
@Robert
Will try to do it, although I can't promise on a date right now...
Best Regards,
Fabio | http://pydev.blogspot.com/2006/07/configuring-pydev-to-work-with.html?showComment=1182362400000 | CC-MAIN-2017-17 | refinedweb | 980 | 70.53 |
private keyword in Java
The private keyword or modifier in Java can be applied to a member field, method or nested class. You cannot use the private modifier on a top-level class. Private variables, methods and classes are only accessible in the class in which they are declared. private is the highest form of encapsulation the Java API provides and should be used as much as possible. It is best coding practice in Java to declare variables private by default. A private method can only be called from the class where it is declared, and as per the rules of method overriding in Java, a private method cannot be overridden either. The private keyword can also be applied to a constructor, and if you make a constructor private you prevent the class from being sub-classed. A popular example of making a constructor private is the Singleton class in Java, which provides a getInstance() method to get the object instead of creating a new object using the constructor. Here are some differences between private and protected, public and package-level access.
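A minimal sketch of the Singleton idea mentioned above: because the constructor is private, no other class can call `new Singleton()`, which also prevents subclassing.

```java
// Singleton with a private constructor, as described in the text.
class Singleton {
    private static final Singleton INSTANCE = new Singleton();

    // private constructor: only reachable from inside this class,
    // so it cannot be instantiated or subclassed from outside
    private Singleton() {
    }

    public static Singleton getInstance() {
        return INSTANCE;
    }
}
```

Every call to getInstance() returns the same object, which is the whole point of the pattern.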
package or default access level in Java
There is no access modifier called package; instead, package is a keyword used to declare a package in Java, and a package is a directory to which a class in Java belongs. Package (default) access is the second most restrictive access level after private, and any variable, method or class declared as package-private is only accessible within the package it belongs to. A good thing about the default modifier is that a top-level class can also be package-private if there is no class-level access modifier.
protected keyword in Java
The difference between the private and protected keywords is that a protected method, variable or nested class is accessible not only inside its class and inside its package, but also outside the package in a subclass. If you declare a variable protected, it means anyone can use it if they extend your class. A top-level class cannot be made protected either.
public keyword in Java
public is the least restrictive access modifier in the Java programming language, and it is bad practice to declare fields, methods or classes public by default, because once you make something public it is very difficult to change the internal structure of the class, as any change affects all clients using it. Making a class or instance variable public also violates the principle of encapsulation, which is not good at all and affects maintenance badly. Instead of making a variable public, you should make it private and provide a public getter and setter. The public modifier can also be applied to a top-level class; in Java, the name of the source file must match the public class declared in the file.
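A sketch of the getter/setter advice above: keep the field private and expose it through public accessors, so the class stays free to change its internals later. The Account/balance names here are my own illustration:

```java
// Encapsulation: private field, public accessors.
class Account {
    private int balance; // invisible outside this class

    public int getBalance() {
        return balance;
    }

    public void setBalance(int balance) {
        // A setter can enforce invariants that a public field could not.
        if (balance < 0) {
            throw new IllegalArgumentException("balance must be >= 0");
        }
        this.balance = balance;
    }
}
```

If the internal representation changes later (say, to a long, or a value fetched lazily), callers of getBalance() and setBalance() are unaffected.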
That's all on the difference between the private, protected, package and public access modifiers. As you have seen, the difference between private and public lies in how accessible a particular field, method or class is. public means you can access it anywhere, while private means you can only access it inside its own class.
Just to note: the private, protected and public modifiers are not applicable to local variables in Java. A local variable can only be final in Java.
3 comments :
"Just to note all private, protected or public modifier are not applicable to local variables in Java."
Doesn't the term "local variable" indicate that the subject is local in scope? Access modifiers apply only to things that can be shared. Right??
Indeed, but given it's obvious that local variables are only accessible in the block in which they are declared, many programmers get confused if you ask whether you can use the public, private or protected keyword on one...?
Inside the ASP.Net Worker Process there are two thread pools. The worker thread pool handles all incoming requests and the I/O Thread pool handles the I/O (accessing the file system, web services and databases, etc.). Each App Domain has its own thread pool and the number of operations that can be queued to the thread pool is limited only by available memory; however, the thread pool limits the number of threads that can be active in the process simultaneously.
Source: Microsoft Tech Ed 2007 DVD: Web 405 "Building Highly Scalable ASP.NET Web Sites by Exploiting Asynchronous Programming Models" by Jeff Prosise.
So how many threads are there in these thread pools? I had always assumed that the number of threads varies from machine to machine – that ASP.NET and IIS were carefully and cleverly balancing the number of available threads against available hardware, but that is simply not the case. The fact is that ASP.Net installs with a fixed, default number of threads to play with: the 1.x Framework defaults to just 20 worker threads (per CPU) and 20 I/O threads (per CPU). The 2.0 Framework defaults to 100 threads in each pool, per CPU. Now this can be increased by adding some new settings to the machine.config file. The default worker thread limit was raised to 250 per CPU and 1000 I/O threads per CPU with the .NET 2.0 SP1 and later Frameworks. 32 bit windows can handle about 1400 concurrent threads, 64 bit windows can handle more, though I don’t have the figures.
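For reference, the thread limits live in the processModel section of machine.config. The attribute names below are the real ones; the values are just the per-CPU figures quoted above, so treat this as a sketch rather than a recommendation:

```xml
<system.web>
  <!-- autoConfig must be false before the manual limits take effect -->
  <processModel autoConfig="false"
                maxWorkerThreads="250"
                maxIoThreads="1000" />
</system.web>
```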
In a normal (synchronous) Page Request a single worker thread handles the entire request from the moment it is received until the completed page is returned to the browser. When the I/O operation begins, a thread is pulled from the I/O thread pool, but the worker thread is idle until that I/O thread returns. So, if your page load event fires off one or more I/O operations, then that main worker thread could be idle for 1 or more seconds and in that time it could have serviced hundreds of additional incoming page requests.
So long as the number of concurrent requests does not exceed the number of threads available in the pool, all is well. But when you are building enterprise-level applications the thread pool can become depleted under heavy load, and remember that by default heavy load means little more than 200 simultaneous requests, assuming a dual-CPU server. When this happens, new requests are entered into the request queue (and the users making the requests watch that little hourglass spin and consider trying another site). This problem may not be reproducible in your test environment, as it only happens under extreme load.
To solve this problem ASP.NET provides four asynchronous programming models: Asynchronous Pages, Asynchronous HttpHandlers, Asynchronous HttpModules and Asynchronous Web Services. The only one that is well documented and reasonably well known is the asynchronous Web Services model. Since there is quite a lot of documentation on that, and since in future web services should be implemented using the Windows Communication Foundation, we shall concentrate only on the other three. Let's begin with the first asynchronous programming model, Asynchronous Pages.
To make a page asynchronous, we insert what we refer to as an "Async Point" into that page's lifecycle (shown in green in the slide from the Tech Ed session cited above). We need to write and register with ASP.NET a pair of Begin and End events. At the appropriate point in the page's lifecycle, ASP.NET will call our begin method. In the begin method we will launch an asynchronous I/O operation, for example an asynchronous database query, and we will immediately return from the begin method. As soon as we return, ASP.NET will drop the thread that was assigned to that request back into the thread pool, where it may service hundreds or even thousands of additional page requests while we wait for our I/O operation to complete. As you'll see when we get to the sample code, we return from our begin method an IAsyncResult interface, through which we can signal ASP.NET when the async operation that we launched has completed. It is when we do that, that ASP.NET reaches back into the thread pool, pulls out a second worker thread and calls our end method, and then allows the processing of that request to resume as normal. So, from ASP.NET's standpoint it is just a normal request, but it is processed by 2 different threads; and that will bring up a few issues that we'll need to discuss in a few moments. Now, none of this was impossible with the 1.1 framework, but it was a lot of extra work, and you lost some of the features of ASP.NET in the process. The beauty of the 2.0 and later frameworks is that this functionality is built right into the HTTP pipeline, and so for the most part everything works in the asynchronous page just as it did in the synchronous one.
In order to create an Asynchronous page you need to include the Async=”True” attribute in the page directive of your .aspx file. That directive tells the ASP.NET engine to implement an additional Interface on the derived page class which lets ASP.NET know at runtime that this is an asynchronous page. What happens if you forget to set that attribute? Well the good news is that the code will still run just fine, but it will run synchronously, meaning that you did all that extra coding for nothing. I should also point out that to make an Asynchronous data call, you also need to add “async=true;” or “Asynchronous Processing=true;” to your connection string – If you forget that and make your data call asynchronously, you will get a SQL Exception. The second thing we need to do in order to create an asynchronous page is to register Begin and End Events. There are 2 ways to register these events. The first way is to use a new method introduced in ASP.NET 2.0 called AddOnPreRenderCompleteAsync:
using System;
using System.Net;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
public partial class temp : System.Web.UI.Page
{
    private static readonly Uri c_UrlImage1 = new Uri(@"");
    private HttpWebRequest request;

    void Page_Load(object sender, EventArgs e)
    {
        request = (HttpWebRequest)WebRequest.Create(c_UrlImage1);
        AddOnPreRenderCompleteAsync(
            BeginAsyncOperation,
            EndAsyncOperation
        );
    }

    IAsyncResult BeginAsyncOperation(object sender, EventArgs e,
        AsyncCallback cb, object state)
    {
        // Begin async operation and return IAsyncResult
        return request.BeginGetResponse(cb, state);
    }

    void EndAsyncOperation(IAsyncResult ar)
    {
        // Get results of async operation
        HttpWebResponse response = (HttpWebResponse)request.EndGetResponse(ar);
        Label1.Text = String.Format("Image at {0} is {1:N0} bytes",
            response.ResponseUri, response.ContentLength);
    }
}
The second way is to use RegisterAsyncTask:
PageAsyncTask task = new PageAsyncTask(
    BeginAsyncOperation,
    EndAsyncOperation,
    TimeoutAsyncOperation,
    null
);
RegisterAsyncTask(task);

void TimeoutAsyncOperation(IAsyncResult ar)
{
    // Called if async operation times out (@ Page AsyncTimeout)
    Label1.Text = "Data temporarily unavailable";
}
These methods can be called anywhere in the page’s lifecycle before the PreRender event, and are typically called from the Page_Load event or from the click event of a button during a postback. By the way, you can register these methods from within a UserControl, as long as that control is running on a page that has set the async = true attribute. Again, if it runs on a page without that attribute, the code will still run just fine, but it will run synchronously.
As you can see from just these simple examples, building asynchronous pages is more difficult than building synchronous ones. I’m not going to lie to you. And real world use of these techniques is even more complicated – there is no Business Logic or data layer in the examples above. I don’t want you to leave here believing that you need to make every page asynchronous. You don’t. What I recommend, is doing surgical strikes. Identify that handful of pages in your application that perform the lengthiest I/O and consider converting those into asynchronous pages. The cool thing about this, is that it can improve not only scalability, but also performance, because when you are not holding onto the threads, new requests get into the pipeline faster, they spend less time waiting in that application request queue out there. So, users are happier because pages that they would have had to wait on before – even the ones you have not converted to asynchronous pages, but which might have been delayed while threads were idle, will now load faster. What’s more, as you’ll see in a moment, using RegisterAsyncTask will allow you to perform I/O operations in parallel, which may also improve performance. Having said that, making pages asynchronous is not really about improving performance, it is about improving scalability – making sure that we use the threads in the thread pool as efficiently as we possibly can.
Now I’m sure you are wondering why there are two ways, what the differences are between them, and when you should choose one over the other. Well, there are 3 important differences between AddOnPreRenderCompleteAsync and RegisterAsyncTask.
OK. So I expect some of you are thinking "But what if I have a data access layer in my application? My pages can't go directly to the database, they have to go through that data access layer, or they have to go through my BLL, which calls the Data Access Layer." Well, ideally, you should simply add the asynchronous methods to your DAL. If you wrote the DAL yourself or have access to its source code, you should add the Begin and End methods to it. Adding the asynchronous methods to your DAL is the best, most scalable solution and doesn't change the example code much at all: instead of calling begin and end methods defined inside the page class, you simply call MyDAL.Begin… or MyBll.Begin… when you call RegisterAsyncTask or AddOnPreRenderCompleteAsync.
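As a sketch of what that looks like, here is a task registered against a hypothetical DAL; MyDal, GridView1 and the Begin/EndGetOrders method names are all made up for illustration:

```csharp
void Page_Load(object sender, EventArgs e)
{
    // Hypothetical DAL pair: MyDal.BeginGetOrders / MyDal.EndGetOrders.
    // The final "true" asks ASP.NET to run this task in parallel with
    // any other registered tasks.
    RegisterAsyncTask(new PageAsyncTask(
        delegate(object s, EventArgs ea, AsyncCallback cb, object state)
        {
            return MyDal.BeginGetOrders(cb, state);
        },
        delegate(IAsyncResult ar)
        {
            GridView1.DataSource = MyDal.EndGetOrders(ar);
            GridView1.DataBind();
        },
        null,   // no timeout handler for this task
        null,   // no state object
        true)); // may execute in parallel with other tasks
}
```

Register a second task the same way and the two I/O operations can be in flight at the same time, which is where the performance win mentioned earlier comes from.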
Unfortunately, neither Llblgen nor the Enterprise library (nor LINQ for that matter) supports asynchronous data calls natively. However, I believe that you can modify the generated code in llblgen to enable asynchronous data calls. You could also crack open the source code of the Enterprise library and add the asynchronous methods yourself, but before you try check to see if it has already been done.
The 2nd asynchronous programming model in ASP.NET is for HttpHandlers; it has been around since .NET 1.x, but was not documented any better in version 2 than it was in version 1. HTTP Handlers are one of the two fundamental building blocks of ASP.NET: an HTTP handler is an object built to handle HTTP requests and convert them into HTTP responses. For the most part, each handler corresponds to a file type. For example, there is a built-in handler in ASP.NET that handles .aspx files. It is that handler that knows how to instantiate a control tree and send that tree to a rendering engine. The ASMX handler knows how to decode SOAP and allows us to build web services.
Basically, an HTTP Handler is just a class that implements the IHttpHandler interface, which consists of an IsReusable Boolean property and a ProcessRequest method. ProcessRequest is the heart of an HTTP handler, as its job is to turn a request into a response. It is passed an HttpContext object containing all the data ASP.NET has collected about the request, as well as exposing the Session, Server, Request and Response objects that you are used to working with in page requests.
public class HelloHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string name = context.Request["Name"];
        context.Response.Write("Hello, " + name);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
Handlers are commonly used to generate custom XML and RSS feeds, to unzip and render files stored as BLOB fields in the database including image files or logos, HTTP Handlers can also be used as the target of AJAX calls.
A common mistake made by programmers new to .NET, especially those like myself who came from classic ASP or PHP, is to use the Page_Load event of a page to create a new HTTP response. For example, before I learned about HTTP handlers, I would use the page load event to create an XML document or a dynamic PDF file and output it to the response stream, with a Response.End() to prevent the page continuing after I output my file. The problem with that approach is that you are executing a ton of code in ASP.NET that doesn't need to execute. When ASP.NET sees that request come in, it thinks it is going to need to build and render a control tree. By pointing the link at the handler instead, you will gain a 10-20% performance increase every time that request is fetched, just because of the overhead you have reduced. Put simply, HTTP Handlers minimize the amount of code that executes in ASP.NET.
To implement an Asynchronous handler you use the interface IHttpAsyncHandler, which adds BeginProcessRequest and EndProcessRequest methods. The threading works the same way as with an async page. After the begin method is called, the thread returns to the thread pool and handles other incoming requests until the I/O thread completes its work, at which point it grabs a new thread from the thread pool and completes the request.
Page.RegisterAsyncTask cannot be used here, so if you need to run multiple async tasks you will need to implement your own IAsyncResult Interface and pass in your own callbacks to prevent the EndProcessRequest method being called before you have completed all your async operations.
HTTP Modules are another fundamental building block of ASP.NET. They don’t handle requests, instead they sit in the HTTP Pipeline where they have the power to review every request coming in and every response going out. Not only can they view them, but they can modify them as well. Many of the features of ASP.NET are implemented using httpmodules: authentication, Session State and Caching for example, and by creating your own HTTP Modules you can extend ASP.NET in a lot of interesting ways. You could use an HTTP Module for example to add google analytics code to all pages, or a custom footer. Logging is another common use of HTTP Modules. E-Commerce web sites can take advantage of HTTP Modules by overriding the default behavior of the Session Cookie. By default, ASP.NET Session Cookies are only temporary, so if you use them to store shopping cart information, after 20 minutes of inactivity, or a browser shut down they are gone. You may have noticed that Amazon.com retains shopping cart information much longer: You could shut down your laptop, fly to Japan and when you restart and return to Amazon your items will still be there. If you wanted to do this in ASP.NET you could waste a lot of time writing your own Session State Cookie Class, or you could write about 10 lines of code in the form of an HTTP Module that would intercept the cookie created by the Session Object before it gets to the browser, and modify it to make it a persistent cookie. So, there are lots and lots of practical uses for HTTP Modules. An Http Module is nothing more than a class that implements the IHttpModule Interface, which involves an Init method for registering any and all events that you are interested in intercepting, and a dispose method for cleaning up any resources you may have used.
public class BigBrotherModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.EndRequest +=
            new EventHandler(OnEndRequest);
    }

    void OnEndRequest(Object sender, EventArgs e)
    {
        HttpApplication application = (HttpApplication)sender;
        application.Context.Response.Write
            ("Bill Gates is watching you");
    }

    public void Dispose() { }
}
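The persistent session-cookie trick mentioned earlier (the Amazon-style shopping cart) can be sketched in roughly the promised ten lines. The cookie name is the one ASP.NET session state uses; the 30-day lifetime is an arbitrary choice for illustration:

```csharp
using System;
using System.Web;

public class PersistentSessionModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.EndRequest += new EventHandler(OnEndRequest);
    }

    void OnEndRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        // Only touch the cookie if this response is actually setting it;
        // the Cookies indexer would otherwise create an empty one.
        if (Array.IndexOf(app.Response.Cookies.AllKeys,
                          "ASP.NET_SessionId") >= 0)
        {
            app.Response.Cookies["ASP.NET_SessionId"].Expires =
                DateTime.Now.AddDays(30); // make the cookie persistent
        }
    }

    public void Dispose() { }
}
```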
The events you can intercept in an HTTP Module fire in this order on every request: BeginRequest, AuthenticateRequest, AuthorizeRequest, ResolveRequestCache, AcquireRequestState and PreRequestHandlerExecute, then the HTTP Handler runs; on the way back out, PostRequestHandlerExecute, ReleaseRequestState, UpdateRequestCache and EndRequest fire. (The original figure illustrating this pipeline is from the Microsoft Tech Ed 2007 session cited above.)
Notice the HTTP Handler at the end there that converts the request into a response. These events will always fire in this order, in every request. The Authenticate Request event is the one fired by ASP.NET when a requested page requires authentication. It checks to see if you have an authentication cookie and if you do not, redirects the request to the login page. In the simple example, I was using that End Request event, which is the last one before the response is sent to the browser. So, that is what HTTP Modules are for, and how they work. Why do we need an Asynchronous version? Well if you really want to see how scalable your application is, add an HTTP Module that makes a synchronous call to a webservice or a database. Since the event you register will be fired on every request, you will tie up an additional thread from the asp.net thread pool on every single request that is just waiting for these I/O processes to complete. So, if you write a synchronous HTTP Module that inserts a record into a database for every single request, and that insert takes 1 second, EVERY single request handled by your application will be delayed by 1 second. So if you need to do any type of I/O from within an HTTP Module, I recommend you make the calls asynchronously and if you are retrieving data, cache it! To Register Async Event Handlers in an http module - In the Init Method, simply register your begin and end methods using AddOnPreRequestHandlerExecuteAsync:
public void Init(HttpApplication application)
{
    application.AddOnPreRequestHandlerExecuteAsync(
        new BeginEventHandler(BeginPreRequestHandlerExecute),
        new EndEventHandler(EndPreRequestHandlerExecute)
    );
}

IAsyncResult BeginPreRequestHandlerExecute(Object source,
    EventArgs e, AsyncCallback cb, Object state)
{
    // TODO: Begin async operation and return IAsyncResult
}

void EndPreRequestHandlerExecute(IAsyncResult ar)
{
    // TODO: Get results of async operation
}
Errors can happen at any point during the execution of a command. When ASP.NET can detect errors before initiating the actual async operation, it will throw an exception from the begin method; this is very similar to the synchronous case, in which you get the exceptions from a call to ExecuteReader or similar methods directly. This includes invalid parameters, bad state of related objects (no connection set for a SqlCommand, for example), or connectivity issues (the server or the network is down, for example). Now, once we send the operation to the server and return, ASP.NET doesn't have any way to let you know if something goes wrong at the exact moment it happens. It cannot just throw an exception, as there is no user code above it in the stack when doing intermediate processing, so you wouldn't be able to catch an exception if it threw one. What happens instead is that the error information is stored, and the operation is signaled as complete. Later on, when your code calls the end method, it detects that there was an error during processing and the exception is thrown. The bottom line is that you need to be prepared to handle errors in both the begin and the end methods, so it is wise to wrap both calls in a try/catch block.
Now you have seen three of the asynchronous programming models ASP.NET has to offer; hopefully I have impressed upon you how important it is to at least consider using them when creating pages that do I/O, if you expect those pages to be heavily trafficked. Remember you can also create asynchronous web services. I didn't cover those here because there is pretty good documentation for that already. The good thing about asynchronous programming models is that they enable us to build scalable and responsive applications that use minimal resources (threads/context switches). What is the down side? Well, they force you to split the code into many callback methods, making it hard to read, confusing to debug and difficult for programmers unfamiliar with asynchronous programming to maintain. With this in mind, whenever I add an asynchronous method to an object in my projects, I also add a traditional synchronous version. For example, if I had created a BeginUpdatexxx() method in the BLL, there would also be a traditional Updatexxx() method, so that if anyone else finds themselves having to use that object, they won't be left scratching their heads, wondering "how on earth do I use that?"
Asynchronous command execution is a powerful extension to .NET. It enables new high-scalability scenarios at the cost of some extra complexity.
For more information on multi-threading in ASP.NET I highly recommend you read "Custom Threading in ASP.Net".
I’d picked up a Palm V at JavaOne a few years ago when they were first announcing this stuff, and stopped paying attention when it seemed like Java for the Palm got lost in more ambitious projects. Around the beginning of this year I noticed that J2ME, at least in its smallest CLDC/MIDP incarnation, was available. Around June, I finally got around to downloading it and starting to poke at it. (Sadly, I’m stuck for now doing my J2ME work on Windows, since the Mac OS X Java doesn’t provide J2ME support.)
The UI tools are primitive, and interface results vary drastically from device to device. Still, they work pretty nicely, and the low-level UI offers just enough features just enough of a toolkit to let me show people a graphic rendition of the information they’ve entered - trees in a forest. Working with integer-only math is a bit tricky when I want to create plots based on compass coordinates, but it just takes some extra thought.
The one piece of my J2ME work that’s now available is the Tiny API for Markup, a not-quite-XML-compliant parser that will be one of the bases for my SVG work. After doing this, I can see why Jon Bosak kept telling the XML 1.0 group that XML had to be processable on PDAs - though I think XML misses the J2ME mark. (A processor in C would likely be more compact and have more space, so they did okay for PDAs in general, I guess.)
Working in J2ME means do more with less, and has really illuminated specifications in a way I wasn’t capable of doing before. The difficulties in the XML spec become much clearer, SVG Tiny starts to look huge, and things which might have looked reasonable in the ever-faster ever-more-bloated environment of PCs become plainly stupid. While I miss a few things (like Java 2 Collections), for the most part, it’s a privilege to work in a leaner environment for a while. I doubt this will sweep the computing world, but maybe it should.
Is less more?
I know exactly how you feel.
I know exactly how you feel. My own research into, and at times outright yearning for, a simple and streamlined desktop apps [] is a direct result of my work in mobile computing and more particularly J2ME. Working with J2ME really makes you think about what really is necessary in providing the solution as opposed to whiz-bang features and other forms of programmatic kung fu. What's more amazing to me is how incredibly small and lightweight functional applications can be relative to what we have on the desktop. Its made me come to the realization that as programmers we have, perhaps subconsciously, become far too reliant on the every increasing resources the Intel, Microsoft and Dell tell us we need.
This is not to say that Photoshop or a full-featured web browser can be written in a 35k MIDlet. For some applications this approach is not really feasible. Nonetheless most application code are interfaces with some data handling logic. Coupled with Web services to a robust backend, these applications should be quite small and very quick on their "desktop mainframes" most users have on their desktop. If it weren't for the 20 MB download required, I would say the .NET Framework in some ways demonstrates. The MIDP KVM and core libraries are measured in KBs not MBs.
MIDP is a bit too spartan and extreme for use on the desktop. With the basic and limited GUI provided by MIDP, you will not win any design contests. MIDP is just one profile (and the smallest for that matter) in the J2ME world. In the J2ME connected device configuration or CDC (MIDP is in the Connect Device Limited Configuration or CDLC) there are profiles such as the Personal Basis profile, that have richer, yet highly streamlined, APIs that might be more appropriate for desktop application. (J2SE may be fine for developing JBuilder, but its just too fat and overkill for its own good. Don't even get me started on Applets either.)
Are you aware of the kXML 2 [] parser? Its substantially smaller then your TAM. (~6 KB I believe) kXML a simple pull parser rather then Simon's SAX2-like parser. The next version of MIDP that is in the works, MIDP 2.0 [], will most likely include an XML parser that I believe may be based on kXML. Additionally the JCP has released or are developing several extension APIs that can be added on an as needed basis -- all in the tens of KB size range.
The uses of XML over expensive slow and unreliable networks is an issue that has not received enough attention in my mind particularly in the area of mobile computing and Web services, a topic I have voiced my reservations about. [] Jeff Bone has recently been writing about "XML sucking" and in his recent post [] points to XTalk a "semi-parsed, pseudo-binary representation of an XML" from IBM Labs. WAP employed a similar tact with its WAP Binary XML (WBXML) specification. While attending the Wireless Enterprise Seminar in Atlanta this May, I saw Adam Bosworth present a wireless application framework powered by J2ME using a "binary DOM representation" to communicate to its ultra thin client app. JXTA uses a binary format for its J2ME client, but I'm unsure if it is a tokenized binary representation of XML like the others.
While XML purist probably wince at this ideas, they seem like a reasonable comprise given the constraints of the mobile computing and making XML viable while remaining true to its core principles. I only hope that we settle on one standard method for this encoding.
[Adapted from weblog post at.]
Thanks, and
yes I'm aware of the kXML parser. I went looking for J2ME parsers at the start of this project and just didn't find anything that felt right.
I've not yet begun to optimize the TAM parser, and suspect I can probably chop out at least 5k, depending on what I'm willing to throw overboard. (Namespace support alone added 3k.) It's really just a start, and I'm well-aware that I supported more of XML than most J2ME developers are likely to want.
On the push-pull side of things, I'm pretty solidly a push person. Maybe the pull angle doesn't work with how I think about XML, or maybe I've just spent too much time writing SAX filters.
On binary formats, yes, I'm definitely an XML purist, and I've not yet seen a binary representation of XML that makes much sense to me. On the other hand, there's lots and lots of room for shared binary formats that don't pretend to be XML, and I hope to see more development in that direction in the future.
TinyLine
Speaking of SVG graphics in J2ME, I just wanted to point out TinyLine ().
early experiment
I just knocked 3.4K off the parser's jar file by striking namespace support and chopping out support for comments. Not a bad start, more to come.
TinyLine very cool, but
the PersonalJava implementation it runs under is a lot more capable than the J2ME MIDP/CLDC profile. I'm looking forward to seeing a new J2ME profile in the same size range as PersonalJava, though!
The next generation of Personal Java is called Personal Profile
PersonalJava was developed before the Sun developed the configuration/profile approach to J2ME. Hence the name was changed to the Personal Profile [] to fit into that scheme. I don't believe a reference implementation is available yet. Its close cousin, the Personal Basis Profile [] does have a reference implementation available.
MIDP 2.0 Proposed Final Draft released today.
The proposed final draft of the MIDP 2.0 specification was released today. It can be found here:
OT: J2ME and Mac OS X
Not really the point of the article, but Simon writes: "Sadly, I'm stuck for now doing my J2ME work on Windows, since the Mac OS X Java doesn't provide J2ME support."
Wouldn't it be more fair to write "the J2ME dev kit doesn't support Mac OS X"? It's Sun/JavaSoft that chose to leave Mac-based developers out in the cold by not bringing over the kvm or other dev tools to OSX.
BTW, On the main point of the article -- it must interesting to work in a "small" environment again. I recently allocated a 100KB buffer for wrangling MP3's and chided myself that "my first computer only had 32KB total RAM!"
--invaliidname
OT: J2ME and Mac OS X
As far as I understand you should be able to use J2ME under Mac OS X since February 2003:
Gerhard | http://www.oreillynet.com/xml/blog/2002/08/the_strange_pleasures_of_j2me.html | crawl-002 | refinedweb | 1,491 | 69.52 |
Reflex FRP gallery editor
When I post a series of photos to a personal blog I find myself editing HTML in Vim and switching back and forth to a browser to see if I have written comments in the right places and ordered the photos correctly. I could use a HTML editor to do this, but why not try FRP with Haskell? :) Apparently I sort of use FRP at work so trying out Reflex wasn’t too much of a leap.
This post is adapted from the todo list and drag ‘n’ drop examples in this Reflex repo.
Let’s start with the types. My basic blob of data is an image (a URL), a comment about the image, and a flag to indicate if the image should appear in the final output:
data Image = Image { imageFileName :: T.Text -- ^ e.g. "" , imageVisible :: Bool -- ^ Output in HTML render? , imageRemark :: T.Text -- ^ Comment that goes before the image. } deriving (Eq, Ord, Show)
The images have to be rendered in a particular order, so we’ll use a
Map
Map Int a
where the integer keys provide the ordering and
a is some type.
To toggle visibility in the final rendering, we flip
imageVisible:
toggleVisibility :: Int -> Map Int Image -> Map Int Image toggleVisibility k m = M.adjust f k m where f (Image x b c) = Image x (not b) c
We can set the description for an image:
setDesc :: (Int, T.Text) -> Map Int Image -> Map Int Image setDesc (k, c) m = M.adjust f k m where f (Image x b _) = Image x b c
We can move the
kth image up:
moveUp :: Int -> Map Int Image -> Map Int Image moveUp 0 m = m moveUp k m = let xs = M.elems m in M.fromList $ zip [0..] $ take (k-1) xs ++ [xs !! k, xs !! (k-1)] ++ drop (k+1) xs -- ^^^ Assumes contiguous keys!
and down:
moveDown :: Int -> Map Int Image -> Map Int Image moveDown k m | k == fst (M.findMax m) = m | otherwise = let xs = M.elems m in M.fromList $ zip [0..] $ take k xs ++ [xs !! (k+1), xs !! k] ++ drop (k+2) xs
It’s not efficient to completely rebuild the map by converting it to a list and back again, but this’ll do for now.
In terms of the user interface there are a few events to consider:
- user toggles visibility of the
kth image;
- user moves the
kth image up;
- user moves the
kth image down; and
- user changes the comment text for the
kth image.
We’ll put these four events into our own type. The first three are of
type
Event t Int where the
Int is the key for the
image in question. The last one has type
Event t (Int, T.Text)
since we need the key and the text that was entered into the textbox.
In Reflex, the event type is Event.
data ImageEvent t = ImageEvent { evToggle :: Event t Int , evUp :: Event t Int , evDown :: Event t Int , evKey :: Event t (Int, T.Text) }
imageW creates an unnumbered list of images, consisting of
a text field indicating if the image will be visible; a text box for writing a comment;
buttons to toggle visibility and move the image up and down; and finally the image itself.
imageW :: forall m t. (MonadWidget t m) => Dynamic t (Map Int Image) -> m (Dynamic t (Map Int (ImageEvent t))) imageW xs = elClass "ul" "list" $ listWithKey xs $ \k x -> elClass "li" "element" $ do dynText $ fmap (T.pack . show . imageVisible) x el "br" $ return () let xEvent = imageRemark <$> uniqDyn x ti <- textInput $ textBoxAttrs & setValue .~ (updated xEvent) tEvent <- updated <$> return (zipDynWith (,) (constDyn k) (_textInput_value ti)) el "br" $ return () (visibleEvent, moveUpEvent, moveDownEvent) <- elClass "div" "my buttons" $ do visibleEvent <- (fmap $ const k) <$> button "visible" moveUpEvent <- (fmap $ const k) <$> button "up" moveDownEvent <- (fmap $ const k) <$> button "down" return (visibleEvent, moveUpEvent, moveDownEvent) elClass "p" "the image" $ elDynAttr "img" (fmap f x) (return ()) return $ ImageEvent visibleEvent moveUpEvent moveDownEvent tEvent where f :: Image -> Map T.Text T.Text f i = M.fromList [ ("src", imageFileName i) , ("width", "500") ] textBoxAttrs :: TextInputConfig t textBoxAttrs = def { _textInputConfig_attributes = constDyn $ M.fromList [("size", "100")] }
To process the dynamic map we use listWithKey:
listWithKey :: forall t k v m a. (Ord k, MonadWidget t m) => Dynamic t (Map k v) -> (k -> Dynamic t v -> m a) -> m (Dynamic t (Map k a))
Specialised to our usage, the type is:
listWithKey :: forall t m. (MonadWidget t m) => Dynamic t (Map Int Image) -> (Int -> Dynamic t Image -> m (ImageEvent t)) -> m (Dynamic t (Map Int (ImageEvent t)))
It’s like mapping over the elements of the dynamic input:
listWithKey xs $ \k x -> ...
We use elClass to produce the elements on the page. For example the text attribute showing if the image is visible or not can be rendered using dynText:
dynText $ fmap (T.pack . show . imageVisible) x
We have an
fmap since
x :: Dynamic t Image and Dynamic
has a
Functor instance.
The image list and all the events are wrapped up in
imageListW. Here’s the main part:
imageListW :: forall t m. MonadWidget t m => Dynamic t T.Text -> m () imageListW dynDrop = do let eventDrop = fmap const $ updated $ fmap parseDrop dynDrop :: Event t (MM Image -> MM Image) rec xs <- foldDyn ($) emptyMap $ mergeWith (.) [ eventDrop , switch . current $ toggles , switch . current $ ups , switch . current $ downs , switch . current $ keys ] bs <- imageW xs let toggles :: Dynamic t (Event t (M.Map Int Image -> M.Map Int Image)) ups :: Dynamic t (Event t (M.Map Int Image -> M.Map Int Image)) downs :: Dynamic t (Event t (M.Map Int Image -> M.Map Int Image)) keys :: Dynamic t (Event t (M.Map Int Image -> M.Map Int Image)) toggles = (mergeWith (.) . map (fmap $ toggleVisibility) . map evToggle . M.elems) <$> bs ups = (mergeWith (.) . map (fmap $ moveUp) . map evUp . M.elems) <$> bs downs = (mergeWith (.) . map (fmap $ moveDown) . map evDown . M.elems) <$> bs keys = (mergeWith (.) . map (fmap $ setDesc) . map evKey . M.elems) <$> bs ta <- textArea $ (def :: TextAreaConfig t) { _textAreaConfig_setValue = (T.concat . map rawHTML . M.elems) <$> updated xs , _textAreaConfig_attributes = taAttrs } return ()
Notice that
toggles is used before it is defined! This is made possible by
using the recursive do extension which provides
the ability to do value recursion.
The key bit is the use of mergeWith that combines all of the events.
mergeWith :: Reflex t => (a -> a -> a) -> [Event t a] -> Event t a
Here,
mergeWidth (.) will left-fold simultaneous events.
rec xs <- foldDyn ($) emptyMap $ mergeWith (.) [ eventDrop , switch . current $ toggles , switch . current $ ups , switch . current $ downs , switch . current $ keys ]
The
toggles has type
Dynamic t (Event t (M.Map Int Image -> M.Map Int Image))
so we use switch
and current
to get to an
Event type:
ghci> :t switch switch :: Reflex t => Behavior t (Event t a) -> Event t a ghci> :t current current :: Reflex t => Dynamic t a -> Behavior t a ghci> :t switch . current switch . current :: Reflex t => Dynamic t (Event t a) -> Event t a
This merge is also where we bring in the drag ‘n’ drop event via
eventDrop which is how we get
a list of images into the dynamic map.
Try it out
To try it out without setting up Reflex, grab gallery_editor.zip, unzip it, and open
gallery_editor/gallery.jsexe/index.html in your browser. Drag some images onto the top area of the page using your file manager. Tested on Ubuntu 16.
Or, grab the source from Github.
| https://carlo-hamalainen.net/2016/09/17/reflex-frp-gallery-editor/ | CC-MAIN-2020-40 | refinedweb | 1,222 | 66.13 |
Descripción
Latest update includes refreshed interface. No addons for each social network anymore! Instagram, Facebook, YouTube and TikTok feeds are available all in one plugin.
Important
The plugin has moved to the new Instagram Basic Display API.
To make your widgets work again, reconnect your instagram accounts in the plugin settings. Read more about the changes
Características
-!
Capturas
Instalación.
Reseñas
Colaboradores y desarrolladores
«Social Slider Feed» es un software de código abierto. Las siguientes personas han colaborado con este plugin.Colaboradores
«Social Slider Feed» ha sido traducido a 2 idiomas locales. Gracias a los traductores por sus contribuciones.
Traduce «Social Slider
- Fix Facebook shortcode
- Fix Slick slider arrows
- WP 5.9 compatibility
- Minor fixes
2.0.2
-
- Corrección de errores. to only save as attachments
1.4.0
- Fixed the issue where duplicate images were being inserted into Media Library
- Added a button in widget to remove previously created duplicate images
- Simplified the options to save images into media library
- Added an option to show backlink to help plugin development
1.3.3
- Fixed notification error message.
1.3.2
- Fixed deeplink issue with smartphones. Contributors via wordpress forum @ricksportel
- Added option to block users when searching for hashtag. Sponsored by VirtualStrides.com
- Modified sizes to show square cropped and original sizes
- Added new wordpress size only for instagram plugin – regenerating thumbnails might be required.
- Added option to stop Pinterest pinning on images
1.3.1
- Fixed issue when no images were shown due to instagram recent changes.
- Caption fix when no caption in image
- set wait time to 3 min for php because of larger images
- updated flexislider to latest version
1.3.0
- Added Option to search for hashtags
- Added Limit for number of words to appear in caption
- Fixed 500 server error that occurred when loading 15+ images
- Fixed css for some themes
1.2.3
- Added Links for Instagram Hashtags
- Updated flexislider to 2.5.0
- Added Slide Speed in milliseconds
- Brought back Image Size for images loaded directly from Instagram
- Changed CSS for thumbnails Template
- Added Thumbnails Without borders template
1.2.2
- Modified the code to work with new Instagram Page
- Removed Image Size option when loading images directly from Instagram
- 24 Images can now be displayed
- Fixed multiple widget bug using widget ids in class names
- Added better explanation for sources
1.2.1
- Bug fixes
- Shortcode for widgets
- Option not to insert images into media library
1.2.0
- Full Rewrite of the plugin
1.1.3
- bug fix not working after wordpresss update
- Added multisite support
- Javascript for slider is enqueued at the top of the page
1.1.2
- minor bug fix
- Added Optional Slider Caption Overlay template
1.1.1
- The text and control for slider visible on mouse over
- Reorganised Slider html format
- Css Styling for slider
1.1.0
- Added Option to link images to a Custom URL
- Added Option to link images to locally saved instagram images
- Fixed flexislider namespace causing problems in sites using flexislider
- Rename css classes to match new flexislider namespace
1.0.4
- Added Option to insert images into media library
- Fixed error caused by missing json_last_error() function ( php older than 5.3 only )
1.0.3
- Added Option to link images to User Profile or Image Url
- Code Cleanup
1.0.2
- Compatibility for php older than 5.3
- Styling fix for thumbnail layout
- Added Option to Randomise Images
1.0.1
- Removed preg_match
- Using exact array index
- Corrección de errores.
1.0
- First Release | https://es.wordpress.org/plugins/instagram-slider-widget/ | CC-MAIN-2022-21 | refinedweb | 584 | 57.77 |
help! trouble with pop method
Russ Russ
Greenhorn
Joined: Oct 21, 2002
Posts: 4
posted
Oct 22, 2002 09:06:00
0
My
applet
is supposed to detect palindromes (words that are spelled the same forwards or backwords, ie "radar").
GUI is named WordGUI.
Stack class is named WordStack.
I am trying to use the pop() with stacks. I was able to use the push() without problems. Basically, I'm accepting user input from a textfield in my WordGUI class and pushing each char onto a stack array named data. Then I need to pop the chars out of the data array into a new stack array called popped. I then need to compare the contents of the 2 arrays to see if they are equal. If they are - we have detected a palindrome. Mission accomplished.
So... what am I doing wrong with my pop method? I think there's also a problem with my for loop that uses pop. Any ideas? Thank you.
package assign4; import java.awt.*; import java.awt.event.*; import java.applet.*; public class WordGUI extends Applet implements ActionListener{ WordStack myStack = new WordStack(); private Button testButton; private TextField wordField; private boolean isPalindrome = false; char popOutput; public void init(){ wordField = new TextField(20); add(wordField); wordField.addActionListener(this); testButton = new Button("Test for palindromes"); add(testButton); testButton.addActionListener(this); }//init() public void paint (Graphics g){ g.drawString("Enter a word in the textField above and click", 25, 100); g.drawString("the button to check if the word is a palindrome", 25, 115); myStack.trimToSize(); myStack.display(g, 150);//used for testing purposes here if(isPalindrome){ g.drawString("Is a palindrome!", 25, 75); } else{ g.drawString("Is not a palindrome", 25, 75); } }//paint() public void actionPerformed(ActionEvent event){ if(event.getSource() == testButton){ String userInput = wordField.getText(); isPalindrome = myStack.evaluate(userInput); } repaint(); }//actionPerformed() }//class WordGUI package assign4; import java.awt.*; import java.awt.event.*; import java.applet.*; public class WordStack implements Cloneable{ private char[] data; private char[] popped; private int manyItems; private boolean isPalindrome = false; public WordStack(){ final int initialCapacity = 10; manyItems = 0; data = new char[initialCapacity]; popped = new char[initialCapacity]; }//generic constructor public WordStack(int initialCapacity){ if(initialCapacity < 0){ throw new IllegalArgumentException ("initialCapacity too small: " + initialCapacity); } manyItems = 0; data = new char[initialCapacity]; popped = new char[initialCapacity]; }//constructor public int getCapacity(){ return data.length; }//getCapacity() public boolean isEmpty(){ return (manyItems == 0); }//isEmpty() public char pop(){ if(manyItems == 0){ //throw new EmptyStackException(); } return data[--manyItems]; }//pop() public 
boolean evaluate(String userInput){ for(int i = 0; i < userInput.length(); i++){ push(userInput.charAt(i)); } for(int i = 0; i < userInput.length(); i++){ popped[0] = pop(); } if(data[].equals(popped[])){ return true; } else{ return false; } }//evaluate() public void push(char item){ if(manyItems == data.length){ ensureCapacity(manyItems * 2 + 1); } data[manyItems] = item; manyItems++; }//push() public void ensureCapacity(int minimumCapacity){ char biggerArray[]; if(data.length < minimumCapacity){ biggerArray = new char[minimumCapacity]; System.arraycopy(data, 0, biggerArray, 0, manyItems); data = biggerArray; } }//ensureCapacity() public void display(Graphics g, int yLoc){//being used for testing purposes here for(int i = 0; i < data.length; i++){ g.drawString("" + data[i] + popped[i], 25, yLoc); yLoc += 15; } }//display() public void trimToSize(){ char trimmedArray[]; if(data.length != manyItems){ trimmedArray = new char[manyItems]; System.arraycopy(data, 0, trimmedArray, 0, manyItems); data = trimmedArray; } }//trimToSize() }//class WordStack
[Dave fixed code tags]
[ October 22, 2002: Message edited by: Dave Vick ]
Dave Vick
Ranch Hand
Joined: May 10, 2001
Posts: 3244
posted
Oct 22, 2002 09:22:00
0
Russ
I noticed that you also posted this to several other forums - that is called "cross posting". Cross posting is bad.
Most of the visitors here read more than one forum and it is frustrating to see the same question repeated over and over. You have a better chance of getting a correct and timely answer if you post it to the most appropriate forum instead of using the shotgun approach and hoping you'll hit something.
thanks
Dave
William Barnes
Ranch Hand
Joined: Mar 16, 2001
Posts: 986
I like...
posted
Oct 22, 2002 10:45:00
0
for(int i = 0; i < userInput.length(); i++) { popped[0] = pop(); }
Ya, this looks like it isn't going to work very well.
Please ignore post, I have no idea what I am talking about.
I agree. Here's the link:
subject: help! trouble with pop method
Similar Threads
need help with java
method ... not found in class ...
help! trouble with pop method!
help! trouble with pop method
Filling Array
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/392693/java/java/trouble-pop-method | CC-MAIN-2015-40 | refinedweb | 760 | 57.67 |
MiddleKit does some funky code generation in GeneratedPy/GenFoo.py to
import the super class that resides in the above directory. Previously,
this code used ".." in sys.path, which was flat out wrong because that
relates to your current working directory. The new version works off of
__file__, which should be correct.
The test suites pass and my projects work, but the change is somewhat
subtle.
If you use MiddleKit, please consider a cvs update, re-generation and
then trying things out.
Bottom line:
If you are using MiddleKit, please update to the latest cvs.
A new test case in the test suite covers the (old) bug and all
pre-existing test cases pass.
I fixed an obscure bug involving _inherited_ obj refs when they have to
be UPDATEd post-INSERT of the containing object. A rare case that I
first encountered tonight.
Explaining the full details would be tedious and boring. If you really
want to know, see the latest checkins and study SQLObjectStore as well
as the new Tests/MKObjRef.mkmodel/TestEmpty3.py.
-Chuck | http://sourceforge.net/p/webware/mailman/message/11150713/ | CC-MAIN-2015-35 | refinedweb | 176 | 66.94 |
Extract Text from PDF Files in Python for NLP
Extraction of text from PDF using PyPDF2
This notebook demonstrates the extraction of text from PDF files using python packages. Extracting text from PDFs is an easy but useful task as it is needed to do further analysis of the text.
Working with .PDF Files
We are going to use
PyPDF2 for extracting text. You can download it by running the command given below.
!pip install PyPDF2
Now we will import
PyPDF2.
import PyPDF2 as pdf
We have used the file
NLP.pdf in this notebook. The
open() function opens a file, and returns it as a file object.
rb opens the file for reading in binary mode.
file = open('NLP.pdf', 'rb') file
<_io.BufferedReader
PdfFileReader(file) initializes a
PdfFileReader object for the file handler
file.
pdf_reader = pdf.PdfFileReader(file) pdf_reader
<PyPDF2.pdf.PdfFileReader at 0x246349895c0>
You can check all the operations which we can perform on
pdf_reader using
help(pdf_reader).
help(pdf_reader)
getIsEncrypted() property shows whether the PDF file is encrypted. It is returning
False that means the file which we are using is not encrypted.
pdf_reader.getIsEncrypted()
False
getNumPages() calculates and returns the number of pages in the PDF file.
pdf_reader.getNumPages()
19
getPage() retrieves a page by number from the PDF file.
page1 = pdf_reader.getPage(0)
Page object
page1 has function
extractText() to extract text from the PDF page.
page1.extractText()[:1050]
'Lkit: A Toolkit for Natuaral Language Interface Construction 2. Natural Language Processing (NLP) This section provides a brief history of NLP, introduces some of the main problems involved in extracting meaning from human languages and examines the kind of activities performed by NLP systems. 2.1. Background Natural language processing systems take strings of words (sentences) as their input and produce structured representations capturing the meaning of those strings as their output. The nature of this output depends heavily on the task at hand. A natural language understanding system serving as an interface to a database might accept questions in English which relate to the kind of data held by the database. In this case the meaning of the input (the output of the system) might be expressed in terms of structured SQL queries which can be directly submitted to the database. The first use of computers to manipulate natural languages was in the 1950s with attempts to automate translation between Russian and English [Locke & Booth]'
page2 = pdf_reader.getPage(1) page2.extractText()
'Lkit: A Toolkit for Natural Language Interface Construction 2.2. Problems Two problems in particular make the processing of natural languages difficult and cause different techniques to be used than those associated with the construction of compilers etc for processing artificial languages. These problems are (i) the level of ambiguity that exists in natural languages and (ii) the complexity of semantic information contained in even simple sentences. Typically language processors deal with large numbers of words, many of which have alternative uses, and large grammars which allow different phrase types to be formed from the same string of words. Language processors are made more complex because of the irregularity of language and the different kinds of ambiguity which can occur. The groups of sentences below are used as examples to illustrate different issues faced by language processors. Each group is briefly discussed in the following section (in keeping with convention, ill-formed sentences are marked with an asterix). 1. The old man the boats. 2. Cats play with string. * Cat play with string. 3. I saw the racing pigeons flying to Paris. I saw the Eiffel Tower flying to Paris. 4. The boy kicked the ball under the tree. The boy kicked the wall under the tree. 1. In the sentence "The old man the boats" problems, such as they are, exist because the word "old" can be legitimately used as a noun (meaning a collection of old people) as well as an adjective, and the word "man" can be used as a verb (meaning take charge of) as well as a noun. This causes ambiguity which must be resolved during syntax analysis. This is done by considering all possible syntactic arrangements for phrases and sub-phrases when necessary. The implication here is that any parsing mechanism must be able to explore various syntactic arrangements for phrases and be able to backtrack and rearrange them whenever necessary. 2 '
Append, Write or Merge PDFs
PdfFileWriter() class supports writing PDF files out, given pages produced by another class typically
PdfFileReader().
pdf_writer = pdf.PdfFileWriter()
addPage() adds a page to the PDF file. The page is usually acquired from a
PdfFileReader instance.
pdf_writer.addPage(page2) pdf_writer.addPage(page1)
Now we are going to open a new file
Pages.pdf and write the contents of
pdf_writer to it.
pdf_writer.write() writes the collection of pages added to
pdf_writer object out as a PDF file.
close() closes the opened file.
output = open('Pages.pdf','wb') pdf_writer.write(output) output.close()
Pages.pdf has the following pages:-
| https://kgptalkie.com/nlp-tutorial-3-extract-text-from-pdf-files-in-python-for-nlp/ | CC-MAIN-2021-17 | refinedweb | 827 | 55.74 |
Kirk Munro started his professional career in 1997 as a developer at FastLane Technologies Incorporated, where he worked on an advanced scripting language called FINAL (FastLane Integrated Network Application Language). Ten years later, while working at Quest Software, he returned to his scripting language roots and became a Poshoholic when he started working with PowerShell and PowerShell-based applications like PowerGUI. Today he is a member of the PowerGUI team and spends all of his professional time using PowerShell and helping others use PowerShell through newsgroups, online forums, events, and his Poshoholic blog.
Kirk is also an ecoholic and a father of two children. His children participate in home-based learning with his wife instead of traditional school. He tries to practice natural, holistic living outside of the office as much as possible. When traveling, natural food stores are one of the first places he will visit. He's still waiting for companies to start giving out organic cotton t-shirts at trade shows and conferences.
1. What does being an MVP mean to you?
For me, being an MVP means having even more support, opportunities and resources available to continue doing what I love to do: helping others in the community through online support and through face-to-face interaction.
2. If you could ask Steve Ballmer one question about Microsoft, what would it be?
When is Microsoft going to show the true compassion, innovation and leadership over environmental issues in the software industry that the world needs?
- What do you think the best software ever written was?
I had a few things immediately jump in my head when I read this question. PowerShell, hands down, is the best professional software I use. It really is the best thing since sliced bread. The other things that jumped into my head are games from my childhood. The first release of King’s Quest I, where you actually had to think of the commands you wanted to type instead of just moving a mouse pointer around and clicking; and Ultima III: Exodus, for its ability to take me on a fantastic adventure without painting the graphics so detailed that I couldn’t use my imagination to picture it myself. These fit into my definition of best because they were a great part of the inspiration behind my pursuing a career working with computers in the first place. I still play them from time to time.
- If you were the manager of Admin Frameworks/PowerShell, what would you change?
I’d add support for namespaces and drop the recommendation for third-party snapins to use silly prefixes that obscure their names.
- What are the best features/improvements of Admin Frameworks/PowerShell?
Its self-discoverability, flexibility, capability, and versatility. It takes quite a while after you start using PowerShell before you discover any limitations that can’t be worked around.
- What was the last book you read?
“La colère de Mulgarath” (Book 5 in the Spiderwick Chronicles series, “The Wrath of Mulgarath”, translated into French). I read to my kids pretty much every night before they go to sleep, and we just finished reading the entire series.
- What music CD do you recommend?
Avril Lavigne – The Best Damn Thing. Not deep or meaningful at all, but I enjoy it.
- What makes you a great MVP?
Who said I was great? (Thanks to that person!) I’m just a guy who is passionate about what he does who loves to help other people. I’ve been helping friends, family, neighbours and co-workers with computer problems since we got our first computer (a TRS-80) many years ago. I also help people in a grocery store, in a library, or at the market if the opportunity presents itself. Giving time to help others and seeing and feeling the happiness that results is a wonderful experience. If that makes me a great MVP, well then, great!
- What is in your computer bag?
My laptop (obviously), “Presenting to Win” by Jerry Weissman, my new Microsoft LifeCam NX-6000 webcam so that I can see my kids when I’m travelling and show them a little bit of where I am, a 150GB pocket drive, a USB drive or two, a retractable RJ-45 cable and a soft DVD case (currently holding “The Transformers”), and a small bottle of hand sanitizer.
- What is the best thing that has happened since you have become an MVP?
Being recorded on .NET Rocks!, dnrTV and RunAs Radio, and having all three shows posted in the same week! Carl, Richard and Greg do a great job of those shows, and it was truly an honor to be invited to talk about PowerShell and PowerGUI on them.
- What is your motto?
Do better, in everything that you do.
- Who is your hero?
There are too many people doing many great things to have one hero. My hero these days is anyone who challenges the conventional way of thinking about things and uses their passion to influence positive changes in life in a way that is beneficial for people and for the planet. My kids and my wife do that, and that makes them my heroes. David Suzuki, Dr. Jane Goodall, John Taylor Gatto, Al Gore and Woody Harrelson are all heroes that come to mind. There are lots more, but these are a few examples.
- What does success mean to you?
Success means being proud of who you are, enjoying what you do, doing it well, and being able to do all of that without regret or remorse.
A real pleasure to meet you.
Regards to you all. | https://blogs.technet.microsoft.com/canitpro/2008/04/03/mvp-profile-kirk-munro/ | CC-MAIN-2019-04 | refinedweb | 934 | 64 |
In Python, variables are named locations in memory used to store data values. A variable acts as a container for a value that can be changed at any time as the application runs.
In Python, you declare a variable by simply assigning a value to it, without mentioning any type, unlike in many other programming languages. As Python is a dynamically typed language, the interpreter automatically sets the variable's type based on the assigned value.
Following is the example of declaring and assigning value to the variables in python.
a = 100
b = "Welcome to Tutlane"
print(a) #100
print(b) #Welcome to Tutlane
If you observe the above example, we created two variables (a, b) without specifying the variable type. As python is a type-inferred language, it will automatically decide the type based on the assigned value.
In Python, variables are not bound to any specific type, so you can change the type of a variable even after it has been set.
Following is the example of changing the type of a variable in python by assigning the different values.
a = 100
print(type(a)) # <class 'int'>
a = "Welcome to Tutlane"
print(type(a)) # <class 'str'>
If you observe the above example, we first created a variable (a) by assigning it an integer value (100), so the variable's type is int. Afterward, we used the same variable to store a string ("Welcome to Tutlane"), and its type changed to str.
As discussed, Python is a dynamically-typed language, and type checking happens at runtime. So the type of each variable is decided at runtime, based on the value assigned to it.
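A small, hypothetical `parse` helper makes this concrete: the same variable name ends up with a different runtime type depending on the data it receives.

```python
def parse(token):
    # The type of `value` is decided at runtime, not at declaration.
    try:
        value = int(token)
    except ValueError:
        value = token  # falls back to str
    return value

print(type(parse("42")))     # <class 'int'>
print(type(parse("hello")))  # <class 'str'>
```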
While defining the variables in a python programming language, you need to follow specific naming convention rules.
The following are some valid ways to define the variable names in the python programming language.
# Valid variable names
abc = 100
a2b = "Hi"
_abc = 'Welcome'
Abc = 30.5
ABC = "Test"
The following are some invalid ways to define the variable names in the python programming language.
# Invalid variable names
2ab = 20          # can't start with a digit
a b c = "Test"    # spaces are not allowed
a-bc = 30.5       # hyphens are not allowed
global = 20       # reserved keyword
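You can also check these rules programmatically: the built-in str.isidentifier() method and the standard-library keyword module together tell you whether a name is a legal Python variable name.

```python
import keyword

def is_valid_name(name):
    # A legal variable name is a valid identifier and not a reserved keyword.
    return name.isidentifier() and not keyword.iskeyword(name)

print(is_valid_name("abc"))    # True
print(is_valid_name("_abc"))   # True
print(is_valid_name("2ab"))    # False - starts with a digit
print(is_valid_name("a-bc"))   # False - hyphen not allowed
print(is_valid_name("global")) # False - reserved keyword
```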
In Python, you can assign multiple values to multiple variables in a single line, as shown below.
a, b, c = 10, 20.5, "Welcome"
print(a) #10
print(b) #20.5
print(c) #Welcome
If you observe the above example, we defined multiple variables (a, b, c) with various values in a single line.
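Multiple assignment is really tuple unpacking, which also gives you two handy idioms: swapping two variables without a temporary, and starred unpacking to collect the remainder of a sequence.

```python
# Multiple assignment is tuple unpacking, which also enables in-place swaps.
a, b, c = 10, 20.5, "Welcome"
a, b = b, a                   # swap without a temporary variable
print(a, b)                   # 20.5 10

first, *rest = [1, 2, 3, 4]   # starred unpacking collects the remainder
print(first, rest)            # 1 [2, 3, 4]
```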
In Python, you can also assign a single value to multiple variables at once by defining it in a single line, as shown below.
a = b = c = 10
print(a) #10
print(b) #10
print(c) #10
If you observe the above example, we assigned the same value (10) to multiple variables (a, b, c) in a single line.
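One caveat worth knowing: with a mutable value, chained assignment binds every name to the same object, so a change made through one name is visible through all of them.

```python
a = b = c = []          # one list object, three names
a.append(10)
print(b)                # [10] - b sees the change made through a
print(a is b is c)      # True - they are the same object
```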
Generally, we will use a print statement to display the values of variables.
Following is an example of using the + character in a print statement.
#using + character to combine variables
a = "welcome"
b = " to tutlane"
print(a + b) #welcome to tutlane
c = 10
d = 20
print(c + d) #30
If you observe the above example, we used the + character in a print statement to add one variable to another. If both variables are strings, the + character concatenates the text; if both are numbers, it acts as a mathematical addition operator.
Using the + character to combine literal text with a string variable works fine, but combining variables of different types (for example, a string and a number) raises an error.
Following is the example of combining the text and variable.
#combine text and variable
name = "Tutlane"
print("Welcome to " + name) #Welcome to Tutlane
Following is the example of combining the string and number datatype variables.
#combine string and number type variables
a = 10
b = "welcome"
print(a + b) #TypeError: unsupported operand type(s) for +: 'int' and 'str'
This is how we can print the required output using + character in python print statement.
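To combine a number with a string you must convert the number explicitly, either with str() or, more idiomatically, with an f-string.

```python
a = 10
b = "welcome"
print(str(a) + " " + b)  # 10 welcome
print(f"{b} {a}")        # welcome 10
```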
In python, the access of variables inside the program will vary based on the defined scope of variables. In python, you can define two types of variables, i.e., local variables and global variables.
The scope of a Python local variable is restricted to the function where it is defined; we can't access the variable outside of that function.
Following is the example of defining the local variables in python.
def greeting():
    greet = "Welcome to Tutlane"

print(greet)  # NameError: name 'greet' is not defined
If you observe the above example, we defined the greet variable inside the greeting() function, but we are trying to access the greet variable outside of the defined function.
When you execute the above program, you will get a NameError, because greet does not exist outside the function.
To access variables anywhere in the program, you need to create variables outside of the function, and those will be called global variables in python.
Following is the example of creating the global variables in python.
msg = "welcome to tutlane"
def greeting():
print(msg)
greeting() #welcome to tutlane
If you observe the above example, we defined the global variable msg outside of the function and accessing the same variable inside the function.
In Python, if you create a variable inside a function with the same name as a global variable, that variable's scope is limited to the function; the global variable keeps its original value.
Following is an example of creating a variable inside a function with the same name as a global variable in Python.
msg = "Welcome to Tutlane"
def greeting():
msg = "Learn Python"
print(msg)
greeting()
print(msg)
If you observe the above example, we created a variable (msg) inside the greeting function with the same global variable name (msg).
When we execute the above program, we will get the result as shown below.
Learn Python
Welcome to Tutlane
If you observe the above result, the scope of the variable we defined inside the function is limited to that function; outside it, the global value is unchanged.
If you want to declare a variable inside a function (a local variable) but use it outside of the function, you need to define the variable with the global keyword.
Following is an example of creating a global variable inside a function using the global keyword in Python.
def greeting():
global msg
msg = "Learn Python"
greeting()
print("Welcome, "+ msg) #Welcome, Learn Python
If you observe the above example, we created a global variable (msg) inside the greeting() function using the global keyword, and we are accessing the same variable outside of the function.
The global keyword is also useful for modifying global variables inside a function.
Following is an example of changing the value of a global variable inside a function in Python.
msg = "Welcome to Tutlane"
def greeting():
global msg
msg = "Learn Python"
greeting()
print(msg)
If you observe the above example, we modified the value of the global variable (msg) inside the function by using the global keyword.
When we execute the above program, it prints Learn Python, because the function reassigned the global variable.
If you observe the above result, the global variable (msg) value has been updated with the changes we made in the greeting function.
This is how we can use variables in python to store the data values in memory and perform required operations based on our requirements. | https://www.tutlane.com/tutorial/python/python-variables | CC-MAIN-2020-45 | refinedweb | 1,230 | 58.62 |
Spawn a child process, given a vector of arguments and an environment
#include <process.h>

int spawnve( int mode,
             const char * path,
             char * const argv[],
             char * const envp[] );
Each entry in the envp array points to a string of the form variable=value
that's used to define an environment variable.
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The spawnve() function creates and executes a new child process, named in path with the NULL-terminated list of arguments in the argv vector.
The spawnve() function isn't a POSIX 1003.1 function, and isn't guaranteed to behave the same on all operating systems. It calls spawn().
The spawnve() function's return value depends on the mode argument:
If an error occurs, -1 is returned (errno is set).
See also the errors for ConnectAttach() and MsgSendvnc().
If mode is P_WAIT, this function is a cancellation point. | https://www.qnx.com/developers/docs/7.1/com.qnx.doc.neutrino.lib_ref/topic/s/spawnve.html | CC-MAIN-2022-27 | refinedweb | 145 | 58.79 |
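Python's standard library exposes the same interface as os.spawnve(), which makes the mode semantics easy to try: with P_WAIT the call blocks until the child exits and returns its exit status. The sketch below spawns a copy of the current interpreter; the argv and envp values are just illustrative.

```python
import os
import sys

# P_WAIT blocks until the child exits and returns its exit status;
# P_NOWAIT would instead return immediately with the child's process id.
rc = os.spawnve(
    os.P_WAIT,
    sys.executable,                                     # path of the program to run
    [sys.executable, "-c", "import sys; sys.exit(7)"],  # argv vector
    dict(os.environ),                                   # envp: "variable=value" pairs
)
print(rc)  # 7
```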
In this tutorial, you will build a desktop Flickr image uploader using the AS3/FlickrAPI and exporting the application as an AIR app.
Step 1: Create a New Flex Project
Start out by opening Flex Builder and creating a new project by hitting "File > New > Flex Project". Go ahead and give your project a name and location. The main thing you need to worry about here is the "Application Type", make sure that is set to "Desktop (runs in Adobe AIR)".
Step 2: Download Necessary Libraries
Before we begin programming, we need to download the libraries we'll need for this project. Those libraries include the corelib by Adobe and of course the Flickr AS3 library
You'll need to get the latest build of the Flickr AS3 API via SVN because there is a problem with the "upload" function of the released builds that hasn't been fixed yet.
Step 3: Move Libraries to Project Folder
With your libraries downloaded, we need to move them into our project folder. Unzip the "corelib" and navigate to the "com" folder inside of the "src" folder. Now open your project folder in a new window and open the "src" folder. Drag the "com" folder to your project's "src" folder.
Inside of the Flickr API folder, you'll find a similar file structure as the "corelib" folder. Drill down into the "src > com > adobe > webapis" folder and grab the "flickr" folder. Move that folder over to the project folder into this directory "src > com > adobe > webapis".
Head back to Flex Builder and refresh your Package Explorer. You should now see the libraries you downloaded showing up inside of your project folder.
Step 4: Set Up the User Interface - Part 1
We'll not only be uploading images to our Flickr account, but the Title, Tags, and a Description as well, so we'll need the proper fields.
Set your document size to 320x400. Right-click on your Flex Project Folder and select "properties". Scroll down to the Flex Compiler panel and enter "-default-size 320 415" into the "additional compiler arguments" field.
Switch to Design view, open the Components panel and drag out an Image component. Make sure to give the Image component an id titled "imagePreview", set its height to 205 pixels and constrain its proportions to be 10 pixels from the left, right, and top of the view in the Layout panel.
Next, drag out two TextInput components to the stage and stack them on top of one another with a padding of 10 pixels between them constraining them both to 10 pixels from the left and right. Give the first field an ID of "imageTitle" and set the text value to "Image Title". Give the second field an id of "imageTags" and a text value of "Tags".
Step 5: Set Up the User Interface - Part 2
So far we have a preview area for our selected image, fields to enter a title and tags for our image. One more piece of data is missing, a description. Go to the Components panel and drag a Text Area component out and place it below the Tags field. Set the height to 70 pixels and constrain the width to 10 pixels from the right and left. Give the Text Area an id of "imageDesc" and text value of "Image Description".
Now all we need now is a Select button, an Upload button and a progress bar to monitor our upload progress. Go ahead and drag two buttons to the display area and a progress bar. Place the first button 10 pixels from the left and constrain it to that position. Give it an id of "selectBtn" and set its label to "Select". Place the second button 10 pixels from the right and constrain it to that position as well. Set its id to "uploadBtn" and label it "Upload". Position the progress bar in the middle of the two buttons and constrain it to the middle of the application. Let's give it an id of "pBar".
Your application should look like the image below:
Step 6: Tab Index
Switch to code view inside of Flex Builder and find the input fields you just created. The three fields you'll need are the "Title", "Tags" and "Description" fields. Click inside of each one and add this code
tabIndex="n", replacing "n" with a sequential number, like so:
<mx:Image id="imagePreview"/>
<mx:TextInput id="imageTitle" text="Image Title" tabIndex="1"/>
<mx:Button id="selectBtn" label="Select"/>
<mx:Button id="uploadBtn" label="Upload"/>
<mx:ProgressBar id="pBar"/>
<mx:TextInput id="imageTags" text="Tags" tabIndex="2"/>
<mx:TextArea id="imageDesc" text="Image Description" tabIndex="3"/>
Step 7: Sign Up For A Flickr API Key
First, head on over to Flickr and sign up for an API key.
Flickr will ask you to name your application and give it a description.
Once you fill in the proper information and agree to the to the terms and conditions, click submit and then Flickr will direct you to a screen with your API Key and the Secret Key for your app. Keep the API Key and Secret handy, you'll need them soon.
Step 8: Create a Class to Connect to Flickr
Now let's create a new ActionScript Class that will serve as our connection to Flickr. Head back into Flex Builder and create a new ActionScript Class from the File > New menu; name it FlickrConnect.
Go ahead and paste in these "import" commands and I'll explain their purpose.
package
{
    import flash.net.SharedObject;   // needed to set system cookies
    import flash.net.URLRequest;
    import flash.net.navigateToURL;  // opens the authorization window in the browser
    import mx.controls.Alert;        // we'll use two alert windows in our app
    import mx.events.CloseEvent;     // detects when the alert window is closed

    // import all the Flickr API classes to make sure we have everything we need
    import com.adobe.webapis.flickr.*;
    import com.adobe.webapis.flickr.events.*;
    import com.adobe.webapis.flickr.methodgroups.*;
With this class, we're going to pass Flickr our API key and the app's secret key and in return we'll get an authentication token which we'll store as a cookie on the user's system. When our app sends the key to Flickr it will open a browser window asking the user to authenticate the application with their Flickr account, once they choose "authorize" and they return to the app they will be be greeted by an alert window asking them to click "OK" once they have authorized the app with Flickr. Doing this will then send off for the security token and set the cookie storing that token locally in order to bypass the authentication process every time the app is opened.
Step 9: Create Flickr Instance and Initialize the Service
    public class FlickrConnect
    {
        public var flickr:FlickrService;
        private var frob:String;
        // store the Flickr token in a cookie
        private var flickrCookie:SharedObject = SharedObject.getLocal("FlexFlickrUploader");

        public function FlickrConnect()
        {
            flickr = new FlickrService("xxxxxxxxxxxxxxxxxxxxxxxxxxxx"); // enter Flickr API key
            flickr.secret = "xxxxxxxxxxxxxxxx";

            // if the cookie AND auth token exist, set the token
            if(flickrCookie && flickrCookie.data.auth_token)
            {
                flickr.token = flickrCookie.data.auth_token;
            }
            else // if not, get authentication
            {
                flickr.addEventListener(FlickrResultEvent.AUTH_GET_FROB, getFrobResponse);
                flickr.auth.getFrob();
            }
        }
In the code above we start by declaring 3 variables that we'll be using in this class. The "flickr" variable is set as public because we'll reference this object from within the parent application, the other two variables are private because they are specific to this class only.
In the class constructor, initialize the flickr object by setting it equal to a "new FlickrService" and passing in your Flickr API key as a string. Underneath, set the secret key of our newly created service to the key given to you by Flickr when you applied for an API key.
Underneath our declarations, we first check to see if our system cookie exists and if an "authentication token" has been set. If both of those arguments equal true, we go ahead and set the "token" property of our flickr service equal to the authentication token stored in our cookie. If either of those arguments are not true, we continue the process of authenticating the application.
Add and event listener to the flickr service. The type is of "FlickrResultEvent" and we're listening for "AUTH_GET_FROB". Enter the function name "getFrobResponse". Start a new line and execute the "getFrob()" function of the Flickr API.
Frob
Flickr doesn't define the term "frob" in their API documentation, however a brief explanation is listed below.
A 'frob' is just a hex-encoded string that the Flickr servers hand out as part of the authorization process; a more conventional term for it would be a 'nonce'.
A more detailed definition can be found here.
Step 10: Get Frob
The function
getFrob() will send our API key to Flickr and if the key is valid, Flickr will return a string to us. The frob will be passed to another function that will construct a login URL that we'll direct the user to to login into their Flickr account and give our app permission to upload photos.
        private function getFrobResponse(event:FlickrResultEvent):void
        {
            if(event.success)
            {
                frob = String(event.data.frob);
                // generates a login URL
                var auth_url:String = flickr.getLoginURL(frob, AuthPerm.DELETE);
                // opens the browser and asks for your verification
                navigateToURL(new URLRequest(auth_url), "_blank");
                Alert.show("Close this window AFTER you login to Flickr",
                           "Flickr Authorization", Alert.OK, null, onCloseAuthWindow);
            }
        }
Once we get a response back from Flickr with a frob, we check to see if the response returned a "success". Once it's determined a frob was returned, we assign the data returned to a String variable, create another String variable that will be the authentication URL, and then use one of the Flickr AS3 API built in functions that will generate our login URL and assign its value to our "auth_url" string.
The next part should be familiar to anyone who's worked in Flash for a while. Use Flash's built in "navigateToURL" function to open Flickr in the browser and prompt the user to log in and give permission to our app to access their account. As part of this process we'll be asking Flickr for "DELETE" permission which is the highest access level an app can have. With that level of access, we'll be able to upload, edit, add, and delete. This is a bit overkill, but I chose to keep it at this level as a reference for your own projects.
At the same time we're being directed to Flickr's login page, our app is generating an alert window. This window will include the message "Close this window AFTER you login to Flickr". When the user has logged into Flickr and returned to the app, they will hit "OK" which will call another function that will retrieve an access token from Flickr.
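Under the old (pre-OAuth) Flickr auth scheme this API uses, getLoginURL() signs the request by MD5-hashing the shared secret followed by the parameter pairs sorted by key. A rough Python sketch of that signing step follows; the key, secret, and parameter values are placeholders, and the exact parameter set is defined by Flickr's old auth documentation.

```python
import hashlib

def flickr_sign(secret, params):
    # api_sig = md5(secret + k1 + v1 + k2 + v2 + ...) over params sorted by key
    base = secret + "".join(k + str(v) for k, v in sorted(params.items()))
    return hashlib.md5(base.encode("utf-8")).hexdigest()

params = {"api_key": "xxxx", "perms": "delete", "frob": "12345"}
sig = flickr_sign("secret", params)
login_url = ("https://www.flickr.com/services/auth/?"
             + "&".join(f"{k}={v}" for k, v in sorted(params.items()))
             + f"&api_sig={sig}")
print(sig)
```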
Step 11: Get Access Token
        public function onCloseAuthWindow(event:CloseEvent):void
        {
            flickr.addEventListener(FlickrResultEvent.AUTH_GET_TOKEN, getTokenResponse);
            flickr.auth.getToken(frob);
        }
This function simply asks Flickr for an access token, Flickr will see that our app (as identified by our frob) has been authorized and will return the token.
Step 12: Set Access Token and System Cookie
        private function getTokenResponse(event:FlickrResultEvent):void
        {
            if(event.success)
            {
                var authResult:AuthResult = AuthResult(event.data.auth);
                flickr.token = authResult.token;
                flickrCookie.data.auth_token = flickr.token;
                flickrCookie.flush(); // set the cookie on the local computer
            }
        }
The last function in our FlickrConnect class will accept the token sent from Flickr and store it in a system cookie. Start by again checking to make sure the event was successful. If we were successful in retrieving a token from Flickr, create an instance of "AuthResult" and assign it to a variable called "authResult". Set the value of the variable equaled to the "auth" value of the returned data. Set the "token" property of our FlickrService to the "token" property of our "authResult" variable.
Next, assign a property of "auth_token" to the cookie we created at the beginning of the class (flickrCookie) and equal it to the "flickr.token". All that is left is to set the cookie on our local computer, we do so by using the "flush()" function of the SharedObject in AS3.
Now that we have a class to connect to Flickr and set our authentication and permissions, we can start coding the main part of our application.
Step 13: Imports and Variables
In our main script, we'll import three classes: the class we just created, the built-in Flex Alert class, and the Upload class of the Flickr AS3 API.
Of the four variables we're going to need, the first is an instance of the FlickrConnect class we just created; name it "flickrLogin". Create a variable called "uploader" with an instance of "Upload" and pass in the flickr instance from our FlickrConnect class. Create two more variables, both of the "File" type; call one "file" and the other "fileToOpen".
import FlickrConnect;
import mx.controls.Alert;
import com.adobe.webapis.flickr.methodgroups.Upload;

private var flickrLogin:FlickrConnect = new FlickrConnect();
private var uploader:Upload = new Upload(flickrLogin.flickr);
private var file:File;
private var fileToOpen:File = File.documentsDirectory;
Step 14: Initialize and Image Select Function
Now that we have our imports and variables set up, we need to initiate our application. During the initialization process, set the progress bar (pBar) to invisible. We only want the bar to be visible when we're uploading an image.
The next function is to open the file browser for the user to select an image.
private function init():void
{
    pBar.visible = false;
}

private function selectImageFile(root:File):void
{
    var imgFilter:FileFilter = new FileFilter("Images", "*.jpg;*.gif;*.png");
    root.browseForOpen("Open", [imgFilter]);
    root.addEventListener(Event.SELECT, fileSelected);
}
Step 15: Read File Information and Update Input fields
Create a function named "fileSelected" which will fire when the user selects an image. This function will also read that image's file name and url. Update the "Title" input field with the selected file's name and target the "Image Preview", setting it's URL to the URL of the file selected.
private function fileSelected(event:Event):void
{
    imageTitle.text = fileToOpen.name;
    imagePreview.source = fileToOpen.url;
}
Step 16: Upload File and Track Progress
Create two more functions, one to handle the upload of the image to Flickr and the other to track its progress via the progress bar.
Name the first function "uploadFile" with a type of "MouseEvent". Inside of that function, set the variable that we created earlier, "file" to type "File" and pass in the URL of the image selected by the user. Add two listeners to that variable. The first listener will be a "DataEvent" listening for upload complete and its target function will be called "uploadCompleteHandler". The second listener will be a progress event and its target will be the function "onProgress".
Create the second function and name it "onProgress". Inside of the function set the progress bar to visible and set its source to that of "file".
private function uploadFile(event:MouseEvent):void
{
    file = new File(fileToOpen.url);
    file.addEventListener(DataEvent.UPLOAD_COMPLETE_DATA, uploadCompleteHandler);
    file.addEventListener(ProgressEvent.PROGRESS, onProgress);
    uploader.upload(file, imageTitle.text, imageDesc.text, imageTags.text);
}

private function onProgress(event:ProgressEvent):void
{
    pBar.visible = true;
    pBar.source = file;
}
Step 17: Upload Complete
Once the upload is complete, Flickr will send a response back to our app letting us know the upload has finished. Flickr's response back to us will be in the form of XML, we'll need to parse that XML and determine the response whether it be an "ok" or something else. All we need to know is if the response is "ok" then launch an Alert window stating that the upload succeeded or if the response if anything else, it means that the upload failed and we need to let the user know.
private function uploadCompleteHandler(event:DataEvent):void
{
    pBar.visible = false;
    trace("upload done");
    var xData:XML = new XML(event.data);
    trace(xData);
    if(xData[0].attribute("stat") == "ok")
    {
        Alert.show("Upload Successful", "Upload Status");
    }
    else
    {
        Alert.show("Upload Failed", "Upload Status");
    }
}
Step 18: Call Functions and Initiate Application
At this point, if you test your application nothing will happen. That's because we haven't added click functions to our buttons and more importantly, we haven't initiated our application.
Inside of your main application's code, scroll down and find the code for the buttons we created using the GUI at the beginning of this tutorial. We'll need to add "Click" handlers to each button to tell them which function to execute when they are clicked.
The select button will call selectImageFile(fileToOpen), with the variable fileToOpen passed into it.
<mx:Button id="selectBtn" label="Select" click="selectImageFile(fileToOpen)"/>
The upload button will call uploadFile(event), passing the click event into it.
<mx:Button id="uploadBtn" label="Upload" click="uploadFile(event)"/>
Now all we need to do is initiate our application. We do this by adding some code to the "WindowedApplication" element at the top of our file: call the init() function from the applicationComplete event. It should look like this:
<mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml" applicationComplete="init()">
Step 19: Test Your Application
Once you've finished coding your application, it's time to test it to make sure it works.
Click "debug" in Flex Builder to deploy the application.
The application will alert you to only click "OK" after you log in to Flickr and give permission to the app to access your Flickr account.
Step 20: Select An Image To Upload
After clicking "OK" you'll see your blank application waiting for input.
Click "Select" and navigate to an image on your local computer. Once selected, click "Open". You should now see a preview of the image you selected. Go ahead and give it a title and a description. Think of some tags that go along with the image and enter them into the "tags" field, separated by commas. Click "Upload".
If you were successful you should see the following screen.
Just to make sure the image uploaded successfully, head on over to your Flickr account and view the image you just uploaded.
Step 21: Export as AIR
Now that we know our app is working properly, we can export it as an AIR application. To do that, click "File > Export > Release Build". There aren't any settings on the first window that we need to change, so click "Next" and head to the next window.
Create a certificate by filling in the "Publisher Name" and "Password" fields. Browse a location to save the certificate and name it. Click "finish" and wait for your AIR app to build.
Conclusion
Your app is finished, it's working and you've exported it out for AIR. What now? Now you can expand upon this application with some more of the API functions or you can deploy as is.
Thank you for taking the time to work through this tutorial, I hope you enjoyed it. And remember... keep learning!
My Blogging Process - Part 1
Calvin A. Allen, originally published at calvinallen.net
After a conversation about “how we blog” in a Slack channel I’m part of, I decided it may be best to just blog it. Nothing more meta than blogging about your blog, right?
My entire blogging process encompasses a variety of technologies:
- Jekyll
- GitHub
- Netlify
- Microsoft Power Automate (previously “Flow”)
- Azure Functions
- Azure CosmosDB
- Rebrandly
I’m going to split this into two posts given the length of the list above, so in this post, we’ll only be covering items 1-3.
The Technology Stack
Jekyll
It all starts with Jekyll, a static site generator written in Ruby. For the most part, it's a basic Jekyll site, but I do have two custom plugins associated with it that do some "magic". I also don't post pages that have future dates, which allows me to stage articles and push them to the repository ahead of time (yes, they would be visible in the repo, just not live on the site - I'm okay with that).
Let’s talk about the plugins.
./_plugins/file_exists.rb
This plugin gives me a custom liquid tag I can use in my templates to see if a given file exists on disk. I use this for “cover image” on posts. The general idea being, for a given post, I can add a
cover.jpg file alongside the post, and it gets used in social media cards. If the file doesn’t exist, I fall back to a generic image (my headshot), so I will always have a cover image - just maybe not a custom one for the post.
I use it, like so, in my atom.xml file:
{% assign cover_image = post.path | prepend: '/' | prepend: site.source %} {% capture cover_image_exists %}{% cover_exists {{ cover_image }} %}{% endcapture %} {% if post.image and cover_image_exists == "true" %} <media:thumbnail xmlns: {% else %} <media:thumbnail xmlns: {% endif %}
and like this, in the head.html file (which gets applied to every single page of the site, not just the posts themselves):
{% assign cover_image = page.path | prepend: '/' | prepend: site.source %} {% capture cover_image_exists %}{% cover_exists {{ cover_image }} %}{% endcapture %} {% if page.image and cover_image_exists == "true" %} <meta property="og:image" content="{{ site.url }}{{ page.url }}{{ page.image }}" /> {% else %} <meta property="og:image" content="{{ site.url }}/images/social/headshot.jpg" /> {% endif %}
./_plugins/postfiles.rb
This one is a little more involved, but the gist is this:
When I add a new post to my blog, I create a folder with a specific naming convention in a folder that designates the year:
/_posts/2020/2020-01-21-my-blogging-process/
Inside of that folder goes the post file itself, with the same name as the folder:
/_posts/2020/2020-01-21-my-blogging-process/2020-01-21-my-blogging-process.md
When I want to add a custom cover image to a specific post, that folder is where I would drop the
cover.jpg, so you end up with:
/_posts/2020/2020-01-21-my-blogging-process/
- 2020-01-21-my-blogging-process.md
- cover.jpg
This plugin,
postfiles.rb, handles moving that
cover.jpg from the
_posts staging folder to the REAL FOLDER when the site is compiled. By default, in Jekyll, that operation would not work, unfortunately. This allows me to place any screenshots related to a specific post into that same directory as the post, and not in some generic location at the root of the site, like,
calvinallen.net/images/, which takes more effort to maintain, in my opinion.
Now, you might say, “but creating all those folders and files is annoying”, and you’d be right. That’s why I have a rake task in the repo that asks me a couple of questions, and then creates the folder, markdown file, and then launches it in my editor (Visual Studio Code). The only “manual” step after that is dropping in a
cover.jpg file, if necessary
GitHub
Every bit of my site is git-controlled on GitHub, and is completely “open source”. I have an
edit link configured on each post that allows a viewer to create a quick edit and pull request on GitHub if they were to see a problem with a post and wanted to suggest the fix. Now, even though I use Jekyll, I am NOT using “GitHub Pages”, because they do not support the custom/unsupported plugins, which I have / use (mentioned in the previous section). And, because of that, we go into the hosting section with Netlify.
Netlify
Netlify offers free building and hosting, plus TLS certificates from Let’s Encrypt (AND AUTO RENEWALS!), and it all gets triggered when I push to the
master branch of my sites repository (mentioned above).
Conclusion
That sums up the basic workflow I have of “adding a new post” and getting it deployed. Items 4-7 are all about getting that new post “socialized”, and we’ll discuss all of those, coming up in Part 2.
🎩 JavaScript Enhanced Scss mixins! 🎩 concepts explained
In the next post we are going to explore CSS @apply to supercharge what we talk about here....
| https://practicaldev-herokuapp-com.global.ssl.fastly.net/calvinallen/my-blogging-process-part-1-30j1 | CC-MAIN-2020-24 | refinedweb | 851 | 62.98 |
C++ Code for Testing Random Number Generator
I am using "Numerical Recipes (3rd Edition)" to create a numerical method that computes random numbers. My problem is that I'd like to use tests such as Chi-Square to test the uniformity/performance of the generator, but I don't know how to do this.
You can see my basic number generator in the readable file randomnumbers. I've included portions from "Numerical Recipes" (which you can easily find on Scribd) on random numbers and statistical functions. Based on the structure of the codes, I need help writing a program that uses Chi-Square (or something else appropriate from statistical functions.pdf) to test my random number generator.
I am extremely new to c++, so if my bid is too low please let me know. I'm not sure how long the necessary code will be or how much expertise it will require, so I don't know how much you will charge.
In case you can't read my cpp code, it's very basic:
#include <iostream>
#include "/Users/.../Documents/NR/code/nr3.h"
#include "/Users/.../Documents/NR/code/ran.h"
int main(){
Int seed = 12345678;
Ran myran(seed);
for(int i=0;i<10;i++){
cout<<"Using ran = " << myran.doub() <<endl;
}
return 0;
}
and returns 10 random numbers. This is just an example, it probably isn't my final code, just something simple for you to see what I'm going for. Please note that I am very weak with c++ and really don't know much about what I'm doing.© BrainMass Inc. brainmass.com June 18, 2018, 12:11 am ad1c9bdddf
Solution Preview
Hi,
I wrote the program using the following link as a reference: ...
Solution Summary
C++ Code for testing random number generators are examined. | https://brainmass.com/math/number-theory/code-testing-random-number-generator-456054 | CC-MAIN-2018-26 | refinedweb | 299 | 63.19 |
django-journal 1.23.0
Keep a structured -- i.e. not just log strings -- journal of events in your applications
Log event to a journal. Keep details of the event linked to the event message, keep also the template for displaying the event in case we want to improve display.
To use just do:
import django_journal django_journal.record('my-tag', '{user} did this to {that}', user=request.user, that=model_instance)
Admin display
admin.JournalModelAdmin recompute messages from the journal message as HTML adding links for filtering by object and to the change admin page for the object if it has one.
Recording error events
If you use transactions you must use error_record() instead of record() and set JOURNAL_DB_FOR_ERROR_ALIAS in your settings to define another db alias to use so that journal record does not happen inside the current transaction.
- Author: Entr'ouvert
- Download URL:
- License: AGPLv3
- Package Index Owner: entrouvert
- DOAP record: django-journal-1.23.0.xml | https://pypi.python.org/pypi/django-journal/1.23.0 | CC-MAIN-2016-50 | refinedweb | 158 | 55.54 |
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include "m_ctype.h"
#include "my_byteorder.h"
#include "my_dbug.h"
#include "my_inttypes.h"
#include "my_loglevel.h"
#include "my_macros.h"
#include "my_xml.h"
#include "mysys_err.h"
Convert a string between two character sets.
Optimized for quick copying of ASCII characters in the range 0x00..0x7F. 'to' must be large enough to store (form_length * to_cs->mbmaxlen) bytes.
Convert a string between two character sets.
'to' must be large enough to store (form_length * to_cs->mbmaxlen) bytes.
Identify whether given like pattern looks like a prefix pattern, which can become candidate for index only scan on prefix indexes.
Get the length of the first code in given sequence of chars.
This func is introduced because we can't determine the length by checking the first byte only for gb18030, so we first try my_mbcharlen, and then my_mbcharlen_2 if necessary to get the length | https://dev.mysql.com/doc/dev/mysql-server/latest/ctype_8cc.html | CC-MAIN-2019-30 | refinedweb | 151 | 72.73 |
go to bug id or search bugs for
New/Additional Comment:
Description:
------------
Building the bundled intl extension (I happen to build it separately from php, using phpize) fails on Mac OS X 10.7.5 (with clang 425.0.28 as provided by Xcode 4.6.3), in several PHP versions (7.2.9, 7.1.21). It worked in previous PHP versions. Here's the error message from PHP 7.2.9:
In file included from ext/intl/intl_convertcpp.cpp:24:
In file included from /opt/local/include/php72/php/main/php.h:35:
In file included from /opt/local/include/php72/php/Zend/zend.h:328:
/opt/local/include/php72/php/Zend/zend_operators.h:114:18: error: use of undeclared identifier 'finite'
if (UNEXPECTED(!zend_finite(d)) || UNEXPECTED(zend_isnan(d))) {
^
/opt/local/include/php72/php/main/../main/php_config.h:2619:24: note: expanded from macro 'zend_finite'
#define zend_finite(a) finite(a)
^
/opt/local/include/php72/php/Zend/zend_portability.h:312:52: note: expanded from macro 'UNEXPECTED'
# define UNEXPECTED(condition) __builtin_expect(!!(condition), 0)
^
Perhaps it was caused by this commit:
Add a Patch
Add a Pull Request
Thanks for the report. Perhaps you need to use phpize from the same (or at least) PHP version. Older PHP versions will have different php_config.h. Otherwise, ext/intl can also be built shared.
Btw, what does __cplusplus macro contain?
Thanks.
I do use phpize from the same version of PHP. I know that intl can be built as a separate extension using phpize; we do so in MacPorts. This bug report is that on this specific version of Mac OS X with its version of Xcode, and this version of PHP, it fails to compile. It's not a problem with older versions of PHP on this version of Mac OS X and Xcode, and it's not a problem with this version of PHP on other versions of Mac OS X and Xcode.
Both on Mac OS X 10.7 where the build fails, and on 10.8 and later up to at least 10.13 where the build succeeds, __cplusplus is 199711L. On Mac OS X 10.6 where the build also succeeds, the Apple gcc 4.2.1 compiler is used, and its __cplusplus value is 1.
In Nix we have the same issue when building our packages on darwin.
It's present in 7.2.9, 7.2.10 as well as 7.1.21. It's not present in 7.0.32 or 5.6.38.
Here's a full build log of 7.2.10 on Darwin:
For the moment we have disabled the intl module on Darwin just to get the new versions into our unstables. But since we changed the feature set available it also means that we're unable to backport bug fix releases to our users of our stable channels on both NixOS, other Linux and Darwin.
@elis the build log seems to available anymore. Perhaps a gist could work? Also the OS and compiler version were interesting to know.
@php-bugs-2018 Otherwise I was doing some research and stumbled upon the fact, that OS X 10.7 is EOL already for about 4 years. It seems that even 10.9 already EOL. The issue seems quite weird because the condition checks already whether it's lower than C++11. You could try to remove __cplusplus check in configure.ac and Zend/configure.ac and see whether it changes something. If it doesn't, that it is probably missing the other part of the condition.
Thanks.
Yes, we know Mac OS X 10.7 is old. But maybe there's somebody out there trying to learn PHP, and the only computer they have available is an old hand-me-down Mac that cannot be upgraded past 10.7.
Here are some fresh build logs for you, on Mac OS X 10.7.5, Xcode 4.6.3, Apple LLVM version 4.2 (clang-425.0.28) (based on LLVM 3.2svn).
php 7.1.22:
php 7.2.10:
Can't provide a build log of php 7.3.x because of bug #76825.
The following patch has been added/updated:
Patch Name: bug76826.poc.0.patch
Revision: 1538723399
URL:
Thanks for the build new logs. Is that with the vanilla package? Were you able to play with the condition as i've suggested? Perhaps you could try the patch i've just attached. I'd be really reluctant to do fixes for EOL versions, but at least we could figure out what goes wrong there. Probably only you can do that, as the chances to find someone with this OSX version are probably marginal.
Thanks.
@ab I've completely missed the answers here. I have tried to build 7.2.11 and 7.1.23 which have failed the same way.
Too bad it's the same log viewer as before: -- but it's the only way I have to get the log out from the darwin builds since I don't have any darwin machines myself.
I've also tried to apply your patch on 7.2.11 and 7.1.23 for darwin, but it didn't apply at all:
@elis yep, that's the same bug. Thanks for re-posting the log. I've no Mac as well, so only able to guess the conditions. With the patch - you need to ensure the -p option has the correct level number, depending on the CWD when the patch gets applied. Alternatively you can edit the paths in the patch, so they suffice for the build constellation.
Thanks.
@ab - I think the reason for the patch not applying was because the lines it tried to remove wasn't there in that version of the source that it tried to patch. It also seems to differ between 7.1.X and 7.2.X what they look like.
Make sure to take a copy of the log if you have use for it since it will go away. Nothing I can do about that :/
Also, could you lend me some pointer to how to subscribe to this issue. I've tried several times to fill email, solve math-problem and press subscribe but haven't got a single message.
I am not a C++ programmer but from what I've been able to research, "finite" is an old deprecated method of determining if a number is finite, and "isfinite" is its replacement, available as of C99.
Given that, the PHP code used to make sense: It used to use "isfinite" if it was available, and otherwise it would use "finite" if that was available. The change in ad790bea2e4a8a25c79ceab964601f3785cd2bf1 seems wrong to me, because now, if compiling in C++11 mode or newer, the newer "isfinite" method isn't used, and the deprecated "finite" is used instead. Why would we want to use a deprecated function when a newer replacement function is available, especially if we're compiling in a newer language mode?
The reason for the compile failure I reported appears to be that the intl extension uses icu, and used `icu-config --cxxflags` to get the flags it should use. We're using icu 58.2 in MacPorts, and it returns "--std=c++0x". (Yes, there is a typo: it should be one dash instead of two, but the compiler still recognizes the flag despite that.) The intl extension was changed in 4acc8500acd134dfab1a2ddb83aeb39fa1033abe on 9/28/2018 to always use C++11 mode regardless what ICU says. So we're in C++11 mode and __cplusplus is 201103L which is why PHP is now trying (misguidedly, in my opinion) to use "finite" here. I've also seen the same problem when building the third-party swoole extension, which also uses C++11.
So why doesn't "finite" exist, even though HAVE_FINITE is 1? On Mac OS X 10.7, /usr/include/math.h is just a wrapper that includes /usr/include/architecture/i386/math.h, and in that file, the definition of "finite" and other deprecated functions is inside a block which checks:
#if !defined( __STRICT_ANSI__) && !defined(_ANSI_SOURCE) && (!defined(_POSIX_C_SOURCE) || defined(_DARWIN_C_SOURCE))
And it turns out that __STRICT_ANSI__ is defined (due, I believe, to requesting conformance to an ISO C/C++ standard, in this case c++0x), therefore these legacy functions don't get defined. I can get the build to succeed on 10.7 by undefining __STRICT_ANSI__ (e.g. by adding "-U__STRICT_ANSI__" to CXXFLAGS), though I don't know what other implications that might have elsewhere in the system headers so I don't think this is the solution we want to use.
In OS X 10.8, Apple moved the header to /usr/include/math.h and rewrote it so that it only checks:
#if __DARWIN_C_LEVEL >= __DARWIN_C_FULL
__DARWIN_C_LEVEL is defined to be __DARWIN_C_FULL so on 10.8 and later old "finite" is still available in strict ANSI mode.
As for why it also built successfully on 10.6, that's because on that system `icu-config --cxxflags` returns nothing, because the old g++ 4.2.1 compiler on that system doesn't have any C++11 support. Therefore we're not in strict ANSI mode, therefore the deprecated functions are still defined.
I have tried reverting ad790bea2e4a8a25c79ceab964601f3785cd2bf1 and it builds successfully on 10.7, and still builds fine on 10.13, despite the fact that the commit message for ad790bea2e4a8a25c79ceab964601f3785cd2bf1 says that "isfinite" was supposed to have moved into the std namespace as of C++11. Further research tells me that "isfinite" and friends are only in the std namespace if you #include <cmath>. PHP doesn't do that; instead, it includes the older <math.h>, in which case those functions are not in the std namespace. I believe this confirms my suspicion that ad790bea2e4a8a25c79ceab964601f3785cd2bf1 doesn't do anything useful and should be reverted.
Thanks for the detailed analysis. The finite family is with this regardobsolete because C99 defines isfinite, the man page says, but all the current PHP versions don't use strict C99. In how far isfinite is better or finite is worse, is another question.
This issue seems to show up in quite different constellations, it is hard to find a middle ground. There was same issue reported in bug #74904 on Solaris, which was fixed by the existing patch (though other C++ issues arise). I with this regardwas able to reproduce this issue with older versions of gcc on Linux, too. Namely from gcc 4.9.x to gcc 5.x.x.
Now, on the Apple side, not sure whether C99 is default, but isfinite should definitely not be there when C++11 or up is compiled. With C++ it has to be std::isfinite from cmath. On some gcc versions however, including cmath brings another can of bugs, so the best way looks like to avoid it for now.
The facts you depict confirm as well, that this math.h vs. cmath/C++ topics in regard to differnet platforms and compiler versions are handled in very different ways and a diligence is due to keep and improve the compatibility. Were you able also to check the attached patch, which makes an exception for the Apple platform? If Apple allows isfinite even if C++11 is compiled, then that might be the way to solve it for that platform.
We might have it easier, when PHP has switched to C99. That however is to be checked and anyway won't affect already released branches.. | https://bugs.php.net/bug.php?id=76826&edit=1 | CC-MAIN-2022-40 | refinedweb | 1,921 | 75.71 |
Member
43 Points
Oct 17, 2008 04:54 PM|LINK
I created a new MVC BEta Project and copied over a few views and I get this error now:
'SubmitButton' is not a member of 'System.Web.Mvc.HtmlHelper'
It's not even listed as an option for me - what the hell?
Anyone know what I am doing wrong here?
MVC beta
Member
233 Points
Oct 17, 2008 05:16 PM|LINK
I believe this HtmlHelper is only availbe in the Futures assembly. The Futures assembly is no longer referenced automatically. It is a seperate download.
It looks like this HtmlHelper wasn't doing a whole lot for you in this case anyway. Why not just write:
<input type="submit" id="Search" name="Search" value="Search" class="btn"/>
Oct 18, 2008 12:39 AM|LINK
Download the MvcFuture bits from CodePlex for the Beta and then add MvcFutures back to the <Pages> section of your web.config:
<pages>
<namespaces>
<add namespace="Microsoft.Web.Mvc"/>
</namespaces>
</pages>
SubmitButton should then become available inside your view once you rebuild (the intellisense does not always immediately pickup on changes).
2 replies
Last post Oct 18, 2008 12:39 AM by jeloff | http://forums.asp.net/t/1335656.aspx/1?MVC+BETA+SubmitButton+is+not+a+member+of+System+Web+Mvc+HtmlHelper+ | CC-MAIN-2013-20 | refinedweb | 198 | 63.59 |
Here is a task out of a book
Your task is to write a C++ program to help you convert a number into roman numerals. Roman numerals are I for 1, V for 5, X for 10, L for 50, C for 100, D for 500, and M for 1000. Some numbers are formed by using a subtraction of Roman “digits”; for example, IV is 4, since V minus I is 4. Others are IX
for 9, XL for 40, XC for 90, CD for 400, and CM for 900. Your program will proceed in three parts:
1. Write a program that reads in a decimal number from the screen (using cin) and prints out
a simple Roman numeral. A simple Roman numeral does not use the subtraction rule. For
example, the simple Roman numeral for the number 493 will be CCCCLXXXXIII, and the simple
Roman numeral for the number 3896 will be MMMDCCCLXXXXVI. Your program should print the
entire Roman numeral on a single line, with no spaces between letters. You will need a while
loop in your program to print out the letters one at a time. Hint: Look at your minutes
program for ideas on how figure out which letter to print next.
2. Modify your program so that you print out all of the repeated letters in a single iteration
of your while loop. You will need to put a few for loops inside your while loop to do this
part. For example, if the Roman numeral you have to print out is MMMMXXX, the first part
iterated through your while loop seven times, one for each letter you were printing out. Now,
you want to iterate your while loop just twice, one for each kind of Roman numeral being
printed. Thus, the first iteration will print out MMMM, and the second will print out XXX. Your
final output will be MMMMXXX on a single line.
3. Modify your program to also handle subtractions. So now, 493 should give the answer CDXCIII,and 3896 should give the answer MMMDCCCXCVI.
I got the program to work with this code
#include <iostream> using namespace std; int main() { double num; int intnum, m, d, c, l, x, v, i, n; char yes ='y'; while (yes == 'y') { cout << "Enter a number: "; cin >> num; intnum = (int)num; if (intnum >= 1000) { m = intnum / 1000; n = 0; { for (n; n < m; n++) cout << "M"; } intnum = intnum%1000; } if (intnum >= 900) { cout << "CM"; intnum = intnum%900; } else if (intnum >= 500) { { d = intnum / 500; n = 0; for (n; n < d; n++) cout << "D"; } intnum = intnum%500; } if (intnum >= 400) { cout << "CD"; intnum = intnum%400; } else if (intnum >= 100) { { c = intnum / 100; n = 0; for (n; n < c; n++) cout << "C"; } intnum = intnum%100; } if (intnum >= 90) { cout << "XC"; intnum = intnum%90; } else if (intnum >= 50) { { l = intnum / 50; n = 0; for (n; n < l; n++) cout << "L"; } intnum = intnum%50; } if (intnum >= 40) { cout << "XL"; intnum = intnum%40; } else if (intnum >= 10) { { x = intnum / 10; n = 0; for (n; n < x; n++) cout << "X"; } intnum = intnum%10; } if (intnum >= 9) { cout << "IX"; intnum = intnum%9; } else if (intnum >= 5) { { v = intnum / 5; n = 0; for (n; n < v; n++) cout << "V"; } intnum = intnum%5; } if (intnum >= 4) { cout << "IV"; intnum = intnum%4; } else if (intnum >= 1) { i = intnum; n = 0; for (n; n < i; n++) cout << "I"; } cout << "\nWould you like to run this program again? (y/n): "; cin >> yes; cout << endl; } return 0; }
However, I don't quite understand the procedure that the directions tell me to do. For instance, number one says:
"You will need a while loop in your program to print out the letters one at a time."
Don't I need more than one like I do?
I also don't know how to approach number 2.
As you can tell, I don't use a while loop to print out the roman numerals. So I'm not following the directions.
Can anyone help clarify some of this for me?
Thanks | https://www.daniweb.com/programming/software-development/threads/382662/integer-to-roman-numeral-help | CC-MAIN-2021-17 | refinedweb | 671 | 72.39 |
B01-02632560-294
The OpenCL Programming Book
Contents
Foreword  4
Foreword  6
Acknowledgment  7
About the Authors  8
Introduction to Parallelization  10
  Why Parallel  10
  Parallel Computing (Hardware)  10
  Parallel Computing (Software)  15
  Conclusion  30
OpenCL  31
  What is OpenCL?  31
  Historical Background  31
  An Overview of OpenCL  34
  Why OpenCL?  36
  Applicable Platforms  37
OpenCL Setup  41
  Available OpenCL Environments  41
  Developing Environment Setup  44
  First OpenCL Program  51
Basic OpenCL  59
  Basic Program Flow  59
  Online/Offline Compilation  67
  Calling the Kernel  77
Advanced OpenCL  98
  OpenCL C  98
  OpenCL Programming Practice  131
Case Study  164
  FFT (Fast Fourier Transform)  164
  Mersenne Twister  200
Notes  245
Foreword
“The free lunch is over.” The history of computing has entered a new era. Until a few years ago, the CPU clock speed determined how fast a program would run. I vividly recall having a discussion on the topic of software performance optimization with a systems engineer back in 2000, to which his stance was, “the passing of time will take care of it all,” due to the improving technology. However, with the CPU clock speed leveling off at around 2005, it is now solely up to the programmer to make the software run faster. The free lunch is over.

The processor vendors have given up on increasing CPU clock speed, and are now taking the approach of raising the number of cores per processor in order to gain performance capability. In recent years, many multi-core processors were born, and many development methods were proposed. Programmers were forced to grudgingly learn a new language for every new type of processor architecture. Naturally, this caused a rise in demand for one language capable of handling any architecture type, and finally, an open standard was recently established. The standard is now known as “OpenCL”. With the Khronos Group leading the way (known for their management of the OpenGL standard), numerous vendors are now working together to create a standard framework for the multi-core era.

Will this new specification become standardized? Will it truly be able to get the most out of multi-core systems? Will it help take some weight off the programmer’s shoulders? Will it allow for compatible programs to be written for various architectures? Whether the answers to these questions become “Yes” or “No” depends on the efforts of everyone involved. The framework must be designed to support various architectures. Many new tools and libraries must be developed. It must also be well received by many programmers. This is no easy task, as evidenced by the fact that countless programming languages continually appear and disappear.
However, one sure fact is that a new standard development method is necessary for the new multi-core era. Another sure fact is that “OpenCL” is currently in the closest position to becoming that new standard.

Fixstars Corporation, which authors this book, has been developing software for the Cell Broadband Engine co-developed by Sony, Toshiba, and IBM since 2004. We have been awed by the innovative idea of the Cell B.E., enough to battle its infamously difficult hardware design in order to see the performance capability of the processor in action. We have also been amazed by the capability of the GPU when its full power is unleashed by efficient usage of the hundreds of cores that exist within. Meanwhile, many of our clients had been expressing their distaste for the lack of standards between the different architectures, leading us to want an open framework. OpenCL is still in its infancy, and thus not every need can be fulfilled. Therefore, we write this book in hopes that its readers learn, use, and contribute to the development of OpenCL, and thus become part of the ongoing evolution that is this multi-core era.

Fixstars Corporation
Chief Executive Officer
Satoshi Miki
A vital piece of any living standard is enabling the industry to truly understand and tap into the full potential of the technology. Fixstars is a skilled OpenCL practitioner and is ideally qualified to create state-of-the-art OpenCL educational materials. I wholeheartedly recommend this book to anyone looking to understand and start using the amazing power of OpenCL.

Neil Trevett
President, The Khronos Group
we do however assume that the reader has a good grasp of the C language. The official reference manual for OpenCL can be found online on Khronous' website at: 7 .B01-02632560-294 The OpenCL Programming Book Acknowledgment The book is intended for those interested in the new framework known as OpenCL.0/docs/man/xhtml/ Sample programs in this book can be downloaded at. While we do not assume any preexisting knowledge of parallel programming. since we introduce most of what you need to know in Chapter 1. Those who are already experts in parallel programming can start in Chapter 2 and dive straight into the new world of OpenCL.fixstars.org/opencl/sdk/1.
B01-02632560-294 The OpenCL Programming Book About the Authors Ryoji Tsuchiyama Aside from being the leader of the OpenCL team at Fixstars. My background. I worked on projects such as construction of a data collection system (Linux kernel 2. I get most of my vitamins and minerals from reading datasheets and manuals of chips from different vendors. as well as parallel processing using accelerators such as Cell/B. as well as memory-mapped registers. Recently. Between teaching numerous seminars and writing technical articles. for which my only memory concerning it is playing the Invader game for hours on end. that I/O and memory accesses must be conquered prior to tackling parallelization. and to a career where I orchestrate a bunch of CPUs and DSPs to work together in harmony with each other. I have been dealing with distributed computing on x86 clusters. but I have yet to come up with a good solution. I'm constantly thinking of ways to 8 . I am constantly trying to meet some sort of a deadline. and knew at first sight she was the one. my fascination with computers have gotten me through a Master's Degree with an emphasis on distributed computing. and I particularly enjoy glancing at the pages on latency/throughput. I joined Fixstars in hopes that it will help me in achieving my life goal of creating the ultimate compiler as it necessitates an in-depth understanding of parallel processing. and GPU. Since my first encounter with a computer (PC-8001) as a child. My first computer was the Macintosh Classic. I'm becoming more convinced by day. I still wish to write a book on how to make a GPU at some point. That being said. I take part in both the development and the marketing of middlewares and OS's for multi-core processors. Akihiro Asahara At Fixstars. I fell in love with the wickedly awesome capability of the GPU 2 years ago. is in Astrophysics. Back in my researching days. Takuro Iizuka I am one of the developers of the OpenCL Compiler at Fixstars. however. 
I am also involved in the development of a financial application for HPC.2 base!) for large telescopes.E. and developing a software for astronomical gamma-ray simulation (in FORTRAN!). Takashi Nakamura I am the main developer of the OpenCL Compiler at Fixstars. however. I often wonder how much geekier I could have been had I grew an appreciation for QuickDraw at that time instead.
I currently serve the CEO position here at Fixstars. as well as offer seminars. For more information. Since then. I have been working hard to help spread multi-core technology that is already becoming the standard. I was convinced that software developing process will have to change drastically for the multi-core era.com/en. Satoshi Miki As one of the founding member. In 2004. About Fixstars Corporation Fixstars Corporation is a software company that focuses on porting and optimization of programs for multi-core processors. 9 . manufacturing. and I am currently working on obtaining a PhD in information engineering.fixstars. and decided to change the direction of the company to focus on the research and development of multi-core programs.E. My debut as a programmer was at the age of 25. and the GPU. using multi-core processors such as the Cell/B. Fixstars offers a total solution to improve the performance of multi-core based operations for fields that requires high computing power such as finance.B01-02632560-294 The OpenCL Programming Book fit in some sort of astronomical work here at Fixstars. Fixstars also works on spreading the multi-core technology through publication of articles introducing new technologies. and digital media. please visit. medical.
Introduction to Parallelization

This chapter introduces the basic concepts of parallel programming from both the hardware and the software perspectives, which lead up to the introduction to OpenCL in the chapters to follow.

Why Parallel

In the good old days, software speedup was achieved by using a CPU with a higher clock speed, which significantly increased each passing year. However, at around 2004 when Intel's CPU clock speed reached 4GHz, the increase in power consumption and heat dissipation formed what is now known as the "Power Wall", which effectively caused the CPU clock speed to level off. The processor vendors were forced to give up their efforts to increase the clock speed, and instead adopt a new method of increasing the number of cores within the processor. Since the CPU clock speed has either remained the same or even slowed down in order to economize the power usage, old software designed to run on a single processor will not get any faster just by replacing the CPU with the newest model. To get the most out of the current processors, the software must be designed to take full advantage of the multiple cores and perform processes in parallel.

Today, dual-core CPUs are commonplace even for the basic consumer laptops. This shows that parallel processing is not just useful for performing advanced computations, but that it is becoming common in various applications.

Parallel Computing (Hardware)

First of all, what exactly is "parallel computing"? Wikipedia defines it as "a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ('in parallel')".[1]

Many different hardware architectures exist today to perform a single task using multiple processors. Some examples, in order of decreasing scale, are:

Grid computing - a combination of computer resources from multiple administrative domains applied to a common task.
MPP (Massively Parallel Processor) systems - known as the supercomputer architecture.
Cluster server system - a network of general-purpose computers.
SMP (Symmetric Multiprocessing) system - identical processors (in powers of 2) connected together to act as one unit.
Multi-core processor - a single chip with numerous computing cores.

Flynn's Taxonomy

Flynn's Taxonomy is a classification of computer architectures proposed by Michael J. Flynn [2]. It is based on the concurrency of instruction and data streams available in the architecture. An instruction stream is the set of instructions that makes up a process, and a data stream is the set of data to be processed.

1. Single Instruction, Single Data stream (SISD)
A SISD system is a sequential system where one instruction stream processes one data stream. The pre-2004 PCs were this type of system.

2. Single Instruction, Multiple Data streams (SIMD)
One instruction is broadcasted across many compute units, where each unit processes the same instruction on different data. The vector processor, a type of a supercomputer, is an example of this architecture type. Recently, various micro-processors include SIMD processors. For example, the SSE instructions on Intel CPUs and the SPE instructions on the Cell Broadband Engine perform SIMD operations.

3. Multiple Instruction, Single Data stream (MISD)
Multiple instruction streams process a single data stream. Very few systems fit within this category, with the exception of fault tolerant systems.

4. Multiple Instruction, Multiple Data streams (MIMD)
Multiple processing units each process multiple data streams using multiple instruction streams.

Most parallel computing hardware architectures, such as the SMP and cluster systems, fall within the MIMD category. For this reason, the MIMD architecture is further categorized by memory types. The two main memory types used in parallel computing systems are shared memory and distributed memory. In shared memory type systems, each CPU that makes up the system is allowed access to the same memory space. In distributed memory type systems, each CPU that makes up the system uses a unique memory space.
Different memory types result in different data access methods. If each CPU is running a process, a system with shared memory type allows the two processes to communicate via Read/Write to the shared memory space. On the other hand, a system with distributed memory type requires data transfers to be explicitly performed by the user, since the two memory spaces are managed by two OS's. The next sections explore the two parallel systems in detail.

Distributed Memory Type

Tasks that take too long using one computer can be broken up to be performed in parallel using a network of processors. This type of computing has been done for years in the HPC (High Performance Computing) field, which performs tasks such as large-scale simulation. This is known as a cluster server system, which is perhaps the most commonly-seen distributed memory type system.

One problem with cluster systems is the slow data transfer rates between the processors. This is due to the fact that these transfers occur via an external network. Some recent external networks include Myrinet, Infiniband, and 10Gbit Ethernet, which have become significantly faster compared to the traditional Gigabit Ethernet. Even with these external networks, the transfer rates are still at least an order of magnitude slower than the local memory access by each processor. For this reason, cluster systems are suited for parallel algorithms where the CPUs do not have to communicate with each other too often. These types of algorithms are said to be "Coarse-grained Parallel". These algorithms are used often in simulations where many trials are required, but these trials have no dependency. An example is the risk simulation program used in derivative product development in the finance field.

The MPP (Massively Parallel Processor) system is another commonly-seen distributed memory type system. It connects numerous nodes, which are made up of CPU, memory, and a network port, via a specialized fast network. NEC's Earth Simulator and IBM's Blue Gene are some of the known MPP systems. The main difference between a cluster system and an MPP system lies in the fact that a cluster does not use specialized hardware, giving it a much higher cost performance than the MPP systems. For this reason, many MPP systems, which used to be the leading supercomputer type, have been replaced by cluster systems. According to the TOP500 Supercomputer Sites [3], of the top 500 supercomputers as of June 2009, 17.6% are MPP systems, while 82% are cluster systems.

Shared Memory Type

In shared memory type systems, all processors share the same address space, allowing these processors to communicate with each other through Read/Writes to shared memory. Since explicit data transfers/collections are unnecessary, this results in a much simpler system from the software perspective.

An example of a shared memory type system is the Symmetric Multiprocessing (SMP) system (Figure 1.1, left). The Intel Multiprocessor Specification Version 1.0, released back in 1994, describes the method for using x86 processors in a multi-processor configuration, and 2-Way workstations (workstations where up to 2 CPUs can be installed) are commonly seen today [4]. Although 2-way servers are inexpensive and common, 32-Way or 64-Way SMP servers require specialized hardware, which can become expensive. Furthermore, increasing the number of processors naturally increases the number of accesses to the memory, which makes the bandwidth between the processors and the shared memory a bottleneck. SMP systems are thus not scalable, and only effective up to a certain number of processors.

Figure 1.1: SMP and NUMA

Another example of a shared memory type system is the Non-Uniform Memory Access (NUMA) system. The main difference from an SMP system is that the physical distance between the processor and the memory changes the access speeds. By prioritizing the usage of physically closer memory (local memory) over more distant memory (remote memory), the bottleneck in SMP systems can be minimized. To reduce the access cost to the remote memory, a processor cache and a specialized hardware to make sure the cache data is coherent have been added, and this system is known as a Cache Coherent NUMA (cc-NUMA). The hardware to verify cache coherency is embedded into the CPU. In addition, NUMA gets rid of the Front Side Bus (FSB), which is a bus that connects multiple CPUs as well as other chipsets, to use an interconnect port that uses a Point-to-Point Protocol. These ports are called Quick Path Interconnect (QPI) by Intel, and HyperTransport by AMD.

Now that the basic concepts of SMP and NUMA are covered, looking at typical x86 server products brings about an interesting fact. Server CPUs such as the AMD Opteron and Intel Xeon 5500 Series contain a memory controller within the chip. Thus, when these are used in a multi-processor configuration, the result is a NUMA system. The Dual Core and Quad Core processors are SMP systems, since the processor cores all access the same memory space, and networking these multi-core processors actually ends up creating a NUMA system. In other words, the mainstream 2+way x86 server products are "NUMA systems made by connecting SMP systems" (Figure 1.2) [5].

Figure 1.2: Typical 2Way x86 Server

Accelerator

The parallel processing systems discussed in the previous sections are all made by connecting generic CPUs. Although this is an intuitive solution, another approach is to use a different hardware more suited for performing certain tasks as a co-processor. The non-CPU hardware in this configuration is known as an Accelerator. Some popular accelerators include the Cell Broadband Engine (Cell/B.E.) and GPUs.

Accelerators typically contain cores optimized for performing floating point arithmetic (fixed point arithmetic for some DSPs). Since these cores are relatively simple and thus do not take much area on the chip, numerous cores are typically placed on one chip. For example, the Cell/B.E. contains 1 PowerPC Processor Element (PPE), which is suited for processes requiring frequent thread switching, and 8 Synergistic Processor Elements (SPE), which are cores optimized for floating point arithmetic. These 9 cores are connected using a high-speed bus called the Element Interconnect Bus (EIB), and placed on a single chip. Another example is NVIDIA's GPU chip known as Tesla T10, which contains 30 sets of 8-cored Streaming Processors (SM), for a total of 240 cores on one chip.

In recent years, these accelerators are attracting a lot of attention. This is mainly due to the fact that the generic CPU's floating point arithmetic capability has leveled off at around 10 GFLOPS, while the Cell/B.E. and GPUs can perform between 100 GFLOPS and 1 TFLOPS for a relatively inexpensive price. For example, the circuit board and semiconductor fields use automatic visual checks. The number of checks gets more complex every year, requiring faster image processing so that the rate of production is not compromised. Medical imaging devices such as ultrasonic diagnosing devices and CT scans are taking in higher and higher quality 2D images as an input every year, and the generic CPUs are not capable of processing the images in a practical amount of time. Using a cluster server for these tasks requires a vast amount of space, as well as high power usage, so the accelerators provide a portable and energy-efficient alternative to the cluster. It is also more "Green", which makes it a better option than cluster server systems, since many factories and research labs are trying to cut back on the power usage.

These accelerators are typically used in conjunction with generic CPUs, creating what's known as a "Hybrid System". However, the transfer speed between the host CPU and the accelerator can become a bottleneck, making it unfit for applications requiring frequent I/O operations. Thus, a decision to use a hybrid system, as well as what type of a hybrid system, needs to be made wisely. In summary, an accelerator allows for a low-cost, low-powered, high-performance system, and OpenCL, in brief, is a development framework to write applications that run on these "hybrid systems".

Parallel Computing (Software)
Up until this point, hardware architectures that involve performing one task using numerous processors have been the main focus. This section will focus on the method for parallel programming for the discussed hardware architectures.

Sequential vs Parallel Processing

Take a look at the pseudocode below.

List 1.1: Pseudocode for parallel processing
001: for(i=0; i<N; i++){
002:     resultA = task_a(i);
003:     resultB = task_b(i);
004:     resultC = task_c(i);
005:     resultD = task_d(i);
006:     resultAll += resultA + resultB + resultC + resultD;
007: }

Executing the above code on a single-core, single-CPU processor (SISD system), the 4 tasks would be run sequentially in the order task_a, task_b, task_c, task_d, and the returned values are then summed up. This is then repeated N times, incrementing i after each iteration. This type of method is called Sequential processing. Running the code with N=4 is shown on the left hand side of Figure 1.3.

If this code is compiled without adding any options for optimization on a Dual Core system, the program would run sequentially on one core, while the other core has nothing to do, so it becomes idle. This is clearly inefficient, so the intuitive thing to do here is to split the task into 2 subtasks, and run each subtask on each core. This type of code can benefit from parallelization. In this scenario, as shown on Figure 1.3, the task is split up into 2 such that each subtask runs the loop N/2 times. This is the basis of parallel programming.

Figure 1.3: Parallel processing example
For actual implementation of parallel programs, it is necessary to follow the steps below.

1. Analyze the dependency within the data structures or within the processes, in order to decide which sections can be executed in parallel.
2. Decide on the best algorithm to execute the code over multiple processors.
3. Rewrite the code using frameworks such as Message Passing Interface (MPI), OpenMP, or OpenCL.

In the past, these skills were required only by a handful of engineers, but since multi-core processors are becoming more common, the use of these distributed processing techniques is becoming more necessary. The following sections will introduce basic concepts required to implement parallel processes.
Where to Parallelize?

During the planning phase of parallel programs, certain existing laws must be taken into account. The first law states that if a program spends y% of the time running code that cannot be executed in parallel, the expected speedup from parallelizing is at best a 1/y fold improvement. This law is known as Amdahl's Law [6].

The law can be proven as follows. Assume Ts represents the time required to run the portion of the code that cannot be parallelized, and Tp represents the time required to run the portion of the code that can benefit from parallelization. The processing time when running the program with 1 processor is:

T(1) = Ts + Tp

The processing time when using N processors is:

T(N) = Ts + Tp/N

Using y to represent the proportion of the time spent running code that cannot be parallelized, the speedup S achieved is:

S = T(1)/T(N) = T(1)/(Ts + Tp/N) = 1/(y + (1-y)/N)

Taking the limit as N goes to infinity, the most speedup that can be achieved is S = 1/y. In other words, the speedup is limited by the portion of the code that must be run sequentially.

For a more concrete example, assume that a sequential code is being rewritten to run on a Quad Core CPU. Figure 1.4 shows the 2 cases where the proportion of the code that cannot be run in parallel (y) is 10% and 50%. Ideally, a 4-time speedup is achieved. However, as the law states, even without taking overhead into account, the figure shows speedups of only about 3.1x and 1.6x, depending on y.

Figure 1.4: Amdahl's Law Example
This problem becomes striking as the number of processors is increased. For a common 2-way server that uses Intel's Xeon 5500 Series CPUs, which support hyper-threading, the OS sees 16 cores. GPUs such as NVIDIA's Tesla can have more than 200 cores. Figure 1.5 shows the speedup achieved as a function of the sequentially processed percentage y and the number of cores. The graph clearly shows the importance of reducing the amount of sequentially processed portions, especially as the number of cores is increased. This also implies that the effort used for parallelizing a portion of the code that does not take up a good portion of the whole process may be in vain. In summary, it is more important to reduce serially processed portions than to parallelize a small chunk of the code.

Figure 1.5: Result of scaling for different percentages of tasks that cannot be parallelized
While Amdahl's Law gives a rather pessimistic impression of parallel processing, Gustafson's Law provides a more optimistic view. This law states that as the program size increases, the fraction of the program that can be run in parallel also increases. Recall the previously stated equation:

T(N) = Ts + Tp/N

Gustafson states that, of the two terms, Tp is directly proportional to the program size, and that Tp grows faster than Ts. For example, assume a program where the portion that must be run sequentially is limited to initialization and closing processes, and all other processes can be performed in parallel. By increasing the amount of data to be processed by the program, it is apparent that Gustafson's Law holds true, thereby decreasing the portion that must be performed sequentially. In other words, Gustafson's Law shows that in order to efficiently execute code over multiple processors, a large-scale processing must take place. Development of parallel programs requires close attention to these 2 laws.

Types of Parallelism

After scrutinizing the algorithm and deciding where to parallelize, the next step is to decide on the type of parallelism to use. Parallel processing requires the splitting up of the data to be handled, and/or the process itself. Refer back to the code on List 1.1. This time, assume the usage of a Quad-core CPU to run 4 processes at once. An intuitive approach is to let each processor perform N/4 iterations of the loop, but since there are 4 tasks within the loop, it also makes just as much sense to run each of these tasks on each processor. The former method is called "Data Parallel", and the latter method is called "Task Parallel" (Figure 1.6).

Figure 1.6: Data Parallel vs Task Parallel

Data Parallel

The main characteristic of the data parallel method is that the programming is relatively simple, since multiple processors are all running the same program. This method is efficient when the dependency between the data being processed by each processor is minimal, and all processors finish their task at around the same time. In this case, the number of processors is directly proportional to the speedup that may be achieved, if overhead from parallelization can be ignored. For example, vector addition can benefit greatly from this method. As illustrated in Figure 1.7, the addition at each index can be performed completely independently of each other. Another, more concrete example where this method can be applied is in image processing. The pixels can be split up into blocks, and each of these blocks can be filtered in parallel by each processor.

Figure 1.7: Vector Addition
Task Parallel

The main characteristic of the task parallel method is that each processor executes different commands. This increases the programming difficulty when compared to the data parallel method. Since the processing time may vary depending on how the task is split up, it is actually not suited for the example shown in Figure 1.6: there, the processor utilization is decreased, since task_a and task_c are doing nothing until task_b and task_d finish.

The task parallel method requires a way of balancing the tasks to take full advantage of all the cores. One way is to implement a load balancing function on one of the processors. As shown in Figure 1.8, the load balancer maintains a task queue, and assigns a task to a processor that has finished its previous task.

Figure 1.8: Load Balancing
Another method for task parallelism is known as pipelining. Pipelining is usually in reference to the "instruction pipeline", where multiple instructions, such as instruction decoding, arithmetic operation, and register fetch, are executed in an overlapped fashion over multiple stages. This concept can be used in parallel programming as well. Figure 1.9 shows a case where each processor is given its own task type that it specializes in, such that task_b, task_c, task_d each take in the output of task_a, task_b, task_c as an input. In this example, the start of each task set is shifted in the time domain, and the data moves as a stream across the processors. This method is not suited for the case where only one set of tasks is performed, but it can be effective when processing, for example, videos, where processing frames are taken as inputs one after another.

Figure 1.9: Pipelining
Hardware Dependencies

When porting sequential code to parallel code, the hardware must be chosen wisely. Programs usually have sections suited for data parallelism as well as for task parallelism, and the hardware is usually suited for one or the other. For example, the GPU is suited for data parallel algorithms due to the existence of many cores, since the GPU (at present) is not capable of performing different tasks in parallel. The Cell/B.E., however, is more suited for performing task parallel algorithms, since its 8 cores are capable of working independently of each other. OpenCL allows the same code to be executed on either platform, but since it cannot change the nature of the hardware, the hardware and the parallelization method must be chosen wisely.
Implementing a Parallel Program

After deciding on the parallelization method, the next step is the implementation. This section will explore the different methods. In decreasing order of user involvement:

1. Write parallel code using the operating system's functions.
2. Use a parallelization framework for program porting.
3. Use an automatic-parallelization compiler.

Parallelism using the OS System Calls

Implementing parallel programs using the OS system call requires, at minimum, a call to execute and close a program, and some way of transferring data between the executed programs. The code can be further broken down into "parallel processes" and "parallel threads" to be run on the processor. The difference between processes and threads is as follows. A process is an executing program given its own address space by the operating system. The operating system performs execution, closing, and interruption within these process units, making sure each of these processes' distinct resources do not interfere with each other. A thread is a subset of a process that is executed multiple times within the program. These threads share the same address space as the process. In general, since these threads execute in the same memory space, the overhead from starting and switching is much smaller than when compared to processes.

Data transfer between programs may be performed by a system call to the operating system. For example, UNIX provides a system call shmget() that allocates shared memory that can be accessed by different processes. If this is done on a cluster system, the data transfer between programs is performed using network transfer APIs such as the socket system call. For performing parallel instructions within the processor itself, the operating system provides an API to create and manage threads. For example, UNIX provides a library called Pthreads, which is a POSIX-approved thread protocol. (POSIX is a standard for APIs specified by IEEE.) In general, this is commonly done using a framework instead, but the OS system call may be used in place of the framework.
Whether to use parallel processes or parallel threads within a processor depends on the case, but in general, parallel threads are used if the goal is speed optimization, due to the light overhead. List 1.2 shows an example where each member of an array is incremented using multithreading.

List 1.2: pthread example
001: #include <stdio.h>
002: #include <stdlib.h>
003: #include <pthread.h>
004:
005: #define TH_NUM 4
006: #define N 100
007:
008: static void *thread_increment(void *array)
009: {
010:     int i;
011:     int *iptr;
012:
013:     iptr = (int *)array;
014:     for(i=0; i < N / TH_NUM; i++) iptr[i] += 1;
015:
016:     return NULL;
017: }
018:
019: int main(void)
020: {
021:     int i;
022:     pthread_t thread[TH_NUM];
023:
024:     int array[N];
025:
026:     /* initialize array */
027:     for(i=0; i<N; i++){
028:         array[i] = i;
029:     }
030:
031:     /* Start parallel process */
032:     for(i=0; i<TH_NUM; i++){
033:         if (pthread_create(&thread[i], NULL, thread_increment, array + i * N / TH_NUM) != 0)
034:         {
035:             return 1;
036:         }
037:     }
038:
039:     /* Synchronize threads */
040:     for(i=0; i<TH_NUM; i++){
041:         if (pthread_join(thread[i], NULL) != 0)
042:         {
043:             return 1;
044:         }
045:     }
046:
047:     return 0;
048: }

The code is explained below.

003: Include file required to use the pthread API.
008-017: The code run by each thread. It increments each array element by one. The start index is passed in as an argument.
022: Declaration of a pthread_t variable for each thread. This is used in line 033.
032-037: Creation and execution of threads. In line 033, the third argument is the name of the function to be executed by the thread, and the fourth argument is the argument passed to the thread.
039-045: Waits until all threads finish executing.

Parallelism using a Framework

Many frameworks exist to aid in parallelization, but the ones used in practical applications, such as in research labs and retail products, are limited. The most widely used frameworks are Message Passing Interface (MPI) for cluster servers, OpenMP for shared memory systems (SMP, NUMA), and the parallelization APIs in Boost C++. These frameworks require the user to specify the section, as well as the method used for parallelization, but take care of tasks such as data transfer and execution of the threads, allowing the user to focus on the main core of the program.

List 1.3 shows an example usage of OpenMP, which is supported by most mainstream compilers, such as GCC, Intel C, and Microsoft C. From the original sequential code, the programmer inserts a "#pragma" directive, which explicitly tells the compiler which sections to run in parallel, as well as how many threads to use. The compiler then takes care of the thread creation and the commands for thread execution.

List 1.3: OpenMP Example
001: #include <stdio.h>
002: #include <stdlib.h>
003: #include <omp.h>
004: #define N 100
005: #define TH_NUM 4
006:
007: int main ()
008: {
009:     int i;
010:     int rootBuf[N];
011:
012:     omp_set_num_threads(TH_NUM);
013:
014:     /* Initialize array */
015:     for(i=0; i<N; i++){
016:         rootBuf[i] = i;
017:     }
018:
019:     /* Parallel process */
020:     #pragma omp parallel for
021:     for (i = 0; i < N; i++) {
022:         rootBuf[i] = rootBuf[i] + 1;
023:     }
024:
025:     return(0);
026: }
The code is explained below.

003: Include file required to use OpenMP.
004: Size of the array, and the number of times to run the loop. In general, this number should be somewhat large to benefit from parallelism.
012: Specifies the number of threads to be used. The argument must be an integer.
020: Breaks up the for-loop that follows this directive into the number of threads specified in 012.

Compare this with List 1.2, which uses pthreads. This example shows how much simpler the programming becomes if we use the OpenMP framework. In order to benefit from parallelization, enough computations must be performed within each loop to hide the overhead from process/thread creation.

When compiling the above code, the OpenMP options must be specified. GCC (Linux) requires "-fopenmp", Intel C (Linux) requires "-openmp", and Microsoft Visual C++ requires "/openmp".

Automatic parallelization compiler

Compilers exist that examine for-loops to automatically decide sections that can be run in parallel. The Intel C/C++ compiler does this when the appropriate options are passed.

(On Linux)
> icc -parallel -par-report3 -par-threshold0 -O3 -o parallel_test parallel_test.c
(Windows)
> icl /Qparallel /Qpar-report3 /Qpar-threshold0 /O3 parallel_test.c

The explanations for the options are given below.

• -parallel: Enables automatic parallelization.
• -par-report3: Reports which sections of the code were parallelized. There are 3 report levels, which can be specified in the form -par-report[n].
• -par-threshold0: Sets the threshold used to decide whether loops are parallelized. This is specified in the form -par-threshold[n]. The value for n must be between 0 ~ 100, with a higher number implying that a higher number of computations is required within a loop before it is parallelized. When this value is 0, all sections that can be parallelized become parallelized. The default value is 75.

At a glance, the automatic parallelization compiler seems to be the best solution, since it does not require the user to do anything. In reality, however, as the code becomes more complex, the compiler has difficulty finding what can be parallelized, making the performance suffer. As of August 2009, no existing compiler (at least no commercial one) can auto-generate parallel code for hybrid systems such as the accelerator.

Conclusion

This section discussed the basics of parallel processing from both the hardware and the software perspectives. The content here is not limited to OpenCL, so those interested in parallel processing in general should have some familiarity with the discussed content. The next chapter will introduce the basic concepts of OpenCL.
OpenCL

The previous chapter discussed the basics of parallel processing. This chapter will introduce the main concepts of OpenCL, give an overview of what OpenCL is, and give a glimpse into the history of how OpenCL came about.

Historical Background

Multi-core + Heterogeneous Systems

In recent years, more and more sequential solutions are being replaced with multi-core solutions, such as multi-core processors like the Intel Core i7. The physical limitation has caused processor capability to level off, so the effort is naturally being placed on using multiple processors in parallel. In recent years, it is no longer uncommon for laptops to be equipped with relatively high-end GPUs. For desktops, it is possible to have multiple GPUs, due to PCIe slots becoming more common. OpenCL provides an effective way to program these CPU + GPU type heterogeneous systems. OpenCL, however, is not limited to heterogeneous systems, and it can be used on homogeneous multi-core systems as well.

What is OpenCL?

To put it simply, OpenCL (Open Computing Language) is "a framework suited for parallel programming of heterogeneous systems". The framework includes the OpenCL C language as well as the compiler and the runtime environment required to run the code written in OpenCL C.

OpenCL is standardized by the Khronos Group, which is known for its management of the OpenGL specification. The group consists of members from companies like AMD, Apple, IBM, Intel, NVIDIA, Sony, Texas Instruments, and Toshiba, which are all well-known processor vendors and/or multi-core software vendors. The company authoring this book, the Fixstars Corporation, is also a part of the standardization group, and involved in the specification process.

The goal of the standardization is ultimately to be able to program any combination of processors, such as CPU, GPU, DSP, Cell/B.E., etc., using one language.
Heterogeneous systems such as the CPU + GPU combination are becoming more common as well. The GPU is a processor designed for graphics processing, but its parallel architecture containing up to hundreds of cores makes it suited for data parallel processes. The GPGPU (General Purpose GPU), which uses the GPU for tasks other than graphics processing, has been attracting attention as a result [7]. The CPU + GPU combination is effective, as it allows the CPU to perform general tasks and the GPU to perform data parallel tasks [8].

Another example of an accelerator is the Cell/B.E., known for its use in the PLAYSTATION 3. The Cell/B.E. comprises a PPE for control and 8 SPEs for computations; by using one core for control and the other cores for computations, it is suited for task parallel processing.

The hardware can be chosen as follows:

• Data Parallel → GPU
• Task Parallel → Cell/B.E.
• General Purpose → CPU

Vendor-dependent Development Environment

We will now take a look at software development in a heterogeneous environment. First, we will take a look at CUDA. NVIDIA makes GPUs for graphics processing, as well as GPUs specialized for GPGPU called "Tesla". NVIDIA's CUDA is a way to write generic code to be run on the GPU, and it has made the programming much simpler.

In CUDA, the control management side (CPU) is called the "host", and the data parallel side (GPU) is called the "device". The CPU side program is called the host program, and the GPU side program is called the kernel. Since no OS is running on the GPU, the CPU must perform tasks such as code execution control, file system management, and the user interface, so that the data parallel computations can be performed on the GPU. The main difference between CUDA and the normal development process is that the kernel must be written in the CUDA language, which is an extension of the C language. An example kernel code is shown in List 2.1.

List 2.1: Example Kernel Code
001: /* Code to be executed on the GPU (kernel) */
002: __global__ void vecAdd(float *A, float *B, float *C)
003: {
004:     int tid = threadIdx.x; /* Get thread ID */
005:     C[tid] = A[tid] + B[tid];
006:     return;
007: }

The kernel is called from the CPU side, as shown in List 2.2 [9].

List 2.2: Calling the kernel in CUDA
001: int main(void) /* Code to be run on the CPU */
002: {
003:     …
004:     vecAdd<<<1, 256>>>(dA, dB, dC); /* kernel call (256 threads) */
005:     …
006: }

Similarly, the Cell/B.E. is programmed using the "Cell SDK" released by IBM [10]. Although it does not extend a language like CUDA does, the SPE must be controlled by the PPE using specific library functions provided by the Cell SDK, and specific library functions are required in order to run a program on the SPE. (Table 2.1)

Table 2.1: Cell SDK API commands
Function | API
Open SPE program | spe_image_open()
Create SPE context | spe_context_create()
Load SPE program | spe_program_load()
Execute SPE program | spe_context_run()
Delete SPE context | spe_context_destroy()
Close SPE program | spe_image_close()

You may have noticed that the structures of the two heterogeneous systems are the same, with the control being done on the CPU/PPE, and the computation being performed on the GPU/SPE. However, the programmers must learn two distinct sets of APIs, each requiring specific compilers.

In order to run the same instruction on different processors, the programmer is also required to learn processor-specific instructions. As an example, in order to perform a SIMD instruction, x86 must call an SSE instruction, the PPE a VMX instruction, and the SPE an SPE-embedded instruction, respectively. (Table 2.2)
Table 2.2: SIMD ADD instructions on different processors
x86 (SSE3): _mm_add_ps()
PowerPC (PPE): vec_add()
SPE: spu_add()

Additionally, in the embedded electronics field, a similar model is used, where the CPU manages the DSPs suited for signal processing. Even here, the same sort of procedure is performed, using a completely different set of APIs.

To summarize, all of these combinations have the following characteristics in common:

• Perform vectorized operations using the SIMD hardware
• Use multiple compute processors in conjunction with a processor for control
• The systems can multi-task

However, each combination requires its own unique method for software development. This can prove to be inconvenient, since software development and related services must be rebuilt from the ground up every time a new platform hits the market. Also, the acquired skills might quickly become outdated, and thus prove to be useless. The software developers must learn a new set of APIs and languages, as programming methods can be quite distinct from each other. This is especially common in heterogeneous platforms, where there may be varying difficulty in the software development process, which may prevent the platform from being chosen solely on its hardware capabilities. The answer to this problem is "OpenCL".

An Overview of OpenCL

OpenCL is a framework that allows a standardized coding method independent of processor types or vendors. In particular, the following two specifications are standardized:

1. OpenCL C Language Specification
Extended version of C to allow parallel programming

2. OpenCL Runtime API Specification
API used by the control node to send tasks to the compute cores

By using the OpenCL framework, software developers will be able to write parallel code that is independent of the hardware platform. The heterogeneous model, made up of a control processor and multiple compute processors, is currently being employed in almost all areas, including HPC, desktops, and embedded systems. OpenCL will allow all of these areas to be taken care of.

Performance

OpenCL is a framework that allows for common programming in any heterogeneous environment. This brings up the question of whether performance is compromised, as is often the case for this type of common language and middleware. If the performance suffers significantly as a result of using OpenCL, its purpose becomes questionable. As an example, if an OpenCL implementation includes support for both a multi-core CPU and a GPU, a CPU core can use the other cores, including their SSE units, as well as the GPU, to perform computations in parallel [11].

OpenCL Software Framework

So what exactly is meant by "using OpenCL"? When developing software using OpenCL, the following two tools are required:

• OpenCL Compiler
• OpenCL Runtime Library

The programmer writes the source code that can be executed on the device using the OpenCL C language. In order to actually execute the code, it must first be compiled to a binary using the OpenCL compiler designed for that particular environment. This process must be coded by the programmer [12]. Once the code is compiled, it must then be executed on the device. This execution process, which includes the loading of the binary and memory allocation, can be managed by the controlling processor. The set of commands that can be used for this task is what is contained in the "OpenCL Runtime Library", which is designed to be used for that particular environment [13]. The control processor is assumed to have a C/C++ compiler installed, so the execution process can be written in normal C/C++, and this execution process is common to all heterogeneous combinations. The OpenCL Runtime Library Specification also requires a function to compile the code to be run on the device. Therefore, all tasks ranging from compiling to executing can be written within the control code, requiring only this code to be compiled and run manually. In this way, everything can be written using OpenCL, and OpenCL code is capable of being executed on any computer that provides an OpenCL implementation.
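The compile-and-execute flow just described maps onto a handful of runtime calls. The following fragment is a minimal sketch with error handling omitted; it assumes a context, device ID, and command queue have already been created, that the OpenCL C source has been read into source_str/source_size, and the kernel name "my_kernel" is a placeholder:

```c
/* Sketch of the online build-and-run flow described above.
   Assumes `context`, `device_id`, `command_queue`, `source_str`,
   and `source_size` already exist; error handling is omitted. */
cl_int ret;
cl_program program = clCreateProgramWithSource(context, 1,
        (const char **)&source_str, (const size_t *)&source_size, &ret);
ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL); /* compile for the device */
cl_kernel kernel = clCreateKernel(program, "my_kernel", &ret);  /* pick a kernel by name */
ret = clEnqueueTask(command_queue, kernel, 0, NULL, NULL);      /* execute on the device */
```

The same sequence appears in full in the Hello World program later in this chapter.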
Why OpenCL?

This section will discuss the advantages of developing software using OpenCL.

Standardized Parallelization API

As mentioned previously, each heterogeneous system requires its own unique API/programming method. The aforementioned CUDA and Cell SDK are such examples. As far as standardized tools, OpenMP for shared memory systems, Unix POSIX threads, and MPI for distributed memory type systems exist. OpenCL is another standardized tool, but contrary to the other tools, it is independent of processors, operating systems, and memory types. By learning OpenCL, software developers will be able to code in any parallel environment.

That being said, coding in OpenCL does not imply that it will run fast on any device, even though it should be able to run on any device. The performance can vary greatly just by the existence of SIMD units on the device. Thus, the hardware selection must be kept in mind: if a data parallel algorithm is written, it should be executed on a device suited for data parallel processes. The hardware should be chosen wisely to maximize performance. One of the functions that the OpenCL runtime library must support is the ability to determine the device type, which can be used to select the device to be used in order to run the OpenCL binary.

It is, however, necessary for these processors to support OpenCL, requiring the OpenCL compiler and the runtime library to be implemented. There are many reputable groups working on the standardization of OpenCL, making the switch over likely to be just a matter of time.

Optimization

OpenCL provides standard APIs at a level close to the hardware, minimizing the overhead from the usage of OpenCL. To be specific, these include SIMD vector operations, data parallel processing, task parallel processing, asynchronous memory copy using DMA, and memory transfers between the processors. Since high abstraction is not used and the abstraction layer is relatively close to the hardware, it is possible to tune the performance to get the most out of the compute processors of choice using OpenCL; OpenCL prioritizes maximizing the performance of a device rather than its portability. Therefore, the use of OpenCL would never result in limiting the performance.

In case lower-level programming is desired, OpenCL does allow the kernel to be written in the processor's native language using its own API, in case some specific functions are not supported in OpenCL. This seems to somewhat destroy the purpose of OpenCL, as it would make the code non-portable. However, it should be kept in mind that if some code is written in standard OpenCL, it should work in any environment.

Learning Curve

The OpenCL C language, as the name suggests, uses almost the exact same syntax as the C language. It is extended to support SIMD vector operations and multiple memory hierarchies, while some features not required for computations, such as printf(), were taken out. The control program using the OpenCL runtime API can be written in C or C++, and does not require a special compiler. The developer is required to learn how to use the OpenCL runtime API, but this is not very difficult.

When developing, the main steps taken are coding, debugging, and then tuning. If the code is already debugged and ready in OpenCL, the developer can start directly from the tuning step. This may seem counter-intuitive, since in order to maximize performance, it is necessary to learn the hardware-dependent methods anyway. However, starting from working OpenCL code is still beneficial, since this would reduce the chance of wasting time on bugs that may arise from not having a full understanding of the hardware. Also, once the control code is written to control one device type, the code will not have to be changed in order to control another device. Anyone who has developed code in these types of environments should see the benefit in this, especially those who have experience with developing in heterogeneous environments such as CUDA.

Applicable Platforms

This section will introduce the main concepts and key terms necessary to gain familiarity with OpenCL. Some terms may be shortened for the ease of reading (e.g. OpenCL Devices → Devices).

Host + Device

Up until this point, the processors in heterogeneous systems (OpenCL platforms) were called control processors and compute processors, but OpenCL defines these as follows.
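The device-type query mentioned above is exposed through clGetPlatformIDs and clGetDeviceIDs. The sketch below asks the first platform for a GPU and falls back to a CPU device; the preference order is our own illustration, not something OpenCL performs automatically, and error handling is omitted:

```c
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

/* Sketch: prefer a GPU device, fall back to a CPU device.
   The fallback policy is illustrative, not part of OpenCL. */
static cl_device_id pick_device(void)
{
    cl_platform_id platform;
    cl_device_id device = NULL;
    cl_uint num;

    clGetPlatformIDs(1, &platform, &num);
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, &num) != CL_SUCCESS)
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, &num);
    return device;
}
```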
1. Host
Environment where the software to control the devices is executed. This is typically the CPU, as well as the memory associated with it.

2. OpenCL Devices
Environment where the software to perform computation is executed. GPU, DSP, Cell/B.E., and CPU are some of the typically used devices. An OpenCL device is expected to contain multiple Compute Units, each of which is made up of multiple Processing Elements. The memory associated with the processors is also included in the definition of a device. As an example, a GPU, which contains numerous compute units, is described in OpenCL terms as follows.

• OpenCL Device – GPU
• Compute Unit – Streaming Multiprocessor (SM)
• Processing Element – Scalar Processor (SP)

An OpenCL device will be called a device from this point forward.

There are no rules regarding how the host and the OpenCL device are connected. In the case of CPU + GPU, PCI Express is used most often. For CPU servers, the CPUs can be connected over Ethernet and use TCP/IP for data transfer. Since the OpenCL runtime implementation takes care of these tasks, the programmer will not have to know the exact details of how to perform them.

Application Structure

This section will discuss a typical application that runs on the host and device. OpenCL explicitly separates the program run on the host side from the program run on the device side.

The host is programmed in C/C++ using the OpenCL runtime API. This code is compiled using compilers such as GCC and Visual Studio, and is linked to an OpenCL runtime library implemented for the host-device combination. As stated earlier, the host controls the device using the OpenCL runtime API.

The program run on the device is called a kernel. When creating an OpenCL application, this kernel is written in OpenCL C, and compiled using the OpenCL compiler for that device. Since OpenCL expects the device to not be running an operating system, the compiled kernel needs help from the host in order to be executed on the device.
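For comparison with the CUDA kernel in List 2.1, the same vector addition written as an OpenCL C kernel might look like the sketch below; the work-item index is obtained with get_global_id() instead of threadIdx:

```c
/* OpenCL C sketch of the vecAdd kernel from List 2.1. */
__kernel void vecAdd(__global const float *A,
                     __global const float *B,
                     __global float *C)
{
    int tid = get_global_id(0); /* work-item index, analogous to threadIdx.x */
    C[tid] = A[tid] + B[tid];
}
```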
Parallel Programming Models

OpenCL provides APIs for the following programming models.

• Data parallel programming model
• Task parallel programming model

In the data parallel programming model, the same kernel is passed through the command queue to be executed simultaneously across the compute units or processing elements. Since each kernel processes different sets of data, each kernel requires a unique index so that the processing is performed on different data sets. OpenCL provides this functionality via the concept of index space. OpenCL gives an ID to a group of kernels to be run on each compute unit, and another ID to each kernel within each compute unit. The ID for the compute unit is called the Workgroup ID, and the ID for the processing element is called the Work Item ID. When the user specifies the total number of work items (global items) and the number of work items to be run on each compute unit (local items), the IDs are given for each item by the OpenCL runtime API. The number of workgroups can be computed by dividing the number of global items by the number of local items. The ID can have up to 3 dimensions to correspond to the data to process. The index space is defined and accessed by this ID set, and the retrieval of this ID is necessary when programming the kernel.

For the task parallel programming model, different kernels are passed through the command queue to be executed on different compute units or processing elements. The index space is defined for task parallel processing as well, in which both the number of work groups and the number of work items are 1.

Memory Model

OpenCL allows the kernel to access the following 4 types of memory.

1. Global Memory
Memory that can be read from all work items. It is physically the device's main memory.

2. Constant Memory
Also memory that can be read from all work items. It is physically the device's main memory, but it can be used more efficiently than global memory if the compute units contain hardware to support constant memory cache. The constant memory is set and written by the host.

3. Local Memory
Memory that can be read from work items within a work group. It is physically the shared memory on each compute unit.

4. Private Memory
Memory that can only be used within each work item. It is physically the registers used by each processing element.

Note that the physical locations of the memory types specified above assume NVIDIA's GPUs. The physical location of each memory type may differ depending on the platform. For example, if the devices are x86 multi-core CPUs, all 4 memory types will be physically located in the main memory of the host. However, OpenCL requires the explicit specification of memory types regardless of the platform.

The 4 memory types mentioned above all exist on the device, and the kernel can only access memory on the device itself. The host side has its own memory as well. The host, on the other hand, is capable of reading and writing to the global, constant, and host memory.
OpenCL Setup

This chapter will set up the OpenCL development environment, and run a simple OpenCL program.

Available OpenCL Environments

This section will introduce the available OpenCL environments as of March 2010.

FOXC (Fixstars OpenCL Cross Compiler)

FOXC, an OpenCL compiler, along with the FOXC runtime, which is the OpenCL runtime implementation, are software currently being developed by the Fixstars Corporation. It is still a beta release as of this writing (March 2010). Although OpenCL is a framework that allows for parallel programming of heterogeneous systems, it can also be used for homogeneous systems. OpenCL users can use FOXC and the FOXC runtime to compile and execute a multi-core x86 program; therefore, to run a program compiled using FOXC, an additional accelerator is unnecessary. FOXC is a source-to-source compiler that generates readable C code that uses embedded SSE functions from OpenCL C code, giving the user a chance to verify the generated code. Also, since multi-threading using POSIX threads is supported, multiple devices can be running in parallel on a multi-core system.

Hardware: Multi-core x86 CPU, x86 server
Tested Environment: CentOS 5 (32-bit, 64-bit), Yellow Dog Enterprise Linux (x86, 64-bit)

Further testing will be performed in the future. For the latest information, visit the FOXC website (www.fixstars.com/en/foxc).

NVIDIA OpenCL

NVIDIA has released their OpenCL implementation for their GPUs. NVIDIA also has a GPGPU development environment known as CUDA, but their OpenCL implementation allows the user to choose either environment.
Hardware: CUDA-enabled graphics boards
NVIDIA GeForce 8, 9, 200 Series
NVIDIA Tesla
NVIDIA Quadro series

Tested Environment:
Windows XP (32-bit, 64-bit)
Windows Vista (32-bit, 64-bit)
Windows 7 (32-bit, 64-bit)
Red Hat Enterprise Linux 5.3 (32-bit, 64-bit)
Fedora 8 (32-bit, 64-bit)
Ubuntu 8.10 (32-bit, 64-bit)
Yellow Dog Enterprise Linux 6 (32-bit, 64-bit)

The latest CUDA-enabled graphics boards are listed on NVIDIA's website (www.nvidia.com/object/cuda_learn_products.html).

AMD (ATI) OpenCL

ATI used to be a reputable graphics-chip vendor like NVIDIA, but was bought out by AMD in 2006. Some graphics products from AMD still have the ATI brand name. AMD has a GPGPU development environment called "ATI Stream SDK", which started to officially support OpenCL with their v2.01 release (March 2010). AMD's OpenCL environment supports both multi-core x86 CPUs as well as their graphics boards.

Hardware: Graphics boards by AMD (Discrete GPU)
ATI Radeon HD 58xx/57xx/48xx
ATI FirePro V8750/8700/7750/5700/3750
AMD FireStream 9270/9250
ATI Mobility Radeon HD 48xx
ATI Mobility FirePro M7740
ATI Radeon Embedded E4690

Tested Environment:
Windows XP SP3 (32-bit)/SP2 (64-bit)
Windows Vista SP1 (32-bit, 64-bit)
Windows 7 (32-bit, 64-bit)
OpenSUSE 11 (32-bit, 64-bit)
Ubuntu 9.04 (32-bit, 64-bit)

The latest lists of supported hardware and environments are available on AMD's website (amd.com/gpu/ATIStreamSDKBetaProgram/Pages/default.aspx).

Apple OpenCL

One of the main features included in Mac OS X 10.6 (Snow Leopard) was the Apple OpenCL, which was the first worldwide OpenCL release. The default support of OpenCL in the operating system, which allows developers access to OpenCL at their fingertips, may help spread the use of OpenCL as a means to accelerate programs via GPGPU. Apple's OpenCL compiler supports both GPUs and x86 CPUs. Xcode, which is Apple's development environment, must be installed prior to the use of OpenCL.

Hardware: Any Mac with an Intel CPU
Tested Environment: Mac OS X 10.6 (Snow Leopard)

IBM OpenCL

IBM has released the alpha version of the OpenCL Development Kit for their BladeCenter QS22 and JS23, which can be downloaded from IBM alphaWorks (alphaworks.ibm.com/tech/opencl/). QS22 is a blade server featuring two IBM PowerXCell 8i processors, offering five times the double precision performance of the previous Cell/B.E. processor. JS23 is another blade server, made up of two POWER6+ processors, which is IBM's traditional RISC processor. IBM's OpenCL includes compilers for both the POWER processors as well as for the Cell/B.E. As of this writing (March 2010), they are still alpha releases, which have yet to pass the OpenCL Spec test.
Hardware / OS:
IBM BladeCenter QS22 running Fedora 9
IBM BladeCenter JS23 running Red Hat Enterprise Linux 5.3

Developing Environment Setup

This section will walk through the setup process of FOXC, Apple OpenCL, and NVIDIA OpenCL.

FOXC Setup

This setup procedure assumes the installation of FOXC in a 64-bit Linux environment, under the directory "/usr/local". FOXC can be downloaded for free from the Fixstars website (www.fixstars.com/en/foxc). The install package can be uncompressed as follows.

> tar -zxf foxc-install-linux64.tar.gz -C /usr/local

The following environment variables must be set.

List 3.1: C Shell
setenv PATH /usr/local/foxc-install/bin:${PATH}
setenv LD_LIBRARY_PATH /usr/local/foxc-install/lib

List 3.2: Bourne Shell
export PATH=/usr/local/foxc-install/bin:${PATH}
export LD_LIBRARY_PATH=/usr/local/foxc-install/lib:${LD_LIBRARY_PATH}

This completes the installation of FOXC.

Apple OpenCL Setup

Mac OS X 10.6 (Snow Leopard) is required to use the Apple OpenCL. Also, since OS X does not have the necessary development toolkits (e.g. GCC) installed by default, these kits must first be downloaded from Apple's developer site. Download the latest version of "Xcode" from the Apple Developer Connection site (Figure 3.1).
Xcode is a free IDE distributed by Apple, and it includes every tool required to start developing an OpenCL application. You will need an ADC (Apple Developer Connection) account to download this file. The ADC Online account can be created for free. As of this writing (March 2010), the latest Xcode version is 3.2.1.

Figure 3.1: Apple Developer Connection (apple.com/mac/)

Double click the downloaded .dmg file, which automatically mounts the archive under /Volumes. The archive can be viewed from the Finder. You should see a file called "Xcode.mpkg" (the name can vary depending on the Xcode version, however) (Figure 3.2).

Figure 3.2: Xcode Archive Content

This file is the installation package for Xcode. Double click this file to start the installer. Most of the installation procedure is self-explanatory, but there is one important step. When you reach the screen shown in Figure 3.3, make sure that the box for "UNIX Dev Support" is checked.

Figure 3.3: Custom Installation Screen

Continue onward after making sure that the above box is checked. When you reach the screen shown in Figure 3.4, the installation has finished successfully. You may now use OpenCL on Mac OS X.

Figure 3.4: Successful Xcode installation
NVIDIA OpenCL Setup

NVIDIA OpenCL is supported on multiple platforms. The installation procedure will be split up into that for Linux, and that for Windows.

Install NVIDIA OpenCL on Linux

This section will walk through the installation procedure for 64-bit CentOS 5.3. First, you should make sure that your GPU is CUDA-enabled. Type the following on the command line to get the GPU type (# means run as root).

# lspci | grep -i nVidia

The list of CUDA-enabled GPUs can be found online at the NVIDIA CUDA Zone (www.nvidia.com/object/cuda_gpus.html).

Next, make sure GCC is installed. NVIDIA OpenCL uses GCC as the compiler, and its version must be 3.4 or 4.x prior to 4.3. Type the following on the command line.

> gcc --version

If you get an error message such as "command not found: gcc", then GCC is not installed. If the version is not supported for CUDA, an upgrade or a downgrade is required.

Next, go to the NVIDIA OpenCL download page (www.nvidia.com/object/get-opencl.html) (Figure 3.5). Downloading from this site requires user registration. Download the following 2 files.

• NVIDIA Drivers for Linux (64-bit): nvdrivers_2.3_linux_64_190.29.run
• GPU Computing SDK code samples and more: gpucomputingsdk_2.3b_linux.run

The following is optional, but will prove to be useful.

• OpenCL Visual Profiler v1.0 Beta (64-bit): openclprof_1.0-beta_linux_64.tar.gz

Figure 3.5: NVIDIA OpenCL download page
First, install the driver. This requires the X Window System to be stopped. If you are using a display manager, this must be stopped. Type the following command as root if using gdm.

# service gdm stop

Type the following to run Linux in command-line mode.

# init 3

Execute the downloaded file using the shell.

# sh nvdrivers_2.3_linux_64_190.29.run

You will get numerous prompts, which you can answer "OK" unless you are running in a special environment. Restart the X Window System after a successful installation.

# startx

At this point, log off as root, and log back in as a user. Next, install the GPU Computing SDK. Type the following on the command line.

> sh gpucomputingsdk_2.3b_linux.run

You will be prompted for an install directory, which is $HOME by default. This completes the installation procedure for Linux.

Install NVIDIA OpenCL on Windows

This section will walk through the installation on 32-bit Windows Vista. First, go to the CUDA-enabled GPU list (link given in the Linux section) to make sure your GPU is supported. The GPU type can be verified in the "Device Manager", under "Display Adapter". Building the SDK samples requires either Microsoft Visual Studio 8 (2005) or Visual Studio 9 (2008) (Express Edition and above). Make sure this is installed, as this is required to install the "GPU Computing SDK code samples and more" package.

Next, go to the NVIDIA OpenCL download page (www.nvidia.com/object/get-opencl.html) (Figure 3.5). Downloading from this site requires user registration. Download the following 2 files.

• NVIDIA Drivers for WinVista and Win7 (190.89): nvdrivers_2.3_winvista_32_190.89_general.exe
• GPU Computing SDK code samples and more: gpucomputingsdk_2.3b_win_32.exe

The following is optional, but will prove to be useful.

• OpenCL Visual Profiler v1.0 Beta (64-bit): openclprof_1.0-beta_windows.zip

Double click the downloaded executable for the NVIDIA driver in order to start the installation procedure. Restart your PC when prompted. Next, after the NVIDIA driver is installed, install the GPU Computing SDK. Installation of the SDK requires user registration. After the installation is completed successfully, the SDK will be installed in the following directories:
• Windows XP
C:¥Documents and Settings¥All Users¥Application Data¥NVIDIA Corporation¥NVIDIA GPU Computing SDK¥OpenCL
• Windows Vista / Windows 7
C:¥ProgramData¥NVIDIA Corporation¥NVIDIA GPU Computing SDK¥OpenCL

Restart your system after the SDK installation. You should now be able to run the OpenCL sample programs inside the SDK. To do so, double click on the "NVIDIA GPU Computing SDK Browser" icon which should have been created on your desktop. You will see a list of all the sample programs in the SDK. Open the "OpenCL Samples" tab to see all the sample code written in OpenCL. The sample code is listed in increasing level of difficulty. Some samples include a "Whitepaper", which is a detailed technical reference for that sample.

Visual Studio Setup

Syntax highlighting of OpenCL code on Visual Studio is not supported by default. The following procedure will enable syntax highlighting.

1. Copy usertype.dat

Under the "doc" folder found inside the SDK installation directory, you should see a file called "usertype.dat". Copy this file to the following directory.

If running on 32-bit Windows
C:¥Program Files¥Microsoft Visual Studio 8¥Common7¥IDE (VC8)
C:¥Program Files¥Microsoft Visual Studio 9¥Common7¥IDE (VC9)

If running on 64-bit Windows
C:¥Program Files (x86)¥Microsoft Visual Studio 8¥Common7¥IDE (VC8)
C:¥Program Files (x86)¥Microsoft Visual Studio 9¥Common7¥IDE (VC9)

If "usertype.dat" already exists in the directory, use an editor to append the content of the NVIDIA SDK's "usertype.dat" to the existing file.

2. Start Visual Studio. Go to the "Tool" menu bar, and then click on "Option". On the left panel, select "Text Editor", then "File Extension". On the right panel, type "cl" in the box labeled
(The code can be downloaded from. Visual Studio will now perform syntax highlighting on . clext.dll If an application explicitly specifies the link to OpenCL. Hello World List 3. cl_gl. Since we have not yet gone over the OpenCL grammar.com/books/opencl) List 3. the string set on the kernel will be copied over to the host side.kernel (hello.4 shows the familiar "Hello.h.3 and 3.h.3: Hello World . When developing an OpenCL program. Since standard in/out cannot be used within the kernel. 51 .lib (32-bit version and 64-bit versions exist) • Dynamic Link Library (Default: "¥Windows¥system32") OpenCL. which can then be outputted. OpenCL. we will use the kernel only to set the char array.cl files.fixstars. World!" program.h • Library (Default: "NVIDIA GPU Computing SDK¥OpenCL¥common¥lib¥[Win32|x64]" OpenCL. we will start learning the OpenCL programming basics by building and running actual code.B01-02632560-294 The OpenCL Programming Book "Extension". First OpenCL Program From this section onward. the required files for building are located as follows. cl_platform.h.lib is not required at runtime.cl) 001: __kernel void hello(__global char* string) 002: { 003: string[0] = 'H'. Select "Microsoft Visual C++" on the drop-down menu. • Header File (Default: "NVIDIA GPU Computing SDK¥OpenCL¥common¥inc¥CL") cl. written in OpenCL. you should concentrate on the general flow of OpenCL programming. 005: string[2] = 'l'. In this program. 004: string[1] = 'e'.dll. 006: string[3] = 'l'. and then click the "Add" button.
015: string[12] = '!'. 016: string[13] = '¥0'. 013: string[10] = 'l'. 012: string[9] = 'r'. 021: cl_platform_id platform_id = NULL.h> 003: 004: #ifdef __APPLE__ 005: #include <OpenCL/opencl.c) 001: #include <stdio. 016: cl_context context = NULL. 022: cl_uint ret_num_devices.4: Hello World . 018: cl_mem memobj = NULL. 009: string[6] = ' '.h> 008: #endif 009: 010: #define MEM_SIZE (128) 011: #define MAX_SOURCE_SIZE (0x100000) 012: 013: int main() 014: { 015: cl_device_id device_id = NULL. 008: string[5] = '. 017: } List 3. 010: string[7] = 'W'. 014: string[11] = 'd'. 020: cl_kernel kernel = NULL. 011: string[8] = 'o'. 52 . 023: cl_uint ret_num_platforms. 017: cl_command_queue command_queue = NULL. 019: cl_program program = NULL.host (hello.h> 006: #else 007: #include <CL/cl.'.B01-02632560-294 The OpenCL Programming Book 007: string[4] = 'o'.h> 002: #include <stdlib.
024: cl_int ret;
025:
026: char string[MEM_SIZE];
027:
028: FILE *fp;
029: char fileName[] = "./hello.cl";
030: char *source_str;
031: size_t source_size;
032:
033: /* Load the source code containing the kernel*/
034: fp = fopen(fileName, "r");
035: if (!fp) {
036: fprintf(stderr, "Failed to load kernel.¥n");
037: exit(1);
038: }
039: source_str = (char*)malloc(MAX_SOURCE_SIZE);
040: source_size = fread(source_str, 1, MAX_SOURCE_SIZE, fp);
041: fclose(fp);
042:
043: /* Get Platform and Device Info */
044: ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
045: ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1, &device_id, &ret_num_devices);
046:
047: /* Create OpenCL context */
048: context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);
049:
050: /* Create Command Queue */
051: command_queue = clCreateCommandQueue(context, device_id, 0, &ret);
052:
053: /* Create Memory Buffer */
054: memobj = clCreateBuffer(context, CL_MEM_READ_WRITE, MEM_SIZE * sizeof(char), NULL, &ret);
055:
056: /* Create Kernel Program from the source */
057: program = clCreateProgramWithSource(context, 1, (const char **)&source_str,
058: (const size_t *)&source_size, &ret);
059:
060: /* Build Kernel Program */
061: ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);
062:
063: /* Create OpenCL Kernel */
064: kernel = clCreateKernel(program, "hello", &ret);
065:
066: /* Set OpenCL Kernel Parameters */
067: ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&memobj);
068:
069: /* Execute OpenCL Kernel */
070: ret = clEnqueueTask(command_queue, kernel, 0, NULL, NULL);
071:
072: /* Copy results from the memory buffer */
073: ret = clEnqueueReadBuffer(command_queue, memobj, CL_TRUE, 0,
074: MEM_SIZE * sizeof(char), string, 0, NULL, NULL);
075:
076: /* Display Result */
077: puts(string);
078:
079: /* Finalization */
080: ret = clFlush(command_queue);
081: ret = clFinish(command_queue);
082: ret = clReleaseKernel(kernel);
083: ret = clReleaseProgram(program);
084: ret = clReleaseMemObject(memobj);
085: ret = clReleaseCommandQueue(command_queue);
086: ret = clReleaseContext(context);
087:
088: free(source_str);
089:
090: return 0;
091: }

The include header is located in a different directory depending on the environment (Table 3.1).
Make sure to specify the correct location.

Table 3.1: Include header location (as of March 2010)

OpenCL implementation    Include header
AMD                      CL/cl.h
Apple                    OpenCL/opencl.h
FOXC                     CL/cl.h
NVIDIA                   CL/cl.h

The sample code defines the following macro so that the header is correctly included in any environment.

#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

Building in Linux/Mac OS X

Once the program is written, we are ready to build and run it. The procedures for building vary depending on the OpenCL implementation. This section will describe the procedure under Linux/Mac OS X. The kernel and host code are assumed to exist within the same directory, and "path-to-..." should be replaced with the corresponding OpenCL SDK path. The default SDK paths are as shown in Table 3.2.

Table 3.2: Path to SDK

SDK                       Path
AMD Stream SDK 2.0 beta4  Path-to-AMD/ati-stream-sdk-v2.0-beta4-lnx32 (32-bit Linux)
                          Path-to-AMD/ati-stream-sdk-v2.0-beta4-lnx64 (64-bit Linux)
FOXC                      Path-to-foxc/foxc-install
NVIDIA GPU Computing SDK  $(HOME)/NVIDIA_GPU_Computing_SDK (Linux)

The build commands on Linux/Mac OS X are as follows.

AMD OpenCL
> gcc -I /path-to-AMD/include -L/path-to-AMD/lib/x86 -o hello hello.c -Wl,-rpath,/path-to-AMD/lib/x86 -lOpenCL (32-bit Linux)
> gcc -I /path-to-AMD/include -L/path-to-AMD/lib/x86_64 -o hello hello.c -Wl,-rpath,/path-to-AMD/lib/x86_64 -lOpenCL (64-bit Linux)

FOXC
> gcc -I /path-to-foxc/include -L /path-to-foxc/lib -o hello hello.c -Wl,-rpath,/path-to-foxc/lib -lOpenCL

Apple
> gcc -o hello hello.c -framework opencl

NVIDIA
> gcc -I /path-to-NVIDIA/OpenCL/common/inc -L /path-to-NVIDIA/OpenCL/common/lib/Linux32 -o hello hello.c -lOpenCL (32-bit Linux)
> gcc -I /path-to-NVIDIA/OpenCL/common/inc -L /path-to-NVIDIA/OpenCL/common/lib/Linux64 -o hello hello.c -lOpenCL (64-bit Linux)

Alternatively, you can use the Makefile included with the sample code to build the OpenCL code on the various platforms, as written below.

> make amd (Linux)
> make apple (Mac OS X)
> make foxc (Linux)
> make nvidia (Linux)

This should create an executable with the name "hello" in the working directory. Run the executable as follows.

> ./hello

If successful, you should get "Hello, World!" on the screen.
Hello, World!

Building on Visual Studio

This section will walk through the build and execution process using Visual C++ 2008 Express under a 32-bit Windows Vista environment. The OpenCL header file and library can be added to a project using the following steps.

1. From the project page, go to "C/C++" -> "General", then add the following in the box for "Additional Include Directories":

NVIDIA
C:¥ProgramData¥NVIDIA Corporation¥NVIDIA GPU Computing SDK¥OpenCL¥common¥inc

AMD
C:¥Program Files¥ATI Stream¥include

2. From the project page, go to "Linker" -> "General", and in the box for "Additional Library Directories", type the following:

NVIDIA
C:¥ProgramData¥NVIDIA Corporation¥NVIDIA GPU Computing SDK¥OpenCL¥common¥lib¥Win32

AMD
C:¥Program Files¥ATI Stream¥lib¥x86

3. From the project page, go to "Linker" -> "Input", and in the box for "Additional Dependencies", type the following (for both NVIDIA and AMD):

OpenCL.lib

These settings should apply to "All Configurations", which can be selected on the pull-down menu located in the top left corner. The environment should now be set up to allow OpenCL code to be built. Build and run the
sample code, and make sure you get the correct output.
Basic OpenCL

This chapter will delve further into writing code for the host side as well as for the device side. After reading this chapter, you should have the tools necessary for implementing a simple OpenCL program.

Basic Program Flow

The previous chapter introduced the procedure for building and running an existing sample code. By now, you should have an idea of the basic roles of a host program and a kernel program. You should also know the difference between host memory and device memory, as well as when to use each. This section will walk through the actual code of the "Hello, World!" program introduced in the last chapter.

OpenCL Program

Creating an OpenCL program requires writing code for the host side, as shown in List 3.4 (hello.c), as well as for the device side, as shown in List 3.3 (hello.cl). The host is programmed in C/C++ using the OpenCL runtime API, while the device is programmed in OpenCL C. The OpenCL grammar will be explained in detail in Chapter 5, but for the time being, you can think of it as being the same as the standard C language. The sections to follow will walk through each code.

Kernel Code

The function to be executed on the device is defined as shown in List 4.1.

List 4.1: Declaring a function to be executed on the kernel
001: __kernel void hello(__global char * string)

The only differences from the standard C language are the following:

• The function specifier "__kernel" is used when declaring the "hello" function
• The address specifier "__global" is used to define the function's string argument
The "__kernel" specifier indicates a function to be executed on the device; such a function must be called by the host. The "__global" address specifier for the variable "string" tells the kernel that the address space to be used is the OpenCL global memory, which is the device-side main memory. The kernel is only allowed read/write access to global, constant, local, and private memory, which are specified by __global, __constant, __local, and __private, respectively. If no specifier is given, the kernel will assume the address space to be __private, which is the device-side register space.

Host Code

The host program tells the device to execute the kernel using the OpenCL runtime API. A kernel call is performed using one of the following OpenCL runtime API commands:

• Task call: clEnqueueTask()
• Data-parallel call: clEnqueueNDRangeKernel()

Since the hello() kernel is not of a data-parallel nature, clEnqueueTask() is called from the host to process the kernel. Telling the device to execute the hello() kernel only requires the clEnqueueTask() command, but other setup procedures must be performed in order to actually run the code. The procedure, which includes initialization and finalization, is listed below.

1. Get a list of available platforms
2. Select device
3. Create context
4. Create command queue
5. Create memory objects
6. Read kernel file
7. Create program object
8. Compile kernel
9. Create kernel object
10. Set kernel arguments
11. Execute kernel (enqueue task) ← hello() kernel function is called here
12. Read memory object
13. Free objects
We will go through each step of the procedure, using hello.c in List 3.4 as an example.

Get a List of Available Platforms

The first thing that must be done on the host side is to get a list of the available OpenCL platforms. The platform model in OpenCL consists of a host connected to one or more OpenCL devices. This is done in the following code segment from hello.c in List 3.4.

021: cl_platform_id platform_id = NULL;
...
023: cl_uint ret_num_platforms;
...
044: ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);

The clGetPlatformIDs() function in line 44 allows the host program to discover the OpenCL platforms. The 1st argument specifies how many OpenCL platforms to find, which most of the time is one. The 2nd argument returns the platforms as a list to the pointer platform_id of type cl_platform_id. The 3rd argument returns the number of OpenCL platforms that can be used.

Select Device

The next step is to select a device within the platform. This is done in the following code segment from hello.c in List 3.4.

015: cl_device_id device_id = NULL;
...
022: cl_uint ret_num_devices;
...
045: ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1, &device_id, &ret_num_devices);

The clGetDeviceIDs() function in line 45 selects the device to be used. The 1st argument is the platform that contains the desired device. The 2nd argument specifies the device type. In this case, CL_DEVICE_TYPE_DEFAULT is passed, which specifies that whatever is set as the default in the platform is to be used as the device. If the desired device is the GPU, then this should be CL_DEVICE_TYPE_GPU, and if it is the CPU, then this should be CL_DEVICE_TYPE_CPU.
The 3rd argument specifies the number of devices to use. The 4th argument returns the handle to the selected device. The 5th argument returns the number of devices that correspond to the device type specified in the 2nd argument; if the specified device does not exist, ret_num_devices will be set to 0.

Create Context

After getting the device handle, the next step is to create an OpenCL context, using the clCreateContext() function in line 48. The context is used by the OpenCL runtime for managing objects, which will be discussed later. An OpenCL context is created with one or more devices. This is done in the following code segment from hello.c in List 3.4.

016: cl_context context = NULL;
...
048: context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);

The 2nd argument specifies the number of devices to use, and the 3rd argument specifies the list of device handles.

Create Command Queue

The command queue is used to control the device. In OpenCL, any command from the host to the device, such as kernel execution or memory copy, is performed through this command queue. For each device, one or more command queue objects must be created. This is done in the following code segment from hello.c in List 3.4.

017: cl_command_queue command_queue = NULL;
...
051: command_queue = clCreateCommandQueue(context, device_id, 0, &ret);

The clCreateCommandQueue() function in line 51 creates the command queue, which will be used for memory copy and kernel execution. The 1st argument specifies the context in which the command queue will become part of. The 2nd argument specifies the device which will execute the commands in the queue. The function returns a handle to the command queue.
Create Memory Object

To execute a kernel, all data to be processed must be on the device memory. However, the kernel does not have the capability to access memory outside of the device, and the kernel can only be executed via the host-side program, so this transfer must be set up on the host side. To do this, a memory object must be created, which allows the host to access the device memory. This is done in the following code segment from hello.c in List 3.4.

018: cl_mem memobj = NULL;
...
054: memobj = clCreateBuffer(context, CL_MEM_READ_WRITE, MEM_SIZE * sizeof(char), NULL, &ret);

The clCreateBuffer() function in line 54 allocates space on the device memory [14]. The 1st argument specifies the context in which the memory object will become part of. The 2nd argument specifies a flag describing how the memory will be used; the CL_MEM_READ_WRITE flag allows the kernel to both read and write to the allocated device memory. The 3rd argument specifies the number of bytes to allocate. The allocated memory can be accessed from the host side using the returned memobj pointer. In this example, this allocated storage space will receive the character string "Hello, World!".

Read Kernel File

As mentioned earlier, the host program must first read the kernel program. This may be in the form of an executable binary, or a source code which must be compiled using an OpenCL compiler. In this example, the host reads the kernel source code, using the standard fread() function. This is done in the following code segment from hello.c in List 3.4.

028: FILE *fp;
029: char fileName[] = "./hello.cl";
030: char *source_str;
031: size_t source_size;
032:
033: /* Load the source code containing the kernel*/
034: fp = fopen(fileName, "r");
035: if (!fp) {
036: fprintf(stderr, "Failed to load kernel.¥n");
037: exit(1);
038: }
039: source_str = (char*)malloc(MAX_SOURCE_SIZE);
040: source_size = fread(source_str, 1, MAX_SOURCE_SIZE, fp);
041: fclose(fp);

Create Program Object

Once the source code is read, this code must be made into a kernel program, which is done by creating a program object. The program object is created using the clCreateProgramWithSource() function. If the program object is to be created from a binary, clCreateProgramWithBinary() is used instead. This is done in the following code segment from hello.c in List 3.4.

019: cl_program program = NULL;
...
057: program = clCreateProgramWithSource(context, 1, (const char **)&source_str,
058: (const size_t *)&source_size, &ret);

The 3rd argument specifies the read-in source code, and the 4th argument specifies the size of the source code in bytes.

Compile

The next step is to compile the program object using an OpenCL C compiler. The clBuildProgram() in line 61 builds the program object to create a binary.

061: ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);

The 1st argument is the program object to be compiled. The 2nd argument is the number of target devices, and the 3rd argument is the list of target devices for which the binary is created. The 4th argument specifies the compiler option string. Note that this step is unnecessary if the program object is created from a binary using clCreateProgramWithBinary().

Create Kernel Object

Once the program object is compiled, the next step is to create a kernel object. This step is required, since the kernel program can contain multiple kernel functions [15]. Therefore, it is necessary to specify the kernel function's name when creating the kernel object. This example only has one kernel function for one program object,
but it is possible to have multiple kernel functions for one program object. In that case, the clCreateKernel() function must be called multiple times, since each kernel object corresponds to one kernel function. This is done in the following code segment from hello.c in List 3.4.

020: cl_kernel kernel = NULL;
...
064: kernel = clCreateKernel(program, "hello", &ret);

The clCreateKernel() function in line 64 creates the kernel object from the program. The 1st argument specifies the program object, and the 2nd argument sets the kernel function name.

Set Kernel Arguments

Once the kernel object is created, the arguments to the kernel must be set. The clSetKernelArg() function in line 67 sets the arguments to be passed into the kernel.

067: ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&memobj);

The 1st argument is the kernel object. The 2nd argument selects which argument of the kernel is being set, which is 0 in this example, meaning the 0th argument to the kernel is being set. The 4th argument is the pointer to the argument to be passed in, with the 3rd argument specifying this argument's size in bytes. The clSetKernelArg() must be called for each kernel argument.

The hello.cl kernel in List 3.3 expects a pointer to a string array allocated on the device side. This pointer must be specified on the host side; in this example, the pointer to the allocated memory object is passed in. In this way, the memory management can be performed on the host side.

Passing host-side data as a kernel argument can be done as follows.

int a = 10;
...
clSetKernelArg(kernel, 0, sizeof(int), (void *)&a);

Execute Kernel (Enqueue Task)

The kernel can now finally be executed. This is done by the code segment from hello.c in List 3.4
shown below.

070: ret = clEnqueueTask(command_queue, kernel, 0, NULL, NULL);

This throws the kernel into the command queue, to be executed on a compute unit on the device. As the term "Enqueue" in the function name indicates, the instruction is placed in the command queue before it is processed. Note that this function is asynchronous, meaning it just throws the kernel into the queue, and the host immediately executes the next instruction; the code that follows the clEnqueueTask() function should account for this. Also note that the clEnqueueTask() function is used for task-parallel instructions; data-parallel instructions should use the clEnqueueNDRangeKernel() function instead. In order to wait for the kernel to finish executing, the 5th argument of the above function must be set as an event object. This will be explained in "4-3-3 Task Parallelism and Event Object".

Read from the Memory Object

After the kernel has processed the data, the result must be transferred back to the host side. This is done in the following code segment from hello.c in List 3.4.

026: char string[MEM_SIZE];
...
073: ret = clEnqueueReadBuffer(command_queue, memobj, CL_TRUE, 0,
074: MEM_SIZE * sizeof(char), string, 0, NULL, NULL);

The clEnqueueReadBuffer() function in lines 73~74 copies the data on the device-side memory to the host-side memory. The 3rd argument specifies whether the command is synchronous or asynchronous: the "CL_TRUE" that is passed in makes the function synchronous, which keeps the host from executing the next command until the data copy finishes. If "CL_FALSE" is passed in instead, the copy becomes asynchronous, which queues the command and immediately executes the next instruction on the host side. To copy data from the host-side memory to the device-side memory, the clEnqueueWriteBuffer() function is used instead.
The 2nd argument is the memory object on the device whose contents are to be copied over to the host side. The 5th argument specifies the size of the data in bytes, while the 6th argument is the pointer to the host-side memory where the data is copied to. Recall, however, that the "hello" kernel was queued asynchronously. This should make you question whether the memory copy from the device is reading valid data.
Looking just at the host-side code, this might look like a mistake. However, in this case it is OK. This is because when the command queue was created, the 3rd argument passed in was 0, which makes the queued commands execute in order. Therefore, the data copy command waits until the previous command in the queue, the "hello" kernel, is finished. If the command queue had been set to allow asynchronous execution, the data copy could start before the kernel finishes processing the data, which would produce incorrect results. Asynchronous kernel execution may be required in some cases; a way to do this is explained in "4-3 Kernel Call".

Free Objects

Lastly, all the objects need to be freed. This is done in the code segment shown below from hello.c in List 3.4.

082: ret = clReleaseKernel(kernel);
083: ret = clReleaseProgram(program);
084: ret = clReleaseMemObject(memobj);
085: ret = clReleaseCommandQueue(command_queue);
086: ret = clReleaseContext(context);

In real-life applications, the main course of action is usually a repetition of setting kernel arguments and the host-to-device copy -> kernel execution -> device-to-host copy cycle. The same objects can be used repeatedly, so the object creation/deletion cycle does not usually have to be repeated. If too many objects are created without being freed, however, the host side's object-management memory space may run out, in which case the OpenCL runtime will throw an error.

Online/Offline Compilation

In OpenCL, a kernel can be compiled either online or offline (Figure 4.1).

Figure 4.1: Offline and Online Compilation
The basic difference between the two methods is as follows:

• Offline: the kernel binary is read in by the host code
• Online: the kernel source file is read in by the host code

In offline compilation, the kernel is pre-built using an OpenCL compiler, and the generated binary is what gets loaded using the OpenCL API. Since the kernel binary is already built, the time lag between starting the host code and the kernel getting executed is negligible. The problem with this method is that in order to execute the program on various platforms, multiple kernel binaries must be included, thus increasing the size of the executable file.

In online compilation, the kernel is built from source during runtime using the OpenCL runtime library. This method is commonly known as JIT (Just-In-Time) compilation. The advantage of this method is that the host-side binary can be distributed in a form that is not device-dependent, and that adaptive compiling of the kernel is possible. It also makes testing of the kernel easier during development, since it gets rid of the need to build a kernel binary for each change. However, because compilation happens at runtime, this is not suited for embedded systems that require real-time processing. Also, since the kernel code is distributed in readable form, this method may not be suited for commercial applications.

In fact, a stand-alone OpenCL kernel compiler is not available in the OpenCL environments by NVIDIA, AMD, and Apple; in order to create a kernel binary in these environments, the built kernel has to be written to a file at runtime by the host program. Since OpenCL is a programming framework for heterogeneous environments, this emphasis on online compilation should not come as a shock. FOXC, on the other hand, includes a stand-alone OpenCL kernel compiler, which makes the process of making a kernel binary intuitive. The OpenCL runtime library contains the set of APIs that performs the above operations.
We will now look at sample programs that show the two compilation methods. The first code shows the online compilation version.

List 4.2: Online compilation version

#include <stdio.h>
#include <stdlib.h>

#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

#define MEM_SIZE (128)
#define MAX_SOURCE_SIZE (0x100000)

int main()
{
    cl_platform_id platform_id = NULL;
    cl_device_id device_id = NULL;
    cl_context context = NULL;
    cl_command_queue command_queue = NULL;
    cl_mem memobj = NULL;
    cl_program program = NULL;
    cl_kernel kernel = NULL;
    cl_uint ret_num_devices;
    cl_uint ret_num_platforms;
    cl_int ret;

    float mem[MEM_SIZE];

    FILE *fp;
    const char fileName[] = "./kernel.cl";
    size_t source_size;
    char *source_str;
    cl_int i;

    /* Load kernel source code */
    fp = fopen(fileName, "r");
    if (!fp) {
        fprintf(stderr, "Failed to load kernel.¥n");
        exit(1);
    }
    source_str = (char *)malloc(MAX_SOURCE_SIZE);
    source_size = fread(source_str, 1, MAX_SOURCE_SIZE, fp);
    fclose(fp);

    /* Initialize data */
    for (i = 0; i < MEM_SIZE; i++) {
        mem[i] = i;
    }

    /* Get platform/device information */
    ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1, &device_id, &ret_num_devices);

    /* Create OpenCL context */
    context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);

    /* Create command queue */
    command_queue = clCreateCommandQueue(context, device_id, 0, &ret);

    /* Create memory buffer */
    memobj = clCreateBuffer(context, CL_MEM_READ_WRITE, MEM_SIZE * sizeof(float), NULL, &ret);

    /* Transfer data to memory buffer */
    ret = clEnqueueWriteBuffer(command_queue, memobj, CL_TRUE, 0, MEM_SIZE * sizeof(float), mem, 0, NULL, NULL);
    /* Create kernel program from the read-in source */
    program = clCreateProgramWithSource(context, 1, (const char **)&source_str, (const size_t *)&source_size, &ret);

    /* Build kernel program */
    ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);

    /* Create OpenCL kernel */
    kernel = clCreateKernel(program, "vecAdd", &ret);

    /* Set OpenCL kernel argument */
    ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&memobj);

    size_t global_work_size[3] = {MEM_SIZE, 0, 0};
    size_t local_work_size[3] = {MEM_SIZE, 0, 0};

    /* Execute OpenCL kernel */
    ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL, global_work_size, local_work_size, 0, NULL, NULL);

    /* Transfer result from the memory buffer */
    ret = clEnqueueReadBuffer(command_queue, memobj, CL_TRUE, 0, MEM_SIZE * sizeof(float), mem, 0, NULL, NULL);

    /* Display result */
    for (i = 0; i < MEM_SIZE; i++) {
        printf("mem[%d] : %f¥n", i, mem[i]);
    }

    /* Finalization */
    ret = clFlush(command_queue);
    ret = clFinish(command_queue);
    ret = clReleaseKernel(kernel);
    ret = clReleaseProgram(program);
    ret = clReleaseMemObject(memobj);
    ret = clReleaseCommandQueue(command_queue);
    ret = clReleaseContext(context);

    free(source_str);

    return 0;
}

The following code shows the offline compilation version (List 4.3).

List 4.3: Offline compilation version

#include <stdio.h>
#include <stdlib.h>

#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

#define MEM_SIZE (128)
#define MAX_BINARY_SIZE (0x100000)

int main()
{
    cl_platform_id platform_id = NULL;
    cl_device_id device_id = NULL;
    cl_context context = NULL;
    cl_command_queue command_queue = NULL;
    cl_mem memobj = NULL;
    cl_program program = NULL;
    cl_kernel kernel = NULL;
    cl_uint ret_num_devices;
    cl_uint ret_num_platforms;
    cl_int ret;

    float mem[MEM_SIZE];
    FILE *fp;
    char fileName[] = "./kernel.clbin";
    size_t binary_size;
    char *binary_buf;
    cl_int binary_status;
    cl_int i;

    /* Load kernel binary */
    fp = fopen(fileName, "r");
    if (!fp) {
        fprintf(stderr, "Failed to load kernel.¥n");
        exit(1);
    }
    binary_buf = (char *)malloc(MAX_BINARY_SIZE);
    binary_size = fread(binary_buf, 1, MAX_BINARY_SIZE, fp);
    fclose(fp);

    /* Initialize input data */
    for (i = 0; i < MEM_SIZE; i++) {
        mem[i] = i;
    }

    /* Get platform/device information */
    ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1, &device_id, &ret_num_devices);

    /* Create OpenCL context */
    context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);

    /* Create command queue */
    command_queue = clCreateCommandQueue(context, device_id, 0, &ret);

    /* Create memory buffer */
    memobj = clCreateBuffer(context, CL_MEM_READ_WRITE, MEM_SIZE * sizeof(float), NULL, &ret);
    /* Transfer data over to the memory buffer */
    ret = clEnqueueWriteBuffer(command_queue, memobj, CL_TRUE, 0, MEM_SIZE * sizeof(float), mem, 0, NULL, NULL);

    /* Create kernel program from the kernel binary */
    program = clCreateProgramWithBinary(context, 1, &device_id, (const size_t *)&binary_size, (const unsigned char **)&binary_buf, &binary_status, &ret);
    printf("err:%d¥n", ret);

    /* Create OpenCL kernel */
    kernel = clCreateKernel(program, "vecAdd", &ret);

    /* Set OpenCL kernel arguments */
    ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&memobj);

    size_t global_work_size[3] = {MEM_SIZE, 0, 0};
    size_t local_work_size[3] = {MEM_SIZE, 0, 0};

    /* Execute OpenCL kernel */
    ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL, global_work_size, local_work_size, 0, NULL, NULL);

    /* Copy result from the memory buffer */
    ret = clEnqueueReadBuffer(command_queue, memobj, CL_TRUE, 0, MEM_SIZE * sizeof(float), mem, 0, NULL, NULL);

    /* Display results */
    for (i = 0; i < MEM_SIZE; i++) {
        printf("mem[%d] : %f¥n", i, mem[i]);
    }

    /* Finalization */
    ret = clFlush(command_queue);
    ret = clFinish(command_queue);
    ret = clReleaseKernel(kernel);
    ret = clReleaseProgram(program);
    ret = clReleaseMemObject(memobj);
    ret = clReleaseCommandQueue(command_queue);
    ret = clReleaseContext(context);

    free(binary_buf);

    return 0;
}

The kernel program, which performs vector addition, is shown below in List 4.4.

List 4.4: Kernel program

__kernel void vecAdd(__global float* a)
{
    int gid = get_global_id(0);

    a[gid] += a[gid];
}

We will now take a look at the host programs shown in List 4.2 and List 4.3. The two programs are almost identical, so we will focus on their differences. The first major difference is the fact that the kernel source code is read in by the online compile version (List 4.5).

List 4.5: Online compilation version - Reading the kernel source code

    fp = fopen(fileName, "r");
    if (!fp) {
        fprintf(stderr, "Failed to load kernel.¥n");
        exit(1);
    }
    source_str = (char *)malloc(MAX_SOURCE_SIZE);
    source_size = fread(source_str, 1, MAX_SOURCE_SIZE, fp);
    fclose(fp);
The source_str variable is a character array that merely contains the content of the source file. In order for this code to be executed on the device, it must first be built using the runtime compiler. This is done by the code segment shown below in List 4.6.

List 4.6: Online compilation version - Creating the kernel program

    /* Create kernel program from the read-in source */
    program = clCreateProgramWithSource(context, 1, (const char **)&source_str, (const size_t *)&source_size, &ret);

    /* Build kernel program */
    ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);

The online compilation version thus requires two steps to build the kernel program: the program is first created from source, and then built.

Next, we will look at the source code for the offline compilation version (List 4.7).

List 4.7: Offline compilation version - Reading the kernel binary

    fp = fopen(fileName, "r");
    if (!fp) {
        fprintf(stderr, "Failed to load kernel.¥n");
        exit(1);
    }
    binary_buf = (char *)malloc(MAX_BINARY_SIZE);
    binary_size = fread(binary_buf, 1, MAX_BINARY_SIZE, fp);
    fclose(fp);

The code looks very similar to the online version, since in both cases the data is being read into a buffer of type char. The difference is that the data in this buffer can be directly executed, which means that the kernel source code must be compiled beforehand using an OpenCL compiler. In FOXC, this can be done as follows.

> /path-to-foxc/bin/foxc -o kernel.clbin kernel.cl
067:     program = clCreateProgramWithBinary(context, 1, &device_id,
068:         (const size_t *)&binary_size, (const unsigned char **)&binary_buf,
069:         &binary_status, &ret);

Since the kernel is already built, there is no need for another build step as in the online compilation version.

To summarize, in order to change the method of compilation from online to offline, the following steps are followed:

1. Read the kernel as a binary
2. Change clCreateProgramWithSource() to clCreateProgramWithBinary()
3. Get rid of clBuildProgram()

This concludes the differences between the two methods. See Chapter 7 for the details on the APIs used inside the sample codes.

Calling the Kernel

Data Parallelism and Task Parallelism

As stated in the "1-3-3 Types of Parallelism" section, parallelizable code is either "Data Parallel" or "Task Parallel". In OpenCL, the difference between the two is whether the same kernel or different kernels are executed in parallel. The difference becomes obvious in terms of execution time when running on the GPU. As shown in Figure 4.2, most GPUs contain multiple processors, but hardware such as the instruction fetch unit and program counter is shared across the processors. Since the processors can only execute the same set of instructions across the cores, a number of tasks equal to the number of processors can be performed at once only when all processors perform the same task. Figure 4.3 shows the case when multiple different tasks are scheduled to be performed in parallel on the GPU. At present, such GPUs are incapable of running different tasks in parallel, so the processors scheduled to process Task B must sit idle until Task A is finished.
Figure 4.2: Efficient use of the GPU

Figure 4.3: Inefficient use of the GPU

For data parallel tasks suited for a device like the GPU, OpenCL provides an API function, called clEnqueueNDRangeKernel(), to run the same kernel across multiple processors. When developing an application, the task type and the hardware both need to be considered wisely, so that the appropriate API function is used.

This section will use vectorized arithmetic operations to explain the basic method of implementation for data parallel and task parallel commands. The provided sample code is meant to illustrate the parallelization concepts. It performs the basic arithmetic operations (addition, subtraction, multiplication and division) between float values. The overview is shown in Figure 4.4.
Figure 4.4: Basic arithmetic operations between floats

As the figure shows, the input data consists of 2 sets of 4x4 matrices, A and B. The output data is a 4x4 matrix C. We will first show the data-parallel implementation (List 4.8, List 4.9). This program treats each row of data as one group in order to perform the computation.

List 4.8: Data parallel model - kernel dataParallel.cl

001: __kernel void dataParallel(__global float* A, __global float* B, __global float* C)
002: {
003:     int base = 4*get_global_id(0);
004:
005:     C[base+0] = A[base+0] + B[base+0];
006:     C[base+1] = A[base+1] - B[base+1];
007:     C[base+2] = A[base+2] * B[base+2];
008:     C[base+3] = A[base+3] / B[base+3];
009: }
List 4.9: Data parallel model - host dataParallel.c

#include <stdio.h>
#include <stdlib.h>

#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main()
{
    cl_platform_id platform_id = NULL;
    cl_device_id device_id = NULL;
    cl_context context = NULL;
    cl_command_queue command_queue = NULL;
    cl_mem Amobj = NULL;
    cl_mem Bmobj = NULL;
    cl_mem Cmobj = NULL;
    cl_program program = NULL;
    cl_kernel kernel = NULL;
    cl_uint ret_num_devices;
    cl_uint ret_num_platforms;
    cl_int ret;

    int i, j;
    float *A;
    float *B;
    float *C;

    A = (float *)malloc(4*4*sizeof(float));
    B = (float *)malloc(4*4*sizeof(float));
    C = (float *)malloc(4*4*sizeof(float));
    FILE *fp;
    const char fileName[] = "./dataParallel.cl";
    size_t source_size;
    char *source_str;

    /* Load kernel source file */
    fp = fopen(fileName, "r");
    if (!fp) {
        fprintf(stderr, "Failed to load kernel.\n");
        exit(1);
    }
    source_str = (char *)malloc(MAX_SOURCE_SIZE);
    source_size = fread(source_str, 1, MAX_SOURCE_SIZE, fp);
    fclose(fp);

    /* Initialize input data */
    for (i=0; i < 4; i++) {
        for (j=0; j < 4; j++) {
            A[i*4+j] = i*4+j+1;
            B[i*4+j] = j*4+i+1;
        }
    }

    /* Get Platform/Device Information */
    ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1, &device_id, &ret_num_devices);

    /* Create OpenCL Context */
    context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);

    /* Create command queue */
    command_queue = clCreateCommandQueue(context, device_id, 0, &ret);

    /* Create Buffer Object */
    Amobj = clCreateBuffer(context, CL_MEM_READ_WRITE, 4*4*sizeof(float), NULL, &ret);
    Bmobj = clCreateBuffer(context, CL_MEM_READ_WRITE, 4*4*sizeof(float), NULL, &ret);
    Cmobj = clCreateBuffer(context, CL_MEM_READ_WRITE, 4*4*sizeof(float), NULL, &ret);

    /* Copy input data to the memory buffer */
    ret = clEnqueueWriteBuffer(command_queue, Amobj, CL_TRUE, 0, 4*4*sizeof(float), A, 0, NULL, NULL);
    ret = clEnqueueWriteBuffer(command_queue, Bmobj, CL_TRUE, 0, 4*4*sizeof(float), B, 0, NULL, NULL);

    /* Create kernel program from source file */
    program = clCreateProgramWithSource(context, 1, (const char **)&source_str,
        (const size_t *)&source_size, &ret);
    ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);

    /* Create data parallel OpenCL kernel */
    kernel = clCreateKernel(program, "dataParallel", &ret);

    /* Set OpenCL kernel arguments */
    ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&Amobj);
    ret = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&Bmobj);
    ret = clSetKernelArg(kernel, 2, sizeof(cl_mem), (void *)&Cmobj);

    size_t global_item_size = 4;
    size_t local_item_size = 1;

    /* Execute OpenCL kernel as data parallel */
    ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
        &global_item_size, &local_item_size, 0, NULL, NULL);

    /* Transfer result to host */
    ret = clEnqueueReadBuffer(command_queue, Cmobj, CL_TRUE, 0, 4*4*sizeof(float), C, 0, NULL, NULL);
    /* Display Results */
    for (i=0; i < 4; i++) {
        for (j=0; j < 4; j++) {
            printf("%7.2f ", C[i*4+j]);
        }
        printf("\n");
    }

    /* Finalization */
    ret = clFlush(command_queue);
    ret = clFinish(command_queue);
    ret = clReleaseKernel(kernel);
    ret = clReleaseProgram(program);
    ret = clReleaseMemObject(Amobj);
    ret = clReleaseMemObject(Bmobj);
    ret = clReleaseMemObject(Cmobj);
    ret = clReleaseCommandQueue(command_queue);
    ret = clReleaseContext(context);

    free(source_str);

    free(A);
    free(B);
    free(C);

    return 0;
}

Next, we will show the task parallel version of the same thing (List 4.10, List 4.11). In this sample, the tasks are grouped according to the type of arithmetic operation being performed.

List 4.10: Task parallel model - kernel taskParallel.cl

001: __kernel void taskParallelAdd(__global float* A, __global float* B, __global float* C)
002: {
003:     int base = 0;
004:
005:     C[base+0]  = A[base+0]  + B[base+0];
006:     C[base+4]  = A[base+4]  + B[base+4];
007:     C[base+8]  = A[base+8]  + B[base+8];
008:     C[base+12] = A[base+12] + B[base+12];
009: }
010:
011: __kernel void taskParallelSub(__global float* A, __global float* B, __global float* C)
012: {
013:     int base = 1;
014:
015:     C[base+0]  = A[base+0]  - B[base+0];
016:     C[base+4]  = A[base+4]  - B[base+4];
017:     C[base+8]  = A[base+8]  - B[base+8];
018:     C[base+12] = A[base+12] - B[base+12];
019: }
020:
021: __kernel void taskParallelMul(__global float* A, __global float* B, __global float* C)
022: {
023:     int base = 2;
024:
025:     C[base+0]  = A[base+0]  * B[base+0];
026:     C[base+4]  = A[base+4]  * B[base+4];
027:     C[base+8]  = A[base+8]  * B[base+8];
028:     C[base+12] = A[base+12] * B[base+12];
029: }
030:
031: __kernel void taskParallelDiv(__global float* A, __global float* B, __global float* C)
032: {
033:     int base = 3;
034:
035:     C[base+0]  = A[base+0]  / B[base+0];
036:     C[base+4]  = A[base+4]  / B[base+4];
037:     C[base+8]  = A[base+8]  / B[base+8];
038:     C[base+12] = A[base+12] / B[base+12];
039: }
List 4.11: Task parallel model - host taskParallel.c

#include <stdio.h>
#include <stdlib.h>

#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main()
{
    cl_platform_id platform_id = NULL;
    cl_device_id device_id = NULL;
    cl_context context = NULL;
    cl_command_queue command_queue = NULL;
    cl_mem Amobj = NULL;
    cl_mem Bmobj = NULL;
    cl_mem Cmobj = NULL;
    cl_program program = NULL;
    cl_kernel kernel[4] = {NULL, NULL, NULL, NULL};
    cl_uint ret_num_devices;
    cl_uint ret_num_platforms;
    cl_int ret;

    int i, j;
    float* A;
    float* B;
    float* C;

    A = (float*)malloc(4*4*sizeof(float));
    B = (float*)malloc(4*4*sizeof(float));
    C = (float*)malloc(4*4*sizeof(float));

    FILE *fp;
    const char fileName[] = "./taskParallel.cl";
    size_t source_size;
    char *source_str;

    /* Load kernel source file */
    fp = fopen(fileName, "rb");
    if (!fp) {
        fprintf(stderr, "Failed to load kernel.\n");
        exit(1);
    }
    source_str = (char *)malloc(MAX_SOURCE_SIZE);
    source_size = fread(source_str, 1, MAX_SOURCE_SIZE, fp);
    fclose(fp);

    /* Initialize input data */
    for (i=0; i < 4; i++) {
        for (j=0; j < 4; j++) {
            A[i*4+j] = i*4+j+1;
            B[i*4+j] = j*4+i+1;
        }
    }

    /* Get platform/device information */
    ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1, &device_id, &ret_num_devices);

    /* Create OpenCL Context */
    context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);

    /* Create command queue */
    command_queue = clCreateCommandQueue(context, device_id,
        CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE, &ret);

    /* Create buffer object */
    Amobj = clCreateBuffer(context, CL_MEM_READ_WRITE, 4*4*sizeof(float), NULL, &ret);
    Bmobj = clCreateBuffer(context, CL_MEM_READ_WRITE, 4*4*sizeof(float), NULL, &ret);
    Cmobj = clCreateBuffer(context, CL_MEM_READ_WRITE, 4*4*sizeof(float), NULL, &ret);

    /* Copy input data to memory buffer */
    ret = clEnqueueWriteBuffer(command_queue, Amobj, CL_TRUE, 0, 4*4*sizeof(float), A, 0, NULL, NULL);
    ret = clEnqueueWriteBuffer(command_queue, Bmobj, CL_TRUE, 0, 4*4*sizeof(float), B, 0, NULL, NULL);

    /* Create kernel from source */
    program = clCreateProgramWithSource(context, 1, (const char **)&source_str,
        (const size_t *)&source_size, &ret);
    ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);

    /* Create task parallel OpenCL kernel */
    kernel[0] = clCreateKernel(program, "taskParallelAdd", &ret);
    kernel[1] = clCreateKernel(program, "taskParallelSub", &ret);
    kernel[2] = clCreateKernel(program, "taskParallelMul", &ret);
    kernel[3] = clCreateKernel(program, "taskParallelDiv", &ret);

    /* Set OpenCL kernel arguments */
    for (i=0; i < 4; i++) {
        ret = clSetKernelArg(kernel[i], 0, sizeof(cl_mem), (void *)&Amobj);
        ret = clSetKernelArg(kernel[i], 1, sizeof(cl_mem), (void *)&Bmobj);
        ret = clSetKernelArg(kernel[i], 2, sizeof(cl_mem), (void *)&Cmobj);
    }

    /* Execute OpenCL kernel as task parallel */
    for (i=0; i < 4; i++) {
        ret = clEnqueueTask(command_queue, kernel[i], 0, NULL, NULL);
    }

    /* Copy result to host */
    ret = clEnqueueReadBuffer(command_queue, Cmobj, CL_TRUE, 0, 4*4*sizeof(float), C, 0, NULL, NULL);

    /* Display result */
    for (i=0; i < 4; i++) {
        for (j=0; j < 4; j++) {
            printf("%7.2f ", C[i*4+j]);
        }
        printf("\n");
    }

    /* Finalization */
    ret = clFlush(command_queue);
    ret = clFinish(command_queue);
    ret = clReleaseKernel(kernel[0]);
    ret = clReleaseKernel(kernel[1]);
    ret = clReleaseKernel(kernel[2]);
    ret = clReleaseKernel(kernel[3]);
    ret = clReleaseProgram(program);
    ret = clReleaseMemObject(Amobj);
    ret = clReleaseMemObject(Bmobj);
    ret = clReleaseMemObject(Cmobj);
    ret = clReleaseCommandQueue(command_queue);
    ret = clReleaseContext(context);

    free(source_str);

    free(A);
    free(B);
    free(C);

    return 0;
}

As you can see, the source codes are very similar. The only differences are in the kernels themselves and in the way these kernels are executed. In the data parallel model, the 4 arithmetic operations are grouped as one set of commands in a single kernel, while in the task parallel model, 4 different kernels are implemented, one for each type of arithmetic operation. At a glance, it may seem that since the task parallel model requires more code, it must also perform more operations. Despite this, the number of operations being performed by the device is actually the same regardless of which model is used for this problem. In general, however, some problems fit one model more naturally than the other, and performance can vary by choosing one over the other, so the parallelization model must be considered wisely in the planning stage of the application.

We will now walk through the source code for the data parallel model. In this model, processing is done using the following steps:

1. Get the work-item ID
2. Process the subset of data corresponding to the work-item ID

When the data parallel task is queued, work-items are created. Each of these work-items executes the same kernel in parallel. The call get_global_id(0) gets the global work-item ID, which is used to decide which data to process, so that each work-item can process a different set of data in parallel.

001: __kernel void dataParallel(__global float * A, __global float * B, __global float * C)
...
003:     int base = 4*get_global_id(0);

A block diagram of the process is shown in Figure 4.5.

Figure 4.5: Block diagram of the data-parallel model in relation to work-items
In this case, the global work-item ID is multiplied by 4 and stored in the variable "base". This value is used to decide which elements of the arrays A and B get processed.

005:     C[base+0] = A[base+0] + B[base+0];
006:     C[base+1] = A[base+1] - B[base+1];
007:     C[base+2] = A[base+2] * B[base+2];
008:     C[base+3] = A[base+3] / B[base+3];

Since each work-item has a different ID, the variable "base" also has a different value for each work-item, which keeps the work-items from processing the same data. In this way, a large amount of data can be processed concurrently.

We have discussed that numerous work-items get created, but we have not touched upon how to decide the number of work-items to create. This is done in the following code segment from the host code.

090:     size_t global_item_size = 4;
091:     size_t local_item_size = 1;
092:
093:     /* Execute OpenCL kernel as data parallel */
094:     ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
095:         &global_item_size, &local_item_size, 0, NULL, NULL);

The clEnqueueNDRangeKernel() is an OpenCL API command used to queue data parallel tasks. Its 5th and 6th arguments determine the work-item size. In this case, the global_item_size is set to 4, and the local_item_size is set to 1. The overall steps are summarized as follows.
1. Create work-items on the host
2. Process the data corresponding to the global work-item ID on the kernel

We will now walk through the source code for the task parallel model. Note that a different kernel is implemented for each of the 4 arithmetic operations. In OpenCL, in order to execute a task parallel process, the out-of-order mode must be enabled when the command queue is created.

067:     /* Create command queue */
068:     command_queue = clCreateCommandQueue(context, device_id,
069:         CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE, &ret);

Using this mode, a queued task does not wait until the previous task is finished if there are idle compute units available that can execute that task. In this model, different kernels are allowed to be executed in parallel.

096:     /* Execute OpenCL kernel as task parallel */
097:     for (i=0; i < 4; i++) {
098:         ret = clEnqueueTask(command_queue, kernel[i], 0, NULL, NULL);
099:     }

The above code segment queues the 4 kernels. A block diagram of the nature of command queues and parallel execution is shown in Figure 4.6.

Figure 4.6: Command queues and parallel execution
The clEnqueueTask() is used as an example in the above figure, but a similar parallel processing could take place for other combinations of enqueue-functions, such as clEnqueueNDRangeKernel(), clEnqueueReadBuffer(), and clEnqueueWriteBuffer(). In the above diagram, we can expect the 4 tasks to be executed in parallel, provided that the commands are being performed by different processors, since they are queued in a command queue that has out-of-order execution enabled. For example, queuing clEnqueueReadBuffer() and clEnqueueWriteBuffer() can execute read and write commands simultaneously, since PCI Express supports simultaneous bi-directional memory transfers.

Work Group

The last section discussed the concept of work-items. This section will introduce the concept of work-groups. Work-items are grouped together into work-groups. The work-items within a work-group can synchronize with each other, as well as share local memory.

In order to implement a data parallel kernel, the number of work-groups must be specified in addition to the number of work-items. A work-group must consist of at least 1 work-item, and the maximum is dependent on the platform. This is why 2 different size parameters had to be passed to the clEnqueueNDRangeKernel() function.

090:     size_t global_item_size = 4;
091:     size_t local_item_size = 1;

The above code means that each work-group is made up of 1 work-item, and that there are 4 work-groups to be processed. The number of work-items per work-group must be consistent throughout every work-group. If the number of work-items cannot be divided evenly among the work-groups, clEnqueueNDRangeKernel() fails, returning the error value CL_INVALID_WORK_GROUP_SIZE.

The kernel code in List 4.8 only used the global work-item ID to process the data, but it is also possible to retrieve the local work-item ID and the corresponding work-group ID. The relationships between the global work-item ID, the local work-item ID, and the work-group ID are shown in Figure 4.7. The functions used to retrieve these IDs from within the kernel are shown in Table 4.1.

Figure 4.7: Work-group ID and Work-item ID

Table 4.1: Functions used to retrieve the IDs

Function       | Retrieved value
get_group_id   | Work-group ID
get_global_id  | Global work-item ID
get_local_id   | Local work-item ID

Since 2-D images or 3-D spaces are commonly processed, the work-items and work-groups can be specified in 2 or 3 dimensions. Figure 4.8 shows an example where the work-groups and work-items are defined in 2-D.

Figure 4.8: Work-group and work-item defined in 2-D
Since the work-groups and the work-items can have up to 3 dimensions, the IDs that are used to index them also have up to 3 dimensions. The functions get_group_id(), get_global_id(), and get_local_id() can each take an argument between 0 and 2, corresponding to the dimension. The IDs for the work-item highlighted in Figure 4.8 are shown below in Table 4.2.

Table 4.2: The IDs of the work-item in Figure 4.8

Call             | Retrieved ID
get_group_id(0)  | 1
get_group_id(1)  | 0
get_global_id(0) | 10
get_global_id(1) | 5
get_local_id(0)  | 2
get_local_id(1)  | 5

Note that the index space dimension and the number of work-items per work-group can vary depending on the device. The maximum index space dimension can be obtained by using the clGetDeviceInfo() function to get the value of CL_DEVICE_MAX_WORK_ITEM_DIMENSIONS, and the maximum number of work-items per work-group in each dimension can be obtained by getting the value of CL_DEVICE_MAX_WORK_ITEM_SIZES. The data type of CL_DEVICE_MAX_WORK_ITEM_DIMENSIONS is cl_uint, and that of CL_DEVICE_MAX_WORK_ITEM_SIZES is an array of type size_t, with one entry per dimension.
Also, as of this writing (12/2009), the OpenCL implementation for the CPU on Mac OS X only allows 1 work-item per work-group.

Task Parallelism and Event Object

The tasks placed in an out-of-order command queue are executed in parallel, but in cases where different tasks have data dependencies, they need to be executed sequentially. In OpenCL, the execution order can be controlled using an event object. An event object contains information about the execution status of queued commands. This object is returned by all commands whose names start with "clEnqueue". For example, the function prototype for clEnqueueTask() is shown below.

cl_int clEnqueueTask(cl_command_queue command_queue,
                     cl_kernel kernel,
                     cl_uint num_events_in_wait_list,
                     const cl_event *event_wait_list,
                     cl_event *event)

The 4th argument is a list of events that must be processed before this task can be run, and the 3rd argument is the number of events on the list. The 5th argument is the event object returned by this task when it is placed in the queue. In order to make sure task A executes before task B, the event object returned when task A is queued can be passed as one of the inputs to task B, which keeps task B from executing until task A is completed.

List 4.12 shows an example of how the event objects can be used. In this example, kernel_A, kernel_B, kernel_C, kernel_D can all be executed in any order, but these must all be completed before kernel_E is executed.

List 4.12: Event object usage example

cl_event events[4];

clEnqueueTask(command_queue, kernel_A, 0, NULL, &events[0]);
clEnqueueTask(command_queue, kernel_B, 0, NULL, &events[1]);
clEnqueueTask(command_queue, kernel_C, 0, NULL, &events[2]);
clEnqueueTask(command_queue, kernel_D, 0, NULL, &events[3]);
clEnqueueTask(command_queue, kernel_E, 4, events, NULL);

---COLUMN: Relationship between execution order and parallel execution---

In OpenCL, the tasks placed in an out-of-order command queue are executed regardless of the order in which they were placed in the queue. When working on parallel processing algorithms, the concepts of "order" and "parallel" need to be fully understood. First, we will discuss the concept of "order".

As an example, let's assume the following 4 tasks are performed sequentially in the order shown below.

(A) → (B) → (C) → (D)

Now, we will switch the order in which tasks B and C are performed.

(A) → (C) → (B) → (D)

If tasks B and C are not dependent on each other, the two orderings will result in the same output. If they are dependent, the two orderings will not yield the same result. Whether a certain set of tasks can be performed in a different order is a process-dependent problem.

Next, let's assume tasks B and C are processed in parallel. In this case, task B may finish before task C. This is only allowed where tasks B and C do not depend on each other; if task C must be performed after task B, then the 2 tasks cannot be processed in parallel. Parallel processing is, in effect, a type of optimization that deals with changing the order of tasks. Also, if the tasks are executed on a single-core processor, all the tasks must be performed sequentially anyway, and it may make more sense to implement all the tasks in a single thread. These trade-offs need to be considered when implementing an algorithm.

In the specification sheet for parallel processors, there is usually a section on "ordering".
The ordering section contains information such as which tasks are guaranteed to be processed in order, and which tasks are allowed, but not required, to be processed in order. In essence:

• Tasks must be executed in a specific order = cannot be parallelized
• Tasks can be executed out of order = can be parallelized

Explanations of "ordering" are often difficult to follow (example: PowerPC's Storage Access Ordering), but treating "order" as the basis when implementing parallel algorithms can clear up existing bottlenecks in the program.

Specifications are written to be a general reference of the capabilities, and do not deal with the actual implementation. For example, OpenMP's "parallel" construct specifies the code segment to be executed in parallel, but placing this construct in a program run on a single-core processor will not process the code segment in parallel, and one may wonder if such an implementation meets OpenMP's specification. It does: the decision to actually execute certain tasks in parallel is not discussed inside the specification. If you think about the "parallel" construct as something that tells the compiler and the processors that the loops can be executed out of order, it clears up the definition, since it means that the ordering can be changed or that the tasks can be run in parallel.

In summary, when parallelizing a program, two problems must be solved. One is whether some tasks can be executed out of order, and the other is whether to actually process those tasks in parallel.
Advanced OpenCL

Chapter 4 discussed the basic concepts required to write a simple OpenCL program. This chapter will go further in depth, allowing for a more flexible and optimized OpenCL program.

OpenCL C

The OpenCL C language is basically standard C (C99) with some extensions and restrictions. This language is used to program the kernel. List 5.1 shows a simple function written in OpenCL C.

List 5.1: OpenCL C Sample Code

001: int sum(int *a, int n)
002: {   /* Sum every component in array "a" */
003:     int sum = 0;
004:     for (int i=0; i < n; i++) {
005:         sum += a[i];
006:     }
007:     return sum;
008: }

Restrictions

Listed below are the restrictions placed on standard C by OpenCL C.

• Recursion is not supported.
• The pointer passed as an argument to a kernel function must be of type __global, __constant, or __local.
• A pointer to a pointer cannot be passed as an argument to a kernel function.
• Bit-fields are not supported.
• Variable length arrays and structures with flexible (or unsized) arrays are not supported.
• Variadic macros and functions are not supported.
• C99 standard headers cannot be included.
• The extern, static, auto and register storage-class specifiers are not supported.
• Predefined identifiers are not supported.
• Writes to a pointer of type char, uchar, char2, uchar2, short, ushort, and half are not supported.
• The function using the __kernel qualifier can only have return type void in the source code.
• Support for double precision floating-point is currently an optional extension. It may or may not be implemented.

Aside from these extensions and restrictions, the language can be treated the same as C.

Address Space Qualifiers

OpenCL supports 4 memory types: global, local, constant, and private. In OpenCL C, address space qualifiers are used to specify which memory on the device to use. The memory type corresponding to each address space qualifier is shown in Table 5.1.

Table 5.1: Address Space Qualifiers and their corresponding memory

Qualifier                | Corresponding memory
__global (or global)     | Global memory
__local (or local)       | Local memory
__private (or private)   | Private memory
__constant (or constant) | Constant memory

The "__" prefix is not required before the qualifiers, but we will continue to use the prefix in this text for consistency. If the qualifier is not specified, the variable gets allocated to "__private", which is the default qualifier. A few example variable declarations are given in List 5.2.

List 5.2: Variables declared with address space qualifiers

001: __global int global_data[128];   // 128 integers allocated on global memory
002: __local float *lf;               // pointer placed on the private memory, which points to a single-precision float located on the local memory
003: __global char * __local lgc[8];  // 8 pointers stored on the local memory that point to chars located on the global memory

The pointers can be used to read an address space in the same way as in standard C. Similarly, the pointers can be used to write to an address space as in standard C, with the exception of "__constant" pointers. A type-cast using address space qualifiers is undefined. Some implementations allow access to a different memory space via pointer casting, but this is not the case for all implementations.

Built-in Functions
OpenCL contains several built-in functions for scalar and vector operations. These can be used without having to include a header file or linking to a library. A few examples are shown below.

• Work-Item Functions
Functions to query information on groups and work-items. get_work_dim() and get_local_size() are some examples.

• Math Functions
An extended version of math.h in C. Allows usage of functions such as sin(), exp(), and log().

• Geometric Functions
Includes functions such as length() and dot().

Refer to Chapter 7 for a full listing of built-in functions.

List 5.3 below shows an example of a kernel that uses the sin() function. Note that header files do not have to be included.

List 5.3: Kernel that uses sin()

001: __kernel void runtest(__global float *out, __global float *in) {
002:     for (int i=0; i<16; i++) {
003:         out[i] = sin(in[i]/16.0f);
004:     }
005: }

Many built-in functions are overloaded. A function is said to be overloaded if the same function name can be used to call different function definitions based on the arguments passed into that function. In OpenCL, most math functions are overloaded, reducing the need for functions such as sinf() in standard C. For example, the sin() function is overloaded such that the precision of the returned value depends on whether a float or a double value is passed in as an argument. An example of calls to overloaded functions is shown in List 5.4 below.

List 5.4: Calls to overloaded functions

001: __kernel void runtest(__global float *out, __global float *in_single, __global double *in_double) {
002:     for (int i=0; i < 16; i++) {
003:         float sf = sin(in_single[i]);   /* The float version is called */
004:         double df = sin(in_double[i]);  /* The double version is called */
005:     }
006: }

Vector Data

OpenCL defines vector data-types. A vector data-type is basically a struct consisting of many components of the same data-type. Recent CPUs contain compute units called SIMD units, which allow numerous data to be processed using one instruction. Usage of these SIMD units can accelerate some processes by many folds, and the use of vector data-types in OpenCL can aid in efficient usage of these SIMD units.

OpenCL defines the scalar types char, uchar, short, ushort, int, uint, long, ulong, float, double, and vector types with sizes 2, 4, 8, and 16. In OpenCL C, adding a number after the scalar type creates a vector type consisting of that many scalar components, such as in "int4". Figure 5.1 shows some examples of vector data-types.

Figure 5.1: Vector data-type examples

Each component in the vector is a scalar-type value, and a group of these scalar-type values defines a vector data-type. Declaring a variable using a vector type creates a variable that can contain the specified number of scalar values. List 5.5 shows examples of declaring vector-type variables.

List 5.5: Vector-type variable declaration

int4 i4;     // Variable made up of 4 int values
char2 c2;    // Variable made up of 2 char values
float8 f8;   // Variable made up of 8 float values

Vector literals are used to place values in these variables.
A vector literal is written as a parenthesized vector type followed by a parenthesized set of expressions. The number of values set in the vector literal must equal the size of the vector. List 5.6 shows some example usage of vector literals.

List 5.6: Vector literals
(int4)(1, 2, 3, 4);
(int8)((int2)(1, 2), (int4)(5, 6, 7, 8), (int2)(3, 4));
(float8)(1.1f, 1.2f, 1.3f, 1.4f, 1.5f, 1.6f, 1.7f, 1.8f);

The built-in math functions are overloaded such that component-by-component processing is allowed for vector-types. Functions such as sin() and log() perform these operations on each component of the vector-type.

float4 g4 = (float4)(1.0f, 2.0f, 3.0f, 4.0f);
float4 f4 = sin(g4);

The above code performs the operation shown below.

float4 f4 = (float4)(sin(1.0f), sin(2.0f), sin(3.0f), sin(4.0f));

The vector components can be accessed by adding an extension to the variable, either by a number index or by odd/even/hi/lo:

• ".xyzw" accesses indices 0 through 3. It cannot access higher indices.
• ".s" followed by a number accesses the component at that index. This number is in hex, allowing access to up to 16 elements by using a~f or A~F for indices above 9.
• ".hi" extracts the upper half of the vector, and ".lo" extracts the lower half.
• ".even" extracts the even indices, and ".odd" extracts the odd indices.

List 5.7 shows a few examples of component accesses.

List 5.7: Vector component access
int4 x = (int4)(1, 2, 3, 4);
int4 a = x.wzyx;   /* a = (4, 3, 2, 1) */
int2 b = x.xx;         /* b = (1, 1) */
int8 c = x.s01233210;  /* c = (1, 2, 3, 4, 4, 3, 2, 1) */
int2 d = x.odd;        /* d = (2, 4) */
int2 e = x.hi;         /* e = (3, 4) */

We will now show an example of changing the order of data using the introduced concepts. List 5.8 shows a kernel code that reverses the order of an array consisting of 16 components.

List 5.8: Reverse the order of a 16-element array
001: __kernel void runtest(__global float *out, __global float *in) {
002:    __global float4 *in4 = (__global float4*)in;
003:    __global float4 *out4 = (__global float4*)out;
004:    for (int i=0; i < 4; i++) {
005:       out4[3-i] = in4[i].s3210;
006:    }
007: }

• 002~003: Type casts allow access as a vector type.
• 004: Each loop iteration processes 4 elements. The loop is run 4 times to process 16 elements.
• 005: Vector component access by index number.

Some operators are overloaded to support arithmetic operations on vector types. For these operations, the arithmetic operation is performed for each component in the vector. An example is shown in Figure 5.2.

Figure 5.2: Addition of 2 vector-types
In general, the vector operation is the same for each component, with the exception of comparison and ternary selection operations. The results of comparison and selection operations are shown in the sections to follow.

Comparison Operations

When 2 scalar-types are compared, an integer value of either 0 or 1 is returned. When 2 vector-types are compared, a vector type (not necessarily the same as the vector-types being compared) is returned, with each vector component having a TRUE or FALSE value. If the comparison is FALSE, a 0 (all bits are 0) is returned like usual, but if it is TRUE, a -1 (all bits are 1) is returned. For example, a comparison of two float4 types returns an int4 type:

int4 tf = (float4)(1.0f, 1.0f, 1.0f, 1.0f) == (float4)(0.0f, 1.0f, 2.0f, 3.0f);
// tf = (int4)(0, -1, 0, 0)

A 0/-1 is returned instead of 0/1, since it is more convenient when using SIMD units. Logic operations such as "&&" and "||" are undefined. The operand types and the returned vector-type after a comparison are shown in Table 5.2 below.

Table 5.2: Resulting vector-type after a comparison
   Operand type           Type after comparison
   char, uchar            char
   short, ushort          short
   int, uint, float       int
   long, ulong, double    long

Also, since SIMD operation performs the same operation on different data in parallel, branch instructions should be avoided when possible. For example, in the code shown in List 5.9, the branch must be performed for each vector component serially, which would not use the SIMD unit effectively.

List 5.9: Branch process
// "out" gets the smaller of "in0" and "in1" for each component in the vector
int a = in0[i], b = in1[i];
if (a<b) out[i] = a;
else out[i] = b;
Since a "-1" is returned when the condition is TRUE, the code in List 5.9 can be replaced by the code in List 5.10, which does not use a branch.

List 5.10: List 5.9 rewritten without the branch
// "out" gets the smaller of "in0" and "in1" for each component in the vector
int a = in0[i], b = in1[i];
int cmp = a < b;   // If TRUE, cmp=0xffffffff; if FALSE, cmp=0x00000000
out[i] = (a & cmp) | (b & ~cmp);   // a when TRUE and b when FALSE

Using this operation, the SIMD unit is used effectively, since each processing element is running the same set of instructions. To summarize, branch instructions should be taken out to use SIMD units, and to do this, it is much more convenient to have the TRUE value be -1 instead of 1.

On a side note, the OpenCL C language has a built-in function bitselect() (bitselect(a, b, c) takes each bit from b where the corresponding bit of c is 1, and from a otherwise), which can rewrite the line

out[i] = (a & cmp) | (b & ~cmp);   // a when TRUE and b when FALSE

with the line

out[i] = bitselect(b, a, cmp);

Ternary Selection Operations

The difference in the comparison operation brings about a difference in ternary selection operations as well. In a selection operation, if the statement to the left of the "?" evaluates to FALSE, then the value to the right of the ":" is selected, and if it evaluates to TRUE, then the value between the "?" and the ":" is selected. The code in List 5.10 can be rewritten as the code shown in List 5.11.

List 5.11: Ternary selection operator usage example
// "out" gets the smaller of "in0" and "in1" for each component in the vector
int a = in0[i], b = in1[i];
out[i] = a<b ? a : b;

--- COLUMN: Should you use a vector-type? ---
The OpenCL C language defines vector types, but it does not specify how their operations are implemented. Since not all processors contain SIMD units, when vector operations are performed on these processors, the operations will be performed sequentially on each scalar. Also, large vector-types such as double16 may fill up the hardware registers (for example, on an NVIDIA GPU), resulting in slower execution. In these cases, the operation will not benefit from using a vector type.

Ideally, the compiler should take care of deciding on the vector length to use the SIMD unit for optimal performance. In fact, however, compilers are still not perfect, and auto-vectorization is still under development. Thus, the optimization to effectively use SIMD units is placed in the hands of the programmer.

OpenCL Variable Types

Some types defined in OpenCL C may differ from the standard C language, and some types may not be defined in the standard C language at all. A list of the types defined in OpenCL C, as well as their bit widths, is shown in Table 5.3.

Half

The OpenCL C language defines a 16-bit floating point type referred to as "half". The "half" type is not as well known as 32-bit floats or 64-bit doubles, but this type is defined as a standard in IEEE 754. Figure 5.3 below shows the bit layout of a half-type. One bit is used for the sign, 5 bits are used for the exponent, and the remaining 10 bits are used as the mantissa (also known as the significand or coefficient).

Figure 5.3: Bit layout of "half"
Table 5.3: OpenCL variable types
   Data type                 Bit width   Remarks
   bool                      Undefined
   char                      8
   unsigned char, uchar      8
   short                     16
   unsigned short, ushort    16
   int                       32
   unsigned int, uint        32
   long                      64
   unsigned long, ulong      64
   float                     32
   half                      16
   size_t                    *           stddef.h not required
   intptr_t                  *           stddef.h not required
   uintptr_t                 *           stddef.h not required
   void

Rounding of Floats

When a float is cast to an integer, or when a double is cast to a float, rounding must take place due to the lack of available bits. Table 5.4 shows a listing of the situations when rounding occurs, as well as how the numbers are rounded in each of the situations.

Table 5.4: OpenCL C rounding operations
   Operation                        Rounding method
   Floating point arithmetic        Round to nearest even by default. Others may be set as an option. (The default may not be round to nearest even for Embedded Profiles.)
   Built-in functions               Round to nearest even only
   Cast from float to int           Round towards zero only
   Cast from int to float           Same as for floating point arithmetic
   Cast from float to fixed point   Same as for floating point arithmetic

Type casting can be performed in the same way as in standard C.
Explicit conversion can also be performed, which allows for an option of how the number gets rounded. The explicit conversion can be done as follows:

convert_<type>[_sat][_<rounding>]

<type> is the variable type to be converted to. <rounding> sets the rounding mode, and takes one of the rounding modes shown in Table 5.5. If this mode is not set, then the rounding method will be the same as for type casting. "_sat" can be used to saturate the value when the value is above the maximum value or below the minimum value.

Table 5.5: Explicit specification of the rounding mode
   Rounding mode   Rounding toward
   rte             Nearest even
   rtz             0
   rtp             +∞
   rtn             -∞

List 5.13 shows an example where floats are explicitly converted to integers using different rounding modes.

List 5.13: Rounding example
001: __kernel void round_xyzw(__global int4 *out, __global float4 *in)
002: {
003:    out->x = convert_int_rte(in->x);   // Round to nearest even
004:    out->y = convert_int_rtz(in->y);   // Round toward zero
005:    out->z = convert_int_rtn(in->z);   // Round toward -∞
006:    out->w = convert_int_rtp(in->w);   // Round toward +∞
007: }

Bit Reinterpreting

In OpenCL C, a union can be used to access the bits of a variable. List 5.14 shows the case where it is used to look at the bit value of a float variable.

List 5.14: Get bit values using union
001: // Get bit values of a float
002: int float_int(float a) {
003:    union { int i; float f; } u;
004:    u.f = a;
005:    return u.i;
006: }

The action of the above code using standard C is not defined, but OpenCL includes several built-in functions that make this reinterpretation easier. These functions have names of the form "as_<type>", and are used to reinterpret a variable as another type without changing the bits. List 5.15 shows an example where the "as_int()" function is called with a 32-bit float as an argument. Using this function, the bits of the float can be reinterpreted as an integer.

List 5.15: Reinterpretation using as_int()
001: // Get the bit pattern of a float
002: int float_int(float a) {
003:    return as_int(a);
004: }
005:
006: // 0x3f800000 (=1.0f)
007: int float1_int(void) {
008:    return as_int(1.0f);
009: }

Reinterpretation can also be performed between vector types where the total number of bits is the same, for example between short4 and int2, but the result is undefined and the answer may vary depending on the OpenCL implementation. Reinterpretation cannot be performed where the number of bits is different, for example between float and short.

Local Memory

We have mentioned the OpenCL memory hierarchy from time to time so far in this text. OpenCL memory and pointers seem complicated when compared to the standard C language. This section will discuss the reason for this seemingly-complex memory hierarchy, as well as one of the memory types in the hierarchy, called the local memory.
There are two main types of memory. One is called SRAM (Static RAM) and the other is called DRAM (Dynamic RAM). SRAM can be accessed quickly, but cannot be used throughout due to cost and complex circuitry. DRAM, on the other hand, is cheap, but cannot be accessed as quickly as SRAM. In most computers, the frequently-used data uses the SRAM, while less frequently-used data is kept in the DRAM.

In a CPU, the last accessed memory content is kept in a cache located in the memory space near the CPU. The cache is the SRAM in this case. This cache memory may not be implemented for processors such as the GPU and the Cell/B.E., where numerous compute units are available. This is mainly due to the fact that caches were not of utmost necessity for 3D graphics, for which the processor was designed, and that cache hardware adds complexity to the hardware in order to maintain coherency.

Processors that do not have cache hardware usually contain a scratch-pad memory in order to accelerate memory access. Some examples of scratch-pad memory include the shared memory on NVIDIA's GPU, and the local storage on the Cell/B.E. The advantage of using these scratch-pad memories is that the hardware is kept simple, since coherency is maintained via software rather than by the cache hardware, and the memory content can be altered as needed via the software. One disadvantage is that everything has to be managed by the software. Also, since this memory was made small in order to speed up access to it (its memory space can be even smaller than a cache memory), memory space larger than the scratch-pad memory cannot be allocated.

OpenCL calls these scratch-pad memories "local memory" [16]. Local memory can be used by inserting the "__local" qualifier before a variable declaration. Local memory is allocated per work-group. Work-items within a work-group can use the same local memory, but local memory that belongs to another work-group may not be accessed. List 5.16 shows some examples of declaring variables to be stored on the local memory.

List 5.16: Declaring variables on the local memory
__local int lvar;       // Declare lvar on the local memory
__local int larr[128];  // Declare array larr on the local memory
__local int *ptr;       // Declare a pointer that points to an address on the local memory

The local memory size must be taken into account. The size of the local memory space that can be used by a work-group can be found by using the
clGetDeviceInfo function. The local memory size depends on the hardware used, but it is typically between 10 KB and a few hundred KB. OpenCL expects at least 16 KB of local memory.

Since this memory size varies depending on the hardware, there may be cases where you may want to set the memory size at runtime. This can be done by passing in an appropriate value to the kernel via clSetKernelArg(). List 5.17 and List 5.18 show an example where the local memory size for a kernel is specified based on the available local memory on the device. The available local memory is retrieved using the clGetDeviceInfo function, passing CL_DEVICE_LOCAL_MEM_SIZE as an argument, and the kernel is given half of the available local memory.

List 5.17: Set local memory size at runtime (kernel)
001: __kernel void local_test(__local char *p, int local_size) {
002:    for (int i=0; i < local_size; i++) {
003:       p[i] = i;
004:    }
005: }

List 5.18: Set local memory size at runtime (host)
001: #include <stdlib.h>
002: #ifdef __APPLE__
003: #include <OpenCL/opencl.h>
004: #else
005: #include <CL/cl.h>
006: #endif
007: #include <stdio.h>
008:
009: #define MAX_SOURCE_SIZE (0x100000)
010:
011: int main() {
012:    cl_platform_id platform_id = NULL;
013:    cl_uint ret_num_platforms;
014:    cl_device_id device_id = NULL;
015:    cl_uint ret_num_devices;
016:    cl_context context = NULL;
017:    cl_command_queue command_queue = NULL;
018:    cl_program program = NULL;
019:    cl_kernel kernel = NULL;
020:    size_t kernel_code_size;
021:    char *kernel_src_str;
022:    cl_int ret;
023:    FILE *fp;
024:    cl_ulong local_size;
025:    size_t local_size_size;
026:    cl_int cl_local_size;
027:    cl_event ev;
028:
029:    clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
030:    clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1, &device_id, &ret_num_devices);
031:    context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);
032:    command_queue = clCreateCommandQueue(context, device_id, 0, &ret);
033:
034:    fp = fopen("local.cl", "r");
035:    kernel_src_str = (char*)malloc(MAX_SOURCE_SIZE);
036:    kernel_code_size = fread(kernel_src_str, 1, MAX_SOURCE_SIZE, fp);
037:    fclose(fp);
038:
039:    /* Get available local memory size */
040:    clGetDeviceInfo(device_id, CL_DEVICE_LOCAL_MEM_SIZE, sizeof(local_size), &local_size, &local_size_size);
041:    printf("CL_DEVICE_LOCAL_MEM_SIZE = %d\n", (int)local_size);
042:
043:    /* Build Program */
044:    program = clCreateProgramWithSource(context, 1, (const char **)&kernel_src_str, (const size_t *)&kernel_code_size, &ret);
045:    clBuildProgram(program, 1, &device_id, "", NULL, NULL);
046:    kernel = clCreateKernel(program, "local_test", &ret);
047:
048:    /* Use half of the available local memory */
049:    cl_local_size = local_size / 2;
050:
051:    /* Set kernel argument */
052:    clSetKernelArg(kernel, 0, cl_local_size, NULL);
053:    clSetKernelArg(kernel, 1, sizeof(cl_local_size), &cl_local_size);
054:
055:    /* Execute kernel */
056:    ret = clEnqueueTask(command_queue, kernel, 0, NULL, &ev);
057:    if (ret == CL_OUT_OF_RESOURCES) {
058:       puts("too large local");
059:       return 1;
060:    }
061:    /* Wait for the kernel to finish */
062:    clWaitForEvents(1, &ev);
063:
064:    clReleaseKernel(kernel);
065:    clReleaseProgram(program);
066:    clReleaseCommandQueue(command_queue);
067:    clReleaseContext(context);
068:    free(kernel_src_str);
069:
070:    return 0;
071: }

We will now go through the above code, starting with the kernel code.

001: __kernel void local_test(__local char *p, int local_size) {
002:    for (int i=0; i < local_size; i++) {
003:       p[i] = i;
004:    }
005: }

The kernel receives a pointer to the local memory for dynamic local memory allocation. This pointer can be used like any other pointer. However, the value computed in this kernel gets thrown away, since the local memory cannot be read by the host program. In actual programs, data stored on the local memory must be transferred over to the global memory in order to be accessed by the host.
We will now look at the host code.

039:    /* Get available local memory size */
040:    clGetDeviceInfo(device_id, CL_DEVICE_LOCAL_MEM_SIZE, sizeof(local_size), &local_size, &local_size_size);
041:    printf("CL_DEVICE_LOCAL_MEM_SIZE = %d\n", (int)local_size);

The clGetDeviceInfo() retrieves the local memory size, which is of the type cl_ulong, and this size is returned to the address of "local_size". The local memory may be used by the OpenCL implementation in some cases, so the actual usable local memory size may be smaller than the value retrieved using clGetDeviceInfo.

049:    cl_local_size = local_size / 2;

We are only using half of the available local memory here to be on the safe side.

051:    /* Set kernel argument */
052:    clSetKernelArg(kernel, 0, cl_local_size, NULL);

The above code sets the size of the local memory to be used by the kernel. This is given in the 3rd argument, which specifies the argument size. The 4th argument, which is the value of the argument, is set to NULL. For the record, the local memory does not need to be freed.

056:    ret = clEnqueueTask(command_queue, kernel, 0, NULL, &ev);
057:    if (ret == CL_OUT_OF_RESOURCES) {
058:       puts("too large local");
059:       return 1;
060:    }

The kernel is enqueued using clEnqueueTask(). If the specified local memory size is too large, the kernel will not be executed, returning the error code CL_OUT_OF_RESOURCES.

--- COLUMN: Parallel Programming and Memory Hierarchy ---

For those of you new to programming in a heterogeneous environment, you may find the
emphasis on memory usage to be a bit confusing and unnecessary (those familiar with the GPU or the Cell/B.E. will probably find most of the content to be rather intuitive). However, effective parallel programming essentially boils down to efficient usage of the available memory. Therefore, memory management is emphasized in this text. This column will go over the basics of parallel programming and memory hierarchy.

The first important fact is that memory access cannot be accelerated effectively through the use of a cache on multi-core/many-processor systems, as it had been previously with single cores. A cache is a fast storage buffer on a processor that temporarily stores the previously accessed content of the main memory, so that this content can be accessed quickly by the processor. The problem is that with multiple processors, the coherency of the cache is not guaranteed, since the data on the memory can be changed by one processor while another processor still keeps the old data in its cache. Therefore, there needs to be a way to guarantee the coherency of the cache across multiple processors. This requires data transfers between the processors.

So how is data transferred between processors? For 2 processors A and B, data transfer occurs between A and B (1 transfer path). For 3 processors A, B, and C, data is transferred from A to B, A to C, or B to C (3 transfer paths). For 4 processors A, B, C, and D, this becomes A to B, A to C, A to D, B to C, B to D, and C to D (6 transfer paths). As the number of processors increases, the number of transfer paths explodes. The number of processing units (such as ALUs), on the other hand, is proportional to the number of processors. This should make apparent the fact that the data transfer between processors becomes the bottleneck, and not the number of processing units. The use of a cache requires the synchronization of the newest content of the main memory, which can be viewed as a type of data transfer between processors.
Since data transfer is a bottleneck in multi-core systems, the use of a cache to speed up memory access becomes a difficult task. For this reason, the concept of scratch-pad memory was introduced to replace the cache, which the programmer must handle instead of depending on the cache hardware. One thing the Cell/B.E. SPEs and NVIDIA GPUs have in common is that neither of them has a cache. Because of this, the memory transfer cost is strictly just the data transfer cost,
which avoids the coherency-maintenance cost that arises in caches from using multiple processors. Each SPE on the Cell/B.E. has a local storage space and a piece of hardware called the MFC. The transfer between the local storage and the main memory (accessible from all SPEs) is controlled in software. The software keeps track of where memory transfers are taking place in order to keep the program running coherently.

NVIDIA GPUs do not allow DMA transfers to the local memory (shared memory), and thus data must be loaded from the global memory on the device. When a memory access instruction is issued, the processor stalls for a few hundred clock cycles, which manifests the slow memory access speed that gets hidden on normal CPUs due to the existence of a cache. On NVIDIA GPUs, however, hardware threads are implemented, which allow another process to be performed during the memory access. This can be used to hide the slow memory access speed.
Image Object
In image processing, resizing and rotation of 2-D images (textures) are often performed, which require the processing of other pixels in order to produce the resulting image. The clarity of the processed image is typically proportional to the complexity of the processing. For example, real-time 3-D graphics only performs relatively simple processing, such as nearest neighbor interpolation and linear interpolation. Most GPUs implement some of these commonly used methods on a piece of hardware called the texture unit.

OpenCL allows the use of these image objects, as well as an API to process them. The API allows the texture unit to be used to perform the implemented processes. This API is intended for the existing GPU hardware, and may not be fit to be used on other devices, but the processing may be accelerated if the hardware contains a texture unit.

An image object can be used via the following procedure.

1. Create an image object from the host (clCreateImage2D, clCreateImage3D)
2. Write data to the image object from the host (clEnqueueWriteImage)
3. Process the image on the kernel
4. Read data from the image object on the host (clEnqueueReadImage)

An example code is shown in List 5.19 and List 5.20 below.
List 5.19: Kernel (image.cl)
001: const sampler_t s_nearest = CLK_FILTER_NEAREST | CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE;
002: const sampler_t s_linear = CLK_FILTER_LINEAR | CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE;
003: const sampler_t s_repeat = CLK_FILTER_NEAREST | CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_REPEAT;
004:
005: __kernel void
006: image_test(__read_only image2d_t im,
007:            __global float4 *out)
008: {
009:    /* nearest */
010:    out[0] = read_imagef(im, s_nearest, (float2)(0.5f, 0.5f));
011:    out[1] = read_imagef(im, s_nearest, (float2)(0.8f, 0.5f));
012:    out[2] = read_imagef(im, s_nearest, (float2)(1.3f, 0.5f));
013:
014:    /* linear */
015:    out[3] = read_imagef(im, s_linear, (float2)(0.5f, 0.5f));
016:    out[4] = read_imagef(im, s_linear, (float2)(0.8f, 0.5f));
017:    out[5] = read_imagef(im, s_linear, (float2)(1.3f, 0.5f));
018:
019:    /* repeat */
020:    out[6] = read_imagef(im, s_repeat, (float2)(4.5f, 0.5f));
021:    out[7] = read_imagef(im, s_repeat, (float2)(5.0f, 0.5f));
022:    out[8] = read_imagef(im, s_repeat, (float2)(6.5f, 0.5f));
023: }
List 5.20: Host code (image.cpp)
001: #include <stdlib.h>
002: #ifdef __APPLE__
003: #include <OpenCL/opencl.h>
004: #else
005: #include <CL/cl.h>
006: #endif
007: #include <stdio.h>
008:
009: #define MAX_SOURCE_SIZE (0x100000)
010:
011: int main()
012: {
013:    cl_platform_id platform_id = NULL;
014:    cl_uint ret_num_platforms;
015:    cl_device_id device_id = NULL;
016:    cl_uint ret_num_devices;
017:    cl_context context = NULL;
018:    cl_command_queue command_queue = NULL;
019:    cl_program program = NULL;
020:    cl_kernel kernel = NULL;
021:    size_t kernel_code_size;
022:    char *kernel_src_str;
023:    float *result;
024:    cl_int ret;
025:    int i;
026:    FILE *fp;
027:    size_t r_size;
028:    int num_out = 9;
029:    cl_mem image, out;
030:    cl_bool support;
031:    cl_image_format fmt;
032:
033:    clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
034:    clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1, &device_id, &ret_num_devices);
035:    context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);
036:    command_queue = clCreateCommandQueue(context, device_id, 0, &ret);
037:    result = (float*)malloc(sizeof(cl_float4)*num_out);
038:
039:    /* Check if the device supports images */
040:    clGetDeviceInfo(device_id, CL_DEVICE_IMAGE_SUPPORT, sizeof(support), &support,
041:       &r_size);
042:    if (support != CL_TRUE) {
043:       puts("image not supported");
044:       return 1;
045:    }
046:
047:    fp = fopen("image.cl", "r");
048:    kernel_src_str = (char*)malloc(MAX_SOURCE_SIZE);
049:    kernel_code_size = fread(kernel_src_str, 1, MAX_SOURCE_SIZE, fp);
050:    fclose(fp);
051:
052:    float data[] = { /* Transfer Data */
053:       10, 20, 30, 40,
054:       10, 20, 30, 40,
055:       10, 20, 30, 40,
056:       10, 20, 30, 40,
057:    };
058:    size_t origin[] = {0, 0, 0};  /* Transfer target coordinate */
059:    size_t region[] = {4, 4, 1};  /* Size of object to be transferred */
060:
061:    /* Create data format for image creation */
062:    fmt.image_channel_order = CL_R;
063:    fmt.image_channel_data_type = CL_FLOAT;
064:
065:    /* Create Image Object */
066:    image = clCreateImage2D(context, CL_MEM_READ_ONLY, &fmt, 4, 4, 0, NULL, &ret);
067:    /* Create output buffer */
068:    out = clCreateBuffer(context, CL_MEM_READ_WRITE, sizeof(cl_float4)*num_out, NULL, &ret);
069:
070:    /* Transfer to device */
071:    clEnqueueWriteImage(command_queue, image, CL_TRUE, origin, region,
072:       4*sizeof(float), 0, data, 0, NULL, NULL);
073:
074:    /* Build program */
075:    program = clCreateProgramWithSource(context, 1, (const char **)&kernel_src_str,
076:       (const size_t *)&kernel_code_size, &ret);
077:    clBuildProgram(program, 1, &device_id, "", NULL, NULL);
078:    kernel = clCreateKernel(program, "image_test", &ret);
079:
080:    /* Set Kernel Arguments */
081:    clSetKernelArg(kernel, 0, sizeof(cl_mem), (void*)&image);
082:    clSetKernelArg(kernel, 1, sizeof(cl_mem), (void*)&out);
083:
084:    cl_event ev;
085:    clEnqueueTask(command_queue, kernel, 0, NULL, &ev);
086:
087:    /* Retrieve result */
088:    clEnqueueReadBuffer(command_queue, out, CL_TRUE, 0, sizeof(cl_float4)*num_out,
089:       result, 0, NULL, NULL);
090:
091:    for (i=0; i < num_out; i++) {
092:       printf("%f,%f,%f,%f\n", result[i*4+0], result[i*4+1], result[i*4+2], result[i*4+3]);
093:    }
094:
095:    clReleaseMemObject(image);
096:    clReleaseMemObject(out);
097:    clReleaseKernel(kernel);
098:    clReleaseProgram(program);
099:    clReleaseCommandQueue(command_queue);
100:    clReleaseContext(context);
101:    free(kernel_src_str);
102:    free(result);
103:
104:    return 0;
105: }
The result is the following (the result may vary slightly, since OpenCL does not guarantee the precision of operations).

10.000000,0.000000,0.000000,1.000000
10.000000,0.000000,0.000000,1.000000
20.000000,0.000000,0.000000,1.000000
10.000000,0.000000,0.000000,1.000000
13.000000,0.000000,0.000000,1.000000
18.000000,0.000000,0.000000,1.000000
10.000000,0.000000,0.000000,1.000000
20.000000,0.000000,0.000000,1.000000
30.000000,0.000000,0.000000,1.000000

We will start the explanation from the host side.

/* Check if the device supports images */
clGetDeviceInfo(device_id, CL_DEVICE_IMAGE_SUPPORT, sizeof(support), &support, &r_size);
if (support != CL_TRUE) {
   puts("image not supported");
   return 1;
}

The above must be performed, as not all OpenCL implementations may support image objects. Image objects are supported if CL_DEVICE_IMAGE_SUPPORT returns CL_TRUE [17].

/* Create data format for image creation */
fmt.image_channel_order = CL_R;
fmt.image_channel_data_type = CL_FLOAT;

The above code sets the format of the image object. The format is of type cl_image_format, which is a struct containing two elements. "image_channel_order" sets the order of the elements, and "image_channel_data_type" sets the type of the elements. Each data in the image object is of a vector type containing 4 components. The possible values for "image_channel_order" are shown in Table 5.6 below.
Table 5.6: Possible values for "image_channel_order"
   Enum values for channel order   Corresponding format
   CL_R                            (X, 0, 0, 1)
   CL_A                            (0, 0, 0, X)
   CL_RG                           (X, Y, 0, 1)
   CL_RA                           (X, 0, 0, Y)
   CL_RGB                          (X, Y, Z, 1)
   CL_RGBA, CL_BGRA, CL_ARGB       (X, Y, Z, W)
   CL_INTENSITY                    (X, X, X, X)
   CL_LUMINANCE                    (X, X, X, 1)

The X/Y/Z/W are the values that can be set for the format. The 0/1 values represent the fact that those components are set to 0.0f/1.0f.

The values that can be set for "image_channel_data_type" are shown in Table 5.7 below. Of these, CL_UNORM_SHORT_565, CL_UNORM_SHORT_555, and CL_UNORM_SHORT_101010 can only be set when "image_channel_order" is set to CL_RGB.

Table 5.7: Possible values for "image_channel_data_type"
   Image channel data type   Corresponding type
   CL_SNORM_INT8             char
   CL_SNORM_INT16            short
   CL_UNORM_INT8             uchar
   CL_UNORM_INT16            ushort
   CL_UNORM_SHORT_565        ushort (with RGB)
   CL_UNORM_SHORT_555        ushort (with RGB)
   CL_UNORM_SHORT_101010     uint (with RGB)
   CL_SIGNED_INT8            char
   CL_SIGNED_INT16           short
   CL_SIGNED_INT32           int
   CL_UNSIGNED_INT8          uchar
   CL_UNSIGNED_INT16         ushort
   CL_UNSIGNED_INT32         uint
   CL_FLOAT                  float
   CL_HALF_FLOAT             half
Now that the image format is specified, we are ready to create the image object. As shown above, the image object is created using clCreateImage2D().

/* Create Image Object */
image = clCreateImage2D(context, CL_MEM_READ_ONLY, &fmt, 4, 4, 0, NULL, &ret);

The arguments are the corresponding context, the read/write permission, the image data format, the width, the height, the image row pitch, the host pointer, and the error code. The host pointer is used if the data already exists on the host and the data is to be used directly from the kernel. The host pointer is not specified in this case.

Now the data for the image object is transferred from the host to the device. The clEnqueueWriteImage() is used to transfer the image data.

float data[] = { /* Transfer Data */
   10, 20, 30, 40,
   10, 20, 30, 40,
   10, 20, 30, 40,
   10, 20, 30, 40,
};
size_t origin[] = {0, 0, 0};  /* Transfer target coordinate */
size_t region[] = {4, 4, 1};  /* Size of object to be transferred */

/* Transfer to device */
clEnqueueWriteImage(command_queue, image, CL_TRUE, origin, region,
   4*sizeof(float), 0, data, 0, NULL, NULL);

The arguments are the command queue, the image object, the block enable, the target coordinate, the target size, the input row pitch, the input slice pitch, the pointer to the data to be transferred, the number of events in the wait list, the event wait list, and the event object. The target coordinate and the target size are each specified in a 3-component vector of type size_t. If the image object is 2-D, the 3rd component of the target coordinate is 0, and the 3rd component of the size is 1. The image row pitch must be specified; here it is 4*sizeof(float), the size of one row in bytes. Refer to 4-1-3 for an explanation of the block enable and event objects. Passing this image object into the device program will allow the data to be accessed.
Now we will go over the device program. OpenCL allows different modes on how image objects are read. The "sampler_t" type defines a sampler object, and these properties are defined in this object type, which gets passed in as a parameter when reading an image object.

001: const sampler_t s_nearest = CLK_FILTER_NEAREST | CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE;
002: const sampler_t s_linear = CLK_FILTER_LINEAR | CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE;
003: const sampler_t s_repeat = CLK_FILTER_NEAREST | CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_REPEAT;

The properties that can be set are the following.

• Filter Mode
• Normalize Mode
• Addressing Mode

The values that can be set for each property are shown in Table 5.8 below.

Table 5.8: Sampler Object Property Values

  Sampler state         Predefined enums               Description
  <filter mode>         CLK_FILTER_NEAREST             Use nearest defined coordinate
                        CLK_FILTER_LINEAR              Interpolate between neighboring values
  <normalized coords>   CLK_NORMALIZED_COORDS_FALSE    Unnormalized
                        CLK_NORMALIZED_COORDS_TRUE     Normalized
  <address mode>        CLK_ADDRESS_REPEAT             Out-of-range image coordinates wrapped to valid range
                        CLK_ADDRESS_CLAMP_TO_EDGE      Out-of-range coordinates clamped to the extent
                        CLK_ADDRESS_CLAMP              Out-of-range coordinates clamped to border color
                        CLK_ADDRESS_NONE               Undefined

The filter mode decides how a value at a non-exact coordinate is obtained. OpenCL defines the CLK_FILTER_NEAREST and CLK_FILTER_LINEAR modes. The CLK_FILTER_NEAREST mode determines the closest defined coordinate, and uses this coordinate to access the value. The CLK_FILTER_LINEAR mode interpolates the values from nearby coordinates (nearest 4 points if 2-D, and nearest 8 points if 3-D).

The addressing mode decides what to do when a coordinate outside the range of the image is accessed. If CLK_ADDRESS_CLAMP is specified, the returned value is determined based on the "image_channel_order", as shown in Table 5.9.

Table 5.9: CLK_ADDRESS_CLAMP Result as a function of "image_channel_order"

  Enum values for channel order                            Clamp point
  CL_A, CL_INTENSITY, CL_RA, CL_ARGB, CL_BGRA, CL_RGBA     (0.0f, 0.0f, 0.0f, 0.0f)
  CL_R, CL_RG, CL_RGB, CL_LUMINANCE                        (0.0f, 0.0f, 0.0f, 1.0f)

In our sample code, the input data is set to (10, 20, 30, 40). Plots of the results for the different sampler modes are shown in Figure 5.4.

Figure 5.4: Sampler object examples
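Two of the addressing modes in Table 5.8 can be emulated on the host for a 1-D row of texels. These helpers are illustrative (the names are ours, not OpenCL API):

```c
#include <assert.h>

/* CLK_ADDRESS_CLAMP_TO_EDGE: out-of-range coordinates clamp to the extent,
 * so the nearest edge texel is returned. */
int read_clamp_to_edge(const int *row, int len, int x)
{
    if (x < 0) x = 0;
    if (x >= len) x = len - 1;
    return row[x];
}

/* CLK_ADDRESS_CLAMP: out-of-range coordinates return the border color. */
int read_clamp_border(const int *row, int len, int x, int border)
{
    if (x < 0 || x >= len) return border;
    return row[x];
}
```

With the sample row (10, 20, 30, 40), an access at x = -1 returns 10 under clamp-to-edge but the border color under clamp.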
The coordinates are normalized if CLK_NORMALIZED_COORDS_TRUE is specified. This normalizes the image object so that its coordinates are accessed by a value between 0 and 1. However, the normalization must be either enabled or disabled for all image reads within a kernel.
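The mapping that normalization implies can be sketched in one line of arithmetic. The helper below is ours, not OpenCL API; it shows how a normalized coordinate in [0, 1] scales onto the image extent.

```c
#include <assert.h>

/* Sketch of CLK_NORMALIZED_COORDS_TRUE: a coordinate in [0, 1] is scaled
 * by the image size to obtain the unnormalized coordinate. */
float unnormalize_coord(float norm_coord, int image_size)
{
    return norm_coord * (float)image_size;
}
```

For a 4-texel-wide image, the normalized coordinate 0.5 addresses the same place as the unnormalized coordinate 2.0.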
Now we are ready to read the image object. The image object can be read using the built-in functions read_imagef(), read_imagei(), or read_imageui(). The format specified when the image object is read must match the format used when the object was created using clCreateImage2D(). (Note: the action of the read_image*() functions is undefined when there is a format mismatch.) The formats are shown in Table 5.10 below.

Table 5.10: Format to specify when reading an image object

  Function         Format               Description
  read_imagef()    CL_SNORM_INT8        -127 ~ 127 normalized to -1.0 ~ 1.0 (-128 becomes -1.0)
                   CL_SNORM_INT16       -32767 ~ 32767 normalized to -1.0 ~ 1.0 (-32768 becomes -1.0)
                   CL_UNORM_INT8        0 ~ 255 normalized to 0.0 ~ 1.0
                   CL_UNORM_INT16       0 ~ 65535 normalized to 0.0 ~ 1.0
                   CL_FLOAT             float value as is
                   CL_HALF_FLOAT        Half value converted to float
  read_imagei()    CL_SIGNED_INT8       -128 ~ 127
                   CL_SIGNED_INT16      -32768 ~ 32767
                   CL_SIGNED_INT32      -2147483648 ~ 2147483647
  read_imageui()   CL_UNSIGNED_INT8     0 ~ 255
                   CL_UNSIGNED_INT16    0 ~ 65535
                   CL_UNSIGNED_INT32    0 ~ 4294967295

The arguments of the read_image*() functions are the image object, the sampler object, and the coordinate to read. The coordinate is specified as a float2 type. The returned type is float4, int4, or uint4 depending on the format.

010: out[0] = read_imagef(im, s_nearest, (float2)(0.5f, 0.5f));
011: out[1] = read_imagef(im, s_nearest, (float2)(1.3f, 0.5f));
012: out[2] = read_imagef(im, s_nearest, (float2)(0.8f, 0.5f));
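The normalizations in Table 5.10 are plain arithmetic, and can be checked on the host. The helper names below are ours (they mimic what read_imagef() does for two of the formats, not the OpenCL API itself):

```c
#include <assert.h>

/* CL_UNORM_INT8: 0 ~ 255 is normalized to 0.0 ~ 1.0. */
float unorm_int8_to_float(unsigned char v)
{
    return (float)v / 255.0f;
}

/* CL_SNORM_INT8: -127 ~ 127 is normalized to -1.0 ~ 1.0,
 * with -128 also mapping to -1.0 (clamped). */
float snorm_int8_to_float(signed char v)
{
    float f = (float)v / 127.0f;
    return f < -1.0f ? -1.0f : f;
}
```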
Embedded Profile

OpenCL is intended to be used not only in desktop environments, but in embedded systems as well. However, it may not be possible to implement every function defined in OpenCL for embedded systems. For this reason, OpenCL defines the Embedded Profile. An implementation that only supports the Embedded Profile may have the following restrictions.

• Optional data types and features
64-bit long/ulong and the 3-D image filter may not be implemented, as they are optional. Float Inf and NaN may also not be implemented; the result of operating on Inf or NaN is then undefined. This should be checked using the clGetDeviceInfo() function to get the "CL_DEVICE_SINGLE_FP_CONFIG" property.

• Rounding of floats
OpenCL typically uses the round-to-nearest-even method by default for rounding, but in the Embedded Profile, it may use the round-towards-zero method if the former is not implemented.

• Precision of built-in functions
Some built-in functions may have lower precision than the precision defined in OpenCL.

• Other optional functionalities
In addition to the above, other functionality may be restricted, and will vary depending on the implementation.

Attribute Qualifiers

OpenCL allows the usage of attribute qualifiers, which give specific instructions to the compiler. This is a language extension to standard C that is supported in GCC, and it is starting to be supported by compilers other than GCC.

__attribute__(( value ))

The attribute can be placed on variables, types (typedef, struct), and functions. Below is an example of an attribute being placed on a variable.

int a __attribute__((aligned(16)));

Examples of an attribute being placed on types are shown below.

typedef int aligned_int_t __attribute__((aligned(16)));

typedef struct __attribute__((packed)) packed_struct_t {
    ...
};
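The effect of these two attributes is observable with ordinary C: alignment through the variable's address, packing through sizeof. This is a GCC-style sketch matching the examples above (the struct layout and member names are ours):

```c
#include <assert.h>
#include <stdint.h>

/* A 16-byte-aligned int type, as in the typedef example above. */
typedef int aligned_int_t __attribute__((aligned(16)));

/* A packed struct: without the attribute, padding would normally be
 * inserted after "tag" to align "value". */
struct __attribute__((packed)) packed_pair {
    char tag;
    int  value;
};

aligned_int_t g_aligned_value; /* file scope, so the alignment applies */

int aligned_ok(void)
{
    return ((uintptr_t)&g_aligned_value % 16) == 0;
}
```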
The basic grammar is to place an attribute after the variable, type, or function on which to set the attribute. The attributes that can be set in OpenCL are shown below.

• aligned
Same as "aligned" in GCC. When specified on a variable, the start address of that variable gets aligned. When specified on a type, the start address of that type gets aligned. This attribute on a function is undefined in OpenCL.

• packed
Same as "packed" in GCC. When specified on a struct, the members are laid out without inserted padding. When specified on a variable, no padding is inserted between this variable and the previously defined variable.

• endian
This specifies the byte order used on a variable. "host" or "device" can be passed in as an argument to make the byte order the same as either the host or the device.

There are also other attributes defined for functions, which give optimization hints to the compiler. Some hints are given below:

• vec_type_hint()
Suggests the type of vector to use for optimization.

• work_group_size_hint()
Suggests the number of work-items per work-group.

• reqd_work_group_size()
Specifies the number of work-items per work-group. The kernel will throw an error when the number of work-items per work-group is a different number. (1, 1, 1) should be specified if the kernel is queued using clEnqueueTask().

Pragma

OpenCL defines several pragmas. The syntax is to use "#pragma OPENCL".
The FP_CONTRACT and EXTENSION pragmas are defined.

FP_CONTRACT

This is the same as the FP_CONTRACT defined in standard C.

#pragma OPENCL FP_CONTRACT on-off-switch

Some hardware supports "FMA" instructions, which sum a number with the product of 2 numbers in one instruction. These types of instructions, where multiple operations are performed as 1 operation, are known as contract operations. The FP_CONTRACT pragma enables or disables the usage of these contract operations. In some cases, the precision may be different from when the operations are performed separately.

EXTENSION

This enables or disables optional OpenCL extensions.

#pragma OPENCL EXTENSION <extension_name> : <behavior>

<extension_name> gets the name of the OpenCL extension. The standard extensions are shown in Table 5.11 below.

Table 5.11: Extension name

  Extension name                             Extended capability
  cl_khr_fp64                                Support for double precision
  cl_khr_fp16                                Support for half precision
  cl_khr_select_fprounding_mode              Support for setting the rounding type
  cl_khr_global_int32_base_atomics,
  cl_khr_global_int32_extended_atomics       Support for atomic operations on 32-bit values
  cl_khr_global_int64_base_atomics,
  cl_khr_global_int64_extended_atomics       Support for atomic operations on 64-bit values
  cl_khr_3d_image_writes                     Enable writes to 3-D image objects
  cl_khr_byte_addressable_store              Enable byte-size writes to addresses
  all                                        Support for all extensions

<behavior> gets one of the values in Table 5.12. Specifying "require" when the extension is "all" will display an error.
Table 5.12: Supported values for <behavior>

  Behavior   Description
  require    Check that extension is supported (compile error otherwise)
  enable     Enable extensions
  disable    Disable extensions

OpenCL Programming Practice

This section will go over some parallel processing methods that can be used in OpenCL. We will use a sample application that analyzes stock price data to walk through porting of standard C code to OpenCL C in order to utilize a device. We will start from a normal C code, and gradually convert sections of the code to be processed in parallel. This should aid you in gaining intuition on how to parallelize your code. Note that the sample code shown in this section is meant to be pedagogical, in order to show how OpenCL is used; you may not experience any speed-up depending on the type of hardware you have.

The analysis done in this application computes the moving average of the stock price for different stocks. A moving average filter is commonly used in image processing and signal processing as a low-pass filter.

Standard Single-Thread Programming

We will first walk through a standard C implementation of the moving average function. The function has the following properties.

• Stock price data is passed in as an int-array named "values"
• The result of the moving average is returned as an array of floats named "average"
• The array length is passed in as "length" of type int
• The width of the data to compute the average for is passed in as "width" of type int

To make the code more intuitive, the code on List 5.21 gets rid of all error checks that would
normally be performed. This function is what we want to process on the device, so this is the code that will eventually be ported into kernel code.

List 5.21: Moving average of integers implemented in standard C

001: void moving_average(int *values,
002:                     float *average,
003:                     int length,
004:                     int width)
005: {
006:     int i;
007:     int add_value;
008:
009:     /* Compute sum for the first "width" elements */
010:     add_value = 0;
011:     for (i=0; i < width; i++) {
012:         add_value += values[i];
013:     }
014:     average[width-1] = (float)add_value;
015:
016:     /* Compute sum for the (width)th ~ (length-1)th elements */
017:     for (i=width; i < length; i++) {
018:         add_value = add_value - values[i-width] + values[i];
019:         average[i] = (float)(add_value);
020:     }
021:
022:     /* Insert zeros to 0th ~ (width-2)th element */
023:     for (i=0; i < width-1; i++) {
024:         average[i] = 0.0f;
025:     }
026:
027:     /* Compute average from the sum */
028:     for (i=width-1; i < length; i++) {
029:         average[i] /= (float)width;
030:     }
031: }
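The logic of List 5.21 can be exercised on a small data set. The function below restates the listing in plain C (without the book's line numbers) so it can be tested directly:

```c
#include <assert.h>

/* Restatement of List 5.21 for host-side testing: running sum, then
 * zeros for the first width-1 entries, then division by width. */
void moving_average_c(const int *values, float *average, int length, int width)
{
    int i;
    int add_value = 0;

    for (i = 0; i < width; i++)
        add_value += values[i];
    average[width-1] = (float)add_value;

    for (i = width; i < length; i++) {
        add_value = add_value - values[i-width] + values[i];
        average[i] = (float)add_value;
    }

    for (i = 0; i < width-1; i++)
        average[i] = 0.0f;

    for (i = width-1; i < length; i++)
        average[i] /= (float)width;
}
```

With input {1, 2, 3, 4, 5} and width 3, the first two outputs are zero and the rest are the 3-point averages 2, 3, 4.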
In this example, each index of the average array contains the average of the previous width-1 values and the value at that index. In other words, if the width is 3, average[3] contains the average of values[1], values[2] and values[3]. The values of average[0] ~ average[width-2] are zeros, since this computation would require values not contained in the input array; average[1], for example, would require values[-2] and values[-1].

The average itself is computed by first summing up "width" values and storing the sum into the average array (lines 9-20), and then dividing each sum by "width" (lines 27-30). Lines 9-14 compute the sum of the first "width" elements, and store it in average[width-1]. Lines 16-20 compute the sums for the remaining indices. This is done by starting from the previously computed sum, subtracting the oldest value and adding the newest value, which is more efficient than computing the sum of "width" elements each time. Zeros are placed for average[0] ~ average[width-2] in lines 22~25.

This works since the input data is an integer type, but if the input is of type float, this method may result in rounding errors, which can become significant over time. In this case, a method shown in List 5.22 should be used.

List 5.22: Moving average of floats implemented in standard C

001: void moving_average_float(float *values,
002:                           float *average,
003:                           int length,
004:                           int width)
005: {
006:     int i, j;
007:     float add_value;
008:
009:     /* Insert zeros to 0th ~ (width-2)th elements */
010:     for (i=0; i < width-1; i++) {
011:         average[i] = 0.0f;
012:     }
013:
014:     /* Compute average of (width-1) ~ (length-1) elements */
015:     for (i=width-1; i < length; i++) {
016:         add_value = 0.0f;
017:         for (j=0; j < width; j++) {
018:             add_value += values[i-j];
019:         }
020:         average[i] = add_value / (float)width;
021:     }
022: }

We will now show a main() function that calls the function in List 5.21 to perform the computation (List 5.24). The input data is placed in a file called "stock_array1.txt", whose content is shown in List 5.23.

List 5.23: Input data (stock_array1.txt)

100, 107, 109, 104, 98, …, 50
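The per-window recomputation used in List 5.22 can be isolated and checked on the host. The helper below is a sketch (the name is ours): it recomputes each window's sum from scratch, which is what keeps rounding errors from accumulating in a float running sum.

```c
#include <assert.h>

/* Average of the "width" values ending at index idx, recomputed per window
 * as in List 5.22 rather than carried forward as a running sum. */
float window_average(const float *values, int idx, int width)
{
    float sum = 0.0f;
    int j;
    for (j = 0; j < width; j++)
        sum += values[idx - j];
    return sum / (float)width;
}
```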
List 5.24: Standard C main() function to call the moving average function

001: #include <stdio.h>
002: #include <stdlib.h>
003:
004: /* Read Stock data */
005: int stock_array1[] = {
006:     #include "stock_array1.txt"
007: };
008:
009: /* Define width for the moving average */
010: #define WINDOW_SIZE (13)
011:
012: int main(int argc, char *argv[])
013: {
014:     float *result;
015:     int data_num = sizeof(stock_array1) / sizeof(stock_array1[0]);
016:     int window_num = (int)WINDOW_SIZE;
017:     int i;
018:
019:     /* Allocate space for the result */
020:     result = (float *)malloc(data_num*sizeof(float));
021:
022:     /* Call the moving average function */
023:     moving_average(stock_array1, result, data_num, window_num);
024:
025:     /* Print result */
026:     for (i=0; i < data_num; i++) {
027:         printf("result[%d] = %f\n", i, result[i]);
028:     }
029:
030:     /* Deallocate memory */
031:     free(result);
032:
033:     return 0;
034: }

We have now finished writing a single-threaded program to compute the moving average. This will now be converted to OpenCL code in order to use the device. Our journey has just begun.

Porting to OpenCL

The first step is to convert the moving_average() function to kernel code, written in OpenCL C. This code will be executed on the device. The code in List 5.21 becomes as shown in List 5.25 after being ported. The code in List 5.24 will eventually be transformed into host code that calls the moving average kernel.

List 5.25: Moving average kernel (moving_average.cl)

001: __kernel void moving_average(__global int *values,
002:                              __global float *average,
003:                              int length,
004:                              int width)
005: {
006:     int i;
007:     int add_value;
008:
009:     /* Compute sum for the first "width" elements */
010:     add_value = 0;
011:     for (i=0; i < width; i++) {
012:         add_value += values[i];
013:     }
014:     average[width-1] = (float)add_value;
015:
016:     /* Compute sum for the (width)th ~ (length-1)th elements */
017:     for (i=width; i < length; i++) {
018:         add_value = add_value - values[i-width] + values[i];
019:         average[i] = (float)(add_value);
020:     }
021:
022:     /* Insert zeros to 0th ~ (width-2)th elements */
023:     for (i=0; i < width-1; i++) {
024:         average[i] = 0.0f;
025:     }
026:
027:     /* Compute average of (width-1) ~ (length-1) elements */
028:     for (i=width-1; i < length; i++) {
029:         average[i] /= (float)width;
030:     }
031: }

Note that we have only changed lines 1 and 2, adding the __kernel qualifier to the function, and the address space qualifier __global specifying the location of the input data and where the result will be placed. The host code is shown in List 5.26.
List 5.26: Host code to execute the moving_average() kernel
001: #include <stdlib.h>
002: #ifdef __APPLE__
003: #include <OpenCL/opencl.h>
004: #else
005: #include <CL/cl.h>
006: #endif
007: #include <stdio.h>
008:
009: /* Read Stock data */
010: int stock_array1[] = {
011:     #include "stock_array1.txt"
012: };
013:
014: /* Define width for the moving average */
015: #define WINDOW_SIZE (13)
016:
017: #define MAX_SOURCE_SIZE (0x100000)
018:
019: int main(void)
020: {
021:     cl_platform_id platform_id = NULL;
022:     cl_uint ret_num_platforms;
023:     cl_device_id device_id = NULL;
024:     cl_uint ret_num_devices;
025:     cl_context context = NULL;
026:     cl_command_queue command_queue = NULL;
027:     cl_mem memobj_in = NULL;
028:     cl_mem memobj_out = NULL;
029:     cl_program program = NULL;
030:     cl_kernel kernel = NULL;
031:     size_t kernel_code_size;
032:     char *kernel_src_str;
033:     float *result;
034:     cl_int ret;
035:     FILE *fp;
036:     int data_num = sizeof(stock_array1) / sizeof(stock_array1[0]);
037:     int window_num = (int)WINDOW_SIZE;
038:     int i;
039:
040:     /* Allocate space for the result on the host side */
041:     result = (float *)malloc(data_num*sizeof(float));
042:
043:     /* Allocate space to read in kernel code */
044:     kernel_src_str = (char *)malloc(MAX_SOURCE_SIZE);
045:
046:     /* Get Platform */
047:     ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
048:
049:     /* Get Device */
050:     ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1, &device_id,
051:                          &ret_num_devices);
052:
053:     /* Create Context */
054:     context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);
055:
056:     /* Create Command Queue */
057:     command_queue = clCreateCommandQueue(context, device_id, 0, &ret);
058:
059:     /* Read Kernel Code */
060:     fp = fopen("moving_average.cl", "r");
061:     kernel_code_size = fread(kernel_src_str, 1, MAX_SOURCE_SIZE, fp);
062:     fclose(fp);
063:
064:     /* Create Program Object */
065:     program = clCreateProgramWithSource(context, 1, (const char **)&kernel_src_str,
066:                                         (const size_t *)&kernel_code_size, &ret);
067:
068:     /* Compile kernel */
069:     ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);
070:
071:     /* Create Kernel */
072:     kernel = clCreateKernel(program, "moving_average", &ret);
073:
074:     /* Create buffer for the input data on the device */
075:     memobj_in = clCreateBuffer(context, CL_MEM_READ_WRITE,
076:                                data_num * sizeof(int), NULL, &ret);
077:
078:     /* Create buffer for the result on the device */
079:     memobj_out = clCreateBuffer(context, CL_MEM_READ_WRITE,
080:                                 data_num * sizeof(float), NULL, &ret);
081:
082:     /* Copy input data to the global memory on the device */
083:     ret = clEnqueueWriteBuffer(command_queue, memobj_in, CL_TRUE, 0,
084:                                data_num * sizeof(int), stock_array1,
085:                                0, NULL, NULL);
086:
087:     /* Set kernel arguments */
088:     ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&memobj_in);
089:     ret = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&memobj_out);
090:     ret = clSetKernelArg(kernel, 2, sizeof(int), (void *)&data_num);
091:     ret = clSetKernelArg(kernel, 3, sizeof(int), (void *)&window_num);
092:
093:     /* Execute the kernel */
094:     ret = clEnqueueTask(command_queue, kernel, 0, NULL, NULL);
095:
096:     /* Copy result from device to host */
097:     ret = clEnqueueReadBuffer(command_queue, memobj_out, CL_TRUE, 0,
098:                               data_num * sizeof(float), result,
099:                               0, NULL, NULL);
100:
101:     /* OpenCL Object Finalization */
102:     ret = clReleaseKernel(kernel);
103:     ret = clReleaseProgram(program);
104:     ret = clReleaseMemObject(memobj_in);
105:     ret = clReleaseMemObject(memobj_out);
106:     ret = clReleaseCommandQueue(command_queue);
107:
108:     ret = clReleaseContext(context);
109:
110:     /* Display Results */
111:     for (i=0; i < data_num; i++) {
112:         printf("result[%d] = %f\n", i, result[i]);
113:     }
114:
115:     /* Deallocate memory on the host */
116:     free(result);
117:     free(kernel_src_str);
118:
119:     return 0;
120: }
This host code is based on the code in List 5.24, with the OpenCL runtime API calls required for kernel execution added. Note that the code uses the online compilation method, as the kernel source code is read in at run time (lines 60-69). However, although the code is executable, it is not written to run anything in parallel. The next section will show how this can be done.
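The read-source step of the online compile method in List 5.26 can be factored into a small host helper. This is a sketch (the function name, buffer size constant, and error handling are ours); the returned buffer and size are what clCreateProgramWithSource() expects.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: slurp a kernel source file into a heap buffer,
 * returning NULL on open failure and storing the byte count read. */
char *read_source(const char *path, size_t *size_out)
{
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return NULL;
    char *buf = (char *)malloc(0x100000); /* same cap as MAX_SOURCE_SIZE */
    *size_out = fread(buf, 1, 0x100000, fp);
    fclose(fp);
    return buf;
}
```

Checking fopen()'s return value is the kind of error handling List 5.26 deliberately omits for readability.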
Vector Operations
The first step is to see whether vector types can be used for the processing. For vector types, we can expect the OpenCL implementation to perform operations using the SIMD units on the processor to speed up the computation. In OpenCL, types such as int4 and float4 can be used. We will assume that the processor has a 128-bit SIMD unit, which can operate on four 32-bit data in parallel.

From this section on, we will look at multiple stocks, as this is more practical. The processing will be vectorized such that the moving average computation for each stock is executed in parallel. List 5.27 shows the price data for multiple stocks, where each row contains the price of multiple stocks at one instance in time. For simplicity's sake, we will process the data for 4 stocks in this section.
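What an int4 buys can be emulated in plain C: four independent lane-wise additions that a 128-bit SIMD unit can retire as one instruction. The struct below is an illustration only, not the OpenCL int4 type.

```c
#include <assert.h>

/* Emulated 4-lane integer vector: one lane per stock. */
typedef struct { int s[4]; } int4_emu;

int4_emu int4_add(int4_emu a, int4_emu b)
{
    int4_emu r;
    int k;
    for (k = 0; k < 4; k++)
        r.s[k] = a.s[k] + b.s[k]; /* the four adds are independent */
    return r;
}
```

In OpenCL C the same operation is simply `a + b` on two int4 values, since the arithmetic operators are overloaded for vector types.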
List 5.27: Price data for multiple stocks (stock_array_many.txt)
100, 212, 315, 1098, 783, …, 12,
107, 210, 313, 1100, 790, …, 21,
109, 209, 319, 1089, 792, …, 18,
…
50, 33, 259, 980, 763, …, 9

List 5.28: Price data for 4 stocks (stock_array_4.txt)

100, 212, 315, 1098,
107, 210, 313, 1100,
109, 209, 319, 1089,
…
50, 33, 259, 980

For processing 4 values at a time, we can just replace int and float with int4 and float4, respectively. The new kernel code will look like List 5.29.

List 5.29: Vectorized moving average kernel (moving_average_vec4.cl)

001: __kernel void moving_average_vec4(__global int4 *values,
002:                                   __global float4 *average,
003:                                   int length,
004:                                   int width)
005: {
006:     int i;
007:     int4 add_value; /* A vector to hold 4 components */
008:
009:     /* Compute sum for the first "width" elements for 4 stocks */
010:     add_value = (int4)0;
011:     for (i=0; i < width; i++) {
012:         add_value += values[i];
013:     }
014:     average[width-1] = convert_float4(add_value);
015:
016:     /* Compute sum for the (width)th ~ (length-1)th elements for 4 stocks */
017:     for (i=width; i < length; i++) {
018:         add_value = add_value - values[i-width] + values[i];
019:         average[i] = convert_float4(add_value);
020:     }
021:
022:     /* Insert zeros to 0th ~ (width-2)th element for 4 stocks */
023:     for (i=0; i < width-1; i++) {
024:         average[i] = (float4)(0.0f);
025:     }
026:
027:     /* Compute average of (width-1) ~ (length-1) elements for 4 stocks */
028:     for (i=width-1; i < length; i++) {
029:         average[i] /= (float4)width;
030:     }
031: }

The only differences from List 5.25 are the conversion of scalar types to vector types (lines 1, 7, 10 for int4, and lines 2, 24, 29 for float4), and the use of the convert_float4() function (lines 14, 19). Note that the operators (+, -, *, /) are overloaded to be used on vector types, so those lines do not need to be changed (lines 12, 18, 29).

List 5.30: Host code to run the vectorized moving average kernel

001: #include <stdlib.h>
002: #ifdef __APPLE__
003: #include <OpenCL/opencl.h>
004: #else
005: #include <CL/cl.h>
006: #endif
007: #include <stdio.h>
008:
009: #define NAME_NUM (4)     /* Number of stocks */
010: #define DATA_NUM (21)    /* Number of data to process for each stock */
011: #define WINDOW_SIZE (13) /* Width for the moving average */
012: #define MAX_SOURCE_SIZE (0x100000)
013:
014: /* Read Stock data */
015: int stock_array_4[NAME_NUM*DATA_NUM] = {
016:     #include "stock_array_4.txt"
017: };
018:
019: int main(void)
020: {
021:     cl_platform_id platform_id = NULL;
022:     cl_uint ret_num_platforms;
023:     cl_device_id device_id = NULL;
024:     cl_uint ret_num_devices;
025:     cl_context context = NULL;
026:     cl_command_queue command_queue = NULL;
027:     cl_mem memobj_in = NULL;
028:     cl_mem memobj_out = NULL;
029:     cl_program program = NULL;
030:     cl_kernel kernel = NULL;
031:     size_t kernel_code_size;
032:     char *kernel_src_str;
033:     float *result;
034:     cl_int ret;
035:     FILE *fp;
036:
037:     int window_num = (int)WINDOW_SIZE;
038:     int data_num = (int)DATA_NUM;
039:     int name_num = (int)NAME_NUM;
040:     int point_num = NAME_NUM * DATA_NUM;
041:     int i, j;
042:
043:     /* Allocate space for the result on the host side */
044:     result = (float *)malloc(point_num*sizeof(float));
045:
046:     /* Allocate space to read in kernel code */
047:     kernel_src_str = (char *)malloc(MAX_SOURCE_SIZE);
048:
049:     /* Get Platform */
050:     ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
051:
052:     /* Get Device */
053:     ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1, &device_id,
054:                          &ret_num_devices);
055:
056:     /* Create Context */
057:     context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);
058:
059:     /* Create Command Queue */
060:     command_queue = clCreateCommandQueue(context, device_id, 0, &ret);
061:
062:     /* Read kernel source code */
063:     fp = fopen("moving_average_vec4.cl", "r");
064:     kernel_code_size = fread(kernel_src_str, 1, MAX_SOURCE_SIZE, fp);
065:     fclose(fp);
066:
067:     /* Create Program Object */
068:     program = clCreateProgramWithSource(context, 1, (const char **)&kernel_src_str,
069:                                         (const size_t *)&kernel_code_size, &ret);
070:
071:     /* Compile kernel */
072:     ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);
073:
074:     /* Create kernel */
075:     kernel = clCreateKernel(program, "moving_average_vec4", &ret);
076:
077:     /* Create buffer for the input data on the device */
078:     memobj_in = clCreateBuffer(context, CL_MEM_READ_WRITE,
079:                                point_num * sizeof(int), NULL, &ret);
080:
081:     /* Create buffer for the result on the device */
082:     memobj_out = clCreateBuffer(context, CL_MEM_READ_WRITE,
083:                                 point_num * sizeof(float), NULL, &ret);
084:
085:     /* Copy input data to the global memory on the device */
086:     ret = clEnqueueWriteBuffer(command_queue, memobj_in, CL_TRUE, 0,
087:                                point_num * sizeof(int), stock_array_4,
088:                                0, NULL, NULL);
089:
090:     /* Set Kernel Arguments */
091:     ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&memobj_in);
092:     ret = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&memobj_out);
093:     ret = clSetKernelArg(kernel, 2, sizeof(int), (void *)&data_num);
094:     ret = clSetKernelArg(kernel, 3, sizeof(int), (void *)&window_num);
095:
096:     /* Execute kernel */
097:     ret = clEnqueueTask(command_queue, kernel, 0, NULL, NULL);
098:
099:     /* Copy result from device to host */
100:     ret = clEnqueueReadBuffer(command_queue, memobj_out, CL_TRUE, 0,
101:                               point_num * sizeof(float), result,
102:                               0, NULL, NULL);
103:
104:     /* OpenCL Object Finalization */
105:     ret = clReleaseKernel(kernel);
106:     ret = clReleaseProgram(program);
107:     ret = clReleaseMemObject(memobj_in);
108:     ret = clReleaseMemObject(memobj_out);
109:     ret = clReleaseCommandQueue(command_queue);
110:     ret = clReleaseContext(context);
111:
112:     /* Print results */
113:     for (i=0; i < data_num; i++) {
114:         printf("result[%d]:", i);
115:         for (j=0; j < name_num; j++) {
116:             printf("%f, ", result[i*NAME_NUM+j]);
117:         }
118:         printf("\n");
119:     }
120:
121:     /* Deallocate memory on the host */
122:     free(result);
123:     free(kernel_src_str);
124:
125:     return 0;
126: }

The only difference from List 5.26 is that the data to process is increased by a factor of 4. In addition, the kernel code that gets read is changed to moving_average_vec4.cl (line 63), and the kernel name is changed to moving_average_vec4 (line 75).

We will now change the program to allow processing of more than 4 stocks, as in the data in List 5.27. We could just call the kernel in List 5.29 and vectorize the input data on the host side, but we will instead allow the kernel to take care of this. Since we will be processing 4 stocks at a time, the kernel code will just have to loop the computation so that more than 4 stocks can be computed within the kernel. For simplicity, we will assume that the number of stocks to process is a multiple of 4.

The kernel will take in a parameter "name_num", which is the number of stocks to process. This will be used to calculate the number of loops required to process all stocks. The new kernel code is shown in List 5.31 below.

List 5.31: Moving average kernel of (multiple of 4) stocks (moving_average_many.cl)

001: __kernel void moving_average_many(__global int4 *values,
002:                                   __global float4 *average,
003:                                   int length,
004:                                   int name_num,
005:                                   int width)
006: {
007:     int i, j;
008:     int loop_num = name_num / 4; /* Compute the number of times to loop */
009:     int4 add_value;
010:
011:     for (j=0; j < loop_num; j++) {
012:         /* Compute sum for the first "width" elements for 4 stocks */
013:         add_value = (int4)0;
014:         for (i=0; i < width; i++) {
015:             add_value += values[i*loop_num+j];
016:         }
017:         average[(width-1)*loop_num+j] = convert_float4(add_value);
018:
019:         /* Compute sum for the (width)th ~ (length-1)th elements for 4 stocks */
020:         for (i=width; i < length; i++) {
021:             add_value = add_value - values[(i-width)*loop_num+j] + values[i*loop_num+j];
022:             average[i*loop_num+j] = convert_float4(add_value);
023:         }
024:
025:         /* Insert zeros to 0th ~ (width-2)th element for 4 stocks */
026:         for (i=0; i < width-1; i++) {
027:             average[i*loop_num+j] = (float4)(0.0f);
028:         }
029:
030:         /* Compute average of (width-1) ~ (length-1) elements for 4 stocks */
031:         for (i=width-1; i < length; i++) {
032:             average[i*loop_num+j] /= (float4)width;
033:         }
034:     }
035: }

The host code is shown in List 5.32.

List 5.32: Host code for calling the kernel in List 5.31

001: #include <stdlib.h>
002: #ifdef __APPLE__
003: #include <OpenCL/opencl.h>
004: #else
005: #include <CL/cl.h>
006: #endif
007: #include <stdio.h>
008:
009: #define NAME_NUM (8)     /* Number of stocks */
010: #define DATA_NUM (21)    /* Number of data to process for each stock */
011: #define WINDOW_SIZE (13) /* Width for the moving average */
012: #define MAX_SOURCE_SIZE (0x100000)
013:
014: /* Read Stock data */
015: int stock_array_many[NAME_NUM*DATA_NUM] = {
016:     #include "stock_array_many.txt"
017: };
018:
019: int main(void)
020: {
021:     cl_platform_id platform_id = NULL;
022:     cl_uint ret_num_platforms;
023:     cl_device_id device_id = NULL;
024:     cl_uint ret_num_devices;
025:     cl_context context = NULL;
026:     cl_command_queue command_queue = NULL;
027:     cl_mem memobj_in = NULL;
028:     cl_mem memobj_out = NULL;
029:     cl_program program = NULL;
030:     cl_kernel kernel = NULL;
031:     size_t kernel_code_size;
032:     char *kernel_src_str;
033:     float *result;
034:     cl_int ret;
035:     FILE *fp;
036:
037:     int window_num = (int)WINDOW_SIZE;
038:     int data_num = (int)DATA_NUM;
039:     int name_num = (int)NAME_NUM;
040:     int point_num = NAME_NUM * DATA_NUM;
041:     int i, j;
042:
043:     /* Allocate space for the result on the host side */
044:     result = (float *)malloc(point_num*sizeof(float));
045:
046:     /* Allocate space to read in kernel code */
047:     kernel_src_str = (char *)malloc(MAX_SOURCE_SIZE);
048:
049:     /* Get Platform */
050:     ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
051:
052:     /* Get Device */
053:     ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1, &device_id,
054:                          &ret_num_devices);
055:
056:     /* Create Context */
057:     context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);
058:
059:     /* Create Command Queue */
060:     command_queue = clCreateCommandQueue(context, device_id, 0, &ret);
061:
062:     /* Read kernel source code */
063:     fp = fopen("moving_average_many.cl", "r");
064:     kernel_code_size = fread(kernel_src_str, 1, MAX_SOURCE_SIZE, fp);
065:     fclose(fp);
066:
067:     /* Create Program Object */
068:     program = clCreateProgramWithSource(context, 1, (const char **)&kernel_src_str,
069:                                         (const size_t *)&kernel_code_size, &ret);
070:
071:     /* Compile kernel */
072:     ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);
073:
074:     /* Create kernel */
075:     kernel = clCreateKernel(program, "moving_average_many", &ret);
076:
077:     /* Create buffer for the input data on the device */
078:     memobj_in = clCreateBuffer(context, CL_MEM_READ_WRITE,
079:                                point_num * sizeof(int), NULL, &ret);
080:
081:     /* Create buffer for the result on the device */
082:     memobj_out = clCreateBuffer(context, CL_MEM_READ_WRITE,
083:                                 point_num * sizeof(float), NULL, &ret);
084:
085:     /* Copy input data to the global memory on the device */
086:     ret = clEnqueueWriteBuffer(command_queue, memobj_in, CL_TRUE, 0,
087:                                point_num * sizeof(int), stock_array_many,
088:                                0, NULL, NULL);
089:
090:     /* Set Kernel Arguments */
091:     ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&memobj_in);
092:     ret = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&memobj_out);
093:     ret = clSetKernelArg(kernel, 2, sizeof(int), (void *)&data_num);
094:     ret = clSetKernelArg(kernel, 3, sizeof(int), (void *)&name_num);
095:     ret = clSetKernelArg(kernel, 4, sizeof(int), (void *)&window_num);
096:
097:     /* Execute kernel */
098:     ret = clEnqueueTask(command_queue, kernel, 0, NULL, NULL);
099:
100:     /* Copy result from device to host */
101:     ret = clEnqueueReadBuffer(command_queue, memobj_out, CL_TRUE, 0,
102:                               point_num * sizeof(float), result,
103:                               0, NULL, NULL);
104:
105:     /* OpenCL Object Finalization */
106:     ret = clReleaseKernel(kernel);
107:     ret = clReleaseProgram(program);
ret = clReleaseCommandQueue(command_queue). ret = clReleaseContext(context). This is the most basic method of parallelization. } /* Deallocate memory on the host */ free(result). which processed all the data. /* Print results */ for (i=0. only one instance of the kernel was executed. for (j=0. j < name_num. ". result[i*NAME_NUM+j]). To use multiple compute units simultaneously. free(kernel_src_str). the kernel code that gets read is changed to moving_average_many.cl (line 67). which is done simply by replacing scalar-types with vector-types. i). Up until this point. This section concentrated on using SIMD units to perform the same process on multiple data sets in parallel. i++) { printf("result[%d]:". The only difference from List 5. return 0. multiple kernel instances must be executed 151 . In addition. Data Parallel Processing This section will focus on using multiple compute units to perform moving average for multiple stocks. } printf("¥n").30 is that the number of stocks to process has been increased to 8. The next step is expanding this to use multiple compute units capable of performing SIMD operations.B01-02632560-294 The OpenCL Programming Book 112: 113: 114: 115: 116: 117: 118: 119: 120: 121: 122: 123: 124: 125: 126: 127: 128: 129: 130: 131: } ret = clReleaseMemObject(memobj_in). i < data_num. and the kernel name is changed to moving_average_many (line 79). which get passed in as an argument to the kernel (line 98). ret = clReleaseMemObject(memobj_out). j++) { printf("%f.
They can either be the same kernel running in parallel (data parallel), or different kernels running in parallel (task parallel). We will use the data parallel model, as this method is more suited for this process.

We will use the kernel in List 5.31, which performs operations on 4 data sets, as the basis to perform the averaging on 8 stocks. Since this code operates on 4 data sets at once, we can use 2 compute units to perform operations on 8 data sets at once. This is achieved by setting the work group size to 2 when submitting the task.

In order to use the data parallel mode, each instance of the kernel must know where it is being executed within the index space. If this is not done, the same kernel will run on the same data sets. The get_global_id() function can be used to get the kernel instance's global ID. Therefore, the code in List 5.31 can be rewritten to the following code in List 5.33.

List 5.33: Moving average kernel for 4 stocks (moving_average_vec4_para.cl)

__kernel void moving_average_vec4_para(__global int4 *values,
                                       __global float4 *average,
                                       int length,
                                       int name_num,
                                       int width)
{
    int i, j;
    int loop_num = name_num / 4;
    int4 add_value;

    /* "j" decides on the data subset to process for the kernel instance */
    j = get_global_id(0);

    /* Compute sum for the first "width" elements for 4 stocks */
    add_value = (int4)0;
    for (i = 0; i < width; i++) {
        add_value += values[i*loop_num+j];
    }
    average[(width-1)*loop_num+j] = convert_float4(add_value);

    /* Compute sum for the (width)th ~ (length-1)th elements for 4 stocks */
    for (i = width; i < length; i++) {
        add_value = add_value - values[(i-width)*loop_num+j] + values[i*loop_num+j];
        average[i*loop_num+j] = convert_float4(add_value);
    }

    /* Insert zeros to 0th ~ (width-2)th elements for 4 stocks */
    for (i = 0; i < width-1; i++) {
        average[i*loop_num+j] = (float4)(0.0f);
    }

    /* Compute average of (width-1)th ~ (length-1)th elements for 4 stocks */
    for (i = width-1; i < length; i++) {
        average[i*loop_num+j] /= (float4)width;
    }
}

In the kernel, get_global_id() retrieves the global ID, which specifies the instance of the kernel as well as the data set to process; this ID becomes the value of the iterator "j", which is either 0 or 1. Since each compute unit executes an instance of the kernel that performs operations on 4 data sets, 8 data sets are processed over the 2 compute units.

To take the change in the kernel into account, the host code must be changed as shown below in List 5.34.

List 5.34: Host code for calling the kernel in List 5.33

#include <stdlib.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif
#include <stdio.h>

#define NAME_NUM (8)  /* Number of stocks */
#define DATA_NUM (21) /* Number of data to process for each stock */

/* Read Stock data */
int stock_array_many[NAME_NUM*DATA_NUM] = {
    #include "stock_array_many.txt"
};

#define MAX_SOURCE_SIZE (0x100000)

int main(void)
{
    cl_platform_id platform_id = NULL;
    cl_uint ret_num_platforms;
    cl_device_id device_id = NULL;
    cl_uint ret_num_devices;
    cl_context context = NULL;
    cl_command_queue command_queue = NULL;
    cl_mem memobj_in = NULL;
    cl_mem memobj_out = NULL;
    cl_program program = NULL;
    cl_kernel kernel = NULL;
    size_t kernel_code_size;
    char *kernel_src_str;
    float *result;
    cl_int ret;
    FILE *fp;

    int window_num = (int)WINDOW_SIZE;
    int data_num = (int)DATA_NUM;
    int name_num = (int)NAME_NUM;
    int point_num = NAME_NUM * DATA_NUM;
    int i, j;

    /* Allocate space for the result on the host side */
    result = (float *)malloc(point_num * sizeof(float));

    /* Get Platform */
    ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);

    /* Get Device */
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1, &device_id, &ret_num_devices);

    /* Create Context */
    context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);

    /* Create Command Queue */
    command_queue = clCreateCommandQueue(context, device_id, 0, &ret);

    /* Allocate space to read in kernel code */
    kernel_src_str = (char *)malloc(MAX_SOURCE_SIZE);

    /* Read kernel source code */
    fp = fopen("moving_average_vec4_para.cl", "r");
    kernel_code_size = fread(kernel_src_str, 1, MAX_SOURCE_SIZE, fp);
    fclose(fp);

    /* Create Program Object */
    program = clCreateProgramWithSource(context, 1, (const char **)&kernel_src_str,
                                        (const size_t *)&kernel_code_size, &ret);

    /* Compile kernel */
    ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);

    /* Create kernel */
    kernel = clCreateKernel(program, "moving_average_vec4_para", &ret);

    /* Create buffer for the input data on the device */
    memobj_in = clCreateBuffer(context, CL_MEM_READ_WRITE, point_num * sizeof(int), NULL, &ret);

    /* Create buffer for the result on the device */
    memobj_out = clCreateBuffer(context, CL_MEM_READ_WRITE, point_num * sizeof(float), NULL, &ret);

    /* Copy input data to the global memory on the device */
    ret = clEnqueueWriteBuffer(command_queue, memobj_in, CL_TRUE, 0, point_num * sizeof(int),
                               stock_array_many, 0, NULL, NULL);

    /* Set Kernel Arguments */
    ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&memobj_in);
    ret = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&memobj_out);
    ret = clSetKernelArg(kernel, 2, sizeof(int), (void *)&data_num);
    ret = clSetKernelArg(kernel, 3, sizeof(int), (void *)&name_num);
    ret = clSetKernelArg(kernel, 4, sizeof(int), (void *)&window_num);

    /* Set parameters for data parallel processing (work item) */
    cl_uint work_dim = 1;
    size_t global_item_size[3];
    size_t local_item_size[3];

    global_item_size[0] = 2; /* Global number of work items */
    local_item_size[0] = 1;  /* Number of work items per work group */
    /* --> global_item_size[0] / local_item_size[0] becomes 2,
           which indirectly sets the number of work groups to 2 */

    /* Execute Data Parallel Kernel */
    ret = clEnqueueNDRangeKernel(command_queue, kernel, work_dim, NULL,
                                 global_item_size, local_item_size, 0, NULL, NULL);

    /* Copy result from device to host */
    ret = clEnqueueReadBuffer(command_queue, memobj_out, CL_TRUE, 0, point_num * sizeof(float),
                              result, 0, NULL, NULL);

    /* OpenCL Object Finalization */
    ret = clReleaseKernel(kernel);
    ret = clReleaseProgram(program);
    ret = clReleaseMemObject(memobj_in);
    ret = clReleaseMemObject(memobj_out);
    ret = clReleaseCommandQueue(command_queue);
    ret = clReleaseContext(context);

    /* Print results */
    for (i = 0; i < data_num; i++) {
        printf("result[%d]: ", i);
        for (j = 0; j < name_num; j++) {
            printf("%f, ", result[i * NAME_NUM + j]);
        }
        printf("\n");
    }

    /* Deallocate memory on the host */
    free(result);
    free(kernel_src_str);

    return 0;
}

The data parallel processing is performed by the call to clEnqueueNDRangeKernel(). Notice that the number of work groups is never specified explicitly; it is implied by the number of global work items divided by the number of work items per work group. It is also possible to execute multiple work items on one compute unit. For efficient data parallel execution, this number should be equal to the number of processing elements within the compute unit.

Task Parallel Processing

We will now look at a different process commonly performed in stock price analysis, known as the Golden Cross. The Golden Cross is a threshold point where a short-term moving average crosses above a long-term moving average over time, which indicates a bull market on the horizon. This will be implemented in a task parallel manner.
We will now perform task parallel processing to find the Golden Cross between a moving average over 13 weeks and a moving average over 26 weeks. The two moving averages will be performed in a task parallel manner. We will use the kernel code in List 5.29 (moving_average_vec4.cl) for both tasks, varying the 4th argument to 13 and 26 for each of the moving averages to be performed.

As mentioned in Chapter 4, the command queue only allows one task to be executed at a time unless explicitly specified to do otherwise. Unlike data parallel programming, OpenCL does not have an API that specifies an index space to submit a batch of tasks at once; each process needs to be queued explicitly using the API function clEnqueueTask(). To run the two tasks in parallel, the host side must do one of the following:

• Allow out-of-order execution of the queued commands
• Create multiple command queues

Creating multiple command queues will allow for explicit scheduling of the tasks by the programmer. Allowing out-of-order execution in the command queue sends the next element in the queue to an available compute unit, which results in the scheduling of the task parallel processing being handled by the runtime. In this section, we will use the out-of-order method and allow the API to take care of the scheduling.

The out-of-order mode can be set as follows:

command_queue = clCreateCommandQueue(context, device_id, CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE, &ret);

The 3rd argument CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE allows the command queue to send the next queued task to an available compute unit. The host code becomes as shown in List 5.35.

List 5.35: Host code for task parallel processing of 2 moving averages

#include <stdlib.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif
#include <stdio.h>

#define NAME_NUM (4)   /* Number of stocks */
#define DATA_NUM (100) /* Number of data to process for each stock */

/* Read Stock data */
int stock_array_4[NAME_NUM*DATA_NUM] = {
    #include "stock_array_4.txt"
};

/* Moving average width */
#define WINDOW_SIZE_13 (13)
#define WINDOW_SIZE_26 (26)

#define MAX_SOURCE_SIZE (0x100000)

int main(void)
{
    cl_platform_id platform_id = NULL;
    cl_uint ret_num_platforms;
    cl_device_id device_id = NULL;
    cl_uint ret_num_devices;
    cl_context context = NULL;
    cl_command_queue command_queue = NULL;
    cl_mem memobj_in = NULL;
    cl_mem memobj_out13 = NULL;
    cl_mem memobj_out26 = NULL;
    cl_program program = NULL;
    cl_kernel kernel13 = NULL;
    cl_kernel kernel26 = NULL;
    cl_event event13, event26;
    size_t kernel_code_size;
    char *kernel_src_str;
    float *result13;
    float *result26;
    cl_int ret;
    FILE *fp;

    int window_num_13 = (int)WINDOW_SIZE_13;
    int window_num_26 = (int)WINDOW_SIZE_26;
    int data_num = (int)DATA_NUM;
    int name_num = (int)NAME_NUM;
    int point_num = NAME_NUM * DATA_NUM;
    int i, j;

    /* Allocate space for the result on the host side */
    result13 = (float *)malloc(point_num * sizeof(float)); /* average over 13 weeks */
    result26 = (float *)malloc(point_num * sizeof(float)); /* average over 26 weeks */

    /* Get Platform */
    ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);

    /* Get Device */
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1, &device_id, &ret_num_devices);

    /* Create Context */
    context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);

    /* Create Command Queue (out-of-order execution enabled) */
    command_queue = clCreateCommandQueue(context, device_id,
                                         CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE, &ret);

    /* Allocate space to read in kernel code */
    kernel_src_str = (char *)malloc(MAX_SOURCE_SIZE);

    /* Read kernel source code */
    fp = fopen("moving_average_vec4.cl", "r");
    kernel_code_size = fread(kernel_src_str, 1, MAX_SOURCE_SIZE, fp);
    fclose(fp);

    /* Create Program Object */
    program = clCreateProgramWithSource(context, 1, (const char **)&kernel_src_str,
                                        (const size_t *)&kernel_code_size, &ret);

    /* Compile kernel */
    ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);

    /* Create kernels */
    kernel13 = clCreateKernel(program, "moving_average_vec4", &ret); /* 13 weeks */
    kernel26 = clCreateKernel(program, "moving_average_vec4", &ret); /* 26 weeks */

    /* Create buffer for the input data on the device */
    memobj_in = clCreateBuffer(context, CL_MEM_READ_WRITE, point_num * sizeof(int), NULL, &ret);

    /* Create buffers for the results on the device */
    memobj_out13 = clCreateBuffer(context, CL_MEM_READ_WRITE, point_num * sizeof(float),
                                  NULL, &ret); /* 13 weeks */
    memobj_out26 = clCreateBuffer(context, CL_MEM_READ_WRITE, point_num * sizeof(float),
                                  NULL, &ret); /* 26 weeks */

    /* Copy input data to the global memory on the device */
    ret = clEnqueueWriteBuffer(command_queue, memobj_in, CL_TRUE, 0, point_num * sizeof(int),
                               stock_array_4, 0, NULL, NULL);

    /* Set Kernel Arguments (13 weeks) */
    ret = clSetKernelArg(kernel13, 0, sizeof(cl_mem), (void *)&memobj_in);
    ret = clSetKernelArg(kernel13, 1, sizeof(cl_mem), (void *)&memobj_out13);
    ret = clSetKernelArg(kernel13, 2, sizeof(int), (void *)&data_num);
    ret = clSetKernelArg(kernel13, 3, sizeof(int), (void *)&window_num_13);

    /* Submit task to compute the moving average over 13 weeks */
    ret = clEnqueueTask(command_queue, kernel13, 0, NULL, &event13);

    /* Set Kernel Arguments (26 weeks) */
    ret = clSetKernelArg(kernel26, 0, sizeof(cl_mem), (void *)&memobj_in);
    ret = clSetKernelArg(kernel26, 1, sizeof(cl_mem), (void *)&memobj_out26);
    ret = clSetKernelArg(kernel26, 2, sizeof(int), (void *)&data_num);
    ret = clSetKernelArg(kernel26, 3, sizeof(int), (void *)&window_num_26);

    /* Submit task to compute the moving average over 26 weeks */
    ret = clEnqueueTask(command_queue, kernel26, 0, NULL, &event26);

    /* Copy result for the 13 weeks moving average from device to host */
    ret = clEnqueueReadBuffer(command_queue, memobj_out13, CL_TRUE, 0,
                              point_num * sizeof(float), result13, 1, &event13, NULL);

    /* Copy result for the 26 weeks moving average from device to host */
    ret = clEnqueueReadBuffer(command_queue, memobj_out26, CL_TRUE, 0,
                              point_num * sizeof(float), result26, 1, &event26, NULL);

    /* Display results */
    for (i = window_num_26 - 1; i < data_num; i++) {
        printf("result[%d]: ", i);
        for (j = 0; j < name_num; j++) {
            /* Display whether the 13 week average is greater */
            printf("[%d] ", (result13[i*NAME_NUM+j] > result26[i*NAME_NUM+j]));
        }
        printf("\n");
    }

    /* OpenCL Object Finalization */
    ret = clReleaseKernel(kernel13);
    ret = clReleaseKernel(kernel26);
    ret = clReleaseProgram(program);
    ret = clReleaseMemObject(memobj_in);
    ret = clReleaseMemObject(memobj_out13);
    ret = clReleaseMemObject(memobj_out26);
    ret = clReleaseCommandQueue(command_queue);
    ret = clReleaseContext(context);

    /* Deallocate memory on the host */
    free(result13);
    free(result26);
    free(kernel_src_str);

    return 0;
}
The Golden Cross point can be determined by seeing where the displayed result changes from 0 to 1.

One thing to note in this example code is that the copying of a result from the device to the host must not occur until the corresponding computation is finished. Otherwise, the memory copy (clEnqueueReadBuffer) could run while the kernel is still processing, and the copied buffer would contain garbage. Note that "&event13" is passed back from the clEnqueueTask() command. This is known as an event object, which can be used to find out whether the task has finished. The same event object is then passed to the clEnqueueReadBuffer() command, which specifies that the read command must not start executing until the computation of the moving average over 13 weeks is finished. This is done similarly for the moving average over 26 weeks, using "&event26".

In summary, the Enqueue API functions in general:

• Take as input event object(s) that specify which commands must finish before this one can be executed
• Output an event object that can be used to tell another task in the queue to wait

These two mechanisms should be used to schedule the tasks in an efficient manner.
The OpenCL Programming Book
Case Study
This chapter will look at more practical applications than the sample codes that you have seen so far. You should have the knowledge required to write practical applications in OpenCL after reading this chapter.
FFT (Fast Fourier Transform)
The first application we will look at is a program that performs band-pass filtering on an image. We will start by explaining the process known as the Fourier Transform, which is required to perform this image processing.
Fourier Transform
The "Fourier Transform" is a process that takes in samples of data and outputs their frequency content. Its typical applications can be summarized as follows:

• Take in an audio signal and find its frequency content
• Take in image data and find its spatial frequency content

The output of the Fourier Transform contains all of the information in its input, so a process known as the Inverse Fourier Transform can be used to retrieve the original signal. The Fourier Transform is commonly used in many fields; it is required, for example, to implement equalizers, filters, and compressors. The mathematical formula for the Fourier Transform process is shown below.
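One standard form of the definition is the following (the sign and scaling conventions vary between texts):

```latex
F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i \omega t}\, dt
```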
The "i" is the imaginary unit, and ω is the frequency in radians. As you can see from its definition, the Fourier Transform operates on continuous data. However, continuous data contains an infinite number of points with infinite precision. For this processing to be practical, it must be able to process a data set that contains a finite number of elements. Therefore, a process known as the Discrete Fourier Transform (DFT) was developed to approximate the Fourier Transform while operating on a finite data set. The mathematical formula
is shown below.
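For N input samples x(0), ..., x(N-1), a standard form of the DFT (again, conventions vary between texts) is:

```latex
X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-i\, 2\pi k n / N}, \qquad k = 0, 1, \ldots, N-1
```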
This formula now allows processing of digital data with a finite number of samples. The problem with this method, however, is that it is O(N^2): as the number of points increases, the processing time grows quadratically.
Fast Fourier Transform
There exists an optimized implementation of the DFT, called the "Fast Fourier Transform (FFT)". Many different implementations of the FFT exist, so we will concentrate on the most commonly used one, the Cooley-Tukey FFT algorithm. An entire book could be dedicated to explaining the FFT algorithm, so we will only explain the minimal amount required to implement the program. The Cooley-Tukey algorithm takes advantage of the cyclical nature of the Fourier Transform and solves the problem in O(N log N) by breaking up the DFT into smaller DFTs. The limitation of this algorithm is that the number of input samples must be a power of 2. This limitation can be overcome by padding the input signal with zeros, or by using the algorithm in conjunction with another FFT algorithm that does not have this requirement. For simplicity, we will only use input signals whose length is a power of 2. The core computation in this FFT algorithm is what is known as the "Butterfly Operation". The operation is performed on a pair of data samples at a time; its signal flow graph is shown in Figure 6.2 below. The operation gets its name from the fact that each segment of this flow graph looks like a butterfly.
Figure 6.2: Butterfly Operation
The "W" seen in the signal flow graph is defined as below.
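Assuming the standard Cooley-Tukey convention, this is the complex "twiddle factor":

```latex
W_N^{k} = e^{-i\, 2\pi k / N} = \cos\!\left(\frac{2\pi k}{N}\right) - i \sin\!\left(\frac{2\pi k}{N}\right)
```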
Looking at Figure 6.2, you may notice that the indices of the input are in a seemingly random order. We will not go into the details of why this is done, except to note that it is an optimization method: what is known as "bit-reversal" is performed on the indices of the input. The input order in binary is (000, 100, 010, 110, 001, 101, 011, 111). Notice that if you reverse the bit ordering of (100), you get (001). In other words, the new input indices are in numerical order once the bits of each index are reversed.
2-D FFT
As the previous section shows, the basic FFT algorithm operates on 1-dimensional data. In order to take the FFT of an image, an FFT is taken row-wise and then column-wise. Note that we are not dealing with time any more, but with spatial location. When a 2-D FFT is performed, the FFT is first taken for each row, the result is transposed, and the FFT is again taken row-wise on this result. This is done for faster memory access, as the data is stored in row-major
4a. the edges will be blurred. If transposition is not performed. Figure 6.3: 2-D FFT (a) Original Image (b) The result of taking a 2-D FFT Bandpass Filtering and Inverse Fourier Transform As stated earlier.4: Edge Filter and Low-pass Filter (a) Edged Filter (b) Low-pass Filter 167 . Using this characteristic. If the high-frequency components are cut instead. the signal that has been transformed to the frequency domain via Fourier Transform can be transformed back using the Inverse Fourier Transform. Figure 6.B01-02632560-294 The OpenCL Programming Book form.4b. which can greatly decrease speed of performance as the size of the image increases.3(a). This is known as an "Edge Filter". which leaves the part of the image where a sudden change occurs. For example. and its result is shown in Figure 6.3(b) shows the result of taking a 2-D FFT of Figure 6. the low-frequency components can be cut. resulting in an image shown in Figure 6. Figure 6. it is possible to perform frequency-based filtering while in the frequency domain and transform back to the original domain. This is known as a "low-pass filter". interleaved accessing of memory occurs.
The Inverse Discrete Fourier Transform uses essentially the same formula as the DFT. The only differences are:

• The result must be normalized by the number of samples
• The term within the exp() is positive

The rest of the procedure is the same. Therefore, the same kernel can be used to perform either operation.

Overall Program Flow-chart

The overall program flow-chart is shown in Figure 6.5 below.

Figure 6.5: Program flow-chart
Each process is dependent on the previous process, so each of the steps must be followed in sequence. A kernel will be written for each of the processes in Figure 6.5.
Source Code Walkthrough
We will first show the entire source code for this program. List 6.1 is the kernel code, and List 6.2 is the host code.
List 6.1: Kernel Code
#define PI 3.14159265358979323846
#define PI_2 1.57079632679489661923

__kernel void spinFact(__global float2* w, int n)
{
    unsigned int i = get_global_id(0);
    float2 angle = (float2)(2*i*PI/(float)n, (2*i*PI/(float)n)+PI_2);

    w[i] = cos(angle);
}

__kernel void bitReverse(__global float2 *dst, __global float2 *src, int m, int n)
{
    unsigned int gid = get_global_id(0);
    unsigned int nid = get_global_id(1);

    unsigned int j = gid;
    j = (j & 0x55555555) << 1 | (j & 0xAAAAAAAA) >> 1;
    j = (j & 0x33333333) << 2 | (j & 0xCCCCCCCC) >> 2;
    j = (j & 0x0F0F0F0F) << 4 | (j & 0xF0F0F0F0) >> 4;
    j = (j & 0x00FF00FF) << 8 | (j & 0xFF00FF00) >> 8;
    j = (j & 0x0000FFFF) << 16 | (j & 0xFFFF0000) >> 16;
    j >>= (32-m);

    dst[nid*n+j] = src[nid*n+gid];
}

__kernel void norm(__global float2 *x, int n)
{
    unsigned int gid = get_global_id(0);
    unsigned int nid = get_global_id(1);

    x[nid*n+gid] = x[nid*n+gid] / (float2)((float)n, (float)n);
}

__kernel void butterfly(__global float2 *x, __global float2* w, int m, int n, int iter, uint flag)
{
    unsigned int gid = get_global_id(0);
    unsigned int nid = get_global_id(1);
    int butterflySize = 1 << (iter-1);
    int butterflyGrpDist = 1 << iter;
    int butterflyGrpNum = n >> iter;
    int butterflyGrpBase = (gid >> (iter-1))*(butterflyGrpDist);
    int butterflyGrpOffset = gid & (butterflySize-1);

    int a = nid * n + butterflyGrpBase + butterflyGrpOffset;
    int b = a + butterflySize;

    int l = butterflyGrpNum * butterflyGrpOffset;

    float2 xa, xb, xbxx, xbyy, wab, wayx, wbyx, resa, resb;

    xa = x[a];
    xb = x[b];
    xbxx = xb.xx;
    xbyy = xb.yy;

    wab = as_float2(as_uint2(w[l]) ^ (uint2)(0x0, flag));
    wayx = as_float2(as_uint2(wab.yx) ^ (uint2)(0x80000000, 0x0));
    wbyx = as_float2(as_uint2(wab.yx) ^ (uint2)(0x0, 0x80000000));

    resa = xa + xbxx*wab + xbyy*wayx;
    resb = xa - xbxx*wab + xbyy*wbyx;

    x[a] = resa;
    x[b] = resb;
}
__kernel void transpose(__global float2 *dst, __global float2* src, int n)
{
    unsigned int xgid = get_global_id(0);
    unsigned int ygid = get_global_id(1);

    unsigned int iid = ygid * n + xgid;
    unsigned int oid = xgid * n + ygid;
    dst[oid] = src[iid];
}

__kernel void highPassFilter(__global float2* image, int n, int radius)
{
    unsigned int xgid = get_global_id(0);
    unsigned int ygid = get_global_id(1);

    int2 n_2 = (int2)(n>>1, n>>1);
    int2 mask = (int2)(n-1, n-1);
    int2 gid = ((int2)(xgid, ygid) + n_2) & mask;

    int2 diff = n_2 - gid;
    int2 diff2 = diff * diff;
    int dist2 = diff2.x + diff2.y;

    int2 window;

    if (dist2 < radius*radius) {
        window = (int2)(0L, 0L);
    } else {
        window = (int2)(-1L, -1L);
    }

    image[ygid*n+xgid] = as_float2(as_int2(image[ygid*n+xgid]) & window);
}
List 6.2: Host Code
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

#include "pgm.h"

#define PI 3.14159265358979

#define MAX_SOURCE_SIZE (0x100000)

#define AMP(a, b) (sqrt((a)*(a)+(b)*(b)))

cl_device_id device_id = NULL;
cl_context context = NULL;
cl_command_queue queue = NULL;
cl_program program = NULL;

enum Mode {
    forward = 0,
    inverse = 1
};

int setWorkSize(size_t* gws, size_t* lws, cl_int x, cl_int y)
{
    switch(y) {
        case 1:
            gws[0] = x;
            gws[1] = 1;
            lws[0] = 1;
            lws[1] = 1;
            break;
        default:
            gws[0] = x;
            gws[1] = y;
            lws[0] = 1;
            lws[1] = 1;
            break;
    }
    return 0;
}

int fftCore(cl_mem dst, cl_mem src, cl_mem spin, cl_int m, enum Mode direction)
{
    cl_int ret;

    cl_int iter;
    cl_uint flag;

    cl_int n = 1 << m;

    cl_kernel brev = NULL;
    cl_kernel bfly = NULL;
    cl_kernel norm = NULL;

    size_t gws[2];
    size_t lws[2];

    cl_event kernelDone;

    switch (direction) {
        case forward: flag = 0x00000000; break;
        case inverse: flag = 0x80000000; break;
    }

    brev = clCreateKernel(program, "bitReverse", &ret);
    bfly = clCreateKernel(program, "butterfly", &ret);
    norm = clCreateKernel(program, "norm", &ret);

    ret = clSetKernelArg(brev, 0, sizeof(cl_mem), (void *)&dst);
    ret = clSetKernelArg(brev, 1, sizeof(cl_mem), (void *)&src);
    ret = clSetKernelArg(brev, 2, sizeof(cl_int), (void *)&m);
    ret = clSetKernelArg(brev, 3, sizeof(cl_int), (void *)&n);

    /* Reverse bit ordering */
    setWorkSize(gws, lws, n, n);
    ret = clEnqueueNDRangeKernel(queue, brev, 2, NULL, gws, lws, 0, NULL, NULL);

    /* Perform Butterfly Operations */
    setWorkSize(gws, lws, n/2, n);
    for (iter = 1; iter <= m; iter++) {
        ret = clSetKernelArg(bfly, 0, sizeof(cl_mem), (void *)&dst);
        ret = clSetKernelArg(bfly, 1, sizeof(cl_mem), (void *)&spin);
        ret = clSetKernelArg(bfly, 2, sizeof(cl_int), (void *)&m);
        ret = clSetKernelArg(bfly, 3, sizeof(cl_int), (void *)&n);
        ret = clSetKernelArg(bfly, 4, sizeof(cl_int), (void *)&iter);
        ret = clSetKernelArg(bfly, 5, sizeof(cl_uint), (void *)&flag);
        ret = clEnqueueNDRangeKernel(queue, bfly, 2, NULL, gws, lws, 0, NULL, &kernelDone);
        ret = clWaitForEvents(1, &kernelDone);
    }

    if (direction == inverse) {
        setWorkSize(gws, lws, n, n);
        ret = clSetKernelArg(norm, 0, sizeof(cl_mem), (void *)&dst);
        ret = clSetKernelArg(norm, 1, sizeof(cl_int), (void *)&n);
        ret = clEnqueueNDRangeKernel(queue, norm, 2, NULL, gws, lws, 0, NULL, &kernelDone);
        ret = clWaitForEvents(1, &kernelDone);
    }

    ret = clReleaseKernel(bfly);
    ret = clReleaseKernel(brev);
    ret = clReleaseKernel(norm);

    return 0;
}

int main()
{
    cl_mem xmobj = NULL;
    cl_mem rmobj = NULL;
    cl_mem wmobj = NULL;
    cl_kernel sfac = NULL;
    cl_kernel trns = NULL;
    cl_kernel hpfl = NULL;

    cl_platform_id platform_id = NULL;
    cl_uint ret_num_platforms;
    cl_uint ret_num_devices;

    cl_float2 *xm;
    cl_float2 *rm;
    cl_float2 *wm;

    cl_int n;
    cl_int m;

    size_t gws[2];
    size_t lws[2];

    pgm_t ipgm;
    pgm_t opgm;

    FILE *fp;
    const char fileName[] = "./fft.cl";
    size_t source_size;
    char *source_str;
    cl_int i, j;
    cl_int ret;
    /* Load kernel source code */
    fp = fopen(fileName, "r");
    if (!fp) {
        fprintf(stderr, "Failed to load kernel.\n");
        exit(1);
    }
    source_str = (char *)malloc(MAX_SOURCE_SIZE);
    source_size = fread(source_str, 1, MAX_SOURCE_SIZE, fp);
    fclose(fp);

    /* Read image */
    readPGM(&ipgm, "lena.pgm");

    n = ipgm.width;
    m = (cl_int)(log((double)n)/log(2.0));

    xm = (cl_float2 *)malloc(n * n * sizeof(cl_float2));
    rm = (cl_float2 *)malloc(n * n * sizeof(cl_float2));
    wm = (cl_float2 *)malloc(n / 2 * sizeof(cl_float2));

    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++) {
            ((float*)xm)[(2*n*j)+2*i+0] = (float)ipgm.buf[n*j+i];
            ((float*)xm)[(2*n*j)+2*i+1] = (float)0;
        }
    }

    /* Get platform/device */
    ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1, &device_id, &ret_num_devices);

    /* Create OpenCL context */
    context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);

    /* Create Command queue */
    queue = clCreateCommandQueue(context, device_id, 0, &ret);

    /* Create Buffer Objects */
    xmobj = clCreateBuffer(context, CL_MEM_READ_WRITE, n*n*sizeof(cl_float2), NULL, &ret);
    rmobj = clCreateBuffer(context, CL_MEM_READ_WRITE, n*n*sizeof(cl_float2), NULL, &ret);
    wmobj = clCreateBuffer(context, CL_MEM_READ_WRITE, (n/2)*sizeof(cl_float2), NULL, &ret);

    /* Transfer data to memory buffer */
    ret = clEnqueueWriteBuffer(queue, xmobj, CL_TRUE, 0, n*n*sizeof(cl_float2), xm, 0, NULL, NULL);

    /* Create kernel program from source */
    program = clCreateProgramWithSource(context, 1, (const char **)&source_str, (const size_t *)&source_size, &ret);

    /* Build kernel program */
    ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);

    /* Create OpenCL Kernel */
    sfac = clCreateKernel(program, "spinFact", &ret);
    trns = clCreateKernel(program, "transpose", &ret);
    hpfl = clCreateKernel(program, "highPassFilter", &ret);

    /* Create spin factor */
    ret = clSetKernelArg(sfac, 0, sizeof(cl_mem), (void *)&wmobj);
    ret = clSetKernelArg(sfac, 1, sizeof(cl_int), (void *)&n);
    setWorkSize(gws, lws, n/2, 1);
    ret = clEnqueueNDRangeKernel(queue, sfac, 1, NULL, gws, lws, 0, NULL, NULL);

    /* Butterfly Operation */
    fftCore(rmobj, xmobj, wmobj, m, forward);

    /* Transpose matrix */
    ret = clSetKernelArg(trns, 0, sizeof(cl_mem), (void *)&xmobj);
    ret = clSetKernelArg(trns, 1, sizeof(cl_mem), (void *)&rmobj);
    ret = clSetKernelArg(trns, 2, sizeof(cl_int), (void *)&n);
    setWorkSize(gws, lws, n, n);
    ret = clEnqueueNDRangeKernel(queue, trns, 2, NULL, gws, lws, 0, NULL, NULL);

    /* Butterfly Operation */
    fftCore(rmobj, xmobj, wmobj, m, forward);

    /* Apply high-pass filter */
    cl_int radius = n/8;
    ret = clSetKernelArg(hpfl, 0, sizeof(cl_mem), (void *)&rmobj);
    ret = clSetKernelArg(hpfl, 1, sizeof(cl_int), (void *)&n);
    ret = clSetKernelArg(hpfl, 2, sizeof(cl_int), (void *)&radius);
    setWorkSize(gws, lws, n, n);
    ret = clEnqueueNDRangeKernel(queue, hpfl, 2, NULL, gws, lws, 0, NULL, NULL);

    /* Inverse FFT */

    /* Butterfly Operation */
    fftCore(xmobj, rmobj, wmobj, m, inverse);

    /* Transpose matrix */
    ret = clSetKernelArg(trns, 0, sizeof(cl_mem), (void *)&rmobj);
    ret = clSetKernelArg(trns, 1, sizeof(cl_mem), (void *)&xmobj);
    setWorkSize(gws, lws, n, n);
    ret = clEnqueueNDRangeKernel(queue, trns, 2, NULL, gws, lws, 0, NULL, NULL);

    /* Butterfly Operation */
    fftCore(xmobj, rmobj, wmobj, m, inverse);

    /* Read data from memory buffer */
    ret = clEnqueueReadBuffer(queue, xmobj, CL_TRUE, 0, n*n*sizeof(cl_float2), xm, 0, NULL, NULL);
    float* ampd;
    ampd = (float*)malloc(n*n*sizeof(float));
    for (i=0; i < n; i++) {
        for (j=0; j < n; j++) {
            ampd[n*((i))+((j))] = (AMP(((float*)xm)[(2*n*i)+2*j], ((float*)xm)[(2*n*i)+2*j+1]));
        }
    }
    opgm.width = n;
    opgm.height = n;
    normalizeF2PGM(&opgm, ampd);
    free(ampd);

    /* Write out image */
    writePGM(&opgm, "output.pgm");

    /* Finalizations */
    ret = clFlush(queue);
    ret = clFinish(queue);
    ret = clReleaseKernel(hpfl);
    ret = clReleaseKernel(trns);
    ret = clReleaseKernel(sfac);
    ret = clReleaseProgram(program);
    ret = clReleaseMemObject(xmobj);
    ret = clReleaseMemObject(rmobj);
    ret = clReleaseMemObject(wmobj);
    ret = clReleaseCommandQueue(queue);
    ret = clReleaseContext(context);

    destroyPGM(&ipgm);
    destroyPGM(&opgm);

    free(source_str);
    free(wm);
    free(rm);
    free(xm);
    return 0;
}

We will start by taking a look at each kernel. The code in List 6.3 is used to pre-compute the value of the spin factor "w", which gets used repeatedly in the butterfly operation.

List 6.3: Create Spin Factor
004: __kernel void spinFact(__global float2* w, int n)
005: {
006:    unsigned int i = get_global_id(0);
007:
008:    float2 angle = (float2)(2*i*PI/(float)n, (2*i*PI/(float)n)+PI_2);
009:    w[i] = cos(angle);
010: }

The "w" is computed for radian angles that are multiples of (2π/n), which is basically the real and imaginary components on the unit circle, using cos() and -sin(). Note that the shift by PI/2 in line 8 allows the cosine function to compute -sin(). This is done to utilize the SIMD unit on the OpenCL device, such as the GPU, if it has one.

Figure 6.6: Spin factor for n=8

The pre-computing of the values for "w" creates what is known as a "lookup table", which stores values to be used repeatedly on the memory. On some devices, it may prove to
be faster if the same operation is performed each time, as it may be more expensive to access the memory.

List 6.4 shows the kernel code for reordering the input data such that it is in the order of the bit-reversed index.

List 6.4: Bit reversing
012: __kernel void bitReverse(__global float2 *dst, __global float2 *src, int m, int n)
013: {
014:    unsigned int gid = get_global_id(0);
015:    unsigned int nid = get_global_id(1);
016:
017:    unsigned int j = gid;
018:    j = (j & 0x55555555) << 1 | (j & 0xAAAAAAAA) >> 1;
019:    j = (j & 0x33333333) << 2 | (j & 0xCCCCCCCC) >> 2;
020:    j = (j & 0x0F0F0F0F) << 4 | (j & 0xF0F0F0F0) >> 4;
021:    j = (j & 0x00FF00FF) << 8 | (j & 0xFF00FF00) >> 8;
022:    j = (j & 0x0000FFFF) << 16 | (j & 0xFFFF0000) >> 16;
023:
024:    j >>= (32-m);
025:
026:    dst[nid*n+j] = src[nid*n+gid];
027: }

Lines 18~22 perform the bit reversing of the inputs. The indices are correctly shifted in line 24, as the max index would otherwise be 2^32-1. Also, note that a separate memory space must be allocated for the output on the global memory. This is done since the coherence of the data cannot be guaranteed if the input gets overwritten each time after processing. These types of functions are known as out-of-place functions. An alternative solution is shown in List 6.5, where each work item stores the output locally until all work items are finished, at which point the locally stored data is written to the input address space.

List 6.5: Bit reversing (Using synchronization)
012: __kernel void bitReverse(__global float2 *x, int m, int n)
013: {
014:    unsigned int gid = get_global_id(0);
015:    unsigned int nid = get_global_id(1);
016:
017:    unsigned int j = gid;
018:    j = (j & 0x55555555) << 1 | (j & 0xAAAAAAAA) >> 1;
019:    j = (j & 0x33333333) << 2 | (j & 0xCCCCCCCC) >> 2;
020:    j = (j & 0x0F0F0F0F) << 4 | (j & 0xF0F0F0F0) >> 4;
021:    j = (j & 0x00FF00FF) << 8 | (j & 0xFF00FF00) >> 8;
022:    j = (j & 0x0000FFFF) << 16 | (j & 0xFFFF0000) >> 16;
023:
024:    j >>= (32-m);
025:
026:    float2 val = x[nid*n+gid];
027:
028:    SYNC_ALL_THREAD /* Synchronize all work-items */
029:
030:    x[nid*n+j] = val;
031: }

However, OpenCL does not currently require this synchronization capability in its specification. It may be supported in the future, but depending on the device, these types of synchronization can potentially decrease performance, especially when processing large amounts of data. If there is enough space on the device, the version in List 6.4 should be used.

List 6.6: Normalizing by the number of samples
029: __kernel void norm(__global float2 *x, int n)
030: {
031:    unsigned int gid = get_global_id(0);
032:    unsigned int nid = get_global_id(1);
033:
034:    x[nid*n+gid] = x[nid*n+gid] / (float2)((float)n, (float)n);
035: }

The code in List 6.6 should be self-explanatory. It basically just divides the input by the value of "n". The operation is performed on a float2 type. Since the value of "n" is limited to a power of 2, you may be tempted to use shifting, but division by shifting is only possible for integer types. Shifting a float value will result in unwanted results.
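Before moving on, the mask-and-shift reversal of List 6.4 (lines 18~24) can be checked on the host against a naive bit-by-bit loop. The following is a small C sketch of our own; the function names are ours, not part of the sample code:

```c
#include <assert.h>
#include <stdint.h>

/* Mask-and-shift bit reversal, as in lines 18-24 of List 6.4:
   reverse all 32 bits, then keep only the m significant ones. */
static uint32_t reverse_masked(uint32_t j, int m)
{
    j = (j & 0x55555555u) << 1  | (j & 0xAAAAAAAAu) >> 1;
    j = (j & 0x33333333u) << 2  | (j & 0xCCCCCCCCu) >> 2;
    j = (j & 0x0F0F0F0Fu) << 4  | (j & 0xF0F0F0F0u) >> 4;
    j = (j & 0x00FF00FFu) << 8  | (j & 0xFF00FF00u) >> 8;
    j = (j & 0x0000FFFFu) << 16 | (j & 0xFFFF0000u) >> 16;
    return j >> (32 - m);
}

/* Naive reference: reverse the low m bits one at a time */
static uint32_t reverse_naive(uint32_t j, int m)
{
    uint32_t r = 0;
    for (int b = 0; b < m; b++)
        r |= ((j >> b) & 1u) << (m - 1 - b);
    return r;
}
```

For a 512-point row (m = 9), both functions map index 1 to 256, index 6 (binary 110) to 3 (binary 011), and so on, which is exactly the reordering the kernel performs per row.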
List 6.7: Butterfly operation
037: __kernel void butterfly(__global float2 *x, __global float2* w, int m, int n, int iter, uint flag)
038: {
039:    unsigned int gid = get_global_id(0);
040:    unsigned int nid = get_global_id(1);
041:
042:    int butterflySize = 1 << (iter-1);
043:    int butterflyGrpDist = 1 << iter;
044:    int butterflyGrpNum = n >> iter;
045:    int butterflyGrpBase = (gid >> (iter-1))*(butterflyGrpDist);
046:    int butterflyGrpOffset = gid & (butterflySize-1);
047:
048:    int a = nid * n + butterflyGrpBase + butterflyGrpOffset;
049:    int b = a + butterflySize;
050:
051:    int l = butterflyGrpNum * butterflyGrpOffset;
052:
053:    float2 xa, xb, xbxx, xbyy, wab, wayx, wbyx, resa, resb;
054:
055:    xa = x[a];
056:    xb = x[b];
057:    xbxx = xb.xx;
058:    xbyy = xb.yy;
059:
060:    wab = as_float2(as_uint2(w[l]) ^ (uint2)(0x0, flag));
061:    wayx = as_float2(as_uint2(wab.yx) ^ (uint2)(0x80000000, 0x0));
062:    wbyx = as_float2(as_uint2(wab.yx) ^ (uint2)(0x0, 0x80000000));
063:
064:    resa = xa + xbxx*wab + xbyy*wayx;
065:    resb = xa - xbxx*wab + xbyy*wbyx;
066:
067:    x[a] = resa;
068:    x[b] = resb;
069: }

The kernel for the butterfly operation, which performs the core of the FFT algorithm, is shown in
List 6.7 above. Each work item performs one butterfly operation for a pair of inputs. For our FFT implementation, (n * n)/2 work items are required.

First, we need to determine the indices to read from and to write to. Refer back to the signal flow graph for the butterfly operation in Figure 6.2. Looking at the signal flow graph, we see that the required inputs are the two input data and the spin factor. Therefore, we need to know how the butterfly operation is grouped. As the graph shows, the crossed signal paths occur within independent groups. In the first iteration, the number of groups is the same as the number of butterfly operations to perform, but in the 2nd iteration, it is split up into 2 groups. This value is stored in the variable butterflyGrpNum.

Next, the variable "butterflySize" represents the difference in the indices to the data for the butterfly operation to be performed on. The "butterflySize" is 1 for the first iteration, and this value is doubled for each iteration. The differences of the indices between the groups are required as well, which is stored in the variable butterflyGrpDist.

The intermediate values required in mapping the "gid" to the input and output indices are computed in lines 42-46. The butterflyGrpBase variable contains the index to the first butterfly operation within the group, and butterflyGrpOffset is the offset within the group. These are determined using the following formulas:

butterflyGrpBase = (gid / butterflySize) * butterflyGrpDist;
butterflyGrpOffset = gid % butterflySize;

However, since we are assuming the value of n to be a power of 2, we can replace the division and the mod operation with bit shifts. Now the indices to perform the butterfly operation and the spin factor can be found, all of which can be derived from the "gid".

We will now go into the actual calculation. Lines 55 ~ 65 are the core of the butterfly operation. Lines 60 ~ 62 take the sign of the spin factor into account to take care of the computation for the real and imaginary components, as well as the FFT and IFFT.
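The shift-and-mask forms used in the kernel can be checked against the division and mod forms on the host. This is a small sketch of ours (not part of the sample), with butterflySize = 1 << (iter-1) and butterflyGrpDist = 1 << iter:

```c
#include <assert.h>

/* Group base index: division form vs. the shift form used in List 6.7 */
static int grp_base_div(int gid, int iter)
{
    int size = 1 << (iter - 1);
    int dist = 1 << iter;
    return (gid / size) * dist;
}

static int grp_base_shift(int gid, int iter)
{
    return (gid >> (iter - 1)) << iter;
}

/* Offset within the group: mod form vs. the mask form used in List 6.7 */
static int grp_offset_mod(int gid, int iter)
{
    return gid % (1 << (iter - 1));
}

static int grp_offset_mask(int gid, int iter)
{
    return gid & ((1 << (iter - 1)) - 1);
}
```

For example, with gid = 5 and iter = 2 (butterflySize = 2, butterflyGrpDist = 4), both forms give a group base of 8 and an offset of 1, so the pair of inputs is read from a = 9 and b = a + 2 = 11 within the row. The equivalence only holds because butterflySize is a power of 2.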
Lines 64 ~ 65 are the actual operations, and lines 67 ~ 68 store the processed data.

List 6.8 shows a basic implementation of the matrix transpose algorithm. We will not go into optimization of this algorithm, but this process can be sped up significantly by using local memory and blocking.

List 6.8: Matrix Transpose
071: __kernel void transpose(__global float2 *dst, __global float2* src, int n)
072: {
073:    unsigned int xgid = get_global_id(0);
074:    unsigned int ygid = get_global_id(1);
075:
076:    unsigned int iid = ygid * n + xgid;
077:    unsigned int oid = xgid * n + ygid;
078:
079:    dst[oid] = src[iid];
080: }

List 6.9: Filtering
082: __kernel void highPassFilter(__global float2* image, int n, int radius)
083: {
084:    unsigned int xgid = get_global_id(0);
085:    unsigned int ygid = get_global_id(1);
086:
087:    int2 n_2 = (int2)(n>>1, n>>1);
088:    int2 mask = (int2)(n-1, n-1);
089:
090:    int2 gid = ((int2)(xgid, ygid) + n_2) & mask;
091:
092:    int2 diff = n_2 - gid;
093:    int2 diff2 = diff * diff;
094:    int dist2 = diff2.x + diff2.y;
095:
096:    int2 window;
097:
098:    if (dist2 < radius*radius) {
099:        window = (int2)(0L, 0L);
100:    } else {
101:        window = (int2)(-1L, -1L);
102:    }
103:
104:    image[ygid*n+xgid] = as_float2(as_int2(image[ygid*n+xgid]) & window);
105: }

List 6.9 is a kernel that filters an image based on frequency. As the kernel name suggests, the filter passes high frequencies and gets rid of the lower frequencies. The spatial frequency obtained from the 2-D FFT shows the DC (direct current) component on the 4 edges of the XY coordinate system. A high pass filter can be created by cutting the frequency within a specified radius that includes these DC components. The opposite can be performed to create a low pass filter. In general, a high pass filter extracts the edges, and a low pass filter blurs the image.

Next, we will go over the host program. Most of what is being done is the same as for the OpenCL programs that we have seen so far. The main differences are:

• Multiple kernels are implemented
• Multiple memory objects are used, requiring appropriate data flow construction

Note that when a kernel is called repeatedly, clSetKernelArg() only needs to be called again when an argument value changes from the previous time the kernel was called. For example, consider the butterfly operations being called in line 94 on the host side.

094:    for (iter=1; iter <= m; iter++){
095:        ret = clSetKernelArg(bfly, 4, sizeof(cl_int), (void *)&iter);
096:        ret = clEnqueueNDRangeKernel(queue, bfly, 2, NULL, gws, lws, 0, NULL,
097:                                     &kernelDone);
098:        ret = clWaitForEvents(1, &kernelDone);
099:    }
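The loop above runs once per FFT stage, i.e. m = log_2(n) times, where the host computed m from the image width using log()/log(2.0). For a power-of-two n, the same value can be obtained with integer shifts; the helper below is a sketch of our own, not part of the sample:

```c
#include <assert.h>

/* Number of butterfly iterations for an n-point FFT (n a power of 2),
   i.e. m = log2(n), computed without floating point */
static int ilog2(int n)
{
    int m = 0;
    while (n > 1) {
        n >>= 1;
        m++;
    }
    return m;
}
```

For the 512 x 512 image used here, ilog2(512) = 9, so the butterfly kernel is enqueued 9 times per 1-D FFT pass.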
This butterfly operation kernel is executed log_2(n) times, for both the FFT and the IFFT. The kernel must have its iteration number passed in as an argument, and the kernel uses this value to compute which data to perform the butterfly operation on. Therefore, the arguments to the kernel must be appropriately set using clSetKernelArg() for each call to the kernel.

The kernels in this program are called using clEnqueueNDRangeKernel() to operate on the data in a data parallel manner. When this is called, the number of work items must be set beforehand, whose values differ depending on the kernel used. To reduce careless errors and to make the code more readable, a setWorkSize() function is implemented in this program:

029: int setWorkSize(size_t* gws, size_t* lws, cl_int x, cl_int y)

The types of kernels used can be classified as either in-place or out-of-place based on the data flow. The in-place kernel uses the same memory object for both input and output, where the output data is written over the address space of the input data. The out-of-place kernel uses separate memory objects for input and output. The problem with these types of kernels is that they would require too much memory space on the device, so it would be wise to use as few memory objects as possible.

In this program, the data transfers between kernels occur via the memory objects. For this program, a memory object is required to store the pre-computed values of the spin factors. Since an out-of-place operation such as the matrix transposition exists, at least 2 additional memory objects are required, and a data transfer must occur between memory objects. When calling the out-of-place transpose operation, which is called twice, the pointer to the memory object must be reversed the second time around. In fact, only these 3 memory objects are required for this program to run without errors due to race conditions.

The program contains a set of procedures that are repeated numerous times, namely the bit reversal and the butterfly operation, for both the FFT and the IFFT. These procedures are all grouped into one function, fftCore():

049: int fftCore(cl_mem dst, cl_mem src, cl_mem spin, cl_int m, enum Mode direction)
This function takes memory objects for the input, output, and the spin factor, the sample number normalized by the log of 2, and the FFT direction. This function can be used for 1-D FFT if the arguments are appropriately set.

Lastly, we will briefly explain the outputting of the processed data to an image. The format used for the image is PGM, which is a gray-scale format that requires 8 bits for each pixel. Since each pixel of the PGM is stored as unsigned char, a conversion would need to be performed to represent the pixel information in 8 bits. We will add a new file, called "pgm.h", which will define numerous functions and structs to be used in our program. The full pgm.h file is shown below in List 6.10.

First is the pgm_t struct:

typedef struct _pgm_t {
    int width;
    int height;
    unsigned char *buf;
} pgm_t;

The data structure is quite simple and intuitive to use. The width and the height get the size of the image, and buf gets the image data. This can be read in or written to a file using the following functions:

readPGM(pgm_t* pgm, const char* filename);
writePGM(pgm_t* pgm, const char* filename);
normalizePGM(pgm_t* pgm, double* data);

List 6.10: pgm.h
#ifndef _PGM_H_
#define _PGM_H_

#include <math.h>
#include <string.h>
#define PGM_MAGIC "P5"

#ifdef _WIN32
#define STRTOK_R(ptr, del, saveptr) strtok_s(ptr, del, saveptr)
#else
#define STRTOK_R(ptr, del, saveptr) strtok_r(ptr, del, saveptr)
#endif

typedef struct _pgm_t {
    int width;
    int height;
    unsigned char *buf;
} pgm_t;

int readPGM(pgm_t* pgm, const char* filename)
{
    char *token, *pc, *saveptr;
    char *buf;
    size_t bufsize;
    char del[] = " \t\n";
    unsigned char *dot;
    long begin, end;
    int filesize;
    int i, w, h, luma, pixs;
    FILE* fp;

    if ((fp = fopen(filename, "rb")) == NULL) {
        fprintf(stderr, "Failed to open file\n");
        return -1;
    }

    fseek(fp, 0, SEEK_SET);
    begin = ftell(fp);
    fseek(fp, 0, SEEK_END);
    end = ftell(fp);
    filesize = (int)(end - begin);

    fseek(fp, 0, SEEK_SET);
    buf = (char*)malloc(filesize * sizeof(char));
    bufsize = fread(buf, filesize * sizeof(char), 1, fp);
    fclose(fp);

    token = (char *)STRTOK_R(buf, del, &saveptr);
    if (strncmp(token, PGM_MAGIC, 2) != 0) {
        return -1;
    }

    token = (char *)STRTOK_R(NULL, del, &saveptr);
    if (token[0] == '#' ) {
        token = (char *)STRTOK_R(NULL, "\n", &saveptr);
        token = (char *)STRTOK_R(NULL, del, &saveptr);
    }

    w = strtoul(token, &pc, 10);
    token = (char *)STRTOK_R(NULL, del, &saveptr);
    h = strtoul(token, &pc, 10);
    token = (char *)STRTOK_R(NULL, del, &saveptr);
    luma = strtoul(token, &pc, 10);

    token = pc + 1;
    pixs = w * h;

    pgm->buf = (unsigned char *)malloc(pixs * sizeof(unsigned char));

    dot = pgm->buf;

    for (i = 0; i < pixs; i++, dot++) {
        *dot = *token++;
    }
    pgm->width = w;
    pgm->height = h;

    return 0;
}

int writePGM(pgm_t* pgm, const char* filename)
{
    int i, w, h, pixs;
    FILE* fp;
    unsigned char* dot;

    w = pgm->width;
    h = pgm->height;
    pixs = w * h;

    if ((fp = fopen(filename, "wb+")) == NULL) {
        fprintf(stderr, "Failed to open file\n");
        return -1;
    }

    fprintf(fp, "%s\n%d %d\n255\n", PGM_MAGIC, w, h);

    dot = pgm->buf;

    for (i = 0; i < pixs; i++, dot++) {
        putc((unsigned char)*dot, fp);
    }

    fclose(fp);

    return 0;
}

int normalizeD2PGM(pgm_t* pgm, double* x)
{
    int i, j, w, h;

    w = pgm->width;
    h = pgm->height;

    pgm->buf = (unsigned char*)malloc(w * h * sizeof(unsigned char));

    double min = 0;
    double max = 0;
    for (i = 0; i < h; i++) {
        for (j = 0; j < w; j++) {
            if (max < x[i*w+j]) max = x[i*w+j];
            if (min > x[i*w+j]) min = x[i*w+j];
        }
    }

    for (i = 0; i < h; i++) {
        for (j = 0; j < w; j++) {
            if ((max - min) != 0)
                pgm->buf[i*w+j] = (unsigned char)(255*(x[i*w+j]-min)/(max-min));
            else
                pgm->buf[i*w+j] = 0;
        }
    }

    return 0;
}

int normalizeF2PGM(pgm_t* pgm, float* x)
{
    int i, j, w, h;

    w = pgm->width;
    h = pgm->height;

    pgm->buf = (unsigned char*)malloc(w * h * sizeof(unsigned char));

    float min = 0;
    float max = 0;
    for (i = 0; i < h; i++) {
        for (j = 0; j < w; j++) {
            if (max < x[i*w+j]) max = x[i*w+j];
            if (min > x[i*w+j]) min = x[i*w+j];
        }
    }

    for (i = 0; i < h; i++) {
        for (j = 0; j < w; j++) {
            if ((max - min) != 0)
                pgm->buf[i*w+j] = (unsigned char)(255*(x[i*w+j]-min)/(max-min));
            else
                pgm->buf[i*w+j] = 0;
        }
    }

    return 0;
}

int destroyPGM(pgm_t* pgm)
{
    if (pgm->buf) {
        free(pgm->buf);
    }

    return 0;
}
#endif /* _PGM_H_ */

When all the sources are compiled and executed on an image, the picture shown in Figure 6.7(a) becomes the picture shown in Figure 6.7(b). The edges in the original picture become white, while everything else becomes black. The output amplitude is normalized, since it would otherwise be very difficult to see the result.

Figure 6.7: Edge Detection (a) Original Image (b) Edge Detection

Measuring Execution Time

OpenCL is an abstraction layer that allows the same code to be executed on different platforms, but this only guarantees that the program can be executed. The speed of execution is dependent on the device, as well as the type of parallelism used. Therefore, in order to get the maximum performance, a device- and parallelism-dependent tuning must be performed. In order to tune a program, the execution time must be measured. Time measurement can be done in OpenCL, triggered by event objects associated with certain clEnqueue-type commands. We will now show how this can be done within the OpenCL framework for portability. This code is shown in List 6.11.

List 6.11: Time measurement using event objects
cl_context context;
cl_command_queue queue;
cl_event event;
cl_ulong start;
cl_ulong end;
…
queue = clCreateCommandQueue(context, device_id, CL_QUEUE_PROFILING_ENABLE, &ret);
…
ret = clEnqueueWriteBuffer(queue, mobj, CL_TRUE, 0, MEM_SIZE, …, 0, NULL, &event);
clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_START, sizeof(cl_ulong), &start, NULL);
clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_END, sizeof(cl_ulong), &end, NULL);
printf(" memory buffer write: %10.5f [ms]\n", (end - start)/1000000.0);

The code in List 6.11 shows what is required to measure execution time. It can be summarized as follows:

1. The command queue is created with the "CL_QUEUE_PROFILING_ENABLE" option
2. Kernel and memory object queuing is associated with events
3. clGetEventProfilingInfo() is used to get the start and end times

One thing to note when getting the end time is that the kernel queuing is performed asynchronously. We need to make sure, in this case, that the kernel has actually finished executing before we get the end time. This can be done by using clWaitForEvents() as in List 6.12.

List 6.12: Event synchronization
ret = clEnqueueNDRangeKernel(queue, kernel, 2, NULL, gws, lws, 0, NULL, &event);
clWaitForEvents(1, &event);

The first argument gets the number of events to wait for, and the 2nd argument gets the pointer to the event list. The clWaitForEvents() keeps the next line of the code from being executed until the specified events in the event list have finished execution.
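The timestamps returned by clGetEventProfilingInfo() are device time counters in nanoseconds, which is why the printf in List 6.11 divides by 1000000 to report milliseconds. A small helper of ours makes the conversion explicit (plain C; unsigned long long stands in for cl_ulong):

```c
#include <assert.h>

typedef unsigned long long ns_t;  /* stand-in for cl_ulong (nanoseconds) */

/* Elapsed milliseconds between two profiling timestamps */
static double elapsed_ms(ns_t start, ns_t end)
{
    return (double)(end - start) / 1000000.0;
}
```

For example, a command that starts at timestamp 500000 and ends at 2500000 took 2.0 ms of device time.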
You should now be able to measure execution times using OpenCL.

Index Parameter Tuning

Recall that the number of work items and work groups had to be specified when executing data parallel kernels. There is quite a bit of freedom when setting these values. For example, 512 work items can be split up into 2 work groups each having 256 work items, or 512 work groups each having 1 work item. This raises the following questions:

1. What values are allowed for the number of work groups and work items?
2. What are the optimal values to use for the number of work groups and work items?

This section focuses on what these values should be set to for optimal performance. The first question can be answered using the clGetDeviceInfo() introduced in Section 5.1.1. The code is shown in List 6.13 below.

List 6.13: Get maximum values for the number of work groups and work items
cl_uint work_item_dim;
size_t work_item_sizes[3];
size_t work_group_size;
clGetDeviceInfo(device_id, CL_DEVICE_MAX_WORK_ITEM_DIMENSIONS, sizeof(cl_uint), &work_item_dim, NULL);
clGetDeviceInfo(device_id, CL_DEVICE_MAX_WORK_ITEM_SIZES, sizeof(work_item_sizes), work_item_sizes, NULL);
clGetDeviceInfo(device_id, CL_DEVICE_MAX_WORK_GROUP_SIZE, sizeof(size_t), &work_group_size, NULL);

The first clGetDeviceInfo() gets the maximum number of dimensions allowed for the work item. This returns either 1, 2 or 3. The second clGetDeviceInfo() gets the maximum values that can be used for each dimension of the work item. The smallest value is [1,1,1].
The third clGetDeviceInfo() gets the maximum work item size that can be set for each work group. The smallest value for this is 1. Running the above code using NVIDIA OpenCL would generate the results shown below.

Max work-item dimensions : 3
Max work-item sizes : 512 512 64
Max work-group size : 512

As mentioned before, the number of work items and the work groups must be set before executing a kernel. For example, if the work item is 512 x 512, the following combination is possible (gws=global work-item size, lws=local work-item size):

gws[] = {512,512,1}
lws[] = {1,1,1}

The following is also possible:

gws[] = {512,512,1}
lws[] = {256,1,1}

Yet another combination is:

gws[] = {512,512,1}
lws[] = {16,16,1}

The following example seems to not have any problems at a glance, but would result in an error, since the size of the work-group exceeds the allowed size (32*32 = 1024 > 512):

gws[] = {512,512,1}
lws[] = {32,32,1}

Hopefully you have found the above to be intuitive. The global work item index and the local work item index each correspond to the work item ID of all the submitted jobs, and the work item ID within the work group. The real problem is figuring out the optimal combination to use.
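These constraints can be checked on the host before enqueueing. The helper below is our own sketch, not part of the sample; it also assumes the OpenCL 1.x rule that each global size must be evenly divisible by the corresponding local size:

```c
#include <assert.h>
#include <stddef.h>

/* Check a 2-D work size request against device limits
   (limits as returned by the clGetDeviceInfo calls in List 6.13) */
static int lws_ok(const size_t gws[2], const size_t lws[2],
                  const size_t max_item[3], size_t max_group)
{
    if (lws[0] > max_item[0] || lws[1] > max_item[1]) return 0;
    if (lws[0] * lws[1] > max_group) return 0;          /* e.g. 32*32 = 1024 > 512 */
    if (gws[0] % lws[0] || gws[1] % lws[1]) return 0;   /* must divide evenly */
    return 1;
}
```

With the NVIDIA limits above (max sizes {512,512,64}, max group size 512), a 16 x 16 local size for a 512 x 512 global size passes, while 32 x 32 is rejected for exceeding the work-group size limit.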
At this point, we need to look at the hardware architecture of the actual device. As discussed in Chapter 2, the OpenCL architecture assumes the device to contain compute unit(s), each made up of several processing elements. A work group gets executed on a compute unit, and a work-item gets executed on a processing element. In other words, the work-item corresponds to the processing element, and the work group corresponds to the compute unit. This implies that knowledge of the number of compute units and processing elements is required in deducing the optimal combination to use for the local work-group size. These can be found using clGetDeviceInfo(), as shown in List 6.14.

List 6.14: Find the number of compute units
cl_uint compute_unit = 0;
ret = clGetDeviceInfo(device_id, CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(cl_uint), &compute_unit, NULL);

In NVIDIA GPUs, a compute unit corresponds to what is known as a Streaming Multi-processor (SM), and a processing element corresponds to what is called a CUDA core. GT 200-series GPUs such as the Tesla C1060 and GTX285 contain 30 compute units. Each compute unit contains 8 processing elements, but 32 processes can logically be performed in parallel.

The following generalizations can be made from the above information:

• Processor elements can be used efficiently if the number of work items within a work group is a multiple of 32.
• All compute units can be used if the number of work groups is greater than 30.

We will now go back to the subject of FFT. For simplicity, we will use a 512 x 512 image. We will vary the number of work-items per work group (local work-group size), and see how it affects the processing time. Execution time was measured for 1, 16, 32, 64, 128, 256, and 512 local work-group sizes (Table 6.1).

Table 6.1: Execution time when varying lws (units in ms)

Process        | 1    | 16   | 32   | 64   | 128  | 256  | 512
membuf write   | 0.54 | 0.54 | 0.53 | 0.53 | 0.52 | 0.53 | 0.53
spinFactor     | …    | …    | …    | …    | …    | …    | …
bitReverse     | …    | …    | …    | …    | …    | …    | …
butterfly      | …    | …    | …    | …    | …    | …    | …
normalize      | …    | …    | …    | …    | …    | …    | …
transpose      | …    | …    | …    | …    | …    | …    | …
highPassFilter | …    | …    | …    | …    | …    | …    | …
membuf read    | …    | …    | …    | …    | …    | …    | …

As you can see, the optimal performance occurs when the local work-group size is a multiple of 32. This is due to the fact that 32 processes can logically be performed in parallel for each compute unit, since the NVIDIA GPU hardware performs the thread switching. The performance does not suffer significantly when the local work-group size is increased.

We performed parameter tuning specifically for the NVIDIA GPU, but the optimal value would vary depending on the architecture of the device. At the time of this writing (December 2009), Apple's OpenCL for multi-core CPUs only allows 1 work-item for each work group. We can only assume that the parallelization is performed by the framework itself. Mac OS X Snow Leopard comes with an auto-parallelization framework called "Grand Central Dispatch", which leads us to assume that similar auto-parallelization algorithms are implemented within the OpenCL framework.

Note that the techniques used so far are rather basic, but when combined wisely, a complex algorithm like the FFT can be implemented to run efficiently over an OpenCL device. The FFT algorithm is a textbook model of a data parallel algorithm, and the implemented code should work over any platform where an OpenCL framework exists. We will now conclude our case study of the OpenCL implementation of FFT.

Mersenne Twister

This case study will use Mersenne Twister (MT) to generate pseudorandom numbers. This algorithm was developed by a professor at Hiroshima University in Japan. MT has the following advantages:

• Long period
• Efficient use of memory
• High performance
• Good distribution properties

We will now implement this algorithm using OpenCL.

Parallelizing MT

A full understanding of the MT algorithm requires knowledge of advanced algebra, but the actual program itself is comprised of bit operations and memory accesses. MT starts out with an initial state made up of an array of bits. By going through a process called "tempering" on this state, an array of random numbers is generated. The initial state then undergoes a set of operations, which creates another state, and an array of random numbers is again generated from this state. In this manner, more random numbers are generated. A block diagram of the steps is shown in Figure 6.8.

Figure 6.8: MT processing

Studying the diagram in Figure 6.8, we see a dependency between the states, which gets in the way of parallel processing. To get around this problem, the following 2 methods come to mind.

• Parallelize the state change operation

The state is made up of a series of 32-bit words. Since the next state generation is done by operating on one 32-bit word at a time, the processing of each word can be parallelized. However, since the processing on each word does not require many instructions, and depending
32 sets of parameters are generated. The sample code in List 6. n = 32. it may not be enough to take advantage of the 100s of cores on GPUs. Also.%d. would not be the same as that generated sequentially from one state.15 below shows how the get_mt_parameters() function in the "dc" library can be used. This type of processing would benefit more from using a SIMD unit that has low synchronization cost and designed to run few processes in parallel. 006: mt_struct **mts.%d. 009: for (i=0.h> 002: #include "dc. The generated random number. since the number of processes that can be parallelized is dependent on the number of words that make up the state.%d}¥n". and process these in parallel Since the dependency is between each state.%d.B01-02632560-294 The OpenCL Programming Book on the cost of synchronization. 202 . Dynamic Creator (dc) NVIDIA's CUDA SDK sample contains a method called the Dynamic Creator (dc) which can be used to generate random numbers over multiple devices [18]. performance may potentially suffer from parallelization.%d. in order to run OpenCL on the GPU. however.%d. 007: init_dc(4172).%d. 240 MT can be processed in parallel. i++) { 010: printf("{%d. "dc" is a library that dynamically creates Mersenne Twisters parameters.15: Using "dc" to generate Mersenne Twisters parameters 001: #include <stdio.%d. In this code. We will use the latter method. • Create numerous initial states. generating an initial state for each processing element will allow for parallel execution of the MT algorithm. i < n.h" 003: int main(int argc. This section will show how to generate random numbers in OpenCL using "dc". but this can be changed by using a different value for the variable "n".n).%d.521. for example.%d.%d.%d. If the code is executed on Tesla C1060. List 6. char **argv) 004: { 005: int i.%d. 008: mts = get_mt_parameters(32.
the value of PI can be computed. the proportion of the area inside the circle is equal to PI / 4.mts[i]->nn. and using the fact that the radius of the circle equals the edge of the square.mts[i]->maskC). Now points are chosen at random inside the square. 012: mts[i]->lmask. Assume that a quarter of a circle is contained in a square. which can be used to figure out if that point is within the quarter of a circle.mts[i]->wmask.9). 016: } OpenCL MT We will now implement MT in OpenCL. Figure 6.mts[i]->shiftB. We will use the Mersenne Twister algorithm to generate random numbers. solving for PI.mts[i]->shift0. Since the area of the quarter of a circle is PI * R^2 / 4.9: Computing PI using Monte Carlo 203 . we would get PI = 4 * (proportion).mts[i]->shift1. then use these numbers to compute the value of PI using the Monte Carlo method. 013: mts[i]->maskB.B01-02632560-294 The OpenCL Programming Book 011: mts[i]->aaa. The Monte Carlo method can be used as follows. and the area of the square is R^2. Using the proportion of hit versus the number of points.mts[i]->mm.mts[i]->shiftC.mts[i]->ww. 014: } 015: return 0.mts[i]->umask.mts[i]->rr. When this is equated to the proportion found using the Monte Carlo method. such that the radius of the circle and the edge of the square are equivalent (Figure 6.
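The procedure of Figure 6.9 can be prototyped serially before bringing in OpenCL. In the sketch below, the xorshift32 generator is only a stand-in for Mersenne Twister; what matters is the geometry, which mirrors the calc_pi kernel used in this case study: each 32-bit random number supplies a 16-bit x and a 16-bit y coordinate in [0, 1].

```c
#include <stdint.h>

/* Stand-in PRNG (NOT Mersenne Twister): a minimal xorshift32.
 * The seed must be nonzero. */
static uint32_t xorshift32(uint32_t *s)
{
    uint32_t x = *s;
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    return *s = x;
}

/* Serial Monte Carlo estimate of PI: split each 32-bit random number
 * into two 16-bit halves, use them as (x, y) in the unit square, and
 * count the points falling inside the quarter circle. */
double estimate_pi(int num_rand, uint32_t seed)
{
    uint32_t s = seed;
    int count = 0, i;
    for (i = 0; i < num_rand; i++) {
        uint32_t r = xorshift32(&s);
        float x = (float)(r >> 16) / 65535.0f;     /* x coordinate */
        float y = (float)(r & 0xffff) / 65535.0f;  /* y coordinate */
        if (x * x + y * y < 1.0f)                  /* inside the quarter circle? */
            count++;
    }
    return 4.0 * (double)count / num_rand;
}
```

With a million samples the estimate typically lands within a few thousandths of PI; the OpenCL version computes exactly the same ratio, only with many generators running in parallel.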
List 6.16 shows the kernel code, and List 6.17 shows the host code for this program. The host program includes the commands required to measure the execution time, which we will use when we get to the optimization phase.

List 6.16: Kernel Code

001: typedef struct mt_struct_s {
002:     uint aaa;
003:     int mm, nn, rr, ww;
004:     uint wmask, umask, lmask;
005:     int shift0, shift1, shiftB, shiftC;
006:     uint maskB, maskC;
007: } mt_struct;
008: 
009: /* Initialize state using a seed */
010: static void sgenrand_mt(uint seed, __global const mt_struct *mts, uint *state) {
011:     int i;
012:     for (i = 0; i < mts->nn; i++) {
013:         state[i] = seed;
014:         seed = (1812433253 * (seed ^ (seed >> 30))) + i + 1;
015:     }
016:     for (i = 0; i < mts->nn; i++)
017:         state[i] &= mts->wmask;
018: }
019: 
020: /* Update state */
021: static void update_state(__global const mt_struct *mts, uint *st) {
022:     int n = mts->nn, m = mts->mm;
023:     uint aa = mts->aaa, x;
024:     uint uuu = mts->umask, lll = mts->lmask;
025:     int k, lim;
026:     lim = n - m;
027:     for (k = 0; k < lim; k++) {
028:         x = (st[k]&uuu)|(st[k+1]&lll);
029:         st[k] = st[k+m] ^ (x>>1) ^ (x&1U ? aa : 0U);
030:     }
031:     lim = n - 1;
032:     for (; k < lim; k++) {
033:         x = (st[k]&uuu)|(st[k+1]&lll);
034:         st[k] = st[k+m-n] ^ (x>>1) ^ (x&1U ? aa : 0U);
035:     }
036:     x = (st[n-1]&uuu)|(st[0]&lll);
037:     st[n-1] = st[m-1] ^ (x>>1) ^ (x&1U ? aa : 0U);
038: }
039: 
040: static inline void gen(__global uint *out, const __global mt_struct *mts,
041:                        uint *state, int num_rand) {
042:     int i, j, n, nn = mts->nn;
043:     n = (num_rand+(nn-1)) / nn;
044:     for (i = 0; i < n; i++) {
045:         int m = nn;
046:         if (i == n-1) m = num_rand%nn;
047:         update_state(mts, state);
048:         for (j = 0; j < m; j++) { /* Generate random numbers */
049:             uint x = state[j];
050:             x ^= x >> mts->shift0;
051:             x ^= (x << mts->shiftB) & mts->maskB;
052:             x ^= (x << mts->shiftC) & mts->maskC;
053:             x ^= x >> mts->shift1;
054:             out[i*nn + j] = x;
055:         }
056:     }
057: }
058: 
059: __kernel void genrand(__global uint *out, __global mt_struct *mts, int num_rand) {
060:     int gid = get_global_id(0);
061:     uint seed = gid*3;
062:     uint state[17];
063:     mts += gid;             /* mts for this item */
064:     out += gid * num_rand;  /* Output buffer for this item */
065:     sgenrand_mt(0x33ff*gid, mts, (uint*)state); /* Initialize random numbers */
066:     gen(out, mts, (uint*)state, num_rand);      /* Generate random numbers */
067: }
068: 
069: /* Count the number of points within the circle */
070: __kernel void calc_pi(__global uint *out, __global uint *rand,
071:                       int num_rand) {
072:     int gid = get_global_id(0);
073:     int count = 0;
074:     int i;
075: 
076:     rand += gid*num_rand;
077: 
078:     for (i = 0; i < num_rand; i++) {
079:         float x, y, len;
080:         x = ((float)(rand[i]>>16))/65535.0f;    /* x coordinate */
081:         y = ((float)(rand[i]&0xffff))/65535.0f; /* y coordinate */
082:         len = (x*x + y*y); /* Distance from the origin */
083:         if (len < 1) { /* sqrt(len) < 1 = len < 1 */
084:             count++;
085:         }
086:     }
087: 
088:     out[gid] = count;
089: }

List 6.17: Host Code

001: #include <stdlib.h>
002: #include <CL/cl.h>
003: #include <stdio.h>
004: #include <math.h>
005: 
006: typedef struct mt_struct_s {
007:     cl_uint aaa;
008:     cl_int mm, nn, rr, ww;
009:     cl_uint wmask, umask, lmask;
010:     cl_int shift0, shift1, shiftB, shiftC;
011:     cl_uint maskB, maskC;
012: } mt_struct;
013: 
014: mt_struct mts[] = { /* Parameters generated by the Dynamic Creator */
015:     {-1162865726,18,17,23,32,7,8,15,-1,8388607,-8388608,-1,12,-2785280},
         /* ... 30 further parameter sets (lines 016 to 045), produced by the program in List 6.15 ... */
046:     {-746112289,18,17,23,32,7,8,15,720452480,-1,8388607,-8388608,-1,12,-2654208},
047: };
048: 
049: #define MAX_SOURCE_SIZE (0x100000)
050: 
051: int main()
052: {
053:     cl_platform_id platform_id = NULL;
054:     cl_uint ret_num_platforms;
055:     cl_device_id device_id = NULL;
056:     cl_uint ret_num_devices;
057:     cl_context context = NULL;
058:     cl_command_queue command_queue = NULL;
059:     cl_program program = NULL;
060:     cl_kernel kernel_mt = NULL, kernel_pi = NULL;
061:     size_t kernel_code_size;
062:     char *kernel_src_str;
063:     cl_uint *result;
064:     cl_int ret;
065:     FILE *fp;
066:     cl_mem rand, count, dev_mts;
067:     size_t global_item_size[3], local_item_size[3];
068:     cl_event ev_mt_end, ev_pi_end, ev_copy_end;
069:     cl_ulong prof_start, prof_mt_end, prof_pi_end, prof_copy_end;
070:     double pi;
071:     cl_int num_rand = 4096*256; /* The number of random numbers generated using one generator */
072:     int num_generator = sizeof(mts)/sizeof(mts[0]); /* The number of generators */
073:     int count_all, i;
074: 
075:     clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
076:     clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1, &device_id,
                       &ret_num_devices);
077:     context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);
078:     command_queue = clCreateCommandQueue(context, device_id,
079:                                          CL_QUEUE_PROFILING_ENABLE, &ret);
080: 
081:     fp = fopen("mt.cl", "r");
082:     kernel_src_str = (char*)malloc(MAX_SOURCE_SIZE);
083:     kernel_code_size = fread(kernel_src_str, 1, MAX_SOURCE_SIZE, fp);
084:     fclose(fp);
085: 
086:     /* Build Program */
087:     program = clCreateProgramWithSource(context, 1, (const char **)&kernel_src_str,
088:                                         (const size_t *)&kernel_code_size, &ret);
089:     clBuildProgram(program, 1, &device_id, "", NULL, NULL);
090:     kernel_mt = clCreateKernel(program, "genrand", &ret);
091:     kernel_pi = clCreateKernel(program, "calc_pi", &ret);
092: 
093:     result = (cl_uint*)malloc(sizeof(cl_uint)*num_generator);
094: 
095:     /* Create output buffer */
096:     rand = clCreateBuffer(context, CL_MEM_READ_WRITE, sizeof(cl_uint)*num_rand*num_generator, NULL, &ret);
097:     count = clCreateBuffer(context, CL_MEM_READ_WRITE, sizeof(cl_uint)*num_generator, NULL, &ret);
098:     /* Create input parameter */
099:     dev_mts = clCreateBuffer(context, CL_MEM_READ_WRITE, sizeof(mts), NULL, &ret);
100:     clEnqueueWriteBuffer(command_queue, dev_mts, CL_TRUE, 0, sizeof(mts), mts, 0, NULL, NULL);
101: 
102:     /* Set Kernel Arguments */
103:     clSetKernelArg(kernel_mt, 0, sizeof(cl_mem), (void*)&rand);     /* Random numbers (output of genrand) */
104:     clSetKernelArg(kernel_mt, 1, sizeof(cl_mem), (void*)&dev_mts);  /* MT parameter (input to genrand) */
105:     clSetKernelArg(kernel_mt, 2, sizeof(num_rand), &num_rand);      /* Number of random numbers to generate */
106:     clSetKernelArg(kernel_pi, 0, sizeof(cl_mem), (void*)&count);    /* Counter for points within circle (output of calc_pi) */
107:     clSetKernelArg(kernel_pi, 1, sizeof(cl_mem), (void*)&rand);     /* Random numbers (input to calc_pi) */
108:     clSetKernelArg(kernel_pi, 2, sizeof(num_rand), &num_rand);      /* Number of random numbers used */
109: 
110:     global_item_size[0] = num_generator; global_item_size[1] = 1; global_item_size[2] = 1;
111:     local_item_size[0] = num_generator; local_item_size[1] = 1; local_item_size[2] = 1;
112: 
113:     /* Create a random number array */
114:     clEnqueueNDRangeKernel(command_queue, kernel_mt, 1, NULL,
115:                            global_item_size, local_item_size, 0, NULL, &ev_mt_end);
116:     /* Compute PI */
117:     clEnqueueNDRangeKernel(command_queue, kernel_pi, 1, NULL,
118:                            global_item_size, local_item_size, 0, NULL, &ev_pi_end);
119: 
120:     /* Get result */
121:     clEnqueueReadBuffer(command_queue, count, CL_TRUE, 0, sizeof(cl_uint)*num_generator,
122:                         result, 0, NULL, &ev_copy_end);
123: 
124:     /* Average the values of PI */
125:     count_all = 0;
126:     for (i = 0; i < num_generator; i++) {
127:         count_all += result[i];
128:     }
129:     pi = ((double)count_all)/(num_rand * num_generator) * 4;
130:     printf("pi = %f\n", pi);
131: 
132:     /* Get execution time info */
133:     clGetEventProfilingInfo(ev_mt_end, CL_PROFILING_COMMAND_QUEUED, sizeof(cl_ulong), &prof_start, NULL);
134:     clGetEventProfilingInfo(ev_mt_end, CL_PROFILING_COMMAND_END, sizeof(cl_ulong), &prof_mt_end, NULL);
135:     clGetEventProfilingInfo(ev_pi_end, CL_PROFILING_COMMAND_END, sizeof(cl_ulong), &prof_pi_end, NULL);
136:     clGetEventProfilingInfo(ev_copy_end, CL_PROFILING_COMMAND_END, sizeof(cl_ulong), &prof_copy_end, NULL);
137:     printf(" mt: %f[ms]\n"
138:            " pi: %f[ms]\n"
139:            " copy: %f[ms]\n",
140:            (prof_mt_end - prof_start)/(1000000.0),
141:            (prof_pi_end - prof_mt_end)/(1000000.0),
142:            (prof_copy_end - prof_pi_end)/(1000000.0));
143: 
144:     clReleaseEvent(ev_mt_end);
145:     clReleaseEvent(ev_pi_end);
146:     clReleaseEvent(ev_copy_end);
147:     clReleaseMemObject(rand);
148:     clReleaseMemObject(count);
149:     clReleaseKernel(kernel_mt);
150:     clReleaseKernel(kernel_pi);
151:     clReleaseProgram(program);
152:     clReleaseCommandQueue(command_queue);
153:     clReleaseContext(context);
154:     free(kernel_src_str);
155:     free(result);
156: 
157:     return 0;
158: }

We will start by looking at the kernel code.

009: /* Initialize state using a seed */
010: static void sgenrand_mt(uint seed, __global const mt_struct *mts, uint *state) {
011:     int i;
012:     for (i = 0; i < mts->nn; i++) {
013:         state[i] = seed;
014:         seed = (1812433253 * (seed ^ (seed >> 30))) + i + 1;
015:     }
016:     for (i = 0; i < mts->nn; i++)
017:         state[i] &= mts->wmask;
018: }

The state bit array is being initialized.

020: /* Update state */
021: static void update_state(__global const mt_struct *mts, uint *st) {
022:     int n = mts->nn, m = mts->mm;
023:     uint aa = mts->aaa, x;
024:     uint uuu = mts->umask, lll = mts->lmask;
025:     int k, lim;
026:     lim = n - m;
027:     for (k = 0; k < lim; k++) {
028:         x = (st[k]&uuu)|(st[k+1]&lll);
029:         st[k] = st[k+m] ^ (x>>1) ^ (x&1U ? aa : 0U);
030:     }
031:     lim = n - 1;
032:     for (; k < lim; k++) {
033:         x = (st[k]&uuu)|(st[k+1]&lll);
034:         st[k] = st[k+m-n] ^ (x>>1) ^ (x&1U ? aa : 0U);
035:     }
036:     x = (st[n-1]&uuu)|(st[0]&lll);
037:     st[n-1] = st[m-1] ^ (x>>1) ^ (x&1U ? aa : 0U);
038: }

The above code updates the state bits. It is actually copied from the original code used for MT.

040: static inline void gen(__global uint *out, const __global mt_struct *mts,
041:                        uint *state, int num_rand) {
042:     int i, j, n, nn = mts->nn;
043:     n = (num_rand+(nn-1)) / nn;
044:     for (i = 0; i < n; i++) {
045:         int m = nn;
046:         if (i == n-1) m = num_rand%nn;
047:         update_state(mts, state);
048:         for (j = 0; j < m; j++) { /* Generate random numbers */
049:             uint x = state[j];
050:             x ^= x >> mts->shift0;
051:             x ^= (x << mts->shiftB) & mts->maskB;
052:             x ^= (x << mts->shiftC) & mts->maskC;
053:             x ^= x >> mts->shift1;
054:             out[i*nn + j] = x;
055:         }
056:     }
057: }

The code above generates "num_rand" random numbers. update_state() is called to update the state bits, and a random number is generated from the state bits by the tempering operations.

059: __kernel void genrand(__global uint *out, __global mt_struct *mts, int num_rand) {
060:     int gid = get_global_id(0);
061:     uint seed = gid*3;
062:     uint state[17];
063:     mts += gid;             /* mts for this item */
064:     out += gid * num_rand;  /* Output buffer for this item */
065:     sgenrand_mt(0x33ff*gid, mts, (uint*)state); /* Initialize random numbers */
066:     gen(out, mts, (uint*)state, num_rand);      /* Generate random numbers */
067: }

The above shows the kernel that generates random numbers by calling the above functions. The kernel takes the output address, the MT parameters, and the number of random numbers to generate as arguments. "mts" and "out" are pointers to the beginning of the spaces allocated to be used by all work-items; the global ID is used to calculate the addresses to be used by each work-item.
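The kernel above is the generic, "dc"-parameterized form of the standard Mersenne Twister loop: seed-based state initialization, the two-window state update, then tempering. As a plain-C cross-check of that structure, the sketch below is the classic MT19937 instance (n = 624, m = 397, and the fixed tempering constants) rather than the dc-generated parameters used by the kernel:

```c
#include <stdint.h>

/* Classic MT19937 with its fixed constants.  The genrand kernel
 * implements exactly this structure, but with nn = 17 and parameters
 * generated at run time by the "dc" library. */
#define MT_N 624
#define MT_M 397
#define MT_MATRIX_A   0x9908b0dfUL   /* the "aaa" vector */
#define MT_UPPER_MASK 0x80000000UL   /* umask: most significant bit */
#define MT_LOWER_MASK 0x7fffffffUL   /* lmask: lower 31 bits */

static uint32_t mt[MT_N];
static int mti = MT_N + 1;

/* Same recurrence as sgenrand_mt() in the kernel */
void init_genrand(uint32_t s)
{
    mt[0] = s;
    for (mti = 1; mti < MT_N; mti++)
        mt[mti] = (uint32_t)(1812433253UL * (mt[mti-1] ^ (mt[mti-1] >> 30)) + mti);
}

uint32_t genrand_int32(void)
{
    uint32_t y;
    static const uint32_t mag01[2] = {0x0UL, MT_MATRIX_A};

    if (mti >= MT_N) {             /* regenerate N words at a time (update_state) */
        int kk;
        if (mti == MT_N + 1)
            init_genrand(5489UL);  /* default seed */
        for (kk = 0; kk < MT_N - MT_M; kk++) {
            y = (mt[kk] & MT_UPPER_MASK) | (mt[kk+1] & MT_LOWER_MASK);
            mt[kk] = mt[kk + MT_M] ^ (y >> 1) ^ mag01[y & 0x1UL];
        }
        for (; kk < MT_N - 1; kk++) {
            y = (mt[kk] & MT_UPPER_MASK) | (mt[kk+1] & MT_LOWER_MASK);
            mt[kk] = mt[kk + (MT_M - MT_N)] ^ (y >> 1) ^ mag01[y & 0x1UL];
        }
        y = (mt[MT_N-1] & MT_UPPER_MASK) | (mt[0] & MT_LOWER_MASK);
        mt[MT_N-1] = mt[MT_M-1] ^ (y >> 1) ^ mag01[y & 0x1UL];
        mti = 0;
    }

    y = mt[mti++];
    y ^= (y >> 11);                /* the four tempering steps, as in gen() */
    y ^= (y << 7)  & 0x9d2c5680UL;
    y ^= (y << 15) & 0xefc60000UL;
    y ^= (y >> 18);
    return y;
}
```

Comparing this with the kernel shows why "dc" is needed: MT19937 itself is a single stream, and only by generating 32 (or more) independent parameter sets can each work-item run its own generator.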
069: /* Count the number of points within the circle */
070: __kernel void calc_pi(__global uint *out, __global uint *rand,
071:                       int num_rand) {
072:     int gid = get_global_id(0);
073:     int count = 0;
074:     int i;
075: 
076:     rand += gid*num_rand;
077: 
078:     for (i = 0; i < num_rand; i++) {
079:         float x, y, len;
080:         x = ((float)(rand[i]>>16))/65535.0f;    /* x coordinate */
081:         y = ((float)(rand[i]&0xffff))/65535.0f; /* y coordinate */
082:         len = (x*x + y*y); /* Distance from the origin */
083:         if (len < 1) { /* sqrt(len) < 1 = len < 1 */
084:             count++;
085:         }
086:     }
087: 
088:     out[gid] = count;
089: }

The above code counts the number of times a randomly chosen point within the square falls inside the section of the circle. Since a 32-bit random number is generated, the upper 16 bits are used for the x coordinate, and the lower 16 bits are used for the y coordinate. If the distance from the origin is less than 1, then the chosen point is inside the circle.

Next we will take a look at the host code.

014: mt_struct mts[] = { /* Parameters generated by the Dynamic Creator */
015:     {-1162865726,18,17,23,32,7,8,15,-1,8388607,-8388608,-1,12,-2785280},
         ...
046:     {-746112289,18,17,23,32,7,8,15,720452480,-1,8388607,-8388608,-1,12,-2654208},
047: };

The above is an array of structs holding the MT parameters generated using "dc".

075:     clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_DEFAULT, 1, &device_id,
                       &ret_num_devices);
077:     context = clCreateContext(NULL, 1, &device_id, NULL, NULL, &ret);
078:     command_queue = clCreateCommandQueue(context, device_id,
079:                                          CL_QUEUE_PROFILING_ENABLE, &ret);
080: 
081:     fp = fopen("mt.cl", "r");
082:     kernel_src_str = (char*)malloc(MAX_SOURCE_SIZE);
083:     kernel_code_size = fread(kernel_src_str, 1, MAX_SOURCE_SIZE, fp);
084:     fclose(fp);
085: 
086:     /* Build Program */
087:     program = clCreateProgramWithSource(context, 1, (const char **)&kernel_src_str,
088:                                         (const size_t *)&kernel_code_size, &ret);
089:     clBuildProgram(program, 1, &device_id, "", NULL, NULL);
090:     kernel_mt = clCreateKernel(program, "genrand", &ret);
091:     kernel_pi = clCreateKernel(program, "calc_pi", &ret);
092: 
093:     result = (cl_uint*)malloc(sizeof(cl_uint)*num_generator);
094: 
095:     /* Create output buffer */
096:     rand = clCreateBuffer(context, CL_MEM_READ_WRITE, sizeof(cl_uint)*num_rand*num_generator, NULL, &ret);
097:     count = clCreateBuffer(context, CL_MEM_READ_WRITE, sizeof(cl_uint)*num_generator, NULL, &ret);
098:     /* Create input parameter */
099:     dev_mts = clCreateBuffer(context, CL_MEM_READ_WRITE, sizeof(mts), NULL, &ret);
100:     clEnqueueWriteBuffer(command_queue, dev_mts, CL_TRUE, 0, sizeof(mts), mts, 0, NULL, NULL);

The code above obtains the device, creates the context, command queue, program and kernels, and transfers the MT parameters generated by "dc" to global memory.

102:     /* Set Kernel Arguments */
103:     clSetKernelArg(kernel_mt, 0, sizeof(cl_mem), (void*)&rand);     /* Random numbers (output of genrand) */
104:     clSetKernelArg(kernel_mt, 1, sizeof(cl_mem), (void*)&dev_mts);  /* MT parameter (input to genrand) */
105:     clSetKernelArg(kernel_mt, 2, sizeof(num_rand), &num_rand);      /* Number of random numbers to generate */
106:     clSetKernelArg(kernel_pi, 0, sizeof(cl_mem), (void*)&count);    /* Counter for points within circle (output of calc_pi) */
107:     clSetKernelArg(kernel_pi, 1, sizeof(cl_mem), (void*)&rand);     /* Random numbers (input to calc_pi) */
108:     clSetKernelArg(kernel_pi, 2, sizeof(num_rand), &num_rand);      /* Number of random numbers used */

The code segment above sets the arguments to the kernels "kernel_mt", which generates random numbers, and "kernel_pi", which counts the number of points within the circle.

110:     global_item_size[0] = num_generator; global_item_size[1] = 1; global_item_size[2] = 1;
111:     local_item_size[0] = num_generator; local_item_size[1] = 1; local_item_size[2] = 1;
112: 
113:     /* Create a random number array */
114:     clEnqueueNDRangeKernel(command_queue, kernel_mt, 1, NULL,
115:                            global_item_size, local_item_size, 0, NULL, &ev_mt_end);
116:     /* Compute PI */
117:     clEnqueueNDRangeKernel(command_queue, kernel_pi, 1, NULL,
118:                            global_item_size, local_item_size, 0, NULL, &ev_pi_end);

The global item size and the local item size are both set to the number of generators. This means that the kernels are executed such that only 1 work-group is created, which performs all of the processing on one compute unit.

120:     /* Get result */
121:     clEnqueueReadBuffer(command_queue, count, CL_TRUE, 0, sizeof(cl_uint)*num_generator,
122:                         result, 0, NULL, &ev_copy_end);
123: 
124:     /* Average the values of PI */
125:     count_all = 0;
126:     for (i = 0; i < num_generator; i++) {
127:         count_all += result[i];
128:     }
129:     pi = ((double)count_all)/(num_rand * num_generator) * 4;
130:     printf("pi = %f\n", pi);
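The averaging loop collapses to one formula, pi = 4 × (total hits) / (total samples). A standalone sketch of that arithmetic (the helper name is hypothetical):

```c
/* Combine per-generator hit counts into a single estimate of PI:
 * pool all samples and multiply the hit ratio by 4, exactly as the
 * host loop does. */
double pi_from_counts(const unsigned int *result, int num_generator, int num_rand)
{
    long long count_all = 0;
    int i;
    for (i = 0; i < num_generator; i++)
        count_all += result[i];
    return (double)count_all / ((double)num_rand * num_generator) * 4.0;
}
```

Using a 64-bit accumulator here is a small safety margin over the host code's int, since 32 generators × a million samples each can overflow 32 bits in larger configurations.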
The value of PI is calculated by tallying up the number of points chosen within the circle by each work-item, to compute the proportion of hits; the value of PI is this proportion multiplied by 4.

Running the code as is on a Core i7-920 + Tesla C1060 environment, the execution times using different OpenCL implementations for the kernel executions and the memory copy are shown below in Table 6.2.

Table 6.2: Processing Times
OpenCL   mt (ms)   pi (ms)   copy (ms)
NVIDIA   3420.8    2399.2    0.07
AMD       409.8     364.9    0.02
FOXC      536.8     243.8    0.002

Since the time required for the memory transfer operation is negligible compared to the other operations, we will not concentrate on this part. We will look at how these processing times change by parallelization.

Parallelization

Now we will go through the code in the previous section, and use parallelization where we can. The code in the previous section performed all the operations on one compute unit. The correspondence of the global work-item and the local work-item to the hardware (at least as of this writing) is summarized in Table 6.3.

Table 6.3: Correspondence of hardware with OpenCL work-items
OpenCL            NVIDIA     AMD        FOXC
Global work-item  SM         OS thread  OS thread
Local work-item   CUDA core  Fiber      Fiber

"OS threads" are threads managed by the operating system, such as pthreads and Win32 threads. "Fibers" are lightweight threads capable of performing context switching by saving the register contents. Unlike OS threads, fibers do not execute over multiple cores, but are capable of
performing context switching quickly. "SM (Streaming Multiprocessor)" and "CUDA core" are unit names specific to NVIDIA GPUs. A CUDA core is the smallest unit capable of performing computations, and a group of CUDA cores makes up an SM. Figure 6.10 shows the basic block diagram. Please refer to the "CUDA Programming Guide" contained in the NVIDIA SDK for more details on the NVIDIA processor architecture [19].

Figure 6.10: OpenCL work-items and their hardware correspondences

Since the code in the previous section only uses one work-group, parallel execution via OS threads or SMs is not performed. We will change the code such that the local work-item size is the number of parameter sets created using "dc" divided by CL_DEVICE_MAX_COMPUTE_UNITS. This would require multiple work-groups to be created. The number of work-groups and work-items can be set using the code shown below.

clGetDeviceInfo(device_id, CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(cl_uint),
                &num_compute_unit, NULL);
if (num_compute_unit > 32) num_compute_unit = 32;
/* Number of work-items */
num_work_item = (num_generator+(num_compute_unit-1)) / num_compute_unit;

/* Set work-group and work-item size */
global_item_size[0] = num_work_item * num_compute_unit;
global_item_size[1] = 1;
global_item_size[2] = 1;
local_item_size[0] = num_work_item;
local_item_size[1] = 1;
local_item_size[2] = 1;

/* Set Kernel Arguments */
clSetKernelArg(kernel_mt, 0, sizeof(cl_mem), (void*)&rand);     /* Random numbers (output of genrand) */
clSetKernelArg(kernel_mt, 1, sizeof(cl_mem), (void*)&dev_mts);  /* MT parameter (input to genrand) */
clSetKernelArg(kernel_mt, 2, sizeof(num_rand), &num_rand);      /* Number of random numbers to generate */
clSetKernelArg(kernel_mt, 3, sizeof(num_generator), &num_generator); /* Number of MT parameters */
clSetKernelArg(kernel_pi, 0, sizeof(cl_mem), (void*)&count);    /* Counter for points within circle (output of calc_pi) */
clSetKernelArg(kernel_pi, 1, sizeof(cl_mem), (void*)&rand);     /* Random numbers (input to calc_pi) */
clSetKernelArg(kernel_pi, 2, sizeof(num_rand), &num_rand);      /* Number of random numbers used */
clSetKernelArg(kernel_pi, 3, sizeof(num_generator), &num_generator); /* Number of MT parameters */

We have limited the maximum number of work-groups to 32, since we have only defined 32 sets of MT parameters in List 6.17, and each work-item requires an MT parameter set. On the device side, the number of random-number streams is likewise limited by the number of available MT parameters:

__kernel void genrand(__global uint *out, __global mt_struct *mts,
                      int num_rand, int num_generator) {
    int gid = get_global_id(0);
    uint seed = gid*3;
    uint state[17];
    if (gid >= num_generator) return;
    …
}

__kernel void calc_pi(__global uint *out, __global uint *rand,
                      int num_rand, int num_generator) {
    int gid = get_global_id(0);
    int count = 0;
    if (gid >= num_generator) return;
    …
}

The processing times after making the changes are shown below in Table 6.4.

Table 6.4: Processing time after parallelization
OpenCL   mt (ms)   pi (ms)
NVIDIA   2800.8    1018.6
AMD        94.4      92.8
FOXC       73.6      59.4

Note that the processing times using FOXC and AMD are reduced by a factor of 4. This was expected, since the code is executed on a Core i7-920, which has 4 cores. In the code in the previous section, all work-items were performed in one compute unit, where parallelism is present over the multiple processing units; the code in this section allows one work-item to be executed on each compute unit. So the difference is in whether the work-items were executed in parallel over the processing units, or in parallel over the compute units, and this speed-up is merely from less chance of register data being stored in another memory during context switching.

However, the processing time has not changed significantly for NVIDIA's OpenCL. As shown in Figure 6.10, NVIDIA GPUs can perform parallel execution over multiple work-items.
To use the GPU effectively, parallel execution should occur in both the compute units and the CUDA cores.

Increasing Parallelism

This section will focus on increasing parallelism for efficient execution on the NVIDIA GPU. This will be done by increasing the number of MT parameter sets generated by "dc". The processing times are shown in Table 6.5 below.

Table 6.5: Processing times from increasing parallelism
# MT parameters  OpenCL   mt (ms)   pi (ms)
128              NVIDIA   767.4     288.5
                 AMD       94.8      92.4
                 FOXC      73.8      59.6
256              NVIDIA   491.3     214.8
                 AMD       94.3      92.8
                 FOXC      73.0      60.6
512              NVIDIA   379.3     117.7
                 AMD       94.6      91.3
                 FOXC      73.5      62.6

Note that the processing time does not change on the CPU, but that it reduces significantly on the GPU. Even so, execution is still slower on the NVIDIA GPU than on the CPU despite the increased parallelism. We will now look into optimizing the code to be executed on the GPU.

Optimization for NVIDIA GPU

The first thing to look at for efficient execution on the GPU is the memory access. On the Tesla, access to the global memory requires a few hundred clock cycles. Compared to a processor with a cache, accessing the global memory each time can slow down the average access time by a large margin. Since the Tesla does not have any cache hardware, a wise usage of the local memory is required to get around this problem on the NVIDIA GPU.

Looking at the code again, the variables "state" and "mts" access the same memory space many
times. We will change the code so that these data are instead placed in local memory, by using the __local qualifier. First, we will change the host-side code so that the data to be stored in local memory is passed in as an argument to the kernel.

/* Set kernel arguments */
clSetKernelArg(kernel_mt, 0, sizeof(cl_mem), (void*)&rand);     /* Random numbers (output of genrand) */
clSetKernelArg(kernel_mt, 1, sizeof(cl_mem), (void*)&dev_mts);  /* MT parameter (input to genrand) */
clSetKernelArg(kernel_mt, 2, sizeof(num_rand), &num_rand);      /* Number of random numbers to generate */
clSetKernelArg(kernel_mt, 3, sizeof(num_generator), &num_generator); /* Number of MT parameters */
clSetKernelArg(kernel_mt, 4, sizeof(cl_uint)*17*num_work_item, NULL); /* Local memory (state) */
clSetKernelArg(kernel_mt, 5, sizeof(mt_struct)*num_work_item, NULL);  /* Local memory (mts) */

The kernel will now be changed to use the local memory.

/* Initialize state using a seed */
static void sgenrand_mt(uint seed, __local const mt_struct *mts, __local uint *state) {
    /* … */
}

/* Update state */
static void update_state(__local const mt_struct *mts, __local uint *st) {
    /* … */
}

static inline void gen(__global uint *out, const __local mt_struct *mts,
                       __local uint *state, int num_rand) {
    /* … */
}
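The two clSetKernelArg calls with a NULL data pointer reserve local memory per work-group: 17 uints of state plus one mt_struct copy per work-item. Whether a chosen work-group size fits the device's local memory can be sanity-checked on the host with a small helper (the function name is hypothetical, and the 56-byte figure assumes the struct's 14 four-byte fields pack without padding):

```c
/* Hypothetical helper: how many work-items fit in one work-group if
 * each needs `state_bytes` of local state plus `param_bytes` of
 * parameters.  For this program: 17*4 = 68 state bytes and 56
 * parameter bytes, i.e. 124 bytes per work-item. */
int max_items_per_group(unsigned long local_mem_bytes,
                        unsigned long state_bytes,
                        unsigned long param_bytes)
{
    return (int)(local_mem_bytes / (state_bytes + param_bytes));
}
```

On a device reporting 16 KB of local memory (CL_DEVICE_LOCAL_MEM_SIZE), this gives 16384 / 124 ≈ 132 work-items per group.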
__kernel void genrand(__global uint *out, __global mt_struct *mts_g,
                      int num_rand, int num_generator,
                      __local uint *state_mem, __local mt_struct *mts) {
    int lid = get_local_id(0);
    int gid = get_global_id(0);
    __local uint *state = state_mem + lid*17; /* Store current state in local memory */

    if (gid >= num_generator) return;

    mts += lid;

    /* Copy MT parameters to local memory */
    mts->aaa = mts_g[gid].aaa;
    mts->mm = mts_g[gid].mm;
    mts->nn = mts_g[gid].nn;
    mts->rr = mts_g[gid].rr;
    mts->ww = mts_g[gid].ww;
    mts->wmask = mts_g[gid].wmask;
    mts->umask = mts_g[gid].umask;
    mts->lmask = mts_g[gid].lmask;
    mts->shift0 = mts_g[gid].shift0;
    mts->shift1 = mts_g[gid].shift1;
    mts->shiftB = mts_g[gid].shiftB;
    mts->shiftC = mts_g[gid].shiftC;
    mts->maskB = mts_g[gid].maskB;
    mts->maskC = mts_g[gid].maskC;

    out += gid * num_rand;             /* Output buffer for this item */
    sgenrand_mt(0x33ff*gid, mts, state); /* Initialize random numbers */
    gen(out, mts, state, num_rand);      /* Generate random numbers */
}

The new execution times are shown in Table 6.6.

Table 6.6: Processing times on the Tesla using local memory
# MT parameters  mt (ms)   pi (ms)
128              246.4     287.7
256              143.7     206.8
512              108.2     113.1

We see a definite increase in speed.

The thing to note about the local memory is that it is rather limited. For our program, each work-item requires the storage of the state, which is 17 x 4 = 68 bytes, as well as the MT parameters, which is 14 x 4 = 56 bytes. Therefore, each work-item requires 124 bytes. On NVIDIA GPUs, the local memory size is 16,384 bytes, meaning 16384/124 ≈ 132 work-items can be processed within a work-group. This would not be a problem for the Tesla, since the value of CL_DEVICE_MAX_COMPUTE_UNITS is 30, but for GPUs that do not have many compute units, the number of work-items per work-group will be increased, and the kernel may not be able to run if the number of parameters is increased to 256 or 512.

We will now further tune our program, which can be broken up into the next 3 processes:

• Updating the state
• Tempering
• Computing the value of PI

First, we will tackle the computation of PI. One of the properties of the NVIDIA GPU is that it is capable of accessing a block of continuous memory at once when it is processed within a work-group. This is known as a coalesced access. Using this knowledge, we will write the code such that each work-group accesses continuous data in memory; in the original code, each work-item processes a different part of the array. This change is illustrated in Figure 6.11.

Figure 6.11: Grouping of work-items
NULL. local_item_size_pi[0] = 128. 225 . global_item_size_pi[2] = 1. __local uint *count_per_wi) { int gid = get_group_id(0). int num_rand_all. /* Count the number of points within the circle */ __kernel void calc_pi(__global uint *out. int num_rand_per_compute_unit. end. begin. local_item_size_pi. kernel_pi. &ev_pi_end). 0. local_item_size_pi[1] = 1. int lid = get_local_id(0). int i. global_item_size_pi[0] = num_compute_unit * 128. and the number of work-items is set to 128. The number of work-groups is changed to CL_DEVICE_MAX_COMPUT_UNITS. The above is the change made on the host-side. int num_compute_unit. __global uint *rand. global_item_size_pi[1] = 1. /* Compute PI */ ret = clEnqueueNDRangeKernel(command_queue. NULL.B01-02632560-294 The OpenCL Programming Book The changes in the code are shown below. 1. local_item_size_pi[2] = 1. int count = 0. global_item_size_pi.
/* y coordinate */ len = (x*x + y*y). /* Reduce the end boundary index it is greater than the number of random numbers to generate*/ if (end > num_rand_all) end = num_rand_all.0f. x = ((float)(rand[i]>>16))/65535. } } /* Process the remaining elements */ if ((i + lid) < end) { float x.0f. end = begin + num_rand_per_compute_unit.0f. i < end-128. len. } } count_per_wi[lid] = count. /* Sum the counters from each work-item */ barrier(CLK_LOCAL_MEM_FENCE). /* Wait until all work-items are finished */ if (lid == 0) { int count = 0. 226 . i+=128) { float x. /* Process 128 elements at a time */ for (i=begin. // Compute the reference address corresponding to the local ID rand += lid. /* x coordinate */ y = ((float)(rand[i]&0xffff))/65535. y.B01-02632560-294 The OpenCL Programming Book /* Indices to be processed in this work-group */ begin = gid * num_rand_per_compute_unit. /* Distance from the origin */ if (len < 1) { /* sqrt(len) < 1 = len < 1 */ count++. /* Distance from the origin */ if (len < 1) { /* sqrt(len) < 1 = len < 1 */ count++. /* y coordinate */ len = (x*x + y*y). y. /* x coordinate */ y = ((float)(rand[i]&0xffff))/65535.0f. x = ((float)(rand[i]>>16))/65535. len.
and depending on the cost of synchronization. i++) { count += count_per_wi[i]. The code includes the case when the number of data to process is not a multiple of 128. /* Sum the counters */ } out[gid] = count. as well as the tempering process.B01-02632560-294 The OpenCL Programming Book for (i=0.05 ms. since the processing on each word does not require many instructions. and all work-items within a warp are executed synchronously. Figure 6. we will optimize the tempering algorithm. However. performance may potentially suffer from parallelization.10 below. some operations can be performed in parallel without the need to synchronize. Taking advantage of this property. 9 state update and 17 tempering can be performed in parallel. Next. For the parameters in our example program. NVIDIA GPUs performs its processes in "warps". A warp is made up of 32 work-items. this process has a parallel nature. Note that each work-group processes the input data such that all the work-items within the work-group access continuous address space. which is a 10-fold improvement over the previous code. } } The above code shows the changes made on the device side. The processing time using this code is 9. A new block diagram of the distribution of work-items and work-groups are shown in Figure 6. As stated earlier. This would enable these processes to be performed without increasing the synchronization cost. We had concluded previously that a use of SIMD units would be optimal for this process. to reduce the cost of synchronization. We will use one warp to perform the update of the state.10: Distribution of work-items for MT 227 . i < 128.
so making this change may not allow for proper operation on other devices. /* Number of work-items per group (warp = 32work items) */ 228 .B01-02632560-294 The OpenCL Programming Book Note that this optimization is done by using the property of NVIDIA GPUs. /* Total number of warps */ num_warp = warp_per_compute_unit * num_compute_unit . The changes to the program are shown below. /* Each compute unit process 4 warps */ warp_per_compute_unit = 4. /* Number of MT parameters per warp (rounded up) */ num_param_per_warp = (num_generator + (num_warp-1)) / num_warp.
sizeof(cl_uint)*17*num_work_item. 1. &num_rand_per_generator). sizeof(num_generator). 2. and this is sent in as an argument. &num_generator). /* Local Memory (mts) */ /* Set the number of work-groups and work-items per work-group */ global_item_size_mt[0] = num_work_item * num_compute_unit. /* Local Memory (state) */ clSetKernelArg(kernel_mt. int wlid) { int n = 17. local_item_size_mt[2] = 1. /* Set Kernel Arguments */ clSetKernelArg(kernel_mt. local_item_size_mt[1] = 1. The above shows the changes made on the host-side. /* Update state */ static void update_state(__local const mt_struct *mts. x. 0. /* Number of parameters to process per work-group */ clSetKernelArg(kernel_mt. __local uint *st. 3. lll = mts->lmask. sizeof(mt_struct)*num_work_item. &num_param_per_warp). sizeof(num_param_per_warp). /* Number of random numbers to generate for each MT parameter */ clSetKernelArg(kernel_mt. global_item_size_mt[1] = 1. uint aa = mts->aaa. local_item_size_mt[0] = num_work_item.B01-02632560-294 The OpenCL Programming Book num_work_item = 32 * warp_per_compute_unit. /* Number of random numbers to generate */ clSetKernelArg(kernel_mt. sizeof(num_rand_per_generator). 5. 229 . The number of random numbers to be generated by each work-group is first computed. 6. which is 4 warps. /* Number of random numbers per group */ num_rand_per_compute_unit = (num_rand_all + (num_compute_unit-1)) / num_compute_unit.lim. sizeof(cl_mem). (void*)&rand). NULL). /* Random numbers (output of genrand) */ clSetKernelArg(kernel_mt. m = 8. The number of work-items in each work-group is set to 128. 4. /* MT parameter (input to genrand) */ clSetKernelArg(kernel_mt. uint uuu = mts->umask. global_item_size_mt[2] = 1. sizeof(cl_mem). (void*)&dev_mts). NULL). int k.
No problem as long as the read operation * for each work-item within a work-group is finished before writing */ k = wlid + 9. * but since the operations within a warp is synchronized. wlid). const __local mt_struct *mts. x = (st[k]&uuu)|(st[k+1]&lll). } if (wlid < 7) { /* Same reasoning as above. if (i == n-1) m = num_rand%nn. j. nn = mts->nn. } } static inline void gen(__global uint *out. st[n-1] = st[m-1] ^ (x>>1) ^ (x&1U ? aa : 0U). the write to * st[k] by each work-item occurs after read from st[k+1] and st[k+m] */ k = wlid. if (wlid < m) { /* tempering performed within 1 warp */ 230 . st[k] = st[k+m-n] ^ (x>>1) ^ (x&1U ? aa : 0U). state. __local uint *state.B01-02632560-294 The OpenCL Programming Book if (wlid < 9) { /* Accessing indices k+1 and k+m would normally have dependency issues. int num_rand. for (i=0. x = (st[k]&uuu)|(st[k+1]&lll). } if (wlid == 0) { x = (st[n-1]&uuu)|(st[0]&lll). int wlid) { int i. update_state(mts. i++) { int m = nn. n. n = (num_rand+(nn-1)) / nn. i < n. st[k] = st[k+m] ^ (x>>1) ^ (x&1U ? aa : 0U).
int num_rand. } } } __kernel void genrand(__global uint *out. end. generator_id < end. int generator_id. uint x = state[j]. mts = mts + warp_id. /* Local ID within the warp */ __local uint *state = state_mem + warp_id*17. x ^= (x << mts->shiftB) & mts->maskB. __local mt_struct *mts){ int warp_per_compute_unit = 4. __local uint *state_mem. int workitem_per_warp = 32. /* Store state in local memory */ end = num_param_per_warp*warp_id + num_param_per_warp. int warp_id = wid * warp_per_compute_unit + lid / workitem_per_warp. generator_id ++) { 231 .__global mt_struct *mts_g. uint num_param_per_warp. int wlid = lid % workitem_per_warp. int num_generator. x ^= (x << mts->shiftC) & mts->maskC. x ^= x >> mts->shift1. int lid = get_local_id(0). /* Loop for each MT parameter within this work-group */ for (generator_id = num_param_per_warp*warp_id. if (end > num_generator) end = num_generator. x ^= x >> mts->shift0.B01-02632560-294 The OpenCL Programming Book int j = wlid. out[i*nn + j] = x. int wid = get_group_id(0).
shiftB. (__local uint*)state. mts->shiftC = mts_g[generator_id]. and parallel processing is performed within the warp.B01-02632560-294 The OpenCL Programming Book if (wlid == 0) { /* Copy MT parameters to local memory */ mts->aaa = mts_g[generator_id]. num_rand. wlid).lmask. reducing the need for synchronization.aaa. (__local uint*)state). mts->shift1 = mts_g[generator_id]. mts->wmask = mts_g[generator_id].nn.8 35. mts->shiftB = mts_g[generator_id]. mts->shift0 = mts_g[generator_id]. It is changed such that one DC parameter is processed on each warp. mts->maskB = mts_g[generator_id]. /* Initialize random numbers */ } gen(out + generator_id*num_rand. mts->nn = mts_g[generator_id]. Table 6. mts.7: Processing times after optimization # MT parameters 128 256 512 mt (ms) 57. mts->rr = mts_g[generator_id].shift1. mts->maskC = mts_g[generator_id].7 Note the processing times are reduced significantly.0 42.rr. mts->umask = mts_g[generator_id]. sgenrand_mt(0x33ff*generator_id.maskC. 232 . mts->lmask = mts_g[generator_id].maskB. /* Generate random numbers */ } } The above code shows the changes made on the device-side.ww.umask. The processing times after making these changes are shown in Table 6.wmask. mts->ww = mts_g[generator_id].mm. mts.7 below. mts->mm = mts_g[generator_id].shift0.shiftC.
lmask. char **argv) 019:{ 020: */ 021: 022: 023: 024: 025: 026: 027: 028: 029: 030: cl_int num_rand_per_compute_unit.19 (device). 011: cl_uint maskB. double pi.h> 005: 006:typedef struct mt_struct_s { 007: cl_uint aaa. cl_device_id device_id = NULL. List 6.h" 015: 016:#define MAX_SOURCE_SIZE (0x100000) 017: 018:int main(int argc. maskC.18 (host) and List 6. shiftC.h> 004:#include <math. cl_platform_id platform_id = NULL. cl_context context = NULL. cl_uint ret_num_devices. /* The number of random numbers to generate 233 . shiftB. int count_all. 009: cl_uint wmask.umask. 008: cl_int mm. shift1. num_rand_per_generator.h> 003:#include <stdio. cl_int num_rand_all = 4096*256*32. 013: 014:#include "mts.ww. cl_uint ret_num_platforms. 010: cl_int shift0.nn.B01-02632560-294 The OpenCL Programming Book The final code for the program with all the changes discussed is shown in List 6. cl_uint num_generator.h> 002:#include <CL/cl.18: Host-side optimized for NVIDIA GPU 001:#include <stdlib. unsigned int num_work_item. i.rr. there is a 90-fold improvement since the original program [20]. Running this code on Tesla. 012:} mt_struct .
prof_pi_end. cl_kernel kernel_mt = NULL. count. cl_uint num_compute_unit. cl_ulong prof_start. char *kernel_src_str. if (n == 128) { mts = mts128. num_rand_per_generator = num_rand_all / 256. mts_size = sizeof(mts128). size_t global_item_size_mt[3]. size_t global_item_size_pi[3]. ev_copy_end. num_generator = 256. } } num_param_per_warp. cl_program program = NULL. cl_event ev_mt_end. mt_struct *mts = NULL. int mts_size. num_warp. mts_size = sizeof(mts256). prof_mt_end: cl_command_queue command_queue = NULL. kernel_pi = NULL. warp_per_compute_unit. } if (n == 256) { mts = mts256. num_generator = 128. cl_mem rand. num_rand_per_generator = num_rand_all / 128. 234 . size_t kernel_code_size. ev_pi_end. cl_int ret. local_item_size_pi[3]. local_item_size_mt[3]. prof_copy_end. if (argc >= 2) { int n = atoi(argv[1]). cl_uint *result. FILE *fp. cl_mem dev_mts.
CL_MEM_READ_WRITE. "r"). mts_size = sizeof(mts512). 1. clGetDeviceIDs(platform_id. fclose(fp). NULL. /* Build Program*/ program = clCreateProgramWithSource(context. &ret). NULL. &platform_id. kernel_pi = clCreateKernel(program. kernel_src_str = (char*)malloc(MAX_SOURCE_SIZE). num_generator = 512. sizeof(cl_uint)*num_generator. "genrand". 1. &device_id. count = clCreateBuffer(context. clBuildProgram(program. NULL. &ret_num_platforms). CL_QUEUE_PROFILING_ENABLE. NULL. fp). &device_id. &ret). &ret_num_devices). context = clCreateContext( NULL. 1. &ret). result = (cl_uint*)malloc(sizeof(cl_uint)*num_generator).cl". fp = fopen("mt. 1.B01-02632560-294 The OpenCL Programming Book: kernel_mt = clCreateKernel(program. kernel_code_size = fread(kernel_src_str. &ret). num_rand_per_generator = num_rand_all / 512. NULL). &ret). /* Create output buffer */ rand = clCreateBuffer(context. sizeof(cl_uint)*num_rand_all. "calc_pi". CL_DEVICE_TYPE_DEFAULT. &ret). 1. "". CL_MEM_READ_WRITE. MAX_SOURCE_SIZE. command_queue = clCreateCommandQueue(context. &ret). device_id. 235 . clGetPlatformIDs(1. (const char **)&kernel_src_str. NULL. &device_id. (const size_t *)&kernel_code_size. } if (mts == NULL) { mts = mts512.
(void*)&dev_mts). NULL). /* MT parameter clSetKernelArg(kernel_mt. sizeof(cl_mem). (output of genrand) */ (input to genrand) */ &num_rand_per_generator). NULL). (void*)&rand). 4. /* Local clSetKernelArg(kernel_mt. /* Number of MT parameters per warp (rounded up) */ num_param_per_warp = (num_generator + (num_warp-1)) / num_warp. 6. sizeof(mt_struct)*num_work_item. /* Local random numbers to generate */ /* Number of parameters to process per work-group */ Memory (state) */ Memory (mts) */ 236 . &num_generator). NULL. clGetDeviceInfo(device_id. mts_size. mts. sizeof(num_param_per_warp). mts_size. &num_param_per_warp). sizeof(cl_mem). /* Total number of warps */ num_warp = warp_per_compute_unit * num_compute_unit . 3. sizeof(num_rand_per_generator). /* Number of random numbers per group */ num_rand_per_compute_unit = (num_rand_all + (num_compute_unit-1)) / num_compute_unit. 1. /* Number of work-items per group (warp = 32work items) */ num_work_item = 32 * warp_per_compute_unit.B01-02632560-294 The OpenCL Programming Book 099: 100: 101: 102: 103: 104: 105: 106: 107: 108: 109: 110: 111: 112: 113: 114: 115: 116: 117: 118: 119: /* Create input parameter */ dev_mts = clCreateBuffer(context. CL_MEM_READ_WRITE. clSetKernelArg(kernel_mt. NULL. 0. NULL). sizeof(cl_uint). 0. sizeof(cl_uint)*17*num_work_item. CL_DEVICE_MAX_COMPUTE_UNITS. /* Number of clSetKernelArg(kernel_mt. /* Set Kernel Arguments */ clSetKernelArg(kernel_mt. 0. dev_mts. &num_compute_unit. /* Number of random numbers to generate for each MT parameter */ 120: 121: 122: 123: clSetKernelArg(kernel_mt. sizeof(num_generator). &ret). NULL). /* Random numbers clSetKernelArg(kernel_mt. 5. /* Each compute unit process 4 warps */ warp_per_compute_unit = 4. CL_TRUE. clEnqueueWriteBuffer(command_queue. 2.
local_item_size_pi. &ev_mt_end). 1. /* Number of random numbers to process per work-group */ Number of MT parameters */ random numbers used */ global_item_size_mt[1] = 1. &ev_copy_end). sizeof(cl_uint)*num_generator. NULL. 0. sizeof(cl_mem). 4. (void*)&count). &num_rand_all). 0. sizeof(num_rand_per_compute_unit). local_item_size_mt. local_item_size_mt[2] = 1. NULL. 0. /* Number of clSetKernelArg(kernel_pi. (void*)&rand). sizeof(cl_uint)*128. &ev_pi_end). sizeof(num_compute_unit). clSetKernelArg(kernel_pi. local_item_size_pi[2] = 1. local_item_size_mt[0] = num_work_item. 5. /* clSetKernelArg(kernel_pi. sizeof(cl_mem). NULL. NULL). global_item_size_pi[1] = 1. kernel_mt. 3. kernel_pi. 237 .B01-02632560-294 The OpenCL Programming Book 124: 125: 126: 127: 128: 129: 130: 131: 132: 133: 134: 135: 136: 137: 138: 139: 140: 141: 142: 143: 144: 145: 146: 147: 148: /* Average the values of PI */ /* Get result */ clEnqueueReadBuffer(command_queue. sizeof(num_rand_all). global_item_size_mt. 0. count. result. global_item_size_mt[2] = 1. global_item_size_pi[2] = 1. /* Memory used for counter */ within circle (output of calc_pi) */ (input to calc_pi) */ &num_rand_per_compute_unit). &num_compute_unit). 2. clSetKernelArg(kernel_pi. /* Random numbers clSetKernelArg(kernel_pi. 1. /* Create a random number array */ clEnqueueNDRangeKernel(command_queue. local_item_size_pi[0] = 128. 1. /* Counter for points clSetKernelArg(kernel_pi. local_item_size_mt[1] = 1. CL_TRUE. global_item_size_pi. local_item_size_pi[1] = 1. NULL. NULL. 0. /* Set the number of work-groups and work-items per work-group */ global_item_size_mt[0] = num_work_item * num_compute_unit. global_item_size_pi[0] = num_compute_unit * 128. /* Compute PI */ clEnqueueNDRangeKernel(command_queue.
0).0).0)). i<(int)num_compute_unit. sizeof(cl_ulong). &prof_pi_end. (prof_pi_end . 238 . clReleaseEvent(ev_pi_end).prof_mt_end)/(1000000. clReleaseProgram(program). clReleaseCommandQueue(command_queue). printf("pi = %. for (i=0. pi). NULL). } pi = ((double)count_all)/(num_rand_all) * 4. sizeof(cl_ulong). clReleaseEvent(ev_mt_end). clReleaseEvent(ev_copy_end). &prof_mt_end. NULL).prof_pi_end)/(1000000. NULL). clReleaseContext(context). clGetEventProfilingInfo(ev_pi_end. i++) { count_all += result[i]. clGetEventProfilingInfo(ev_copy_end. &prof_copy_end. sizeof(cl_ulong). printf(" mt: %f[ms]¥n" " pi: %f[ms]¥n" " copy: %f[ms]¥n".20f¥n". clReleaseKernel(kernel_pi). CL_PROFILING_COMMAND_QUEUED. (prof_mt_end . sizeof(cl_ulong). CL_PROFILING_COMMAND_END.prof_start)/(1000000. NULL). clReleaseMemObject(rand).B01-02632560-294 The OpenCL Programming Book 149: 150: 151: 152: 153: 154: 155: 156: 157: 158: 159: 160: 161: 162: 163: 164: 165: 166: 167: 168: 169: 170: 171: 172: 173: 174: 175: 176: 177: 178: 179: 180: count_all = 0. clGetEventProfilingInfo(ev_mt_end. /* Get execution time info */ clGetEventProfilingInfo(ev_mt_end. CL_PROFILING_COMMAND_END. CL_PROFILING_COMMAND_END. (prof_copy_end . &prof_start. clReleaseMemObject(count). clReleaseKernel(kernel_mt).
int shift0. __local uint *st. __local uint *state) { 011: 012: 013: 014: 015: 016: 017: 018:} 019: 020:/* Update state */ 021:static void update_state(__local const mt_struct *mts. shiftB. uint maskB.nn. the write to * st[k] by each work-item occurs after read from st[k+1] and st[k+m] */ int n = 17. __local const mt_struct *mts.ww. i++) state[i] &= mts->wmask. uint uuu = mts->umask. 239 . return 0. i < mts->nn. int mm. lll = mts->lmask. uint aaa. shift1. uint aa = mts->aaa.lmask. shiftC.19: Device-side optimized for NVIDIA GPU 001:typedef struct mt_struct_s { 002: 003: 004: 005: 006: 008: 009:/* Initialize state using a seed */ 010:static void sgenrand_mt(uint seed. int wlid) { 022: 023: 024: 025: 026: 027: 028: 029: 030: if (wlid < 9) { /* Accessing indices k+1 and k+m would normally have dependency issues. uint wmask. x.umask.rr. free(result). int k. 007:} mt_struct . } for (i=0.B01-02632560-294 The OpenCL Programming Book 181: 182: 183: 184:} free(kernel_src_str). seed = (1812433253 * (seed ^ (seed >> 30))) + i + 1. int i. maskC. for (i=0. * but since the operations within a warp is synchronized.lim. i < mts->nn. List 6. i++) { state[i] = seed. m = 8.
x ^= (x << mts->shiftC) & mts->maskC. int num_rand. if (i == n-1) m = num_rand%nn. n. j. wlid). No problem as long as the read operation * for each work-item within a work-group is finished before writing */ k = wlid. i < n. int i. st[k] = st[k+m] ^ (x>>1) ^ (x&1U ? aa : 0U). x = (st[k]&uuu)|(st[k+1]&lll):static inline void gen(__global uint *out. n = (num_rand+(nn-1)) / nn. i++) { int m = nn. int wlid) { 051: 052: 053: 054: 055: 056: 057: 058: 059: 060: 061: 062: 063: 064: 065: if (wlid < m) { /* tempering performed within 1 warp */ int j = wlid. } if (wlid < 7) { /* Same reasoning as above. x ^= x >> mts->shift0. update_state(mts. 240 . x = (st[k]&uuu)|(st[k+1]&lll). st[n-1] = st[m-1] ^ (x>>1) ^ (x&1U ? aa : 0U). nn = mts->nn. x ^= (x << mts->shiftB) & mts->maskB. __local uint *state. k = wlid + 9. state. for (i=0. uint x = state[j]. st[k] = st[k+m-n] ^ (x>>1) ^ (x&1U ? aa : 0U). } } if (wlid == 0) { x = (st[n-1]&uuu)|(st[0]&lll). const __local mt_struct *mts.
generator_id ++) mts = mts + warp_id. /* Store state in local memory */ uint num_param_per_warp. generator_id < end. out[i*nn + j] = x.aaa.int num_rand. int wid = get_group_id(0). int num_generator. /* Local ID within the warp */ 241 . mts->mm = mts_g[generator: { if (wlid == 0) { /* Copy MT parameters to local memory */ mts->aaa = mts_g[generator_id]. int workitem_per_warp = 32. int lid = get_local_id(0). 072:__kernel void genrand(__global uint *out. __local mt_struct *mts){ int warp_per_compute_unit = 4. __local uint *state = state_mem + warp_id*17. __local uint *state_mem.B01-02632560-294 The OpenCL Programming Book 066: 067: 068: 069: 070:} 071: } } x ^= x >> mts->shift1. int warp_id = wid * warp_per_compute_unit + lid / workitem_per_warp. end. if (end > num_generator) end = num_generator. /* Loop for each MT parameter within this work-group */ for (generator_id = num_param_per_warp*warp_id.mm. int generator_id. end = num_param_per_warp*warp_id + num_param_per_warp.__global mt_struct *mts_g. int wlid = lid % workitem_per_warp.
__global uint *rand. end = begin + num_rand_per_compute_unit. int count = 0. mts->ww = mts_g[generator_id]. int num_rand_per_compute_unit. mts. 121: 122: 123: 124: 125: 126: 127: 128: 129: 130: 131: 132: /* Reduce the end boundary index it is greater than the number of random numbers to generate*/ /* Indices to be processed in this work-group */ begin = gid * num_rand_per_compute_unit. 242 . mts->shift0 = mts_g[generator_id]. mts->maskC = mts_g[generator_id]. /* Initialize random numbers */ gen(out + generator_id*num_rand.shift0. wlid).lmask.shift1. mts->shift1 = mts_g[generator_id].maskB. mts->shiftC = mts_g[generator_id]. mts->umask = mts_g[generator_id]. int lid = get_local_id(0). mts->lmask = mts_g[generator_id]. (__local uint*)state.shiftB. int num_compute_unit.ww.shiftC. mts. sgenrand_mt(0x33ff*generator_id.rr. (__local uint*)state). mts->wmask = mts_g[generator_id].B01-02632560-294 The OpenCL Programming Book 101: 102: 103: 104: 105: 106: 107: 108: 109: 110: 111: 112: 113: 114: 115: 116: 117:} 118: } } mts->nn = mts_g[generator_id].maskC. num_rand.wmask.nn. mts->maskB = mts_g[generator_id]. /* Generate random numbers */ 119:/* Count the number of points within the circle */ 120:__kernel void calc_pi(__global uint *out. begin.umask. end. mts->shiftB = mts_g[generator_id]. int i. int num_rand_all. __local uint *count_per_wi) { int gid = get_group_id(0). mts->rr = mts_g[generator_id].
/* Process 128 elements at a time */ for (i=begin. y. /* Distance from the origin */ if (len < 1) { /* sqrt(len) < 1 = len < 1 */ count++. y. /* x coordinate */ y = ((float)(rand[i]&0xffff))/65535.0f. } } /* Process the remaining elements */ if ((i + lid) < end) { float x. // Compute the reference address corresponding to the local ID rand += lid. i+=128) { float x. x = ((float)(rand[i]>>16))/65535. i<end-128.0f. /* y coordinate */ len = (x*x + y*y). /* Wait until all work-items are finished */ 243 . /* Distance from the origin*/ if (len < 1) { /* sqrt(len) < 1 = len < 1 */ count++. /* y coordinate */ len = (x*x + y*y). x = ((float)(rand[i]>>16))/65535.B01-02632560-294 The OpenCL Programming Book 133: 134: 135: 136: 137: 138: 139: 140: 141: 142: 143: 144: 145: 146: 147: 148: 149: 150: 151: 152: 153: 154: 155: 156: 157: 158: 159: 160: 161: 162: 163: 164: 165: 166: 167: 168: if (end > num_rand_all) end = num_rand_all. len.0f. } } count_per_wi[lid] = count. len.0f. /* Sum the counters from each work-item */ barrier(CLK_LOCAL_MEM_FENCE). /* x coordinate */ y = ((float)(rand[i]&0xffff))/65535.
for (i=0. i++) { count += count_per_wi[i]. } 244 . i < 128. /* Sum the counters */ } out[gid] = count.B01-02632560-294 The OpenCL Programming Book 169: 170: 171: 172: 173: 174: 175: 176:} if (lid == 0) { int count = 0.
. 948. 245 .. Reston.nvidia. it is common for the device to not run an OS. In AFIPS Conference Proceedings vol. 10:. 13: At present. but this will require calls to multiple CUDA-specific library functions. 483-485.ibm. (2) Because of the limited memory.html 9: The kernel can be executed without using the <<<>>> constructor. 12: Some reasons for this requirement are (1) Each processor on the device is often only capable of running small programs. pp.M.top500.org/wiki/Parallel_computing 2: Flynn. IEEE Trans.com/object/cuda_home.com/en-us/articles/optimizing-software-applications-for-numa/ 6: Amdahl. The OpenCL implementation should be chosen wisely depending on the platform.. Validity of the single-processor approach to achieving large scale computing capabilities. This is actually beneficial. Some Computer Organizations and Their Effectiveness.com/design/pentium/datashts/242016. due to limited accessible memory. Va. 1972. 8:. Comput.wikipedia. This should be kept in mind when programming a platform with multiple devices from different chip vendors. 1967.com/developerworks/power/cell/index.. Apr. 18-20). Chapter 2 7: NVIDIA uses the term "GPGPU computing" when referring to performing general-purpose computations through the graphics API. AFIPS Press.intel. pp.. The term "GPU computing" is used to refer to performing computation using CUDA or OpenCL. and an OpenCL runtime library by another company. N. M. 30 (Atlantic City.html 11: The devices that can be used depend on the OpenCL implementation by that company.. 3: TOP500 Supercomputer Sites.org 4: Intel MultiProcessor Specification. This text will use the term "GPGPU" to refer to any general-purpose computation performed on the GPU.B01-02632560-294 The OpenCL Programming Book Notes Chapter 1 1: Wikipedia. C-21. as it allows the processors to concentrate on just the computations. Vol. 5:. G. it is not possible to use an OpenCL compiler by one company.
this only allows the kernel to read from the specified address space. Chapter 6 18: 19:. each kernel would have to be compiled independently.3. the host is only allowed access to either the global memory or the constant memory on the device. but it is most probable that this would be the case. If the "CL_MEM_READ_ONLY" is specified.com/compute/cuda/2_3/toolkit/docs/NVIDIA_CUDA_Progra mming_Guide_2. Chapter 5 16: The OpenCL specification does not state that the local memory will correspond to the scratch-pad memory. Victor Podlozhnyuk.nvidia. each containing corresponding number of DC parameters. 17: At the time of this writing (Dec 2009). as one only need to know that three variables are declared with the names "mts128". "mts512". but the image objects are not supported. 15: Otherwise. the Mac OS X OpenCL returns CL_TRUE.pdf 20: The "mts. Also. and does not imply that the buffer would be created in the constant memory. Also.download.B01-02632560-294 The OpenCL Programming Book Chapter 4 14: The flag passed in the 2nd argument only specifies how the kernel-side can access the memory space. "mts256". for those experienced in CUDA.h" is not explained here. as it corresponds to the shared memory. Parallel Mersenne Twister 246 . note that the OpenCL local memory does not correspond with the CUDA local memory. | https://www.scribd.com/document/51894199/OpenCL-Programming | CC-MAIN-2017-39 | refinedweb | 52,374 | 60.92 |
This document contains rules useful when you're porting selected KDE library to win32. Most of these rules are also valid for porting external libraries code, like application's libraries and even application's standalone code.
!!1. Before you start
!!2. Absolute dirs checking Look for '/' and "/" and change every single code like:
~pp~
if (path[0]=='/')
~/pp~ or:
~pp~
if (path.startsWith('/'))
~/pp~ with:
~pp~
if (!QDir::isRelativePath(path))
~/pp~ (or "QDir::isRelativePath(path)" if there was used path[[0]!='/')
!!3. Ifdefs
3.1. __C++ code.__
Macros for C++ code are defined in qglobal.h file. If you've got included at least one Qt header, you probably have qglobal.h included already, otherwise, include it explicity.
Use ~pp~
#ifdef Q_WS_X11 .... #endif
~/pp~ for any C++ code that looks like X11-only.
Use ~pp~
#ifdef Q_OS_UNIX .... #endif
~/pp~ for any C++ code that looks like UNIX-only, for example uses UNIX-specific OS features.
Use ~pp~
#ifdef Q_WS_WIN .... #endif
~/pp~ for any C++ code that is MSWindows-only.
3.2. __C code.__ Note that qglobal.h is C++-only, so instead use ~pp~
#ifdef _WINDOWS .... #endif
~/pp~
for any C code that is MSWindows-only (regardless to compiler type). In fact, you could use built-in _WIN32 but it's not defined on incoming 64bit MS Windows platform (_WIN64 is used there). So, there's a global rule for kdelibs/win32 defined globally in your build system (you don't need to include any file for this).
3.3. Rare cases: How to check in Windows-only code which compiler is used?
__MS Visual C++ - Qt-independent code (especially, C code)__ ~pp~
#ifdef _MSC_VER ....//msvc code #endif
~/pp~
__MS Visual C++ - Qt code__ ~pp~
#ifdef Q_CC_MSVC ....//msvc code #endif
~/pp~
__Borland C++ - Qt-independent code (especially, C code)__ ~pp~
#ifdef __BORLANDC__ ....//borland code #endif
~/pp~
__Borland C++ - Qt code__ ~pp~
#ifdef Q_CC_BOR ....//borland code #endif
~/pp~
3.4. General notes. In many places using #ifdef Q_OS_UNIX / #else / #endif is more readable than separate #ifdefs.
3.5. Related links
!!4. Header files 4.1. __Common header file__. Unless there is are any header file from kdelibs included in your header file, you need to add: ~pp~
#include <kdelibs_export.h>
~/pp~ at the beginning of your header file to have some necessary system-independent macros defined.
4.2. __Export macros__.:
~pp~ class KDEFOO_EXPORT FooClass { ... }; ~/pp~
_:
~pp~
~/pp~: ~pp~
if (WIN32) # for shared libraries/plugins a -DMAKE_target_LIB is required string(TOUPPER ${_target_NAME} _symbol) set(_symbol "MAKE_${_symbol}_LIB") set_target_properties(${_target_NAME} PROPERTIES DEFINE_SYMBOL ${_symbol}) endif (WIN32)
~/pp~
4.3. __Exporting global functions__. Also add the same ***_EXPORT at the beginning of public functions' declaration and definition (just before function's type). This also includes functions defined within a namespace.
Example: ~pp~ namespace Foo {
KDEFOO_EXPORT int publicFunction();
} ~/pp~
4.4. __What not to export?__
~pp~ template <class T> class KGenericFactoryBase ~/pp~
4.5. __Visibility__ There are classes or functions that are made "internal", by design. If you really decided anybody could neven need to link against these classes/functions, you don't need to add **_EXPORT macro for them.
4.6. __Deprecated classes__ Before porting KDElibs to win32, I realized that deprecated classes already use KDE_DEPRECATED macro. We're unable to add another macro like this:
~pp~ class KDEFOO_EXPORT KDE_DEPRECATED FooClass { //< - bad for moc! ... }; ~/pp~
..because moc'ing will fail for sure. We've defined special macros like that in kdelibs_export.h file (fell free to add your own if needed):
~pp~
~/pp~
So, we have following example of deprecated class:
~pp~ class KABC_EXPORT_DEPRECATED FooClass { //<- ok for moc ... }; ~/pp~
.. which is ok for __moc__. Note that sometimes KDE_DEPRECATED is also used at the end of functions. You don't need to change it for win32 in any way.
!!5. Loadable KDE modules/plugins
5.1. __K_EXPORT_COMPONENT_FACTORY macro__
Use K_EXPORT_COMPONENT_FACTORY( libname, factory ), defined in klibloader.h, instead of hardcoding: ~pp~
extern "C" {void *init_libname() { return new factory; } };
~/pp~ ...because the former way is more portable (contains proper export macro, which ensures visiblility of "init_libname" symbol).
Examples: ~pp~ K_EXPORT_COMPONENT_FACTORY( ktexteditor_insertfile,
GenericFactory<InsertFilePlugin>( "ktexteditor_insertfile" ) )
K_EXPORT_COMPONENT_FACTORY( libkatepart, KateFactoryPublic ) ~/pp~
5.2. __More complex case__
Sometimes you need to declare a factory which defined as a template with multiple arguments, eg.:
~pp~ extern "C" {
void* init_resourcecalendarexchange() { return new KRES::PluginFactory<ResourceExchange,ResourceExchangeConfig>(); }
} ~/pp~
... but compiler complains about too many arguments passed to K_EXPORT_COMPONENT_FACTORY. To avoid this, you can use __typedef__:
~pp~ typedef KRES::PluginFactory<ResourceExchange,ResourceExchangeConfig> MyFactory; K_EXPORT_COMPONENT_FACTORY(resourcecalendarexchange, MyFactory) ~/pp~
The same trick can be used if the constructor of the factory takes multiple arguments. | https://techbase.kde.org/index.php?title=Projects/KDE_on_Windows/Porting_Guidelines&oldid=19576 | CC-MAIN-2015-11 | refinedweb | 763 | 50.94 |
#include <assert.h>
Go to the source code of this file.
Macro to do binary search for first occurence.
Invariant: x[l] < key and x[u] >= key and l < u
Macro to do binary search for last occurence.
Invariant: x[l] <= key and x[u] > key and l < u
Declare all functions for an array.
Define a function to clone an array.
Free the contents of an array.
Define a get function for an array.
Initialize an array.
Define a set function for an array.
Define all functions for an array.
Macro to compute diff of two arrays, which as a side effect will be sorted after the operation has completed.
Resize an array. | https://dev.mysql.com/doc/dev/mysql-server/latest/xdr__utils_8h.html | CC-MAIN-2020-45 | refinedweb | 114 | 79.16 |
Writing a parser in Haskell
Previously I wrote an interpreter for an imperative programming language, “JimScript”. JimScript programs looked like this:
writeTheAlphabet :: E writeTheAlphabet = ESeq (ESet "x" (EInt 65)) (EWhile (EBinOp Lte (EGet "x") (EInt 90)) (ESeq (EWriteByte (EGet "x")) (ESet "x" (EBinOp Add (EGet "x") (EInt 1)))))
JimScript programs were rather unreadable,
because they were written as Haskell expressions.
But now JimScript has its own syntax,
and the above program can be written like this file,
write_the_alphabet.jimscript:
#/usr/bin/env jimscript (set x 'A') (while (<= x 'Z') (write x) (set x (+ x 1))) # increment
The JimScript interpreter reads files like the above
and transforms them into Haskell values before execution.
After reading
write_the_alphabet.jimscript,
we get the Haskell string
"#/usr/bin/env [...] (+ x 1))))".
To transform this
String into an
E (expression value),
there are three stages: tokenization, nesting, then parsing.
main :: IO () main = do (f:_) <- Environment.getArgs script <- readFile f let e = parse . nest . tokenize $ script eval Map.empty e return ()
The token type
T includes literals, symbols, and parentheses.
data T = TOpen | TClose | TSymbol String | TInt Int
After tokenization, the string is transformed into a flat list of tokens.
Our
write_the_alphabet.jimscript program is tokenized to
[TOpen, TSymbol "set", TSymbol "x", ..., TClose, TClose, TClose].
Notice there is no
TChar,
so the expression
'A' in the source syntax is sugar for the integer
65.
The next stage, nesting, uses the
TOpen and
TClose tokens
to make a tree structure which I call a nest, type
N:
data N = NList [N] | NSymbol String | NInt Int
(The source syntax might look like S-expressions, but they’re not quite.
S-expressions have an additional form
(a . b) which makes them binary trees;
my “nest” type is not a binary tree but a rose tree.)
After nesting, our
write_the_alphabet.jimscript looks like:
NList [NSymbol "seq", NList [NSymbol "set", NSymbol "x", NInt 65], NList [NSymbol "while", NList [NSymbol "<=", NSymbol "x", NInt 90], NList [NSymbol "write", NSymbol "x"], NList [NSymbol "set",NSymbol "x", NList [NSymbol "+", NSymbol "x", NInt 1]]]]
Finally, the nested lists are parsed to produce the abstract syntax proper;
the
writeTheAlphabet value which you first saw.
Now let’s look at the implementation of tokenization, nesting, and parsing.
Tokenization looks at the first character of the string and uses this to determine which kind of token is first; with this decision, it greedily takes the largest possible token of that type. Tokenization also strips out whitespace and comments.
tokenize :: String -> [T] tokenize [] = [] tokenize ('#':cs) = tokenize $ dropWhile (/= '\n') cs tokenize ('(':cs) = TOpen : tokenize cs tokenize (')':cs) = TClose : tokenize cs tokenize ('\'':'\\':'\\':'\'':cs) = TInt (Char.ord '\\') : tokenize cs tokenize ('\'':'\\':'\'':'\'':cs) = TInt (Char.ord '\'') : tokenize cs tokenize ('\'':'\\':'n':'\'':cs) = TInt (Char.ord '\n') : tokenize cs tokenize ('\'':c:'\'':cs) = TInt (Char.ord c) : tokenize cs tokenize (c : cs) | Char.isNumber c = TInt (read $ c : takeWhile Char.isNumber cs) : tokenize (dropWhile Char.isNumber cs) | isSymbolChar c = TSymbol (c : takeWhile isSymbolChar cs) : tokenize (dropWhile isSymbolChar cs) | Char.isSpace c = tokenize cs | otherwise = error $ "unexpected character: " ++ [c] isSymbolChar c = Char.isAlphaNum c || elem c "=+<-/%"
Nesting is a slightly subtle process,
which uses two mutually recursive functions,
nestOne and
nestMany.
nestOne finds the first nest from the list of tokens,
and also hands back any remaining tokens.
nestMany takes as many nests as possible.
nestOne :: [T] -> ([N], [T]) nestOne [] = ([], []) nestOne (TOpen : ts) = let (ns, ts') = nestMany [] ts in ([NList ns], ts') nestOne (TSymbol s : ts) = ([NSymbol s], ts) nestOne (TInt i : ts) = ([NInt i], ts) nestOne (TClose : ts) = ([], ts) nestMany :: [N] -> [T] -> ([N], [T]) nestMany prev ts = case nestOne ts of ([], ts') -> (prev , ts') (ns, ts') -> nestMany (prev++ns) ts' nest :: [T] -> N nest ts = case nestMany [] ts of (ns, []) -> NList $ NSymbol "seq" : ns _ -> error "unexpected content"
Finally, parsing takes a nest and converts it to the more restrictive expression datatype
E.
Each expression form has one way to be represented as a nest.
parse matches on the nest to find the appropriate expression.
parse :: N -> E parse (NInt i) = EInt i parse (NList [NSymbol "-", NInt i]) = EInt $ negate i parse (NList [NSymbol "+", a, b]) = EBinOp Add (parse a) (parse b) parse (NList [NSymbol "-", a, b]) = EBinOp Sub (parse a) (parse b) parse (NList [NSymbol "/", a, b]) = EBinOp Div (parse a) (parse b) parse (NList [NSymbol "%", a, b]) = EBinOp Mod (parse a) (parse b) parse (NList [NSymbol "=", a, b]) = EBinOp Eq (parse a) (parse b) parse (NList [NSymbol "!=", a, b]) = EBinOp Neq (parse a) (parse b) parse (NList [NSymbol "<", a, b]) = EBinOp Lt (parse a) (parse b) parse (NList [NSymbol "<=", a, b]) = EBinOp Lte (parse a) (parse b) parse (NList [NSymbol "and", a, b]) = EBinOp And (parse a) (parse b) parse (NList [NSymbol "not", a]) = ENot (parse a) parse (NList [NSymbol "get", NSymbol a]) = EGet a parse (NList [NSymbol "set", NSymbol a, b]) = ESet a (parse b) parse (NList [NSymbol "if", a, b, c]) = EIf (parse a) (parse b) (parse c) parse (NList (NSymbol "seq" : xs)) = foldr1 ESeq $ map parse xs parse (NList (NSymbol "while" : a : bs)) = EWhile (parse a) (foldr1 ESeq $ map parse bs) parse (NList [NSymbol "do-while", a, b]) = EDoWhile (parse a) (parse b) parse (NList [NSymbol "skip"]) = ESkip parse (NList [NSymbol "write", a]) = EWriteByte (parse a) parse (NList [NSymbol "read"]) = EReadByte parse (NSymbol a) = EGet a parse r = error $ "did not match: " ++ show r
The JimScript source is on GitHub.. | https://jameshfisher.com/2018/03/09/writing-a-parser-in-haskell | CC-MAIN-2019-09 | refinedweb | 887 | 66.57 |
Search the Community
Showing results for tags 'tweenmax'..
Why GSAP for HTML5 Animation?. Feature frustrating "gotchas". You need things to just work. new standard for HTML5 animation replay !
Rotation in latest GSAP with Bezier Plugin
djanes376 posted a topic in GSAPI have an animation that uses the bezier plugin and the autoRotate function within it. After updating to the latest version of GSAP the rotation is no longer occurring, causing the animated element to look unnatural. I haven't run any significant tests as I am pressed for time with other projects but I was just wondering if there is a quick fix on my end. I know there were some updates to how rotation is handled in the latest update and if there is anything I need to do to update my code any assistance would be appreciated. Here is he snippet that makes the path and rotation: {bezier:{type:"soft", curviness:1.25, values:[{x:0, y:-200}, {x:300, y:-400}, {x:800, y:-200}, {x:1010, y:-300}], autoRotate:["x","y","rotation",1.5,true]} I can't provide the link to the source since it's a closed production environment but if it helps I can create a test page somewhere when I get some time. Thanks.
-.y ease:Quart.easeInOut, blurFilter:{blurX:0, blurY:20, remove:true}});!
- In our support forums.
Random Animate Letters in TweenMax Box
freedom1k posted a topic in GSAP (Flash)Hello, this is the best site since sliced-bread! I'm new to the site and was drawn to it from a youtube video that had GreenStock as an import. Anyway, I this tutorial and modified the code to place random Letters in the boxes with dynamic text. (the link above also has source file.) the code: import com.greensock.*; /** Alphabet collection */ var alphabet:Vector.<String>; /** Reset alphabet */ function reset():void { alphabet = new <String>[ "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z" ]; } /** Get random letter from alphabet */ function getRandomLetter():String { return (alphabet.splice(int(Math.random() * alphabet.length), 1)[0]); } /** Shuffle alphabet collection */ function shuffleAlphabet():Vector.<String> { var alphabetShuffled:Vector.<String> = new Vector.<String>(); while (alphabet.length > 0) { alphabetShuffled.push(getRandomLetter()); } return alphabetShuffled; } // get a random letter: reset(); var randomLetter:String = getRandomLetter(); trace("Can you find the letter: " + randomLetter + "?"); // display entire alpha shuffled: reset(); trace(shuffleAlphabet()); var numBoxes:Number = 160; var tl:TimelineMax = new TimelineMax({paused:true, onReverseComplete:playAgain}); for(var i:uint = 0; i < numBoxes; i++){ var boxTxt:BoxTxt = new BoxTxt(); boxTxt.x = 1200/numBoxes*i boxTxt.y = 800; boxTxt.alpha = 0; var alphabetShuffled:Vector.<String> = new Vector.<String>(); while (alphabet.length > 0) { alphabetShuffled.push(getRandomLetter()); var randomLetter2:String = getRandomLetter(); boxTxt.randomLetter_txt.text = "" + randomLetter2; } TweenMax.to(boxTxt.solid_mc, 0, {tint:Math.random()*0xffffff}) tl.insert(TweenMax.to(boxTxt, 1, {x:Math.random()*1200, y:Math.random()*800, rotationY:360, onComplete:deBlur, onCompleteParams:[boxTxt]}), i*.01); tl.insert(TweenMax.to(boxTxt, .1, {alpha:0.5, dropShadowFilter:{color:0x000000, alpha:1, 
blurX:5, blurY:5, strength:0.8, distance:2}}), i*.01); addChild(boxTxt); } I was able to the random text to appear in each box, but only random at run-time. each character was the same. Is there a way to get each box to display a different letter in each box in the same way the color is randomized using TweenMax.to. ? You help is greatly appreciated. hmm, this site would not allow me to upload the .fla file, so basically all you need is a MC in the library named "boxTxt" and within that dynamicText named "randomLetter_txt" over a graphics named "solid_mc".
_gsTransform is undefined
Zki posted a topic in GSAPHi, I am new to GreenSock and trying to use Draggable plug in of GreenSock. I have added the JS libraries in following order <script src="../../../js/jquery-1.10.1.min.js"></script> <script src="../../../js/jquery-ui.min.js"></script> <script src="../../../js/greensock/plugins/ThrowPropsPlugin.min.js"></script> <script src="../../../js/greensock/TweenMax.min.js"></script> <script src="../../../js/greensock/plugins/Draggable.min.js"></script> My HTML code structure is as follows: <div id="imageContainer"> <div class="galleryImg"></div> </div> I completely flush #imageContainer and add few elements, with class name galleryImg, to it dynamically. To make the imageContainer draggable I added following code in my JS Draggable.create("#imageContainer"); However, when I try to drag the element, an error is thrown and the element is not draggable. The error from FireBug is as follows: TypeError: i._gsTransform is undefined ...},this.content=o,this.element=t,this.enable()},H=function(i,n){t.call(this,i),i The error also occurs as soon as I click (mouseDownEvent) the imageContainer. Is there anything am I doing wrong over here ? Which JS files do I need to add to use Draggable? Where can I find standalone sample examples on how to use GreenSock plugins? (e.g. JQuery has this "View Source" link which can display full HTML to understand the whole example) Please guide. | https://staging.greensock.com/search/?tags=tweenmax&updated_after=any&page=20&sortby=newest | CC-MAIN-2022-40 | refinedweb | 845 | 52.15 |
Constraint-Based Workflow Management Systems: Shifting Control to Users
Copyright © 2008 by Maja Pešić. All Rights Reserved.

CIP-DATA LIBRARY TECHNISCHE UNIVERSITEIT EINDHOVEN

Pešić, Maja

Constraint-Based Workflow Management Systems: Shifting Control to Users / by Maja Pešić. - Eindhoven: Technische Universiteit Eindhoven. - Proefschrift. - ISBN - NUR 982

Keywords: Workflow Management Systems / Business Process Management / Flexibility / Declarative Process Models / Constraint-Based Systems / Socio-Technical Systems

The work in this thesis has been carried out under the auspices of the Beta Research School for Operations Management and Logistics. Beta Dissertation Series D106

Printed by University Press Facilities, Eindhoven
Constraint-Based Workflow Management Systems: Shifting Control to Users

Wednesday, 8 October 2008, by Maja Pešić, born in Belgrade, Serbia
This thesis has been approved by the promotor: prof.dr.ir. W.M.P. van der Aalst. Copromotor: dr. F.M. van Eijnatten.
To Boris.
Contents

1 Introduction
  Business Processes Management
  Characterization of Business Processes
  Characterization of Decision Making
    Adjusting to the Environment
    Combining Social and Technical Aspects
  The Tradeoff between Flexibility and Support
  Problem Definition and Research Goal
  Contributions
    Constraint-Based Process Models
    Generic Constraint-Based Process Modeling Language
    A Prototype of a Constraint-Based Workflow Management System
    Constraint-Based Approach in the BPM Life Cycle
    Combining Traditional and Constraint-Based Approach
  Road Map

2 Related Work
  Workflow Flexibility
    Taxonomy of Flexibility by Heinl et al.
    Taxonomy of Flexibility by Schonenberg et al.
    Flexibility by Design
    Flexibility by Underspecification
    Flexibility by Change
    Flexibility by Deviation
  Workflow Management Systems
    Staffware
    FLOWer
    YAWL
    ADEPT
    Other Systems
  Workflow Management Systems and the Organization of Human Work
    Two Contrasting Regimes for the Organization of Work
    Socio-Technical Systems
    Workflow Management Systems and the Structural Parameters
  Summary
  Outlook

3 Flexibility of Workflow Management Systems
  Contemporary Workflow Management Systems
    The Control-Flow Perspective
    The Resource Perspective
    The Data Perspective
    Summary
  Taxonomy of Flexibility
    Flexibility by Design
    Flexibility by Underspecification
    Flexibility by Change
    Flexibility by Deviation
  Summary

4 A New Approach for Full Flexibility
  Constraint-Based Approach
    Activities, Events, Traces and Constraints
    Constraint Models
    Illustrative Example: The Fractures Treatment Process
  Execution of Constraint Model Instances
    Instance State
    Enabled Events
    States of Constraints
  Ad-hoc Instance Change
  Verification of Constraint Models
    Dead Events
    Conflicts
    Compatibility of Models
  Summary

5 Constraint Specification with Linear Temporal Logic
  LTL for Business Process Models
  ConDec: An Example of an LTL-Based Constraint Language
    Existence Templates
    Relation Templates
    Negation Templates
    Choice Templates
    Branching of Templates
    ConDec Constraints
  Adjusting to Properties of Business Processes
    Dealing with the Non-Determinism
    Retrieving the Set of Satisfying Traces
  ConDec Models
    ConDec Model: Fractures Treatment Process
  Execution of ConDec Instances
    Instance State
    Enabled Events
    States of Constraints
  Ad-hoc Change of ConDec Instances
  Verification of ConDec Models
  Activity Life Cycle and ConDec
    Possible Problems
    Available Solutions
  Summary

6 DECLARE: Prototype of a Constraint-Based System
  System Architecture
  Constraint Templates
  Constraint Models
  Execution of Instances
  Ad-hoc Change of Instances
  Verification of Constraint Models
  The Resource Perspective
  The Data Perspective
  Conditional Constraints
  Defining Other Languages
    Languages Based on LTL
    Languages Based on Other Formalizations
  Combining the Constraint-Based and Procedural Approach
    Decomposition of declare and YAWL Processes
    Dynamic Decompositions
    Integration of Even More Approaches
  Summary

7 Using Process Mining for the Constraint-Based Approach
  Process Mining with the ProM Framework
  Verification of Event Logs with LTL Checker
    The Default LTL Checker
    Combining the LTL Checker and declare
  The SCIFF Language
  Verification of Event Logs with SCIFF Checker
  Discovering Constraints with DecMiner
  Recommendations Based on Past Executions
  Summary

8 Conclusions
  Evaluation of the Research Goal
  Contributions
    Flexibility of the Constraint-Based Approach
    Support of the Constraint-Based Approach
    The Constraint-Based Approach and Organization of Human Work
    Combining the Constraint-Based Approach with Other Approaches
    Business Process Management with the Constraint-Based Approach
  Limitations
    Complexity of Constraint-Based Models
    Evaluation of the Approach
  Directions for Future Work
  Summary

Appendices

A Work Distribution in Staffware, FileNet and FLOWer
  A.1 Staffware
    A.1.1 Work Queues
    A.1.2 Resource Allocation
    A.1.3 Forward and Suspend
  A.2 FileNet
    A.2.1 Queues
    A.2.2 Resource Allocation
    A.2.3 Forward and Suspend
  A.3 FLOWer
    A.3.1 Case Handling
    A.3.2 Authorization Rights
    A.3.3 Distribution Rights
    A.3.4 Distribution of Instances
    A.3.5 Distribution within an Instance
  A.4 Summary

B Evaluation of Workflow Patterns Support
  B.1 Control-Flow Patterns
  B.2 Resource Patterns
  B.3 Data Patterns

Bibliography

Summary

Samenvatting

Acknowledgements

Curriculum Vitae
Chapter 1

Introduction

An organization produces value for its customers by executing various business processes. Due to the complexity and variety of business processes, contemporary organizations use information technology to support activities and possibly also automate their processes. Business Process Management systems (or BPM systems) are software systems used for the automation of business processes. Once a BPM system is employed in a company, it has a significant influence on the way business processes are executed in the company. Contemporary BPM systems tend to determine the way companies organize work and force companies to adjust their business processes to the system. In other words, a company that uses a BPM system is not likely to be able to implement its business processes in the way that is most appropriate for the company. Instead, business processes must be implemented such that they fit the system, which can cause various problems. First, due to a mismatch between the preferred way of working and the system's way of working, companies may be forced to run inappropriate business processes. Second, two parallel realities may be created: the actual work is done outside the system in one way, and later registered in the system in another way. These problems may prevent a company from using a BPM system.

In this chapter we introduce the research presented in this thesis, which aims at enabling a better alignment of BPM systems with business processes in companies. We start by introducing business processes and BPM systems in Section 1.1. The nature of contemporary business processes is described in Section 1.2. Section 1.3 describes the way today's organizations manage their work. The tradeoff between flexibility and support in BPM systems is briefly discussed in Section 1.4. Section 1.5 defines the problem addressed by this research and the research goal.
Finally, a short overview of research contributions is given in Section 1.6 and the outline of the thesis is provided in Section 1.7.
1.1 Business Processes Management

A business process defines a specific ordering of activities that are executed by employees, the available input and required output, and the flow of information. Business Process Management (BPM) is a method to continuously improve business processes in order to achieve better results. BPM includes concepts, methods and techniques to support the design, implementation, enactment and diagnosis of business processes [93]. Figure 1.1 shows the BPM life cycle as a continuous cycle consisting of four phases.

[Figure 1.1: BPM life cycle [93]]

The BPM life cycle starts with process design, where the business processes are identified, reviewed, validated and finally represented as process models [266]. A process model describes (a part of) a business process by defining how documents, information, and activities are passed from one participant to another [93, 266]. Process models are developed using a process modeling language, e.g., the Business Process Modeling Notation (BPMN), the Business Process Execution Language (BPEL), the Unified Modeling Language (UML), Event-driven Process Chains (EPCs), etc. In some cases, process models can be verified against inconsistencies and errors [89, 254]. Next, the process model is implemented in order to align the work of the employees with the prescribed process model. In the process enactment phase, the business process should be executed within the organization in the way prescribed by the implemented process model. The process diagnosis phase uses information about the actual enactment of processes in order to evaluate them. The results from the diagnosis phase are used to close the BPM life cycle in order to continuously improve business processes, i.e., based on the diagnosis, the processes are redesigned, etc.

Business processes can be supported by various types of software products.
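The cyclic character of the four phases in Figure 1.1 can be sketched in a few lines of code. This is purely an illustration of the life cycle's structure (the function and constant names are our own, not part of any BPM product); the point is that diagnosis feeds back into design, closing the loop.

```python
# The four BPM phases from Figure 1.1, in their cyclic order.
PHASES = ["process design", "process implementation", "process enactment", "diagnosis"]

def next_phase(phase):
    """Return the phase that follows `phase` in the BPM life cycle."""
    i = PHASES.index(phase)
    return PHASES[(i + 1) % len(PHASES)]  # wrap around: diagnosis -> design

print(next_phase("process enactment"))  # diagnosis
print(next_phase("diagnosis"))          # process design (the cycle closes)
```

The modulo in `next_phase` is what makes this a cycle rather than a pipeline: after diagnosis, the results are used to redesign the processes.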
BPM systems support collaboration, coordination and decision making in business processes [93, 110, 266]. Various BPM systems provide different degrees of automation of the ordering and coordination of activities. Figure 1.2 shows two extreme types of BPM systems: groupware systems and workflow management systems. Groupware systems focus on supporting human collaboration and co-decision. The ordering and coordination of activities in these systems cannot be
automated [110]. Instead, users of groupware control the ordering and coordination of activities while executing the business process (i.e., "on the fly") [110]. Groupware systems range from enhanced electronic mail to group conferencing systems.

[Figure 1.2: BPM systems]

Workflow management systems focus on the business process by explicitly controlling the ordering, coordination and execution of activities with possibly little human intervention [110]. In general, humans merely influence the execution of business processes by entering the necessary data. A workflow management system automates a set of business processes by the definition and execution of process models [93, 266]. Moreover, most contemporary systems support three phases of the BPM cycle, as shown in Figure 1.3. First, process design is conducted by defining process models, which define (1) the execution order of activities, (2) which employees are allowed to execute which activities, and (3) which information will be available during the execution. In addition, in some systems it is possible to verify models against errors. Second, process models are implemented, thus allowing for the automatic enactment of process instances in the system. A process model can be seen as a template that workflow management systems use for the execution of concrete process instances. Thus, by executing process models, workflow management systems determine in which order activities can be executed, which employee executes which activity, and which information is available.

[Figure 1.3: Workflow management systems and the BPM life cycle presented in Figure 1.1]
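To make the idea of a process model as an executable template concrete, the following sketch captures the three ingredients named above: (1) activity ordering, (2) role assignment, and (3) case data. All class, activity and role names are invented for illustration; no real workflow product works exactly like this, and real models are of course graphs rather than simple sequences.

```python
# Illustrative sketch: a process model as a template for concrete process instances.

class ProcessModel:
    """A minimal sequential process model: ordering, roles, and data."""
    def __init__(self, name, activities, roles):
        self.name = name
        self.activities = activities          # (1) execution order
        self.roles = roles                    # (2) who may execute what
    def new_instance(self, case_data):
        return ProcessInstance(self, case_data)

class ProcessInstance:
    """A concrete case executed according to the model."""
    def __init__(self, model, case_data):
        self.model = model
        self.case_data = case_data            # (3) information available
        self.position = 0
    def enabled(self):
        """The system, not the user, decides what can be done next."""
        if self.position < len(self.model.activities):
            return [self.model.activities[self.position]]
        return []
    def execute(self, activity, role):
        if activity not in self.enabled():
            raise ValueError(f"{activity} is not enabled")
        if role not in self.model.roles[activity]:
            raise ValueError(f"role {role} may not execute {activity}")
        self.position += 1

model = ProcessModel(
    "travel_request",
    activities=["submit request", "approve request", "book trip"],
    roles={"submit request": {"employee"},
           "approve request": {"manager"},
           "book trip": {"secretary"}})

case = model.new_instance({"applicant": "Alice"})
print(case.enabled())        # ['submit request']
case.execute("submit request", "employee")
print(case.enabled())        # ['approve request']
```

Note how the control lies entirely with the system: an instance can only follow the template, which is exactly the rigidity that this thesis sets out to relax.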
1.2 Characterization of Business Processes

Not all business processes are the same. Even within one organization, business processes can be very different in terms of their essential properties. Business processes can be characterized based on various properties [110]. For example, the nature of a business process depends on its complexity, predictability and repetitiveness.

The complexity of a business process refers to the complexity of collaboration, coordination, and decision making [110]. The more complex the collaboration, coordination, and decision making in the business process are, the higher the degree of complexity of the process is. Figure 1.4 shows examples of several business processes with various degrees of complexity. Simple business processes (e.g., exchanging personal messages and handling travel requests) require trivial collaboration, coordination and decision making. On the other hand, handling medical treatments is a complex business process because it is non-trivial from the viewpoint of collaboration, coordination and decision making.

[Figure 1.4: Complexity of business processes]

The predictability of a business process depends on how easy it is to determine in advance the way the process will be executed. The more predictable the possible future executions of the business process are, the more predictable the process is. Figure 1.5 shows examples of several business processes with various degrees of predictability.

[Figure 1.5: Predictability of business processes]

For example, handling travel requests has a high degree of
predictability because it is quite certain how it will be executed. On the other hand, it is hard to predict how personal messages will be exchanged, i.e., this business process is unpredictable.

The repetitiveness of a business process refers to the frequency of process execution. The more times the business process is executed, the higher the degree of repetitiveness of the process is. For example, a business process that is executed once per year has a lower degree of repetitiveness than a process executed more than a thousand times per year. Figure 1.6 shows examples of several business processes with various degrees of repetitiveness. For example, disaster handling (e.g., floods, earthquakes, etc.) is a business process with a low degree of repetitiveness because it does not happen frequently. On the other hand, exchanging personal messages is a frequent and, thus, repetitive business process.

[Figure 1.6: Repetitiveness of business processes]

Note that one business process can have different degrees of complexity, predictability and repetitiveness. Figure 1.7 shows that the nature of a business process is determined by the degrees of complexity, predictability and repetitiveness.

[Figure 1.7: Complexity, predictability and repetitiveness determine the nature of business processes]
For example, medical treatments are very complex processes with a high degree of repetitiveness and a low degree of predictability (cf. Figures 1.4, 1.5 and 1.6). BPM systems (cf. Section 1.1) aim at supporting complex and repetitive processes, as Figure 1.8(a) shows. As described in Section 1.1, there are two extreme types of BPM systems: groupware and workflow management systems. Because users of groupware systems control the ordering and coordination of activities, these systems are suitable for unpredictable processes [110], as shown in Figure 1.8(b). Workflow management systems fully automate the ordering and coordination of activities by executing predefined process models. Therefore, workflow management systems support highly predictable business processes [110].

[Figure 1.8: Automation of business processes with BPM systems; (a) applicability of BPM systems, (b) available BPM systems]

1.3 Characterization of Decision Making

Decision making determines to a great extent the way people work and influences their productivity. If decisions about how to work are made centrally, then we speak of centralized decision making. If the workers who do the work make the decisions themselves, then we speak of local decision making. In the middle of the twentieth century, schools of organizational science were divided into two groups that propagated two extreme styles of decision making.

[Figure 1.9: Two extreme styles of BPM]

The so-called
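The mapping sketched in Figure 1.8 can be summarized as a toy classifier. The numeric scale, thresholds and return labels below are our own illustrative choices, not part of the thesis; the code only restates the qualitative rule that BPM systems target complex, repetitive processes, with groupware on the unpredictable side and workflow management systems on the predictable side.

```python
# Toy illustration of Figure 1.8: which kind of system fits a process?
# All scores are on an arbitrary 0.0-1.0 scale; 0.5 is an arbitrary cut-off.

def suggested_support(complexity, predictability, repetitiveness):
    if complexity < 0.5 or repetitiveness < 0.5:
        return "no BPM system needed"          # simple or rarely executed processes
    if predictability >= 0.5:
        return "workflow management system"    # predefined process models fit well
    return "groupware"                         # users coordinate "on the fly"

# medical treatments: complex, repetitive, but unpredictable
print(suggested_support(0.9, 0.2, 0.9))   # groupware
# loan applications: complex enough, repetitive, fairly predictable
print(suggested_support(0.6, 0.7, 0.9))   # workflow management system
```

The interesting region for this thesis is exactly the first branch's complement: processes that are complex and repetitive enough to warrant automation, yet too unpredictable for a fully predefined model.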
hard approaches propagated centralized decision making, while the so-called soft approaches propagated local decision making, as shown in Figure 1.9. A good illustration of the differences between the two extreme approaches is given by the motivation theory of McGregor: Theory X and Theory Y [169]. Table 1.1 shows the main principles of the two basic modes.

Table 1.1: Theory X and Theory Y [169]

Theory X ("hard" approach):
- Humans inherently dislike working and will try to avoid it if they can.
- Because people dislike work, they have to be coerced or controlled by management and threatened so that they work hard enough.
- Average employees want to be directed.
- People don't like responsibility.
- Average humans are simple and need security at work.

Theory Y ("soft" approach):

Theory X characterizes authoritarian and repressive "hard" approaches with centralized decision making. This theory takes a pessimistic view of workers, i.e., it is assumed that humans do not like to work, can't be trusted, and need to be closely supervised and controlled [169]. The result is a limited and depressed culture of work and a constant decrease of workers' motivation and productivity. "Hard" approaches advocate detailed division and specialization of work, and centralized decision making at its extreme [105, 114, 242, 262]. Workers are specialized and prepared for the execution of small and monotonous tasks, and they do not participate in decision making. This way of thinking emerged with industrialization. It was believed that the automation of business processes increases productivity by minimizing the participation of humans and, thus, minimizing human errors and throughput times. A worker was considered to be an extension of the machine, merely performing tasks that cannot be automated. Theory Y can be characterized by liberating and developmental "soft" approaches with local decision making.
This theory takes an optimistic view of workers, i.e., it is assumed that humans enjoy working, may be ambitious, self-motivated, anxious to accept greater responsibility, and exercise self-control, self-direction, autonomy and empowerment [169]. "Soft" approaches advocate localized decision making, where all relevant decisions about work are made directly by the people who actually do the work [168, 204]. In this way, workers are in full control of and share responsibility for their work. The satisfaction of doing a good job is a strong motivation and will, therefore, lead to a constant increase of productivity.

The two extreme approaches to decision making were criticized and mostly abandoned in the second half of the twentieth century. Relying on either a "soft" or a "hard" manner seems to represent unrealistic extremes. In reality, companies aim at achieving an optimal ratio between local and centralized decision making, depending on the specific situation. Moreover, contemporary companies commonly place decision making somewhere between the "soft" and "hard" approach, as Figure 1.10 shows.

[Figure 1.10: Optimal decision making]

New approaches consider multiple aspects of business processes. Each organization is seen as a unique open system with inputs, transformations, outputs and feedback. Therefore, each company should consider the influence of the environment and the integration of social and technical aspects of business processes when choosing the optimal style of decision making [69, 99, 102, 103, 128, 244, 246].

Adjusting to the Environment

Changes in the environment can have a major influence on an organization (e.g., changes in customer requirements, the appearance of new competitors, etc.). Therefore, business processes must constantly be adjusted to the environment (cf. Section 1.1) [65, 104]. Environments with a low degree of turbulence are stable environments, and environments with a high degree of turbulence are turbulent environments. The degree of turbulence of the environment influences the predictability of business processes (cf. Figure 1.5 on page 4) and the nature of decision making.
The more turbulent the environment is, the more often it will be necessary to adjust business processes to it, and the more unpredictable these processes are. For example, medical processes have a low degree of predictability because each treatment must be adjusted to its specific environment (e.g., available medications, the condition of the patient, etc.), while handling travel requests has a higher degree of predictability because traveling conditions do not change as frequently.
Hard and soft approaches advocate centralized and local decision making, respectively, regardless of the nature of the environment, as Figure 1.11(a) shows. However, the need for localized decision making rises with the turbulence of the environment and, thus, with the unpredictability of business processes, as Figure 1.11(b) shows [65,104]. In other words, unpredictable business processes require localized decision making because decisions about how to adjust the process to new requirements must frequently be made on the fly. For example, decisions about how to adjust a medical treatment to the specific patient must be made frequently. Therefore, these decisions should be made by the involved medical staff on the spot. Because handling travel requests is a predictable process, decisions about how to handle the process can be made outside the process, i.e., in a centralized manner.

Figure 1.11: Influence of environment on decision making: (a) soft vs. hard; (b) optimal

1.3.2 Combining Social and Technical Aspects

Technology used for the automation of business processes influences the way people work. While hard approaches praise automation for taking over decision making from workers [105,114,242,262], soft approaches see automation as a means of suppressing the motivation and capabilities of people by imposing centralized decision making [58,168,204]. However, organizations can benefit significantly from using the best that both humans and technology have to offer [246]. Table 1.2 lists things that humans can do better than machines and vice versa [96,134]. Instead of replacing one another, humans and machines should complement one another [142,224,229].
Moreover, both the technical and the social aspects of an organization must be optimized in order to achieve the best results [246].

Table 1.2: People versus machines [134]

People are better at:
- detecting certain forms of very low energy levels;
- sensitivity to an extremely wide variety of stimuli;
- perceiving patterns and making generalizations about them;
- storing important information for long periods and recalling relevant facts at appropriate moments;
- exercising judgment where events cannot be completely defined;
- improvising and adopting flexible procedures;
- reacting to unexpected low-probability events;
- applying originality in solving problems (i.e., finding alternative solutions);
- profiting from experience and altering the course of action;
- performing fine manipulation, especially where misalignment appears unexpectedly;
- continuing to perform when overloaded;
- inductive reasoning.

Machines are better at:
- monitoring (both people and machines);
- performing routine, repetitive, or very precise operations;
- responding very quickly to control signals;
- storing and recalling large amounts of information over long time periods;
- performing complex and rapid computation with high accuracy;
- sensitivity to stimuli beyond the range of human sensitivity (infrared, radio waves, etc.);
- doing many different things at one time;
- exerting large amounts of force smoothly and precisely;
- insensitivity to extraneous factors;
- repeating operations very rapidly, continuously, and precisely in the same way over a long period;
- operating in environments that are hostile to humans or beyond human tolerance;
- deductive reasoning.

For example, technology can be used for decision making involving complex and rapid computation on large amounts of data, while humans can make decisions regarding unpredicted and exceptional situations. Therefore, instead of replacing one with the other, modern organizations strive to benefit optimally from both, as Figure 1.12 shows.

Figure 1.12: Technology and humans in decision making
1.4 The Tradeoff between Flexibility and Support

The flexibility that users have and the support that users get while working with BPM systems (cf. Section 1.1) have a major influence on both satisfaction and productivity. Figure 1.13 shows flexibility and support as two opposed properties of business processes. Flexibility refers to the degree to which users can make local decisions about how to execute business processes. Support refers to the degree to which a system makes centralized decisions about how to execute business processes. As discussed in Section 1.1, groupware and workflow management systems are the two (extreme) types of BPM systems. The main difference between the two types of systems lies in decision making, as shown in Figure 1.2 on page 3. While users make decisions locally in groupware systems, the system makes decisions centrally in workflow management systems. Thus, groupware systems provide a high degree of flexibility and a low degree of support, while workflow management systems provide a high degree of support and a low degree of flexibility, as shown in Figure 1.13. In order to align decision making with the predictability of business processes (cf. Section 1.3.1), companies use groupware systems to automate highly unpredictable business processes, and workflow management systems to automate highly predictable business processes, as shown in Figure 1.8(b) on page 6.

Figure 1.13: Tradeoff: flexibility or support in BPM systems [90]

1.5 Problem Definition and Research Goal

Companies rarely choose extreme centralized or localized decision making, as the hard and soft approaches do. Instead, a modern company constantly strives towards an optimal balance between the two styles of decision making (cf. Section 1.3).
The balance between centralized and local decision making must be aligned with the specific situation, namely with the degree of environment turbulence and the predictability of the business process, as described in Section 1.3.1. The more unpredictable the business process is, the more localized decision making should be, as shown in Figure 1.11(b).

The complexity of contemporary business processes raises the need for organizations to use technology to support people in decision making while executing business processes. Technology should not be seen as a means that can and should replace humans completely (i.e., the hard approaches), nor as an ultimate evil that should be completely banished from business processes (i.e., the soft approaches). Instead, the best results are achieved by combining the expertise of both humans and technology (cf. Section 1.3.2).

BPM systems are software systems that aim at automating business processes by supporting collaboration, coordination and decision making (cf. Section 1.1). These systems can offer different degrees of flexibility and support in business processes (cf. Section 1.4). Figure 1.13 shows two extreme types of BPM systems that offer either flexibility or support. First, groupware systems offer a high degree of flexibility and a low degree of support by allowing users to make all decisions about how to execute business processes. Second, workflow management systems make decisions about how to execute business processes, i.e., they offer a high degree of support but not enough flexibility.¹

Due to the typical complexity of contemporary business processes, companies need BPM systems to support workers in difficult decision making. For example, a workflow management system can provide support by centrally making decisions involving complex manipulation of large amounts of data. However, a workflow management system typically does not allow for flexibility, which prevents users from making local decisions about exceptional situations in unpredictable business processes.
Thus, a workflow management system forces a company to stick to centralized decision making and to work according to the hard approach. A groupware system, on the other hand, provides flexibility by allowing users to make the local decisions necessary to handle unpredictable business processes. However, a groupware system does not provide the support necessary for handling complex business processes. Therefore, a company that uses a groupware system is forced to stick to local decision making, as advocated by the soft approaches. Because BPM systems do not offer an optimal ratio between flexibility and support, companies that use these systems are not able to choose an optimal balance between centralized and local decision making, as Figure 1.14 shows.

¹ Note that in this context we have in mind mainstream commercial workflow management systems.

The research presented in this thesis is concerned with the following problem: BPM systems force companies to implement either centralized or local decision making, instead of allowing for an optimal balance between the two. The goal of the research is to enable companies that use BPM systems to
achieve an optimal balance between local and centralized decision making. We aim to achieve this goal (1) by proposing a new approach to process support and (2) by developing a prototype of a workflow management system that can offer an optimal ratio between flexibility and support, as described in Section 1.6.

Figure 1.14: Problem definition

1.6 Contributions

In this section we briefly describe the contributions of this thesis. The three main contributions are:
- The definition of a constraint-based approach to process modeling (cf. Section 1.6.1).
- The definition of a modeling language for the development of constraint-based process models (cf. Section 1.6.2).
- The development of a prototype of a constraint-based workflow management system (cf. Section 1.6.3).

Two additional contributions are:
- The application of the constraint-based approach to the whole BPM life cycle (cf. Section 1.6.4).
- Showing that a combination of traditional and constraint-based approaches is possible (cf. Section 1.6.5).

1.6.1 Constraint-Based Process Models

The starting point for our constraint-based approach is the observation that only three types of scenarios can exist in a business process: (1) forbidden scenarios
should never occur in practice, (2) optional scenarios are allowed but should be avoided in most cases, and (3) allowed scenarios can be executed without any concerns. This is illustrated in Figure 1.15(a).

As described in Section 1.1, workflow management systems enable the definition and execution of models of business processes, which specify the ordering of activities. In traditional workflow management systems, process models explicitly specify the ordering of activities, i.e., the control-flow of a business process. In other words, during the execution of the model it is possible to execute the business process only as explicitly specified in the control-flow, as Figure 1.15(b) shows. Due to the high level of unpredictability of business processes, many allowed and optional executions often cannot be anticipated and explicitly included in the control-flow. Therefore, in traditional systems it is not possible to execute substantial subsets of the allowed scenarios.

Figure 1.15: New constraint-based approach: (a) forbidden, optional and allowed scenarios in business processes; (b) traditional approach; (c) constraint-based approach

We propose a constraint-based approach to process modeling, which makes it possible to execute both allowed and optional scenarios in business processes. Instead of explicitly specifying what is possible in business processes, constraint-based process models specify what is forbidden, as shown in Figure 1.15(c). The possible orderings of activities are implicitly specified with constraints, i.e., rules that should be followed during execution. There are two types of constraints: (1) mandatory constraints focus on the forbidden scenarios, and (2) optional constraints specify the optional ones. Anything that does not violate the mandatory constraints is possible during execution.
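To make the distinction between mandatory and optional constraints concrete, the following sketch checks a finite execution trace against both kinds of constraints. The code is purely illustrative: the constraint names and the small API are our assumptions for this example, not the actual declare implementation.

```python
# Hypothetical sketch of constraint checking over a finite execution trace.
# A constraint is a predicate over a trace (a list of executed activities).

def not_coexistence(a, b):
    """Activities a and b must not both occur in one process instance."""
    return lambda trace: not (a in trace and b in trace)

def response(a, b):
    """Every occurrence of a must eventually be followed by b."""
    def check(trace):
        return all(b in trace[i + 1:] for i, act in enumerate(trace) if act == a)
    return check

mandatory = [not_coexistence("A", "B")]   # violations must be prevented
optional = [response("C", "D")]           # violations only trigger a warning

def evaluate(trace):
    """Return (mandatory constraints satisfied?, number of warnings)."""
    ok = all(c(trace) for c in mandatory)
    warnings = sum(1 for c in optional if not c(trace))
    return ok, warnings

print(evaluate(["A", "C", "D"]))  # (True, 0): all constraints satisfied
print(evaluate(["A", "B"]))       # (False, 0): mandatory constraint violated
print(evaluate(["A", "C"]))       # (True, 1): optional constraint violated
```

In a running system such a check would be applied continuously, so that executions violating a mandatory constraint are blocked, while violations of optional constraints merely produce a warning to the user.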
In addition to execution, our constraint-based process models also allow for verification against errors and for change during execution (i.e., the so-called ad-hoc change). Our constraint-based approach to process modeling enables flexibility without sacrificing support. On the one hand, constraint-based models tend to offer
more possibilities for execution than traditional models. This allows users to make local decisions about how to execute the business process. On the other hand, a constraint-based process model supports users by keeping track of multiple constraints in multiple business processes and by preventing users from violating these constraints. In addition, it is possible to distinguish between constraints that must be followed (i.e., mandatory constraints) and constraints that should be followed (i.e., optional constraints). In the first case, users are prevented from violating the constraints. In the second case, users can violate the constraints, but they are warned in advance about the soft violation. Moreover, our constraint-based approach makes it possible to achieve a ratio between flexibility and support that is optimal for the situation at hand: more constraints in a model mean less flexibility and more support, while fewer constraints mean more flexibility and less support.

1.6.2 Generic Constraint-Based Process Modeling Language

Constraint-based process models are composed of constraints, which specify rules that should be followed during the execution of business processes. A process modeling language used by a workflow management system must fulfill two important criteria. First, process models developed in the language must be understandable to end-users. Second, process models developed in the language must have formal semantics in order to be executable in a workflow management system. We propose a new constraint-based process modeling language, ConDec, which fulfills both criteria. ConDec is based on constraint templates, i.e., types of constraints. Each template has (1) a graphical representation that is presented to users, and (2) a Linear Temporal Logic (LTL) formula specifying its semantics. Our approach and implementation are generic, i.e., templates can easily be changed, removed from, or added to the language.
Templates are used to create constraints in ConDec process models. Each constraint inherits the graphical representation and semantics (i.e., the LTL formula) from its template. Figure 1.16 shows an example of a ConDec constraint, which specifies that activities A and B should not both be executed in one instance of the business process. Users see this constraint as a line with special symbols between the two activities, while the LTL semantics remains hidden. While the LTL semantics enables execution of ConDec models, the graphical representation makes models understandable to non-experts.

Figure 1.16: A constraint: activities A and B should not both be executed in one instance of the business process
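The constraint in Figure 1.16 matches the standard ConDec "not co-existence" template. Although the formula is not spelled out at this point in the text, a natural LTL formalization of this template is:

```latex
\neg \, (\Diamond A \wedge \Diamond B)
```

where $\Diamond$ denotes "eventually": the formula forbids every execution in which both A and B occur at some point, which is exactly the behavior described for Figure 1.16.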
1.6.3 A Prototype of a Constraint-Based Workflow Management System

We developed the declare system as a prototype of a constraint-based workflow management system. This prototype can be downloaded from sf.net. declare can support different constraint-based modeling languages and is grounded in our constraint-based approach, as Figure 1.17 shows. Although the default version of the prototype includes the ConDec language, any other constraint-based language based on LTL can easily be added. In addition, constraint-based languages that use formalizations other than LTL can be added by simple extensions of the prototype. Further, declare allows for the definition, verification and execution of constraint-based process models, as well as the ad-hoc change of running instances.

Figure 1.17: The declare prototype

1.6.4 Constraint-Based Approach in the BPM Life Cycle

Workflow management systems can be used together with process mining tools to support all phases of the BPM life cycle shown in Figure 1.1 on page 2. Figure 1.3 on page 3 shows that workflow management systems support the design, implementation, and enactment of business processes. Process mining tools support the diagnosis phase by using various process mining techniques for the analysis of executed business processes [28]. For example, ProM is a process mining tool that can be used for many kinds of analysis of business processes executed in various workflow management systems [28,91]. Our constraint-based approach can be applied to all phases of the BPM life cycle.
On the one hand, declare is a prototype of a workflow management system and, thus, supports the design, implementation, and enactment of constraint-based process models, as shown in Figure 1.18. On the other hand, declare languages and models can be re-used in the diagnosis phase by the ProM tool for the analysis of business processes already executed in declare. The results of this analysis can be used for two purposes. First, the results can indicate that process models should be changed, i.e., the cycle is re-entered. Second, declare can use the analysis results during the execution of constraint-based models,
as history-based recommendations. The recommendations generated from past executions are presented to users executing declare models as additional information that can help them deal with uncertain situations, i.e., recommendations provide support for declare users without sacrificing the available flexibility.

Figure 1.18: Constraint-based approach in the BPM life cycle

1.6.5 Combining Traditional and Constraint-Based Approaches

As described in Sections 1.3 and 1.4, the level of flexibility and support that users should get from workflow management systems depends on the nature of the business process at hand. Contemporary BPM systems exclusively focus on one type of business process and offer either support or flexibility (cf. Figure 1.8(b) on page 6 and Figure 1.13 on page 11). However, business processes of different types are typically interleaved, even within the same organization. Consider, for example, business processes in the medical domain. Unpredictable medical processes, such as treating urgent severe injuries, require a high degree of flexibility so that the staff involved can make local decisions based on each particular patient. Such a process is very complex and consists of several other business processes. For example, while treating the injury it might be necessary to perform a blood analysis in the laboratory, which is another business process. Laboratory tests are critical processes and, in order to guarantee the reliability of results, they must be executed exactly according to predefined procedures. In other words, instead of flexibility, the blood analysis process requires a high degree of support. The medical domain is one of many examples where a mixture of processes requiring either a lot of support or a lot of flexibility is needed.
Therefore, it is important to support the full spectrum. The declare prototype can be combined with the YAWL system [11,23,32,210,212] for defining arbitrary decompositions of constraint-based and traditional process models. YAWL is a traditional workflow management system developed jointly at Queensland University of Technology and Eindhoven University of Technology. The service-oriented architecture of YAWL allows for arbitrary decompositions of various process models. Figure 1.19 shows that the connection
between declare and YAWL models is twofold: (1) a YAWL model can be a sub-model of a declare model, and (2) a declare model can be a sub-model of a YAWL model. Decomposition of YAWL and declare models allows for combining different degrees of flexibility and support within one business process. In this way, different parts of one business process can offer different degrees of flexibility and support. Note that YAWL and declare are just two examples, i.e., using a service-oriented architecture, different styles of modeling can be combined.

Figure 1.19: Combining different approaches

1.7 Road Map

The remainder of this thesis is organized as follows. Chapter 2 provides an overview of related work in the area of flexibility of workflow management systems. Chapter 3 explains workflow management systems in detail, together with the factors that determine the flexibility of these systems. Chapter 4 formalizes our constraint-based approach to process modeling. Chapter 5 presents the ConDec language as one example of a constraint-based process modeling language, which uses Linear Temporal Logic for the formal specification of constraints. The principles described in this chapter can be applied to any other LTL-based language. Chapter 6 describes the declare system as a prototype workflow management system that supports the constraint-based approach. declare provides full support for the constraint-based approach and the ConDec language presented in Chapters 4 and 5, respectively. Chapter 7 describes how process mining techniques can be applied to the constraint-based approach. Chapter 8 concludes this thesis, discusses open problems and proposes future work. In addition, two appendices are provided.
Appendix A analyzes work distribution in three widely-used commercial workflow management systems, while Appendix B presents the results of evaluating workflow pattern [10,35] support in these three systems. These appendices provide details related to the discussion of the flexibility of workflow management systems in Chapter 3.
Chapter 2

Related Work

This chapter provides an overview of related work. Section 2.1 discusses various proposals described in the literature for dealing with flexibility. Section 2.2 introduces several workflow management systems that are interesting from the viewpoint of flexibility. Section 2.3 discusses related work on the organization of human work. Finally, Section 2.4 concludes the chapter with an outlook.

2.1 Workflow Flexibility

The importance of the flexibility of workflow management systems has been acknowledged by many researchers [66,109,125,196]. The main problem regarding the flexibility of workflow technology remains the requirement to specify business processes in detail, although these processes cannot be predicted with high certainty [77,109,125,143,153,166,188,233] and need to be constantly adapted to changing environments [77,188,233]. Based on experiences from practice, Reijers briefly discusses how workflow technology has failed to deliver the intended flexibility by extracting the business process coordination logic from applications [196]. The paradigm of workflow management systems is based on extracting the business process logic from applications, which should provide flexibility by making it easier to change the model of the underlying business process [159,196]. In [196], Reijers argues that, instead of flexibility, workflow management systems improved the logistical aspects of work: managers benefit from decreased throughput times, and workers benefit from the fact that the system automatically provides all relevant data and steers the business process. In [64], Bowers et al. report on a case study conducted in a print-industry office that started using a workflow management system.
This study revealed that, instead of improving the work in the print office, workflow technology caused serious interruptions in the work of employees, because the system completely took over the work and workers were no longer able to handle many unpredictable
situations. In [125], Heinl et al. address the issue of flexibility in the context of a case study conducted in a large market research company that uses workflow technology to support more than 400 processes. The case study showed that inflexible workflow technology caused problems because: (1) it is almost impossible to identify all steps in a business process in advance; (2) even if a step is identified, it is not obvious whether it should be included in the process model or not; (3) it is not always possible to predict the order of identified steps in advance; and (4) the mapping of business processes to process models is prone to errors [125]. Moreover, the authors suggest concrete measures that can improve the flexibility of systems. Namely, they advocate that flexible systems should allow users to select from multiple execution alternatives and to change process models at run-time [125].

Besides the above-mentioned theoretical and practical approaches to the problem of flexibility of workflow technology, there have been several attempts to classify flexibility. Snowdon et al. identify three factors that motivate the need for different types of flexibility [233]. First, the need for type flexibility arises from the variety of different information systems. Second, volume flexibility is needed to deal with the amount of information types. Third, structural flexibility is necessary because of the need to work in different ways. Soffer uses concepts from the Generic Process Model (GPM) and the theory of coordination to classify flexibility into short-term flexibility and long-term flexibility [234]. Short-term flexibility implies the ability to deviate temporarily from a standard way of working, while long-term flexibility allows for changing the standard way of working. In [232], Carlsen et al.
propose a quality evaluation framework, which they use to evaluate five workflow management products (including commercial systems and prototypes) and to identify desirable flexibility features. The framework is based on the quality of a process model and the quality of a modeling language. The evaluation of the workflow products identified a large set of desirable flexibility features for workflow management systems (e.g., flexible error handling support, quick turnaround for model changes, etc.). In addition, the evaluation showed that none of the five workflow products were flexible along all identified features, and that some of the features were not covered by any product. The first comprehensive taxonomy of concrete features that enhance the flexibility of workflow management systems was given in 1999 by Heinl et al. [36,125]. In 2007, Schonenberg et al. conducted a follow-up study and adjusted the original taxonomy to recent developments in workflow technology [ ]. The remainder of this section is organized as follows. First, we present the taxonomies of Heinl et al. and Schonenberg et al. in Sections 2.1.1 and 2.1.2, respectively. Second, we present related work classified by the flexibility types of Schonenberg et al. in Sections 2.1.3, 2.1.4, 2.1.5, and 2.1.6.
2.1.1 Taxonomy of Flexibility by Heinl et al.

In [125], Heinl et al. use a case study conducted in a large marketing company as an indication of the need for flexibility in workflow technology. This study showed that serious problems arise because it is hard to predict all alternatives in business process execution when specifying a process model, and that flexibility in the context of the execution of instances of process models is needed to cope with these problems. The flexibility of a workflow management system is seen as the degree to which users can choose between various alternatives while executing process models. Flexibility by selection and flexibility by adaptation are identified as two concepts that should be supported by a flexible workflow management system, as Figure 2.1 shows.

Figure 2.1: Classification scheme for flexibility of workflow management systems by Heinl et al. [125]

Flexibility by selection gives a user a certain degree of freedom by offering multiple execution alternatives. This type of flexibility can be achieved by advance modeling and late modeling. Advance modeling means that multiple execution alternatives are implicitly or explicitly specified in the process model. With late modeling, parts of a process model are not modeled before execution, i.e., they are left as black boxes, and the actual execution of these parts is selected only at execution time. The limitation of flexibility by selection is that it has to be anticipated and included in the process model. Flexibility by adaptation considers adding one or more unforeseen execution alternatives to a process model while the model is being executed. This can be achieved via type adaptation or instance adaptation.
In the case of type adaptation, a process model is changed while running instances of that model are not affected by the change. In the case of instance adaptation, the change is also applied to running instances.

2.1.2 Taxonomy of Flexibility by Schonenberg et al.

In [ ], Schonenberg et al. revisited the issue of flexibility and extended the original taxonomy of Heinl et al. [125]: the terminology was changed, one flexibility type was abandoned, and one flexibility type was added, as Table 2.1 shows. These changes reflect recent innovations in the area of workflow technology. In addition, Schonenberg et al. evaluated several state-of-the-art workflow management systems with respect to their support for the flexibility types.

Table 2.1: Two taxonomies of flexibility

  Heinl et al. [125]                              Schonenberg et al. [ ]
  flexibility by selection: advance modeling      flexibility by design
  flexibility by selection: late modeling         flexibility by underspecification
  flexibility by adaptation: type adaptation      (abandoned)
  flexibility by adaptation: instance adaptation  flexibility by change
  (not present)                                   flexibility by deviation

In [ ], Schonenberg et al. propose four types of flexibility:

1. Flexibility by design is the ability to specify multiple execution alternatives in the process model, such that users can select the most appropriate alternative at run-time for each process instance. This type of flexibility corresponds to the advance modeling of Heinl et al.

2. Flexibility by underspecification is the ability to leave parts of a process model unspecified. These parts are specified later, during the execution of process instances. In this way, some of the execution alternatives are left unspecified in the process model and are specified only during execution. This type of flexibility corresponds to the late modeling of Heinl et al.

3. Flexibility by change is the ability to modify a process model at run-time, such that one or several of the currently running process instances are migrated to the new model. Change enables adding one or more execution alternatives during the execution of process instances. This type of flexibility corresponds to the instance adaptation of Heinl et al.

4. Flexibility by deviation is the ability to deviate at run-time from the execution alternatives specified in the process model, without changing the process model. Deviation enables users to ignore the execution alternatives prescribed by the process model by executing an alternative not prescribed in the model.
This is a new type of flexibility introduced by Schonenberg et al., inspired by recent approaches (cf. FLOWer [39,180] and declare [183]). In the remainder of this section we present relevant research conducted in the area of each of the four types of flexibility proposed by Schonenberg et al.: flexibility by design in Section 2.1.3, flexibility by underspecification in Section 2.1.4, flexibility by change in Section 2.1.5, and flexibility by deviation in Section 2.1.6.
Section 2.1 Workflow Flexibility

Flexibility by Design

Flexibility by design, as the ability to include multiple execution scenarios in process models, has drawn much research attention in the area of workflow technology. While some approaches advocate that flexibility by design can be increased by adjusting the way existing technology is used [22,109,137,149,194,198], other approaches propose radical innovations in the area [55,56,92,115,186,187,256].

Softening Traditional Approaches

Some researchers propose concrete methods for developing process models using existing modeling languages in a way that models offer as many execution alternatives as possible [22,137,194,198]. For example, in [194,198], Reijers et al. propose a set of heuristics (the so-called best practices) that can improve flexibility by design. One of the proposed heuristics advocates that parallel activities in process models imply more execution alternatives than sequential activities. In [22], van der Aalst goes a step further and describes the applicability of measures that can increase the number of available execution alternatives. For example, this author suggests that putting subsequent tasks in parallel can only have a considerable positive effect if the following conditions are satisfied: "resources from different classes execute the tasks, the flow times of the parallel subprocesses are of the same order of magnitude, ..." [22]. Other researchers propose relaxing a process model by introducing optional areas. In [149], Klingemann proposes splitting up a process model into two parts. First, the mandatory part consists of activities that must be executed in a predefined order, i.e., this is the traditional notion of a process model. Second, the flexible part consists of activities that can be selected depending on requirements at run-time. Similar concepts are proposed by Georgakopoulos in [109], who claims that flexible processes specify "prescribed and optional activities".
During execution, prescribed activities are always required, while users decide themselves whether and when to execute optional activities. Thus, optional activities allow users to impose the process structure when necessary, i.e., they allow for multiple execution alternatives.

Data-Driven Approaches

There are several approaches that focus on data availability in order to improve flexibility. In [117], Grigori et al. propose anticipation as a means for more flexible execution of traditional process models. Anticipation allows an activity to start its execution when all input data parameters are available, which may be earlier than specified in the control-flow of the process model. The idea to focus on the product data instead of the control-flow when deciding the order of activities was introduced by van der Aalst in [15,18]. Here the author proposes the automatic generation of a process model (represented
as a Petri Net [29,72,93]) from a given Bill Of Materials (BOM). This method was worked out in more detail by van der Aalst et al. in [197], where the authors propose a method called Product-Driven Workflow Design (PDWD) for deriving a process model. PDWD takes a product specification in the form of a BOM and three design criteria (i.e., quality, costs and time) as a starting point for deriving a favorable new design of the process model. In addition, the authors demonstrate how the ExSpect tool [3] can support PDWD. The possibility to support PDWD by case-handling systems [26,39] was presented by Vanderfeesten et al. in [249,250]. While the approaches mentioned in the previous paragraph focus on deriving a process model from a BOM, more advanced approaches advocate the direct execution of the BOM. In [251,252], Vanderfeesten et al. present an implementation of a system for direct execution of Product Data Models (PDMs) (Footnote 1). Similarly, in [257], Wang et al. present an execution framework for a document-driven workflow management system that does not require an explicit control-flow. Instead, the execution of a process is driven by input documents.

Proposals for New Process Modeling Languages

In [66,135], Jablonski et al. propose meta-modeling of workflows in the system called MOBILE. In [135], the authors distinguish between prescriptive and descriptive workflows. In prescriptive workflows eligible instances are known a priori, while in descriptive workflows instances are not known beforehand but are determined during processing. MOBILE supports meta-modeling of both prescriptive and descriptive process models by means of control predicates [66], which are internally represented by Petri Nets [29,72,93]. Moreover, this approach allows for combining prescriptive and descriptive processes in the same framework by decomposing the two types of models [135].
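To make the data-driven idea concrete, consider the following sketch (our own illustration, not code from any of the cited systems; all activity and data-element names are invented): an activity becomes enabled as soon as all of its input data elements are available, regardless of any explicit control-flow ordering.

```python
# Hypothetical sketch of data-driven activity enabling, in the spirit of
# anticipation [117] and direct PDM execution: an activity may start as soon
# as all of its input data elements are available. Names are illustrative.

# Each activity maps to the set of data elements it needs as input.
activity_inputs = {
    "assess_claim":    {"claim_form", "policy_record"},
    "estimate_damage": {"claim_form", "photos"},
    "decide_payout":   {"assessment", "damage_estimate"},
}

def enabled_activities(available_data, completed):
    """Return activities whose inputs are all available and that have not run yet."""
    return sorted(
        name for name, inputs in activity_inputs.items()
        if name not in completed and inputs <= available_data
    )

available = {"claim_form", "policy_record", "photos"}
print(enabled_activities(available, completed=set()))
# ['assess_claim', 'estimate_damage'] -- both may start, in any order or in parallel
```

Note that no sequence between assessing the claim and estimating the damage is ever modeled; the ordering emerges purely from which data elements exist at a given moment.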
Several approaches propose using intertask dependencies for the specification of process models. In [55,56], Attie et al. propose using Computational Tree Logic (CTL) [74] for the specification of intertask dependencies amongst different unique events (e.g., commit dependency, abort dependency, conditional existence dependency, etc.). Dependencies are transformed into automata, which are used by a central scheduler to decide if particular events are accepted, delayed or rejected. In [186,187], Raposo et al. define a larger set of basic interdependencies and propose modeling their coordination using Petri Nets [29,72,93]. Another popular stream of research is applying rule-based or constraint-based process modeling languages [92,115,256] that are able to offer multiple execution alternatives and, therefore, can enhance flexibility by design. In [115], Glance et al. use process grammars for the definition of rules involving activities and documents. Process models are executed via the execution of rules that trigger each other. The Freeflow prototype presented in [92] uses constraints for building declarative process models. Freeflow constraints represent dependencies between states (e.g., inactive, active, disabled, enabled, etc.) of different activities, i.e., an activity can enter a specific state only if another activity is in a certain state. Plasmeijer et al. apply the paradigm of functional programming languages, embedded in the iTask system, to workflow management systems [185]. On the one hand, the iTask system supports all workflow patterns [10,35,208,211,213]. On the other hand, it offers additional features like suspending activities, passing activities to other users and continuing with a suspended activity. Another interesting property of this approach is the possibility to automatically generate a multi-user interactive web-based workflow management system. Some approaches consider process models based on dependencies between events involving activities [80,256]. For example, the constraint-based language presented in [256] uses rules involving (1) preconditions that must hold before an activity can be executed, (2) postconditions that must hold after an activity is executed and (3) parconditions that must hold in general before or after an activity is executed. In addition, the Tucupi server, a prototype of a system supporting this approach, is implemented in Prolog [239]. A similar idea is presented in [141] by Joeris, who proposes flexible workflow enactment based on event-condition-action (ECA) rules. In [162,163], a temporal constraint network is proposed for business process execution. The authors use the thirteen temporal interval relations defined by Allen [50] (e.g., before, meets, during, overlaps, starts, finishes, after, etc.) to define selection constraints (which define activities in a process) and scheduling constraints (which define when these activities should be executed).

[Footnote 1: Product Data Models are a special kind of BOM where the building blocks are data elements, instead of physical parts.]
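As an illustration of how such scheduling constraints can be checked, the following sketch (our own, hypothetical; the activity names and time points are invented) encodes four of Allen's thirteen interval relations over simple (start, end) pairs:

```python
# Hypothetical sketch of four of Allen's thirteen interval relations [50],
# as used for scheduling constraints; intervals are (start, end) pairs.

def before(a, b):    # a ends strictly before b starts
    return a[1] < b[0]

def meets(a, b):     # a ends exactly when b starts
    return a[1] == b[0]

def during(a, b):    # a lies strictly inside b
    return b[0] < a[0] and a[1] < b[1]

def overlaps(a, b):  # a starts first and they share a proper middle part
    return a[0] < b[0] < a[1] < b[1]

check_blood = (1, 3)   # illustrative activity execution intervals
surgery     = (3, 8)
monitoring  = (2, 10)

print(meets(check_blood, surgery))   # True: the scheduling constraint holds
print(during(surgery, monitoring))   # True
```

In a BPCN-style approach such predicates would be constraints over yet-unknown time points rather than checks over completed intervals, but the thirteen relations themselves are exactly these kinds of order comparisons between interval endpoints.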
Moreover, using the notion of a Business Process Constraint Network (BPCN) ensures both the execution of process models in conformance with the specified constraints and the detection of (possibly) conflicting constraints. After a knowledge worker invokes a special build function to dynamically adapt the instance template (instance templates define a total order of task execution), the Interval Algebra (IA) network generated from the constraints is translated into a Point Algebra (PA) network [60] in order to validate whether the given instance template conforms to the given constraints. If this validation is satisfactory, the execution continues according to the instance template. New languages for the specification of process models that offer flexibility by design have also been proposed in the areas of web services and contracting. In [78], CTR-S, an extended version of Concurrent Transaction Logic [62], is proposed for process modeling in the context of contracts in web services. The authors propose using CTR-S for specifying contracts as formulas that represent the various choices that are available to the parties in the contract. This language allows stating the desired outcomes of the contract execution and verifying that the outcome can be achieved as long as all parties obey the rules of the contract. The declarative SCIFF language is developed for the specification, monitoring
and verification of interaction protocols of web services [48,49]. SCIFF envisages a powerful logic-based language with a clear declarative semantics. The SCIFF language is intended for specifying social interaction, and is equipped with a proof procedure capable of checking at run-time or a-posteriori whether a set of interacting entities is behaving in a conforming manner with respect to a given specification. Due to its high abstraction level, SCIFF can be used for dependency specifications in various domains. For example, in the area of business processes, SCIFF specifications can include activities (i.e., the control-flow), temporal constraints (i.e., deadlines) and data dependencies. The possibility to learn SCIFF specifications from past executions is presented in [154,155]. The SCIFF language is described in more detail in Section 7.3 of this thesis. In [269], Zaha et al. propose a language called Let's Dance for modeling interactions of web services. This language focuses on flexible modeling of the message exchange between services. A straightforward graphical notation is used to represent patterns in the message exchange, while π-calculus [174] captures the execution semantics [81]. Our approach is more comprehensive than the approaches discussed above: it includes (1) a formal definition of a constraint-based approach on an abstract (i.e., language-independent) level, (2) a concrete, formal constraint-based language that enables deadlock-free execution, ad-hoc change and verification, (3) a working proof-of-concept prototype, (4) application of the constraint-based approach to the whole BPM cycle, and (5) a combination of procedural and constraint-based process models (cf. Section 1.6). To our knowledge, none of the new languages discussed above includes all five of these aspects together.

Flexibility by Underspecification

Underspecification in process models has been addressed by several researchers. In [129,130], Herrmann et al.
advocate vagueness in models of socio-technical systems. This approach proposes the semi-structured modeling language SeeMe [129], which allows uncertain, questionable and unknown knowledge to be included in models, as well as checked and committed. For example, SeeMe allows definitions about the ordering of activities, process decomposition, role allocation, etc. to be specified as uncertain, or even omitted from the model. The vagueness allows knowledge workers to decide at later stages about the actual process. Van der Aalst proposes enhancing the flexibility of process models with generic processes [16,21]. Besides elementary activities and routing elements, process models can contain non-atomic concrete processes and generic processes. While activities are directly executed by users, non-atomic concrete and generic processes decompose to process models. In the case of a non-atomic concrete process, the process to be executed is already specified in the original model. In the case of a generic process, the model to be executed must be selected at execution time, i.e., generic processes refer to unspecified placeholders that are
specified only at execution time. In [167], Mangan and Sadiq propose building instances from partially defined process models in order to deal with the fact that it is often not possible to completely predefine business processes. The idea is that a partially defined process model is only fully specified at run-time and may be unique for each instance. Instances can be built of activities and subprocesses (i.e., modeling fragments), ordered in sequences, parallel branches and as multiple executions (i.e., modeling constructs). In addition, the building of instances is supported by three groups of previously defined domain-specific constraints, as rules under which valid instances can be built. First, selection constraints define which fragments are available for the instance. Second, due to the lack of an explicit termination activity, termination constraints are needed to define when an instance is completed. Third, additional restrictions in an instance are imposed by build constraints. An instance is dynamically built in a valid manner for as long as all constraints are satisfied. In addition, Sadiq et al. propose using this approach to build pockets of flexibility, which are (together with predefined activities) components of process models [220]. Trajcevski et al. propose process model specification based on the known effects of process activities [243]. In addition to the known effects, process models can also contain ignorance (unknown values), which allows specifying situations that deal with incomplete information. In addition, the authors define an entailment relation, which enables verifying the correctness of process models (in terms of achieving a desired goal). The OPENflow system is an example of a system that directly supports flexibility by underspecification [121]. This system allows for incorporating genesis activities in process models. A genesis activity represents a placeholder for an undefined subprocess.
The actual structure of a genesis activity is determined at run-time. In [41,44,45], Adams et al. describe the Worklet Service as a means to dynamically build process instances based on the specific context. The idea is to dynamically substitute an activity with a new instance of a contextually selected process, i.e., a worklet. The decision about which worklet to select for a given activity depends on the activity data and existing ripple-down rules. In this manner, worklet activities in process models represent unspecified parts of the model, which are to be determined by the Worklet Service at run-time (Footnote 2). Staffware [12,237] is a popular commercial workflow management system that supports flexibility by underspecification via dynamic process selection [116]. Staffware process models consist of activities and subprocesses. Besides static process selection, where subprocesses are already specified in the model, dynamic process selection allows for the selection of an appropriate subprocess at execution time based on the instance data [116]. The idea is comparable to the worklets approach [41,44,45], i.e., subprocesses in Staffware instances are dynamically selected based on the instance data (e.g., a specific data element may carry the name of the subprocess to be launched).

[Footnote 2: The Worklet Service is described in more detail in sections and of this thesis.]

Flexibility by Change

Flexibility by change is achieved when instances can be changed at run-time. This topic has drawn much research attention in two areas. First, we describe the research conducted in the area of adaptive systems (i.e., systems that enable run-time change of instances). Second, we describe some examples of the research in the area of ad-hoc systems (i.e., systems that enable run-time construction of instances).

Adaptive Approaches

Run-time (i.e., dynamic) change of instances of process models has drawn much attention amongst researchers [24,36,46,68,71,100,101,109,140,145,148,151,189,219,230,265]. A comprehensive overview of the existing approaches to dynamic change in the context of workflow technology is given by Rinderle et al. in [201]. The authors present a classification of existing approaches and evaluate the approaches against identified correctness criteria. At the most advanced level, it might be necessary to change an instance in an ad-hoc manner (i.e., ad-hoc change), which implies that the change is unpredictable and often applied as a response to unforeseen situations [201]. Systems like Breeze [219], WASA 2 [265] and ADEPT [164,189,202] use advanced compliance checks (i.e., checks whether the execution so far can be applied to the changed instance), correctness properties of the process model (e.g., regarding data flow), etc., to support ad-hoc change [201]. Unfortunately, today's commercial systems do not provide sufficient support for ad-hoc change. Systems like InConcert, SER Workflow and FileNet Ensemble are rare examples of commercial systems that, to some extent, enable ad-hoc change of running instances [201].
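The compliance notion behind such checks can be illustrated with a deliberately simplified sketch (our own illustration; systems such as ADEPT check far richer models than plain sequences, and all activity names here are invented): an instance may only adopt a changed model if its execution history so far could also have been produced by the new model.

```python
# Hypothetical sketch of a compliance check for migrating a running instance
# to a changed model: the history executed so far must be replayable on the
# new model. Models are simplified to plain activity sequences.

def compliant(history, new_model):
    """A trace is compliant if it is a prefix of the new sequential model."""
    return history == new_model[:len(history)]

old_model = ["register", "check", "pay", "archive"]
new_model = ["register", "check", "approve", "pay", "archive"]

print(compliant(["register", "check"], new_model))         # True: may migrate
print(compliant(["register", "check", "pay"], new_model))  # False: already past the insertion point
```

An instance that has only registered and checked can still be migrated to the model with the inserted approval step, whereas an instance that has already paid cannot, because its history is no longer a valid prefix of the new model.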
Another important functionality when it comes to dynamic change is the so-called migration, where a dynamic change is applied to multiple running instances (e.g., due to a change in law regulations, all running instances must be migrated from the old to the new process) [201]. Besides the basic problems that accompany the change of a single instance, migration of multiple instances introduces some additional difficulties. Namely, additional challenges emerge in systems that aim at concurrently supporting both types of change, e.g., in cases when conflicting changes must be resolved at different levels [203]. Unfortunately, not many systems support this type of change. The ADEPT system has been extended to support change of multiple instances of a process model, resulting in the second version of the system, i.e., ADEPT2 [192,193] (Footnote 4). Some systems support pre-planned and automated instance changes, where the necessary changes and their scope are already known in the design phase (i.e., while the process model is developed) [201]. This type of change is supported by ADEPT [164,189,191,202], WASA 2 [265], InConcert [133], etc. [201]. In order to support pre-planned instance changes, a system must be able to (1) detect failures that cause the change, (2) determine the necessary changes, (3) identify the instances that must be changed, (4) correctly introduce the change to those instances, and (5) notify the users about the conducted change(s) [29,201]. A problem that often arises during dynamic change is pointed out by Ellis et al. as the dynamic change bug [101]. A dynamic change is performed on a running instance, i.e., the instance already has a history that puts the instance in a certain state when the change takes place. The complexity of dynamic change stems from the fact that, for the current state of an instance, an appropriate state in the new model has to be found, and this is not always possible. In [20], van der Aalst proposes an approach for dealing with this problem by calculating the safe change region; a dynamic change is only allowed if an instance is in this region. As a means of comparing various approaches to process control-flow change, Weber et al. [260] propose a set of seventeen change patterns and six change support features. First, change patterns are classified into adaptation patterns and patterns for predefined change. While adaptation patterns cover unpredictable changes, predefined patterns consider only the changes that are predefined in the process model at design time.

[Footnote 3: The dynamic process selection in Staffware is explained in more detail in Section of this thesis.]
Second, the identified change support features of workflow management systems include, e.g., version control, change correctness and change traceability. In [36], van der Aalst and Jablonski propose a scheme for classifying workflow changes in detail based on six criteria: (1) the reason for change can be a development outside or inside the system, (2) the effect of change can be momentary or evolutionary, (3) the affected perspectives can be the process, organization, information, operation or integration perspective, (4) the kind of change can be extending, reducing, replacing or re-linking, (5) the moment at which change is allowed can be at entry time or on-the-fly, and (6) the choice of what to do with running process instances can be to abort old instances, to proceed according to the old model, etc. Ellis and Keddara propose a Modeling Language to support Dynamic Evolution within Workflow Systems (ML-DEWS) as a means for modeling the process of dynamic change [100]. The language supports a variety of predefined change schemes, e.g., the abort scheme as a disruptive change strategy where the instance is simply aborted, the defer scheme that allows the instance to proceed according to the old process model, the ad-hoc scheme that supports changes whose components are not fully specified at design time, etc. In [118], Günther et al. apply process mining techniques [28] to change logs created by adaptive systems. The authors propose using process mining to provide an aggregated view of all changes that have happened so far in process instances. The mining results can trigger various process improvement actions, e.g., a result may indicate that, due to frequent changes, a process redesign is necessary. Weber et al. introduce a framework for the agile mining of business processes that supports the whole process life cycle by using Conversational Case-Based Reasoning (CCBR), adaptive business process management and process mining [259]. Process mining techniques are used to extract and analyze information about realized process adaptations. Integration of the ADEPT [164,189,191,202] and CBRFlow [261] prototypes enables using CCBR to perform ad-hoc changes of single process instances, to memorize these changes, and to support their reuse in similar future situations. The collected information can be used by process engineers to adapt process models and to migrate the related process instances to the new model.

[Footnote 4: A more detailed description of ADEPT is given in Section of this thesis.]

Ad-Hoc Approaches

Ad-hoc approaches to workflow management systems provide a powerful mechanism to increase flexibility by change by allowing users to build the process model for each instance while executing the instance. In [94], Dustdar investigates the relevant criteria for process-aware collaboration and proposes an ad-hoc approach to workflow management systems implemented in the system Caramba. Besides the traditional execution of process models, this approach allows users to execute ad-hoc instances that are not based on a predefined process model. Activities in ad-hoc instances are coordinated by the users rather than by the system.
InConcert is an example of a commercial ad-hoc workflow management system, which allows users to design or modify process models [29,133]. InConcert allows for the creation of a new instance in four ways [29]. First, a new instance can be created based on an existing process model. Second, it is possible to create a new instance based on a previously changed existing process model. Third, an ad-hoc instance can be initiated by specifying a sequence of activities. Fourth, a new instance can be initiated as a free routing process, i.e., the instance is created based on an empty process, and the actual process model is created on the fly.

Flexibility by Deviation

Deviation from predefined process models is recognized as a means for increasing flexibility in the workflow area [76,264]. While some approaches propose
methods for deviations from traditional process models [39,76,180,264,265], other approaches focus on specialized mechanisms to handle unexpected situations [42,98,218,241].

Deviation in Traditional Approaches

In [76], Cugola proposes deviating from process models when it comes to situations unforeseen in the design phase, i.e., not incorporated in the model. The author describes two types of inconsistencies that may occur in unexpected situations. First, a domain-level inconsistency occurs when an actual instance does not follow the process model. Second, an environment-level inconsistency occurs when a business process is executed outside of the system, and the system has no knowledge about how the process is executed. These inconsistencies are caused by domain-level deviations and environment-level deviations, i.e., actions that system users undertake in order to deal with unforeseen situations. The author describes the PROSYT system, which is able to tolerate domain-level deviations and, thus, minimize environment-level deviations. The PROSYT system deals with unforeseen situations by allowing a deviation policy and a consistency handling policy to be specified for a process model. A deviation policy identifies which forms of deviation are tolerated, while a consistency handling policy ensures that any allowed deviations do not impact the overall correctness of the system. In the context of the WASA prototype [264,265], Weske proposes three user-initiated operations: skip an activity that has not been started yet, stop an activity that is currently being executed, and repeat an activity that has already been executed [264]. These three operations allow for deviations from the normal workflow execution, but do not change the original process model. FLOWer [180] is an example of a commercial system that allows deviations from process models.
FLOWer is a case-handling system [39] that allows users to open activities that are not supposed to be executed yet, skip activities that should be executed, and redo an activity that has been executed before. FLOWer allows for these deviations by applying a powerful mechanism that ensures the consistency of running instances (e.g., data elements are taken into account). A more detailed description of FLOWer is given in Section 2.2.2.

Exception Handling

Another area of research that is concerned with deviations from what is specified in process models is exception handling. Exception handling provides a means for handling errors without explicitly including them in the process model. Exceptions are seen as errors/failures that can occur during the execution of process models [42,43,98,218,241]. Although we consider exception handling a technique to handle (technical) problems rather than a means to support flexibility, it is related to the above approaches. Therefore, we mention some work on exception handling.
Strong et al. investigated the exception handling of an operational process in one organization. They suggested points for further research on the roles of people in computerized systems and the design of computer-based systems that can handle multiple conflicting goals [241]. In [98], Eder et al. discuss advanced concepts concerning recovery from system failures and semantic failures in the context of workflow transactions. The authors identify different failure sources (i.e., workflow engine failures, activity failures, and communication failures) and failure classes (i.e., system failures and semantic failures) in process-oriented and document-oriented workflows. Saastamoinen et al. propose using a set of (1) formal organizational rules, (2) informal group rules, and (3) informal individual rules for handling exceptions and assuring that the goal of the process is achieved after the change [218]. The work presented in [41-43,208,209] is a comprehensive attempt to provide a concrete framework for exception handling in workflow management systems. In [43], Adams et al. propose using the Worklet Service [41,44,45] for handling exceptional events that occur while executing process models. When such an event occurs, a repository of rules is used to select the procedure that should be used to handle the exception. In [208,209], Russell et al. define a rigorous classification framework for workflow exception handling independent of modeling approaches and technologies, i.e., a set of workflow exception patterns is identified. Based on these exception patterns, in [42] Adams et al. present their implementation of an Exception Service, which provides a fully featured exception handling paradigm for detecting, handling and incorporating exceptions as they occur. Moreover, this implementation allows for handling both predicted and unpredicted exceptions.
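The rule-based selection of a handling procedure can be sketched with a small ripple-down rule evaluator (our own illustration in the spirit of the Worklet Service's rule repository; all conditions, data fields and handler names are invented): each node refines its parent rule, and the conclusion of the last satisfied node wins.

```python
# Hypothetical sketch of ripple-down rule evaluation for selecting an
# exception-handling procedure: walk the rule tree, remembering the handler
# of every satisfied node; the most specific satisfied handler is returned.

def evaluate(node, case, conclusion=None):
    """Return the conclusion of the last satisfied rule along the path."""
    if node is None:
        return conclusion
    if node["cond"](case):
        return evaluate(node.get("true"), case, node["handler"])
    return evaluate(node.get("false"), case, conclusion)

rules = {
    "cond": lambda c: True,                      # default root rule, always satisfied
    "handler": "default_handler",
    "true": {
        "cond": lambda c: c["severity"] > 5,     # refinement: serious exceptions
        "handler": "escalate_worklet",
        "true": {
            "cond": lambda c: c["patient_age"] > 65,  # further refinement
            "handler": "senior_care_worklet",
        },
    },
}

print(evaluate(rules, {"severity": 8, "patient_age": 70}))  # senior_care_worklet
print(evaluate(rules, {"severity": 2, "patient_age": 70}))  # default_handler
```

Because an unsatisfied node simply falls back to the conclusion gathered so far, new exception cases can be handled by grafting a refining rule onto the tree without ever rewriting or revalidating the existing rules.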
Various approaches to exception handling have been implemented in a number of systems, e.g., WAMO [97], ConTracts [200], Exotica [51], OPERA [119,120], TREX [240], WIDE [67], etc.

2.2 Workflow Management Systems

Numerous workflow management systems are available on the market today, in addition to open source products and academic prototypes [29]. In this section we briefly present several workflow management systems that are able to offer one or more types of flexibility. We start by presenting two popular commercial systems. In Section 2.2.1 we present the traditional system Staffware [238], which provides support for flexibility by underspecification and limited support for flexibility by change. In Section 2.2.2 we present the case-handling system FLOWer [180], which provides flexibility by deviation. Then, we present two academic systems (i.e., systems developed under the supervision of academic workflow researchers). In Section 2.2.3 we present YAWL [23,210,212], which supports flexibility by underspecification and exception handling. Finally,
in Section 2.2.4, we present ADEPT [164,189,191,202], a workflow management system focusing on supporting flexibility by change.

Staffware

Staffware is one of the most used workflow management systems in the world. For example, in 1998, it was estimated that Staffware had 25% of the world market. Staffware consists of several components that are, in general, used to define process models, users and their roles, and to execute instances of process models. For example, Figure 2.2 shows a process model in the Staffware component for model definition, i.e., the Graphical Workflow Definer, and Figure 2.3 shows the Work Queue Manager, a client tool that users use to execute activities of running instances.

Figure 2.2: A process model in Staffware

Figure 2.3: A work queue with a work item in Staffware

Despite its reputation of being an inflexible system, in recent years considerable efforts have been undertaken to enrich Staffware with features that enhance its flexibility. Staffware now supports flexibility by underspecification and, to a limited extent, flexibility by change.
Dynamic Process Orchestration in Staffware

The new paradigm of Dynamic Process Orchestration [116] introduces flexibility by underspecification in Staffware. Staffware process models can be composed of atomic activities and subprocesses. There are four ways in which subprocesses can be invoked from their parent processes [116]: (1) static process selection, (2) dynamic process selection, (3) multiple process selection, and (4) goal-driven process selection.

Static process selection. In the conventional way, subprocesses are invoked in a static manner. This means that the subprocess is known in advance and explicitly specified in the process model. The result is that the same subprocess will be invoked for all instances of the parent process model.

Dynamic process selection. Often it is the case that a range of subprocesses can be invoked, the choice of which depends on specific circumstances. The names of such subprocesses and the conditions of their usage are known in advance, and specified in the parent process model. However, the circumstances are only known at execution time, when it is decided which subprocess should be invoked. Dynamic process selection is achieved by specifying in a process model a range of possible subprocesses and conditions (involving data elements) for their invocation. The result is that each instance will decide, based on the current values of its data elements, which one of the available subprocesses to invoke.

Multiple process selection. This is an extension of dynamic process selection in the sense that, instead of invoking only one subprocess, multiple subprocesses can be invoked in a dynamic manner. The parent process will proceed to the next activity only when all invoked subprocesses have completed successfully.

Goal-driven process selection. In some circumstances, the name(s) of subprocesses might not be known in advance and, thus, cannot be specified in the process model.
Instead, only a goal (e.g., "examine patient") that the subprocess should achieve is known and specified in the parent process. Further, on the system level each subprocess is tagged with the goal it achieves (e.g., "examine patient", "perform tests") and its entry conditions (e.g., "age > 65"). As a result, for each instance, the system automatically invokes a subprocess that achieves the given goal and satisfies the entry conditions, based on the current instance data.

Change on the Instance Level in Staffware

Staffware does not support dynamic change in the ad-hoc manner described in Section. This means that it is not possible to apply a dynamic change
directly on a running instance. According to [201], a change of a process model can trigger changes of its running instances. However, several problems arise with this kind of dynamic change in Staffware, as pointed out by Rinderle et al. in [201]. First, it might happen that activities in running instances are automatically deleted, without informing the users who are working on these instances. Second, if a deleted activity has already been executed, the results of this activity are lost. Third, Staffware suffers from the so-called "changing the past" problem, i.e., some changes might influence the past of the instance, which may lead to missing data values or even program failures. Finally, dynamic changes in Staffware are too restrictive (e.g., if an activity is activated, insertions before it are no longer possible).

FLOWer

FLOWer is a case-handling system [26, 39], i.e., there are two major differences when compared with traditional workflow management systems. First, in case-handling systems the execution order of activities is heavily influenced by data elements. Second, while traditional systems offer atomic activities from running instances to users for execution, case-handling systems offer whole cases to users. In traditional workflow management systems, the execution order of activities in instances is explicitly defined by the control-flow definition of the underlying process model. In FLOWer, users can follow the execution order of activities defined in the control-flow of the instance. Figure 2.4 shows the control-flow specification of one process model in FLOWer. In addition, however, the control-flow ordering may also be violated. On the one hand, FLOWer considers an activity to be successfully executed as soon as all its mandatory data elements become available.
This means that, if all mandatory data elements of an activity become available before the manual execution of this activity, the activity is considered to be successfully completed, and a manual execution is no longer necessary. On the other hand, an activity can be executed or skipped even if this contradicts the control-flow specification. In this way, FLOWer is one of the rare commercial systems that offer flexibility by deviation. Besides executing activities according to the control-flow specification (i.e., executing currently enabled activities), users of FLOWer can also [39, 180]:

- open an activity that is not enabled yet (i.e., a disabled activity),
- skip an activity that has not been executed yet, i.e., not execute an activity that should be executed according to the control-flow, and
- re-do an activity that has already been executed before, i.e., choose to execute again an activity that has already been executed.

A major difference between traditional workflow management systems and case-handling systems is how the work available in running instances is offered to users. On the one hand, traditional systems typically offer work to users as
atomic activities, i.e., a user has access to a list containing all currently enabled activities for all running instances (e.g., Figure 2.3 shows such a list in Staffware).

Figure 2.4: A process model in FLOWer

On the other hand, case-handling systems try to avoid the narrow view provided by activities, and therefore present the whole instance to the user. Note that, although a whole case is presented to a user, he or she can only work with activities for which he or she is authorized. In more detail, for each activity in FLOWer, users can get authorizations to execute, skip and redo the activity. Figure 2.5 shows how FLOWer presents a whole instance to one user. In the instance shown in this figure, activities Claim Start and Register Claim have already been successfully executed (as indicated by the check symbol). After that, activity Get Medical Report was skipped (as indicated by the skip-arrow symbol). Activities Get Police Report, Assign Loss Adjuster and Witness Statements are currently enabled, i.e., available for execution. The last two activities (i.e., Policy Holder Liable and Close Case) are not enabled yet.

Figure 2.5: FLOWer Wave Front

Note that skipping, opening and redoing an activity in FLOWer may also affect other activities. First, when a disabled activity is opened, all preceding
not-yet-executed activities will automatically become skipped. For example, if activity Close Case were opened or skipped at the moment presented in Figure 2.5, then activities Get Police Report, Assign Loss Adjuster, Witness Statements, and Policy Holder Liable would automatically become skipped. If activity Close Case were opened, it could then be directly executed. If activity Close Case were skipped, the case execution would continue after this activity, e.g., the instance presented in Figure 2.5 would be completed. Second, if an activity is re-done, all succeeding executed activities must also be re-done. For example, if activity Claim Start were re-done at the moment presented in Figure 2.5, then activities Register Claim and Get Medical Report would also need to be re-done after activity Claim Start. A drawback of the described side-effects is that the deviation becomes more extensive than intended, e.g., while attempting to re-do only one activity, a FLOWer user might end up being forced to also re-do many succeeding activities.

YAWL

YAWL is a workflow management system developed in a collaboration between Eindhoven University of Technology and the University of Queensland [11, 23, 32, 210, 212]. YAWL was developed in the context of the workflow patterns initiative [10, 32, 35, 208] and aims at supporting all workflow patterns, i.e., the various features offered by existing workflow management systems. In simple terms, YAWL is driven by the ambition to provide comprehensive support for most patterns while using a relatively simple language. In its essence, YAWL is built as a traditional workflow management system, i.e., the ordering of activities is defined in the traditional control-flow manner. While executing activities of running instances in YAWL, users must strictly follow the order specified in the control-flow of the underlying process model.
Figure 2.6 shows a process model in YAWL.

Figure 2.6: A process model in YAWL

YAWL's architecture is based on the so-called service-oriented architecture, and the system can easily be extended with various functionalities (the architecture of the YAWL system is described in more detail in Section 6.11). Thanks to
its service-oriented architecture, the YAWL system nowadays offers two important features that enhance its flexibility to a great extent. These features are the YAWL Worklet Service [41, 44, 45], which directly provides flexibility by underspecification, and the YAWL Exception Service [41-43, 208, 209], which represents a powerful mechanism for exception handling. Moreover, the approach presented in these papers is also implemented in YAWL.

The Worklet Service

The main idea behind the Worklet Service [41, 44, 45] is to dynamically select subprocesses (i.e., worklets) that should be invoked in YAWL instances. There are two types of activities in YAWL process models. First, atomic activities are activities that should be executed by users. Second, instead of being executed by a user as an atomic activity, an activity can refer to a subprocess (i.e., a "worklet activity"). At execution time, worklet activities and relevant instance data are delegated to the Worklet Service. The service then uses a predefined set of ripple-down rules and the received data to select the most appropriate YAWL process model and automatically invoke it as the selected worklet. Ripple-down rules specify which YAWL process model should be invoked as a worklet, given the actual data of the parent instance (see footnote 6). In other words, each worklet activity is dynamically decomposed into a YAWL process, which enriches the YAWL system with flexibility by underspecification.

The Exception Service

The worklet paradigm is reused in YAWL to enable a powerful exception-handling mechanism realized via the Exception Service [41-43, 208, 209]. The Exception Service provides a fully featured exception-handling paradigm for detecting, handling and incorporating exceptions as they occur.
This service operates similarly to the Worklet Service, i.e., when an exception occurs in an instance, the Exception Service uses ripple-down rules and instance data to select and automatically invoke a YAWL process (i.e., an "exlet") that is executed in order to handle the exception. Moreover, this implementation allows for handling both predicted and unpredicted exceptions. Unpredicted exceptions are especially interesting: a YAWL user can, at any point during the execution of an instance, report the occurrence of an exception and let the Exception Service invoke a suitable exlet.

6. The Worklet Service is described in more detail in Section

ADEPT

ADEPT is a workflow management system that focuses on dynamic change. ADEPT was developed at the University of Ulm [189, 202]. This system uses powerful mechanisms that allow users to change running instances of process
models by, e.g., adding, deleting or replacing activities, or jumping forward in the process [189]. Besides the dynamic change of instances, ADEPT also enables the definition of the control-flow, the data-flow, temporal constraints (i.e., minimal and maximal durations of activities, deadlines, etc.) and preplanned exceptions (e.g., forward and backward jumps [190]) in process models. Moreover, the system guarantees static and dynamic correctness properties (e.g., it prevents missing input data, deadlocks, etc.) [191]. Therefore, ADEPT offers comprehensive support for flexibility by change.

Dynamic change is supported in two ways in ADEPT. First, so-called ad-hoc change relates to changing a single running instance [189, 191]. Figure 2.7 shows a screen of ADEPT handling a dynamic change. The system offers a complete set of operations for defining dynamic changes at a semantic level and ensures correctness via pre- and post-conditions for changes. The complexity associated with a change (e.g., missing data due to activity deletions) is hidden from users. The second type of run-time change provided by ADEPT is the so-called propagation of model changes to its running instances. In case of model change propagation, the change is applied only to those instances for which the model change does not conflict with the current instance state or previous ad-hoc changes.

Figure 2.7: Visualizing a dynamic change in ADEPT

Besides the support for dynamic change, ADEPT incorporates other useful features. For example, it considers inter-workflow dependencies and the semantic correctness of dynamic changes, as described in the following paragraphs. Most workflow management systems do not consider inter-workflow dependencies and allow instances to execute independently from each other. However, different instances are often semantically inter-related in some way [126].
ADEPT uses interaction expressions and interaction graphs to enable the specification and implementation of such dependencies [191].
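As a rough illustration of ADEPT's propagation rule described above (a model change is applied only to those instances whose execution history does not conflict with it), the following sketch shows one way such a compatibility check could look. The code and its change representation are hypothetical and do not reflect ADEPT's actual API.

```python
# Hypothetical sketch of ADEPT-style change propagation (not the real ADEPT API).
# A model change is propagated to a running instance only if the instance's
# execution history does not conflict with the change.

def conflicts(change, executed):
    """A change conflicts with an instance if it would alter the instance's past."""
    kind, activity = change
    if kind == "delete":
        # Deleting an already-executed activity would lose its results.
        return activity in executed
    if kind == "insert_before":
        # Inserting before an already-executed activity changes the past.
        return activity in executed
    return False

def propagate(change, instances):
    """Migrate compliant instances; leave conflicting ones on the old model."""
    migrated, skipped = [], []
    for name, executed in instances.items():
        (skipped if conflicts(change, executed) else migrated).append(name)
    return migrated, skipped

migrated, skipped = propagate(
    ("insert_before", "make cover"),
    {"i1": ["enter data"], "i2": ["enter data", "make cover"]},
)
```

Here instance i1 can be migrated because it has not yet reached the insertion point, while i2 stays on the old model because the change would affect its past.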
Another useful feature of ADEPT is the semantic check of dynamic changes [164, 165]. For example, in the medical domain it is often the case that certain medications should not be combined. Semantic constraints can be used to define undesired combinations of medications, e.g., activities administer Aspirin and administer Marcumar should not both be executed because these two medications are not compatible. Verification of semantic constraints (1) detects situations where these two medications are used together and (2) alerts the user performing the change about this problem. Note that semantic constraints are not necessarily enforced. Instead, an authorized user can commit a change even if it violates a constraint. In this case, the user needs to document the reason for violating the constraint. This approach allows for more flexibility and traceability in situations where problems indeed occur after the change.

Other Systems

There is a variety of workflow management systems available on the market. Although all systems have the same aim, i.e., automating business processes, each system has some unique features. For the purpose of illustration, in this section we briefly describe three more systems: FileNet, InConcert, and COSA.

FileNet [107] is a conventional workflow management system. Despite its traditional approach to process modeling and execution, FileNet offers its users the possibility of voting, i.e., it is possible to specify in the model that users should vote during execution in order to decide the routing of the process. FileNet Ensemble allows on-the-fly adaptations of running instances [201]. Another interesting feature of FileNet is the possibility to monitor the states of running instances and perform extensive statistical analysis of past executions. In addition, FileNet offers the possibility to evaluate process models by means of simulation.
InConcert was a workflow management system (it is no longer available on the market) built on the "workflow design by discovery" paradigm, which allows for the creation of templates based on the actual execution of instances. The motivation behind InConcert was to tempt users to design an instance on-the-fly, i.e., while executing it. As an extreme, InConcert allowed for ad-hoc building of instances via free routing processes, where the created instance is initially empty and its actual routing is created on-the-fly (cf. Section 2.1.5).

COSA [235, 236] allows for the definition of subprocesses and events that trigger them. Process models can be modified at run-time, but the change is applied only to future instances. COSA supports deadlines in a way that, on deadline expiry, a compensating activity can be launched. In addition, a compensating activity can also be invoked manually. COSA also allows for run-time deviations by reordering, skipping, re-doing, postponing or terminating activities.
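Run-time deviation operations such as those of FLOWer and COSA can be sketched on a simple sequential case. The code below is a hypothetical model of the cascading behavior described earlier for FLOWer (opening a later activity skips preceding pending activities; re-doing an activity re-opens all succeeding executed ones); it is not any vendor's actual API, and the activity names are taken from the insurance-claim example.

```python
# Hypothetical sketch of FLOWer-style deviations on a sequential case
# (not an actual FLOWer or COSA API).

class Case:
    def __init__(self, activities):
        self.activities = list(activities)               # fixed sequential order
        self.state = {a: "pending" for a in activities}  # pending/executed/skipped

    def execute(self, activity):
        self.state[activity] = "executed"

    def open(self, activity):
        # All preceding not-yet-executed activities become skipped.
        for a in self.activities[: self.activities.index(activity)]:
            if self.state[a] == "pending":
                self.state[a] = "skipped"

    def redo(self, activity):
        # The activity and all succeeding executed activities must be re-done.
        for a in self.activities[self.activities.index(activity):]:
            if self.state[a] == "executed":
                self.state[a] = "pending"

case = Case(["claim start", "register claim", "get police report", "close case"])
case.execute("claim start")
case.execute("register claim")
case.open("close case")    # "get police report" becomes skipped
case.redo("claim start")   # "claim start" and "register claim" must be re-done
```

The final state illustrates the drawback noted above: a single open or re-do can silently change the status of several other activities.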
2.3 Workflow Management Systems and the Organization of Human Work

The related work discussed so far originates from the information systems field, i.e., information technology is used to support business processes without much consideration for the role of humans in these processes and the organizational context. In this section we provide an overview of related work on the organization of human work. First, we describe two contrasting regimes for the organization of work: the Autocratic Work Regime (AWR) and the Democratic Work Regime (DWR). Second, we describe structural parameters for AWRs and DWRs defined by the organizational theory of Socio-Technical Systems (STS) [246]. Third, we evaluate workflow management systems against these structural parameters, and finally we summarize this overview.

Two Contrasting Regimes for the Organization of Work

An autocracy is a form of government in which unlimited power is held by a single individual. An Autocratic Work Regime is a practice of management in which there is a strict division of labor, allocating control and execution to separate individuals, i.e., managers and workers. An AWR is further characterized by a hierarchical organization with formal authority; an emphasis on formal, standardized rules; fixed specialized tasks per position; an absolute split into management and technical-support tasks ("staff" and "line"); and a fragmentation of the executive work into multiple, short-cycled tasks. In other words, decision making is centralized (cf. Section 1.3) in an AWR. By far the best-known AWRs are Scientific Management [114, 242] and the Classical Organization or Ideal Bureaucracy [105, 262]. Although their dominance is fading, these two AWRs still serve as organizational archetypes for both industrial and service organizations.
A democracy is a form of government that aspires to serve under the people rather than ruling over them. A Democratic Work Regime is defined as a management practice in which people actively take part in the actual decision-making process. In other words, decision making is local (cf. Section 1.3) in a DWR. Two ideal patterns can be distinguished: representative democracy and participative democracy. In [102], Emery defines representative democracy as "choosing by voting from among people who offer themselves as candidates to be our representatives" (page 1). In [103], Emery and Emery define participative democracy as "locating responsibility for coordination clearly and firmly with those whose efforts require coordination" (page 100). In a representative democracy the influence of people on decision making is rather indirect. This form, called political participation, is defined by Abrahamsson [40] as participation involving "the right to control the organization's executive (...) involvement in high-level goal setting and long-term planning" (pages ). In a participative democracy the
influence of people on decision making is direct. This form, called socio-technical participation, is defined by Abrahamsson as "participation in the organization's production, i.e., in the implementation of decisions taken on higher levels" [40]. By far the best-known DWR is the team-based organization [99, 231].

Socio-Technical Systems

Socio-Technical Systems (STS) [246] is an organizational theory that promotes the Self-Managed Work Team (SMWT) as the prime organizational unit of analysis and design. In STS, participative democracy is practiced by giving any potential member of an SMWT the opportunity, as well as the authority, to perform every single task, no matter whether it is executive, managerial, or supportive in character [99]. As its name says, STS advocates optimizing benefits from both the social and technical aspects of work [246]. We selected STS as a representative organizational DWR theory because it, like workflow technology, extensively considers the operational aspect of flexible work. Therefore, we shortly describe STS and its relation to workflow technology.

Within the STS school, De Sitter et al. identified a set of structural parameters that can be used as a typology for the characterization of AWRs and DWRs [231], as Table 2.2 shows. The semantics of each of the parameters is explained below. The structural parameters refer to the basic organization of production (e.g., the number of parallel processes) and the various aspects concerning the division of labor (e.g., performance and control). An AWR is characterized by functional concentration, separation of performance and control, performance specialization, performance differentiation, division of control functions, control specialization, and control differentiation [231].
This is best typified by bureaucratic office work in which each employee performs one single, simple task only, for all sorts of different project assignments. Supervisors allocate individual tasks on a daily basis, while specialized technical staff members care for the planning of work and the administering of quality procedures. A DWR is characterized by functional deconcentration, integration of performance and control, multiple performance integrations, and multiple control integrations. A typical representative of DWRs is self-managed office work, in which teams of employees carry out whole projects by allocating, planning, and controlling full project assignments without the additional help of a supervisor or technical specialist.

Workflow Management Systems and the Structural Parameters

To characterize the style of work imposed by workflow management systems, we evaluate these systems against the structural parameters of De Sitter et al.
[231], as shown in Table 2.3. The evaluation of workflow technology against the structural parameters shows that workflow technology enforces an AWR (cf. Table 2.2), and thus prevents the functioning of DWRs (e.g., SMWTs).

Table 2.2: Evaluation of AWRs and DWRs with respect to the STS structural requirements of De Sitter [231]

  Socio-Technical requirement                                                    AWR  DWR
  1 functional deconcentration (multiple parallel processes)                     NO   YES
  2 integration of performance and control                                       NO   YES
  3 performance integration A (whole tasks)                                      NO   YES
  4 performance integration B (prepare + produce + support)                      NO   YES
  5 control integration A (sensing + judging + selecting + acting)               NO   YES
  6 control integration B (quality + maintenance + logistics + personnel, etc.)  NO   YES
  7 control integration C (operational + tactical + strategic)                   NO   YES

Table 2.3: Evaluation of workflow management systems with respect to the STS structural requirements of De Sitter [231]

  Socio-Technical requirement                                                    workflow management systems
  1 functional deconcentration (multiple parallel processes)                     NO: work is repeatedly executed in the same manner.
  2 integration of performance and control                                       NO: people perform and the system controls the work.
  3 performance integration A (whole tasks)                                      NO: people execute specialized, small activities.
  4 performance integration B (prepare + produce + support)                      NOT APPLICABLE
  5 control integration A (sensing + judging + selecting + acting)               NO: people cannot execute a selected control action.
  6 control integration B (quality + maintenance + logistics + personnel, etc.)  NOT APPLICABLE
  7 control integration C (operational + tactical + strategic)                   NO: the system is in charge of the operational control.

Functional deconcentration. This parameter refers to the grouping and coupling of performance functions (process models) with respect to work orders [231].
If all orders undergo the same procedure, we talk about functional concentration [231]. If, due to their variety, orders undergo different procedures, we talk about functional deconcentration [231]. Traditional workflow management systems lack flexibility and force people to repeatedly execute their work in the same manner [77, 109, 125, 143, 153, 166, 188, 233]. Therefore, business processes are constantly executed in the same way, regardless of the nature of the work orders.

Integration of performance and control. In a DWR, the same people who perform the work are also authorized and responsible for control [231]. This so-called integration of performance and control is not possible in a conventional workflow management system because, due to the lack of flexibility, system users cannot influence the way they work. Instead, the process models that prescribe the way people execute their work are developed by external experts [77, 109, 125, 143, 153, 166, 188, 233].

Integration into whole tasks. Instead of specialized, short-cycled tasks, STS advocates whole tasks that form a meaningful unit of work [231]. When working in DWRs, people deal with more variety in their work. However, in workflow management systems, a business process is represented by a large process model consisting of individual activities [29, 66, 93, 109, 110, 125, 266]. To this end, bigger, meaningful units of work are divided into separate, short-cycled tasks that are executed by authorized individuals. Working with conventional workflow management systems implies performance specialization, so any form of performance integration is lacking.

Integration of preparation, production, and support. Preparation, production and support functions must be integrated at the workplace level [231]. Workflow technology is used to support only the production function [29, 93, 266]. Preparation and support functions are allocated elsewhere within the company, and workflow technology does not influence this integration.
Therefore, this parameter is considered to be not applicable in the evaluation of workflow technology.

Integration of control functions: sensing, judging, selecting, and acting. The functions of a control cycle are: (1) sensing the process states, (2) judging the need for a corrective action, (3) selecting the appropriate corrective action, and (4) acting with the selected control action [231]. In a DWR, the four control functions should be integrated [231]. Although, when working with a workflow management system, people can sense the need for control and are able to judge which controls are needed, they do not have the authorization and/or possibility to select and execute the appropriate control activities [109, 125]. Due to the lack of flexibility of workflow management systems, it
is not possible to successfully integrate the control functions when working with such a system.

Integration of the control of quality, maintenance, logistics, personnel, etc. The control of quality, maintenance, logistics, personnel, etc. should be conducted at the workplace level [231]. Workflow technology is used to support only the production function [29, 93, 266]. The control of quality, maintenance, logistics, and personnel is allocated elsewhere within the company, and workflow technology does not influence this integration. Therefore, this parameter is considered to be not applicable in the evaluation of workflow technology.

Integration of operational, tactical and strategic controls. Operational, tactical and strategic controls should be integrated at the workplace level [231]. Independently of workflow technology, an organization can integrate (or not) operational, tactical, and strategic controls. Although the use of a workflow management system does not explicitly influence tactical and strategic control, it prevents this control integration at the workplace level because workers are not made responsible for operational control [77, 109, 125, 143, 153, 166, 188, 233]. In this case, operational control is external, i.e., managers and business-process modelers control the operational design of work. Therefore, this integration cannot be established at the workplace level, and this parameter is not supported by conventional workflow management systems.

Summary

This section provided an overview of the work related to the requirements for a flexible style of human work in the organizational context. It showed that, despite the fact that there is not much work combining the fields of IT and organizational science, the lack of flexibility of workflow technology indeed disables the DWRs advocated by many organizational theories, e.g., STS and SMWTs [246].
2.4 Outlook

In this chapter we presented the research conducted in the area of flexibility of workflow management systems. As indicated by many researchers, workflow management systems lack flexibility because users cannot adjust the execution of processes to the requirements imposed by specific situations. We also described why inflexible systems prevent the implementation of democratic regimes of work in practice. We used a taxonomy of features that enhance the flexibility of workflow management systems to classify the relevant work in this field: (1) design of flexible process models, (2) underspecification in process models, (3) change of running processes, and (4) deviation from prescribed process models.
Although much research has been done in each of these areas, an approach that unifies all relevant features is still lacking. In this thesis, we propose a new, constraint-based approach to workflow management systems which primarily aims at enhancing flexibility by design, but also enables all other types of flexibility (i.e., flexibility by underspecification, change, and deviation). First we define the class of constraint-based process modeling languages on an abstract level and one concrete formal language for constraint specification. Then we present a proof-of-concept prototype that shows how the proposed constraint-based approach can be applied to workflow management systems.
Chapter 3 Flexibility of Workflow Management Systems

Workflow management systems influence to a great extent the way employees execute their work. As discussed in Sections 1.3 and 2.3, modern organizational theories advocate democratic work regimes in which people can control their work. To be able to support this kind of work, workflow management systems must offer a high degree of flexibility. The flexibility of a workflow management system represents the degree to which users can choose how to do their work, instead of having the system decide how they work [125]. In this chapter we describe contemporary workflow management systems and the features that enhance the flexibility of these systems.

The remainder of the chapter is structured as follows. First, in Section 3.1 we describe the functionality of contemporary workflow management systems based on the three dominant workflow perspectives: the control-flow, the resource and the data perspective. Second, in Section 3.2 we illustrate the flexibility of contemporary workflow management systems using simple, system- and language-independent examples. Finally, in Section 3.3 we conclude this chapter by proposing a new approach to process modeling that is able to offer all types of flexibility.

3.1 Contemporary Workflow Management Systems

Despite the complexity of workflow management systems and the high impact they have on business processes, good standardization is still lacking in the area of workflow technology. Vendors of workflow management systems tend to offer different functionalities in their systems. Standardization is also lacking with respect to the terminology used in workflow technology. On the one hand, the same concepts often have different names in various systems, which creates the illusion that a particular functionality differs between systems. On the other hand,
it might happen that systems use the same name for different concepts or functionalities. In this section we describe the main concepts and functionalities of contemporary workflow management systems.

The Workflow Management Coalition (WfMC) [9] aims at standardizing workflow technology. One of the efforts of the WfMC in the direction of standardization was the reference model for the general architecture of workflow management systems [75]. The WfMC's reference model shows many possible components of workflow management systems. However, three of these components are the core of every system: a process definition tool, a workflow engine, and a workflow client application. These three components are shown in Figure 3.1.

Figure 3.1: The three main components of a workflow management system

First, a process definition tool is used by process model developers to create process models. For example, Figure 3.1 shows that a model for the record album process can be developed using such a tool. Second, a workflow engine is needed to manage the execution of instances of process models. In the example presented in Figure 3.1, this ensures that each artist can record an album in the way prescribed by the record album model.
Based on the definition of a process model, the workflow engine decides which activity(-ies) can be executed by which users and at what point in time. Third, a workflow client application
presents the so-called worklist (i.e., a list of all activities that can be executed in running instances) to each user. Using this tool each user can execute activities available in the worklist. In Figure 3.1 we see two users: (1) the worklist of the first user contains two activities add song (i.e., for two running instances), (2) the worklist of the second user contains activity make cover for the third running instance, and (3) the first user is currently executing activity add song for one of the instances. Each time a user executes an activity in an instance, the workflow engine decides, based on the process model and the current state of the instance, which activities can be executed next, and updates the worklists of all users with this information. For example, people working on any of the three instances presented in Figure 3.1 will be able to execute activity add song only after executing activity enter data. Figure 3.2 shows the three main perspectives of process models: the control-flow perspective, the resource perspective, and the data perspective. These three perspectives determine the order in which activities will be executed, which users can execute which activities, and which information will be available during execution. The control-flow perspective of a process model defines in which order activities can be executed [29,35,208,213]. For example, Figure 3.2(a) shows that the control-flow perspective of the process model record album specifies that (1) the process starts with activity enter data, (2) followed by an arbitrary number of executions of activity add song, and (3) the process ends by executing activity make cover.
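The control-flow perspective of the record album example can be sketched as a simple successor relation. This is a hypothetical Python encoding of ours, purely for illustration; it is not a notation used by any of the systems discussed here.

```python
# Hypothetical encoding (ours) of the control-flow perspective of the record
# album process: for each activity, the activities that may directly follow it.
control_flow = {
    "enter data": ["add song", "make cover"],  # zero or more songs may follow
    "add song":   ["add song", "make cover"],  # arbitrary number of repetitions
    "make cover": [],                          # the process ends here
}

def may_follow(done: str, nxt: str) -> bool:
    """True iff activity `nxt` may directly follow activity `done`."""
    return nxt in control_flow[done]

print(may_follow("enter data", "add song"))  # True
print(may_follow("make cover", "add song"))  # False: the process has ended
```

A workflow engine performs essentially this check, for the current state of each instance, whenever it updates the worklists.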
Figure 3.2: The three perspectives of process models ((a) control-flow perspective; (b) resource perspective, with roles producer, sound technician and designer; (c) data perspective, with data elements artist, album, title, duration and cover)

The resource perspective defines which (human) resources are authorized to execute each of the activities and how the actual resources are allocated to execute the activities [10, 29, 35, 106, 208, 211, 216]. Figure 3.2(b) shows the resource perspective of the process model record album with four users having three roles. The producer can enter data, each of the two sound technicians can add song
and the designer can make cover. If a user has the appropriate role to execute an activity, then we say that the user is authorized to execute the activity. Naturally, an activity that is supposed to be executed next will be offered only to the worklists of authorized users. The data perspective of a process model defines which data elements are available in the process and how users can access them while executing activities [29, 208, 210, 212]. Figure 3.2(c) shows the data perspective of the record album process model. First of all, there are five data elements in this process, i.e., artist, album, song title, song duration and cover. Second, for each activity it is defined (1) which data elements are available (e.g., album, artist, duration and title are available in activity add song) and (2) how these data elements can be accessed (e.g., data elements album and artist can be seen but not edited, while duration and title can be edited in activity add song). If the value of a data element can be accessed but not edited in an activity, then we say that this is an input data element for this activity. If the value of a data element can be edited in an activity, then we say that this is an output data element for this activity. Figure 3.2(c) shows that, for activity add song: (1) album and artist are input data elements and (2) title and duration are output data elements. In the remainder of this section each of the perspectives is discussed in detail: the control-flow perspective in Section 3.1.1, the resource perspective in Section 3.1.2, and the data perspective in Section 3.1.3. Each of these three sections starts with a short description and is organized as follows. CPN models. First, we present the perspective in a system-independent way using Colored Petri Net (CPN) models [1, 138, 139, 152]. CPNs are an extension of classical Petri nets [199].
There are several reasons for selecting CPNs as the language for modeling in the context of workflow management: (1) CPNs have formal semantics and are independent of any workflow system, (2) CPNs are executable and allow for rapid prototyping, gaming, and simulation, (3) CPNs have a graphical representation and their notation is intuitively related to existing workflow languages, and (4) the CPN language is supported by CPN Tools, a graphical environment to model, enact and analyze CPNs. Workflow management systems. Second, we present how the perspective is realized in three commercial workflow management systems: Staffware [238], FileNet [107] and FLOWer [180]. The goal is to provide insight into the functionality and look-and-feel of contemporary systems. As discussed in Section 2.2, Staffware and FileNet are two typical examples of traditional workflow management systems, while FLOWer is a case-handling system [195]. As such, these three systems provide a good overview. Workflow patterns. Third, several patterns are described for the perspective in order to present it in a system-independent way. In an attempt to identify unified solutions to standard issues in workflow tech-
nology, the workflow patterns initiative [10, 35] identified many workflow patterns [208, 211, 213]. In addition to the three basic perspectives (i.e., control-flow, resource and data), patterns have also been identified for the exception handling perspective [10]. Note that a large number of patterns is identified for each of the perspectives. However, due to page limits we present only a few of the patterns for illustration purposes. Nevertheless, insights obtained through the complete set of patterns are used throughout this thesis. Overview. Finally, a summarized overview of the perspective is given.

3.1.1 The Control-Flow Perspective

Despite the different languages (i.e., notations) used in various commercial and academic tools, the control-flow perspective plays an important role in process models because it determines the order in which users can execute activities.

CPN Model(s)

For illustration purposes we will use a simple example of the Handle Complaint process [29]. Figure 3.3 shows the CPN model of the Handle Complaint process. A CPN model consists of places and transitions connected by arcs. Places (represented by ovals) are typed, i.e., the tokens in a place have values of a particular type (or color in CPN jargon). These types are a subset of the default data types in Standard ML, such as integer and string, and additional types can be composed using constructs such as tuple, list and record. The number of tokens per place can vary over time. The value of a token indicates the properties of the object represented by this token. There are nine places in the CPN shown in Figure 3.3 and all of them are of the type ID, which stands for the complaint identification number. Transitions (represented by rectangles) may consume tokens from places and produce tokens in places, as specified by inscriptions on arcs between places and transitions.
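The place/transition token game just described can be sketched in a few lines of Python. This is our own simplified illustration (it ignores colored token values on arcs and most of the CPN Tools semantics): a transition consumes one token from each input place and produces one token in each output place.

```python
# A minimal token-game sketch (our own illustration, not the full CPN
# semantics): places hold multisets of tokens, and a transition consumes one
# token from each input place and produces one in each output place.
from collections import Counter

class Net:
    def __init__(self):
        self.marking = Counter()   # (place, token value) -> number of tokens
        self.transitions = {}      # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name, token):
        inputs, _ = self.transitions[name]
        return all(self.marking[(p, token)] > 0 for p in inputs)

    def fire(self, name, token):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name, token), f"{name} not enabled for {token}"
        for p in inputs:
            self.marking[(p, token)] -= 1
        for p in outputs:
            self.marking[(p, token)] += 1

# Fragment of the Handle Complaint process: the parallel split after "start".
net = Net()
net.add_transition("start", ["i1"], ["p1", "p2"])
net.add_transition("contact client", ["p1"], ["p3"])
net.add_transition("contact department", ["p2"], ["p4"])
net.marking[("i1", 1)] = 1                 # one complaint with id 1
net.fire("start", 1)
net.fire("contact department", 1)          # the two branches fire in any order
net.fire("contact client", 1)
print(net.marking[("p3", 1)], net.marking[("p4", 1)])  # 1 1
```

The place and transition names follow Figure 3.3; everything else is an assumption made for the sketch.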
There are seven transitions in the CPN in Figure 3.3, i.e., start, contact department, contact client, assess, pay, send letter and file. A more detailed discussion of the CPN concepts is beyond the scope of this thesis. In the remainder, we assume that the reader is familiar with the CPN language and refer to [1,138,152] for more details. The CPN in Figure 3.3 defines the control-flow of the Handle Complaint process. After receiving a complaint from a client and starting the process, the officer can in parallel (i.e., in any order) contact the client and contact the
department to collect information about the complaint. After gathering this information, the officer assesses the complaint. If the assessment is positive, the complaint is accepted and the department pays the client. If the assessment is negative, a notification letter is sent to the client. At the end of the process, the complaint and the assessment result are filed in the archive.

Figure 3.3: CPN for the Handle Complaint process

Workflow Management Systems

Workflow management systems tend to use system-specific notations (i.e., languages) for specifying the control-flow perspective in process models. Figures 3.4, 3.5 and 3.6 show the control-flow perspective of the Handle Complaint process in three commercial workflow management systems: Staffware, FileNet and FLOWer, respectively. Indeed, this process is modeled differently in these three systems: while Staffware and FileNet present the whole model on a single level, modeling decisions in FLOWer (i.e., to execute activity pay or send letter) are typically modeled on a separate level in the model. Also, the three systems present the model using different graphical elements and styles.

Figure 3.4: Handle Complaint process in Staffware

Patterns

Control-flow patterns [35, 208, 213] represent typical constructs that can occur in process models. The twenty initial patterns presented in [33-35] were revised and extended with twenty-three new patterns in [213]. For illustration purposes,
Figure 3.5: Handle Complaint process in FileNet

Figure 3.6: Handle Complaint process in FLOWer ((a) main model, (b) decision)

we first describe four simple patterns and present them using CPN models in Figure 3.7. First, the sequence pattern is used to specify that an activity is enabled after the completion of a preceding activity [213]. Figure 3.7(a) shows a CPN model representing a sequence of two activities A and B [213]. Second, the parallel split pattern represents the divergence of one control-flow thread into two or more branches that execute concurrently [213]. Figure 3.7(b) shows a CPN model representing a parallel split after activity A into parallel branches with activities B and C [213]. For example, the Handle Complaint process starts with a parallel split into two branches, i.e., contact department and contact client. Third, the synchronization pattern represents the convergence of two or more input branches into a single output thread only after the activities in all input branches have been completed [213]. Figure 3.7(c) shows a CPN model representing the synchronization of two branches containing activities A and
Figure 3.7: CPN models of several control-flow patterns [213] ((a) sequence, (b) parallel split, (c) synchronization, (d) exclusive choice)

B into a single control-flow thread containing activity C [213]. For example, activities contact department and contact client in the Handle Complaint process are synchronized into one control-flow thread containing activity assess. Finally, the exclusive choice pattern represents divergence to two or more branches, where the thread of control is passed to only one outgoing branch [213]. Figure 3.7(d) shows a CPN model representing an exclusive choice between activities B and C after activity A [213]. The Handle Complaint process contains an exclusive choice between activities pay and send letter after activity assess. Some of the control-flow patterns are much more complex than the patterns shown in Figure 3.7. Consider, for example, the blocking discriminator pattern, which represents a kind of so-called 1-out-of-M join [213]. This pattern represents a situation in which two or more (i.e., M) branches join into a single control-flow branch. For example, when handling a cardiac arrest, the check breathing and check pulse activities run in parallel. Once the first of these has completed, the triage activity is commenced. Completion of the other activity is ignored and does not result in a second instance of the triage activity [213]. The CPN model of this pattern is presented in Figure 3.8: here we can see how two branches (i.e., transitions A1 and Am) are joined into one branch with transition B. In addition to identifying 43 control-flow patterns, several commercial workflow systems are evaluated in [35, 213] based on their pattern support.
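The 1-out-of-M join semantics of the blocking discriminator can be sketched compactly. The sketch below is our own simplification of the behavior described above (the first branch to complete triggers the join; the remaining completions are ignored; once all M branches have completed, the join resets for the next cycle); it abstracts from the blocking of branches that gives the pattern its name.

```python
# A simplified sketch (ours) of the blocking discriminator described above:
# the first of M parallel branches to complete enables the follow-up activity,
# later completions are ignored, and the join resets once all M are done.
class BlockingDiscriminator:
    def __init__(self, m):
        self.m = m
        self.completed = set()

    def branch_completed(self, branch):
        """Return True iff this completion should trigger the follow-up activity."""
        first = len(self.completed) == 0
        self.completed.add(branch)
        if len(self.completed) == self.m:   # all branches done: reset for reuse
            self.completed.clear()
        return first

join = BlockingDiscriminator(m=2)
print(join.branch_completed("check breathing"))  # True  -> triage starts
print(join.branch_completed("check pulse"))      # False -> ignored; join resets
print(join.branch_completed("check pulse"))      # True  -> next cycle
```

Even this handful of lines hints at why procedural workflow languages struggle with the pattern: the join needs memory across branch completions, not just a local routing decision.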
The evaluation results [213] of control-flow pattern support in Staffware, FileNet and FLOWer are given in Appendix B.1 of this thesis. For example, each of the three systems (i.e., Staffware, FileNet and FLOWer) supports the sequence, parallel split, synchronization and exclusive choice patterns (cf. Figure 3.7) and none of them supports the blocking discriminator pattern (cf. Figure 3.8), as shown in Table 3.1. Evaluation of systems shows that, although different systems tend to
support different patterns, older systems tend to support only approximately half of the identified patterns [35, 213]. For example, Staffware fully supports 14 patterns [35,213]. FileNet fully supports 17 patterns and partially supports one pattern [35,213]. FLOWer fully supports 16 patterns and provides partial support for 8 additional patterns [35,213]. The fact that none of the systems supports all patterns reflects the diversity of the way various workflow management systems handle the control-flow perspective.

Figure 3.8: CPN model of the blocking discriminator pattern [213]

Table 3.1: Support for some control-flow patterns in Staffware, FileNet and FLOWer

Pattern                  Staffware   FileNet   FLOWer
sequence                     +           +         +
parallel split               +           +         +
synchronization              +           +         +
exclusive choice             +           +         +
blocking discriminator       -           -         -

(+ = support, - = no support)

Introducing standards into workflow technology remains an important challenge in the field. For example, BPEL [53,54,178] is one of the most popular initiatives to standardize workflow technology by proposing a standard language and its execution framework. Although it is widely accepted as a standard by both industry and research, BPEL does not support all the control-flow patterns. In fact, BPEL supports 17 patterns directly, supports 4 patterns only partially and does not provide any support for 22 patterns [35,213].

Overview

The control-flow perspective of current workflow management systems typically has a procedural nature. In other words, process models are constructed using control-flow patterns which specify in detail the exact procedure of how the work
should be done. The procedural nature of process models is suitable for highly structured processes with a high repetition rate, i.e., when the work is repeatedly done in the same manner (cf. Section 1.2). However, when it comes to processes that should be controlled by users (i.e., people can choose how to work), procedural process models become too complex. Consider, for example, the blocking discriminator pattern presented in Figure 3.8. Even though it applies to rather simple situations, many systems do not support this pattern. The control-flow perspective of current systems implies a detailed specification of exactly how the control flows through the model. This makes it very hard or even impossible to specify the more relaxed concepts that people use in their work. For example, there is no control-flow pattern that would specify that two (or more) activities should never both be executed in the same process instance, regardless of how often and at which point in time one of them is executed. Consider, for example, a medical process that contains (amongst others) the activities examine prostate and examine uterus. Regardless of whether and how many times either of these activities is executed, they can never both be executed for the same patient. In the best case, implementing this simple requirement in a procedural model using control-flow patterns would require an extensive work-around resulting in a complex model.

3.1.2 The Resource Perspective

After the system decides, based on the control-flow specification, which activities are next in line to be executed, the resources that can/will execute these activities are selected based on the resource perspective. The resource perspective depends on (1) how resources and their roles are defined and classified in the system and (2) how the system decides who can execute enabled activities and when.
We refer to the mechanism that handles the resource perspective in a system as the work distribution mechanism of the system. As with the control-flow perspective, the resource perspective is handled differently in different workflow management systems due to system-specific work distribution mechanisms. Note that we present the resource perspective in more detail than the control-flow perspective. The reason is that less attention has been devoted to the resource perspective in the workflow literature; therefore, it is worthwhile to discuss this perspective more thoroughly.

CPN Model(s)

We have developed a CPN model that represents a simple work distribution mechanism of a generic and simple workflow management system. We refer to this model as the basic model. The colors presented in Table 3.2 are used in the basic model to represent the main concepts of workflow management systems. An activity is represented as a string carrying the name of the activity and an
instance as a number identifying the instance. A work item is a combination of an instance identifier and an activity name, i.e., it represents an activity that needs to be executed for an instance. Each user, role and group is represented as a string carrying the object name. While a role represents the qualifications of users (e.g., secretary), a group represents an organizational department (e.g., sales).

Table 3.2: CPN colors representing basic workflow concepts

colset ACTIVITY = string;
colset INSTANCE = int;
colset WI = product INSTANCE*ACTIVITY;
colset USER = string;
colset ROLE = string;
colset GROUP = string;

A life cycle model of a work item shows how a work item changes state during work distribution [29, 91, 93, 136, 160, 175]. The basic model uses a simple life cycle model of work items, which covers only the general, rather simplified, behavior of workflow management systems (e.g., errors and aborts are not considered). Figure 3.9 shows the life cycle of a work item in the basic model. After a new work item has arrived, it is automatically enabled and then taken into distribution (i.e., state initiated). Next, the work item is offered to the user(s). Once a user selects the work item, it is assigned to him/her, and (s)he can start executing it. After the execution, the work item is considered to be completed. This may trigger new work items based on the control-flow perspective of the model, and the user can begin working on the next work item.
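The life cycle just described can be rendered as a small state machine. The encoding below is a hypothetical sketch of ours; the state names follow the description above, and, mirroring the simplification in the text, errors, aborts and withdrawal are not modeled.

```python
# A sketch (hypothetical encoding, ours) of the work item life cycle of
# Figure 3.9: the allowed state changes of a work item in the basic model.
LIFE_CYCLE = {
    "new":       ["enabled"],
    "enabled":   ["initiated"],
    "initiated": ["offered"],
    "offered":   ["selected"],
    "selected":  ["started"],
    "started":   ["completed"],
    "completed": [],
}

class WorkItem:
    def __init__(self, instance, activity):
        self.instance, self.activity = instance, activity
        self.state = "new"

    def advance(self, new_state):
        # Reject any transition not permitted by the life cycle model.
        if new_state not in LIFE_CYCLE[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

wi = WorkItem(1, "add song")
for s in ["enabled", "initiated", "offered", "selected", "started", "completed"]:
    wi.advance(s)
print(wi.state)  # completed
```

The CPN basic model enforces exactly this discipline, but implicitly, through the places a work item token can occupy.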
Figure 3.9: Basic model - work item life cycle (a work item passes through the states new, enabled, initiated, offered, selected, started and completed; offered work items can also be withdrawn)

To simulate (execute) the work distribution model in CPN Tools, it is necessary to initialize the model by defining input elements. Table 3.3 shows the four input elements of the basic model. For every input element in Table 3.3 the element name is shown (i.e., system users, new work items, activity maps and user maps). Besides the name, the table gives a short description of the element, the CPN color that represents the element, and a simple example showing a possible initial value. Figure 3.10 shows the input elements from Table 3.3 graphically. First, there are two system users (i.e., Mary and Joe), two roles (i.e., secretary and manager) and one group (i.e., sales). User maps define which users have
which roles and to which groups they belong. For example, in the user maps it is specified that Mary has the role of secretary in the sales department. Activity maps define which role and group a user needs to have in order to be able to execute an activity. For example, in the activity maps it is specified that activity contact client can be executed only by users that have role secretary and belong to group sales. Initially available work items are shown as new work items in Table 3.3. For example, work items for activities contact department and contact client from the instance with identification 1 are initially available in the work distribution. As a model of an abstract workflow management system, we have developed the basic model on the basis of three simplifying assumptions: (1) we abstract from the control-flow perspective (i.e., how the system decides which activities are enabled and creates work items for them), (2) we only consider normal behavior (i.e., work items are completed successfully; errors and aborts are not included), and (3) we abstract from the user interface. The basic model is organized into two modules: the work distribution and the work lists module, as shown in Figure 3.11. The CPN language allows for the decomposition of complex nets into sub-pages, which are also referred to as subsystems, sub-processes or modules. By using such modules we obtain a layered hierarchical structure. The two modules communicate by exchanging messages via six places. These messages contain information about a user and a work item, i.e., each place is of the type user work item (colset UWI = product User * WI). Table 3.4 shows the description of the semantics of the different messages that can be exchanged in the model. For each message the name of the referring CPN place is given in the first column and a short description in the second column.
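The authorization criterion described above (an activity map specifies a role and a group; a work item is offered to every user whose user map contains both) can be sketched in Python. The data mirrors the examples of Table 3.3, but the function itself is our own rendering, not the actual CPN inscription.

```python
# A Python rendering (our own sketch) of the allocation criterion described
# above: a work item for an activity is offered to every user whose user map
# contains the role and the group specified in the activity map.
iumaps = [("Mary", ["secretary"], ["Sales"]),
          ("Joe",  ["manager"],   ["Sales"])]
iamaps = {"contact client":     ("secretary", "Sales"),
          "contact department": ("secretary", "Sales"),
          "assess":             ("manager",   "Sales")}

def offer(work_item, amaps, umaps):
    """Return (user, work_item) pairs for all authorized users."""
    instance, activity = work_item
    role, group = amaps[activity]
    return [(user, work_item)
            for user, roles, groups in umaps
            if role in roles and group in groups]

print(offer((1, "contact client"), iamaps, iumaps))  # [('Mary', (1, 'contact client'))]
print(offer((1, "assess"), iamaps, iumaps))          # [('Joe', (1, 'assess'))]
```

As the text notes, this particular criterion (role and group must both match) is a choice made in the basic model; real systems may use other criteria.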
The work distribution module manages the distribution of work items by making sure that work items are executed correctly. This module allocates (identifies) the users to whom new work items should be offered, based on authorization (AMap) and organization (UMap) data. Figure 3.12(a) shows the work distribution module. The new work items appear as input values (i.e., the initial marking of place new work items), generated based on the control-flow perspective. The first transition to fire is offers, which uses the function offer to decide to which users the work item should be offered and creates user work items in place to be offered. For a given activity, this function first retrieves the authorized role and group from amaps in place activity map and then retrieves the authorized user(s) with this role and group from umaps in place user map. As a result, a message is sent to the work lists module to offer the work item to the selected users. Although in the basic model the users authorized to execute an activity are the users that have the role and are in the group specified in the amaps for the activity, this criterion may vary from system to system. The work lists module sends a message that a user wishes to select a work item by placing a user work item token in place selected. The message (token) contains the infor-
Table 3.3: Input for the basic model

system users: a set of available users;
CPN color: colset Users = list User;
example: iuser = 1`Mary ++ 1`Joe;

user maps: the organizational structure used to map users to organizational entities such as roles and groups;
CPN color: colset UMap = product User * Roles * Groups; (where colset Roles = list Role; colset Groups = list Group;)
example: iumaps = [(Mary, [secretary], [Sales]), (Joe, [manager], [Sales])];

activity maps: for every activity, authorization is defined with a role and a group;
CPN color: colset AMap = product Activity * Role * Group;
example: iamaps = [(contact department, secretary, Sales), (contact client, secretary, Sales), (assess, manager, Sales), (send letter, secretary, Sales), (pay, manager, Sales), (file, secretary, Sales)];

new work items: work items that have arrived and are ready to be distributed to users;
CPN color: colset WI = product Instance * Activity;
example: iwi = 1`(1,contact department) ++ 1`(1,contact client);

Figure 3.10: Graphical illustration of the basic model input from Table 3.3 (users Mary and Joe; roles secretary and manager; group sales; activities contact department, contact client, assess, send letter, pay and file)
Figure 3.11: Basic model (the modules work distribution and work lists communicate via the places to be offered, withdrawn offer, selected, approved, rejected and completed, all of type UWI)

Table 3.4: Exchange of messages between modules work distribution and work lists

to be offered: A work item is offered to the user.
withdrawn offer: Withdraw the offered work item from the user.
selected: The user requests to select the work item.
approved: Allow the user to select the work item.
rejected: Do not allow the user to select the work item.
completed: The user has completed executing the work item.

mation about the work item and the user that requests to select it. If the related work item has already been selected, transition reject cancels this request. If not, transition selects transfers the user work item from place offered work items to place assigned work items, approves the request from the work lists by putting a token in place approved, and withdraws all the other offers for the related work item via place withdrawn offer. Finally, when the user completes a work item, the related token appears in place completed. Transition completes matches the user work item tokens in place completed with tokens in place assigned work items, removes them from those two places, and produces the referring user work item token in place completed work items. This user work item is considered to be completed by the user, and it is archived as a closed work item. Figure 3.12(b) shows the work lists module. This module receives messages from the work distribution module regarding work items that need to be offered to specified users. The work lists module further manages events associated with the activities of users. It is decomposed into four sub-modules, which correspond to the basic actions users can perform: log on and off (cf. Figure 3.12(c)), select work (cf. Figure 3.12(d)), start work (cf.
Figure 3.12(e)), and stop work (cf. Figure 3.12(f)). In the log on and off sub-module (cf. Figure 3.12(c)) every user can freely choose when to log on to (transition log on) or off from (transition log off) the system. Users who are currently logged on to the system are represented as tokens in place logged on and users who are currently logged-
Figure 3.12: Modules of the basic model ((a) work distribution, (b) work lists, (c) log on and off, (d) select work, (e) start work, (f) stop work)
off from the system are represented as tokens in place logged off. The select work sub-module (cf. Figure 3.12(d)) automatically fires transition insert and moves the user work item token from place to be offered to place active work items. If the work item is withdrawn, the token is removed from place active work items. When a user wishes to select a work item, transition select fires, creating a token in place selected (to send a request to the work distribution module), and archives this request by creating a token in place requested. Note that only users that are logged on to the system can select work items. The work lists module (cf. Figure 3.12(b)) proceeds with the user work item in place requested following one of two alternative scenarios. First, if a message (user work item token) arrives at place rejected, transition abort automatically fires and removes the token from places rejected and requested. Second, if a message (user work item token) arrives at place approved, the user can select the work item and further flow is directed to the start work sub-module. In the start work sub-module (cf. Figure 3.12(e)) transition start removes the user work item token from places requested and approved and creates a token in place in progress. Note that only users who are currently logged on to the system can start work items. Users that are logged on to the system can complete a work item by removing a token from place in progress and creating a token in place completed (cf. Figure 3.12(f)).

Workflow Management Systems

In order to analyze the work distribution of Staffware, FileNet and FLOWer, we have developed a CPN model of the work distribution mechanism of each of them. These three systems (and workflow management systems in general) tend to use different work distribution concepts and completely different terminologies.
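Before turning to the system-specific models, the select/approve/reject exchange of the basic model (cf. Table 3.4) can be summarized in a few lines. This is our own simplification, abstracting from the token flow: the first user to request a work item gets it approved and all other offers for that item are withdrawn; any later request is rejected.

```python
# A sketch (our own simplification of the Table 3.4 message exchange):
# the first select request for a work item is approved and all competing
# offers are withdrawn; subsequent requests for the same item are rejected.
class WorkDistribution:
    def __init__(self):
        self.offered = {}    # work item -> set of users it is offered to

    def offer(self, work_item, users):
        self.offered[work_item] = set(users)

    def select(self, user, work_item):
        """Return 'approved' or 'rejected' for the user's select request."""
        if work_item in self.offered and user in self.offered[work_item]:
            del self.offered[work_item]      # withdraw all other offers
            return "approved"
        return "rejected"

wd = WorkDistribution()
wd.offer((1, "contact client"), ["Mary", "Anne"])   # Anne is hypothetical
print(wd.select("Mary", (1, "contact client")))     # approved
print(wd.select("Anne", (1, "contact client")))     # rejected
```

This mirrors how transitions selects and reject behave in the work distribution module: a work item cannot be selected more than once.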
To maintain a common basis for the models of work distribution in Staffware, FileNet and FLOWer, we have extended the basic model for each of the three specific systems. Due to the size and complexity of the work distribution models for Staffware, FileNet and FLOWer [182], we present these models in Appendix A of this thesis. The work distribution CPN models of these systems show that, although some concepts are represented and named differently in the systems, they are in fact very similar (cf. Appendix A). On the other hand, work distribution and the related CPN model of FLOWer are more complex, because users have many more actions available (i.e., execute, open, skip, undo and redo work items) when working with this case-handling system, as shown in Appendix A.3.

Patterns

The workflow resource patterns [10, 35, 208, 211, 216] capture the various ways in which resources are represented and utilized in workflows. In this chapter we do not elaborate on each of the 43 patterns described in [216], but we discuss four of them for the purpose of illustration. None of the modeled systems (i.e.,
Section 3.1 Contemporary Workflow Management Systems

Staffware, FileNet and FLOWer) supports the patterns round robin, shortest queue, piled execution, and chained execution (cf. Appendix B.2). Round robin and shortest queue are push patterns, i.e., a work item is offered to only one user, who has to execute it. Piled execution and chained execution are auto-start patterns: they enable the automatic start of the execution of the next work item once the previous one has been completed. The round robin and shortest queue patterns push the work item to only one of all the users that qualify. The round robin pattern allocates work on a cyclic basis, and the shortest queue pattern allocates to the user with the shortest queue in his/her worklist. This implies that each user has a counter to (1) count the sequence of allocations in round robin and (2) count the number of pending work items in shortest queue. Figure 3.13 shows that these two patterns can be implemented in a similar way in the work distribution module of, e.g., the basic model. The required changes to the basic model are minimal. A counter is introduced for each user as a token in place available (colset UCounter = product User * INT; colset UCounters = list UCounter) and the functions round robin and shortest queue are used to select one user from the set of possible users based on these counters. These allocation functions are used in the inscription on the arc(s) between the transition offers and place to allocate. Both functions take two parameters: (1) the user work items created by the classical allocation function offer from the basic model, and (2) the appropriate counters. Both functions allocate the work item to the right user in three steps: (1) take the set of user work items created by the allocation function offer, (2) for every user work item look up the value of the counter, and (3) select and return only the user work item whose user has the smallest counter value.
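The filtering behaviour of the two push allocations can also be sketched outside CPN ML. The following Python fragment is an illustrative sketch only; the function and user names are ours and do not come from the CPN model in Figure 3.13:

```python
# Hypothetical sketch of the round robin and shortest queue push
# allocations: both filter the set of candidate users produced by the
# classical "offer" function down to one user, using a per-user counter.

def round_robin(candidates, counters):
    """Pick the candidate with the smallest allocation counter, then
    increment that counter (cyclic allocation)."""
    user = min(candidates, key=lambda u: counters[u])
    counters[user] += 1          # one more allocation for this user
    return user

def shortest_queue(candidates, counters):
    """Pick the candidate with the fewest pending work items; the
    counter counts the items currently in the user's worklist."""
    user = min(candidates, key=lambda u: counters[u])
    counters[user] += 1          # the work item enters the user's queue
    return user

def complete(user, counters):
    """On completion, shortest queue removes the item from the queue."""
    counters[user] -= 1

# Example with three qualified users:
rr = {"anne": 0, "bob": 0, "carol": 0}
assert round_robin(["anne", "bob", "carol"], rr) == "anne"
assert round_robin(["anne", "bob", "carol"], rr) == "bob"

sq = {"anne": 2, "bob": 0, "carol": 1}
assert shortest_queue(["anne", "bob", "carol"], sq) == "bob"
complete("bob", sq)              # bob finishes: his queue length drops again
```

The extra `complete` step mirrors the additional arcs between transition complete and place available that only the shortest queue model needs.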
In this way, the push allocation functions can be seen as a filter that selects only one allocation from the set of all possible allocations. The model for shortest queue has an additional connection (i.e., the two arcs between the transition complete and place available) that updates the counter when a work item is completed, removing the item from the queue (i.e., decreasing the value of the counter of the referring user).

Figure 3.13: Two push patterns - work distribution module ((a) round robin, (b) shortest queue).

Piled execution and chained execution are auto-start patterns, i.e., when a user completes the execution of the current work item, the next work item starts automatically. This prevents the user from repeatedly switching between worklist and application for routine tasks. When working in chained execution, the next work item will be for the same instance as the completed one: the user works on different activities of one instance. Similarly, if the user works in piled execution, the next work item will be for the same activity as the completed one: the user works on the same activity for different instances. Figures 3.14(a) and 3.14(b) show that piled execution and chained execution are implemented similarly in the stop work sub-module. Users can choose to work in the normal mode or in the auto-start mode (which is represented by the token in place special mode). A parameter x is passed via the arc between place ready and transition complete special: parameter x carries the activity name in piled execution or the instance identification in chained execution. This parameter is used for a possible auto-start. These two models show that transition complete special, besides the usual connections to places completed and request, has connections to places active work items, select and special mode. If the user is in the special mode, this transition retrieves work items from place active work items and produces items in places request and select. The inscriptions on the arcs leading to places request and select first check if the user is working in the special (i.e., auto-start) mode. If this is the case, the next user work item is auto-started, i.e., an appropriate token is produced in places request and select.
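The auto-start step of both patterns, picking the next active work item that matches the completed one's activity (piled) or instance (chained), can be sketched as follows. This is a hypothetical Python illustration, not the ML inscriptions of the model in Figure 3.14:

```python
# Hypothetical sketch of the auto-start selection: after completing the
# work item (instance, activity), find the next matching active item.

def select_next(active, completed, mode):
    """Return the next work item to auto-start, or None.
    A work item is an (instance, activity) pair."""
    inst, act = completed
    if mode == "piled":        # same activity, any instance
        match = lambda wi: wi[1] == act
    elif mode == "chained":    # same instance, any activity
        match = lambda wi: wi[0] == inst
    else:
        return None            # normal mode: no auto-start
    for wi in active:
        if match(wi):
            return wi
    return None

active = [("case2", "approve"), ("case1", "pay"), ("case3", "approve")]
# Piled execution: keep doing "approve" for other cases.
assert select_next(active, ("case1", "approve"), "piled") == ("case2", "approve")
# Chained execution: continue with the next activity of "case1".
assert select_next(active, ("case1", "approve"), "chained") == ("case1", "pay")
```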
Function select is implemented to search for the next matching work item based on the parameter x, i.e., (1) a work item with the same activity in piled execution or (2) a work item for the same instance in chained execution.

Figure 3.14: Two auto-start patterns - stop work module ((a) piled execution, (b) chained execution).

Overview

The CPN models of work distribution of Staffware, FileNet and FLOWer show that there is no consensus on terminology and functionality among contemporary systems (cf. Appendix A). For example, the CPN models of work distribution in Staffware and FileNet are remarkably similar. However, after using these two systems one tends to have the impression that they handle work distribution in distinct manners. On the other hand, the CPN model of work distribution in FLOWer shows that some systems provide features quite different from those of other systems. Various workflow management systems are evaluated based on their support of the resource patterns in [208, 211, 216]. Not only can this evaluation serve as a comparison of systems; the evaluation results also show that systems support different patterns and, thus, handle the resource perspective in unique ways. In Appendix B.2 of this thesis we show the evaluation results for the CPN models of Staffware, FileNet and FLOWer with respect to pattern support.
In addition, in Appendix B.2 we also show the evaluation results of the pattern support of our basic model: due to its simplicity, the basic model supports only a few resource patterns. The CPN models of the round robin, shortest queue, piled execution and chained execution patterns show that it is remarkably simple to implement these patterns on top of the work distribution of existing systems. Therefore, the lack of support for these patterns in Staffware and FileNet indicates the level of immaturity of contemporary systems with respect to the resource perspective, especially when compared to the control-flow perspective. Despite the high impact that the resource perspective has on the way people work, this perspective has not drawn much attention in research and industry. While there have been many attempts to improve the control-flow perspective (e.g., the many control-flow modeling languages, like Petri nets [29, 72, 93], EPCs, BPEL [53, 54], etc., covering a wide range of application areas), there has been less research and industry interest in the resource perspective. Research efforts like [182, 208, 211, 216] are rare examples of investigations of the resource perspective. BPEL4People [150] and WS-HumanTask [47] are recent efforts aiming at enriching the resource perspective of workflow technology. Together with BPEL itself [53, 54, 178], BPEL4People is becoming a standard broadly recognized by the Organization for the Advancement of Structured
Information Standards (OASIS) [7]. However, a pattern-based evaluation of these two standards [213, 217] indicates the necessity of further improvements in the area of the resource perspective.

The Data Perspective

The data perspective of a process model defines which data elements are available in the process and how workflow participants can access data elements while executing activities.

CPN Model(s)

Process models contain data elements of certain types (e.g., string, numeric, date, etc.). Consider, for example, a process model from the medical domain containing several data elements: patient name, doctor name and description, all of type string, and appointment of type date. Once data elements have been defined on the process level, their usage can be defined for each activity, i.e., whether a data element is available and how it can be accessed and modified in an activity (cf. Figure 3.2). However, this is only a simplified, basic view of the data perspective in workflow management systems. Real systems use powerful and very complex mechanisms that handle data elements and their values on the process and activity level. The complexity of these mechanisms yields complex CPN models of the data perspective. For example, the CPN model of the newYAWL workflow language 1, which aims at fully supporting the data perspective (and the other workflow perspectives), contains 55 modules, over 480 places, 138 transitions and over 1500 lines of ML code [208, 210, 212]. Therefore, we do not present the CPN models of the data perspective in this thesis. Instead, we refer the interested reader to the newYAWL CPN model presented in [208, 210, 212].

Workflow Management Systems

Staffware, FileNet and FLOWer each support the data perspective in a unique way. Generally, data elements are first defined on the process level, and then for each activity it is defined which data elements are available and how users can access them.
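This two-level definition (data elements declared on the process level, access rights declared per activity) can be illustrated with a small sketch of the medical example. The activity names and access sets below are invented for illustration only:

```python
# Hypothetical sketch of the basic data perspective: data elements are
# declared once on the process level, and each activity declares which
# elements it may read (input) and write (output).

process_data = {                      # process-level declarations
    "patient name": "string",
    "doctor name":  "string",
    "description":  "string",
    "appointment":  "date",
}

activity_access = {                   # per-activity usage (illustrative)
    "register": {"input": [],                "output": ["patient name"]},
    "plan":     {"input": ["patient name"],  "output": ["doctor name", "appointment"]},
    "treat":    {"input": ["patient name", "appointment"], "output": ["description"]},
}

def can_write(activity, element):
    """May this activity modify the given data element?"""
    return element in activity_access[activity]["output"]

assert can_write("register", "patient name")
assert not can_write("treat", "appointment")   # read-only in "treat"
```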
Figure 3.15 shows how data elements can be defined on the process level in Staffware, FileNet and FLOWer. Due to the complexity of the data perspective, we do not present this perspective in detail for the three workflow management systems in this thesis.

1 The newYAWL CPN model can be downloaded from [6].

Patterns

Just like the control-flow and the resource perspectives, various workflow management systems tend to handle the data perspective differently. In order to be able
Figure 3.15: Defining data elements in a process model ((a) FileNet, (b) Staffware, (c) FLOWer).

to compare different systems with respect to the data perspective, a series of 40 workflow data patterns is identified in [214, 215]. Data patterns aim to capture the various ways in which data is represented and utilized in workflow management systems and are classified into four groups. First, data visibility patterns relate to the definition and scope of data elements and the manner in which they can be utilized by various components of a workflow process [214, 215]. Second, data interaction patterns focus on the manner in which data is communicated between process components and describe the various mechanisms by which data elements can be passed across the interface of a process component [214, 215]. Third, data transfer patterns focus on the manner in which the actual transfer of data elements occurs between workflow components. Finally, data-based routing patterns capture the various ways in which data elements can interact with other perspectives and influence the overall operation of the process [214, 215]. Due to the complexity of the data perspective and the data patterns, the CPN models representing these patterns are very large and complex. The CPN model of the
newYAWL language supports most of the data patterns [208, 210, 212]. Due to the complexity and size of this model, we do not present it in this thesis and refer the interested reader to [6, 208, 210, 212]. The evaluation of the support of the 40 data patterns in various workflow management systems [214, 215] shows that systems tend to support different patterns. Roughly half of the patterns are supported in each of the evaluated systems [214, 215]. This indicates a serious lack of uniformity in the way the data perspective is handled across systems. In Appendix B.3 of this thesis we present the data pattern support evaluation results for Staffware and FLOWer, based on [208, 214, 215]. These results show that Staffware fully supports 13 and partially supports 12 patterns, while FLOWer fully supports 20 and partially supports 12 patterns. Unfortunately, this evaluation did not include the third system we use in this thesis, i.e., FileNet.

Overview

Although, at first sight, the data perspective seems to be very simple, workflow management systems use very complex mechanisms to support this perspective. This yields complex data patterns [214, 215] and complex corresponding CPN models [6, 208, 210, 212]. Just as with the control-flow and resource perspectives, various workflow management systems tend to handle the data perspective in different ways. The results of the evaluation of data pattern support show that systems tend to support different patterns [214, 215]. Like the resource perspective, the data perspective has not drawn as much research attention as the control-flow perspective in the workflow area. The work presented in [208, 210, 212, 214, 215] is the first attempt to initiate more investigation of the data perspective in research and industry.

Summary

Workflow technology lacks a unique taxonomy and standards.
Workflow management systems tend to handle the control-flow, resource and data perspectives in system-specific manners. The diversity of workflow pattern support [211, ] in various systems indicates the diversity of functionalities offered by workflow management systems. The control-flow perspective is the dominant workflow perspective in workflow technology research and industry. Despite the high influence of the resource and data perspectives on the way people work, these two perspectives have not drawn much attention in research and industry. While there have been many attempts to improve the control-flow perspective, there has been little interest in the other two perspectives. However, the data and resource perspectives have started to draw more attention recently. Papers like [182, 208, 211, ] are
the first efforts in the direction of deeper investigation of the data and resource perspectives of workflow technology. Contemporary workflow management systems are of a procedural nature, i.e., models are detailed specifications of exactly how processes can and will be executed. Although this approach is suitable for highly-structured processes with high repetition rates, it is not appropriate for processes with a high variation rate (e.g., processes that offer many execution alternatives to users). Even single control-flow patterns that offer advanced execution options tend to be very complex [213]. Thus, applying procedural models to processes that must offer many execution options tends to result in very complex process models.

3.2 Taxonomy of Flexibility

There is a fundamental gap between workflow management systems and modern organizational science, as already indicated in Chapter 1. While modern organizational theories advocate more localized decision making, workflow management systems tend to impose old-fashioned centralized decision making due to their imperative, procedural nature. In order to align themselves with the contemporary democratic style of work, workflow management systems must become more flexible by allowing users to make more decisions about how to work. Flexibility is an important research topic in the field of workflow management (cf. Chapter 2). In 1999, Heinl et al. [125] presented a classification scheme of flexibility in the context of workflow management systems (cf. Section 2.1.1). In 2007, Schonenberg et al. revisited the taxonomy of flexibility by looking at contemporary workflow management systems [ ] (cf. Section 2.1.2). In this thesis, we use the four types of flexibility identified in [ ]: flexibility by design, flexibility by underspecification, flexibility by change and flexibility by deviation.
In this section we will present these four types of flexibility: flexibility by design in Section 3.2.1, flexibility by underspecification in Section 3.2.2, flexibility by change in Section 3.2.3, and flexibility by deviation in Section 3.2.4. Each type of flexibility is described with respect to the three workflow perspectives (i.e., the control-flow, resource and data perspectives), using simple, system- and language-independent illustrative examples.

Flexibility by Design

If a process model can be developed in a way that allows for many alternative executions (execution traces or paths), then we speak of flexibility by design. In Figure 3.16 an execution alternative is represented by a directed arc from the start point to the end point. In other words, the degree of flexibility by design is determined by the variety of alternatives available at run-time while executing instances of process models. This type of flexibility is identified in both taxonomies: in [125] it is called flexibility by selection with advanced modeling and
in [ ] it is called flexibility by design.

Figure 3.16: Flexibility by design [125, ].

The control-flow perspective. The control-flow perspective determines which activities will be available for execution at run-time and in which order these activities can be executed (cf. Section 3.1.1). Consider, for example, the two process models presented in Figure 3.17. These models consist of activities A, B, C and D and use the sequence, parallel split and synchronization control-flow patterns (cf. Figure 3.7). The control-flow of the process model in Figure 3.17(a) is defined as a sequence of activities A, B, C and D. Thus, users have only one option while executing this model, i.e., to execute these activities only in the following order: [A,B,C,D]. The control-flow perspective of the process model shown in Figure 3.17(b) starts with activity A, after which a parallel split to activities B and C follows. Finally, there is a synchronization of activities B and C before activity D. Because activities B and C can be executed in any order, users have two alternatives while executing this model: (1) they can execute the activities in the order [A,B,C,D] or (2) in the order [A,C,B,D]. Because the process model in Figure 3.17(a) has only one execution alternative (i.e., [A,B,C,D]) and the process model in Figure 3.17(b) has two execution alternatives (i.e., [A,B,C,D] or [A,C,B,D]), we say that the model in Figure 3.17(b) has a higher degree of flexibility by design than the model in Figure 3.17(a).

Figure 3.17: Flexibility by design and the control-flow perspective ((a) less flexible, (b) more flexible).

The resource perspective. As discussed in Section 3.1.2, the resource perspective determines which users are authorized to execute activities and how these resources are allocated to execute the activities. Consider, for example,
the simple illustrative example presented in Figure 3.18. This figure shows the resource perspective of two hypothetical process models. Each of the models consists of activities A, B and C, and considers two users. In the model in Figure 3.18(a) the first user is authorized to execute activities A and B and the second user is authorized to execute activity C. Thus, this model offers only one execution alternative, i.e., the first user will execute activities A and B and the second user will execute activity C. In the model in Figure 3.18(b) both users are authorized to execute each of the three activities. Therefore, this model offers more (i.e., eight) execution alternatives and has a higher degree of flexibility by design with respect to the resource perspective.

Figure 3.18: Flexibility by design and the resource perspective ((a) less flexible, (b) more flexible).

The data perspective. The data perspective of a process model defines the availability of data during the execution (cf. Section 3.1.3). Flexibility by design is also influenced by the data perspective. Figure 3.19 shows the data perspective of two process models. Each of the models consists of activities A, B and C and uses two data elements. In the model in Figure 3.19(a) the first data element is output for activity A and input for activity B, while the second data element is output for activity B and input for activity C. The model in Figure 3.19(b) has a higher level of flexibility by design because both data elements are input and output elements for all three activities, i.e., all data elements can be accessed and edited in all three activities. Consider, for example, the situation where an incorrect value of the first data element is provided while executing activity A.
On the one hand, in the model in Figure 3.19(a) this mistake cannot be corrected while executing activities B or C because the first data element is not an output data element for these two activities. On the other hand, in the model in Figure 3.19(b) this mistake can easily be corrected, because the first data element is an output data element for activities B and C. The same holds for input data elements: while the value of the first data element is presented to users only while executing activity B in the model in Figure 3.19(a), in the model in Figure 3.19(b) the value of this data element is presented to users while executing all three activities.
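The reading of "degree of flexibility by design" as the number of execution alternatives can be made concrete with a small sketch. The following Python fragment counts the alternatives of the illustrative models above; the encoding is our own and is not part of the referenced figures:

```python
# Illustrative sketch: the degree of flexibility by design read as the
# number of execution alternatives a model allows.
from itertools import permutations, product

def traces_parallel_block(first, parallel, last):
    """All traces of: `first`, then the `parallel` activities in any
    order (parallel split + synchronization), then `last`."""
    return [[first, *p, last] for p in permutations(parallel)]

# Control flow (cf. Figure 3.17(b)): A, then B and C in parallel, then D.
traces = traces_parallel_block("A", ["B", "C"], "D")
assert traces == [["A", "B", "C", "D"], ["A", "C", "B", "D"]]

# Resources (cf. Figure 3.18(b)): each of A, B, C may go to either of
# two users, giving 2**3 = 8 execution alternatives.
assignments = list(product(["user1", "user2"], repeat=3))
assert len(assignments) == 8
```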
Figure 3.19: Flexibility by design and the data perspective ((a) less flexible, (b) more flexible).

Flexibility by Underspecification

The possibility to only partially specify a process model, where certain parts of the model are left undefined as black boxes and are defined later during execution, is called flexibility by selection with late modeling [125] or flexibility by underspecification [226, 227, 228]. In Figure 3.20 the under-specified parts of a process model are presented as partial, dashed lines of execution alternatives. An example of a system that allows for this type of flexibility is the worklet extension of the YAWL system [23, 44]. Some of the activities in YAWL models can be considered as unspecified parts of the model, and during execution they are assigned to the Worklet Service [44], which chooses the exact specification (of a sub-process) that will be executed (cf. Section 2.2).

Figure 3.20: Flexibility by underspecification [125, ].

The control-flow perspective. An under-specified control-flow perspective of a process model is shown in Figure 3.21(a). This process starts with activity A, followed by activity B, and completes with an unspecified block. During each execution of this model (i.e., for each instance), it is necessary to explicitly define the unspecified block. For example, as Figure 3.21(b) shows, a possible execution scenario could be that, after activities A and B were executed in an instance (indicated by the special check box symbols), activity D was selected for the unspecified block. In another instance, some other activity (e.g., some activity G) may be executed for the unspecified block. Not only single activities can replace the unspecified blocks of the control-flow: an entire sub-process can be selected for an unspecified block.
Note that, each time the process is executed (i.e., for
each process instance), it is possible to select a different activity or sub-process for the same unspecified block in the control-flow.

Figure 3.21: Flexibility by underspecification and the control-flow perspective ((a) a model, (b) an execution scenario).

The resource perspective. Figure 3.22 shows a simple example of underspecification in the resource perspective. Two users are defined in the resource perspective of the model presented in Figure 3.22(a). The first user is authorized to execute activities A and B, while the authorization for activity C is intentionally left unspecified. This underspecification leaves the opportunity to specify the authorized user(s) for activity C later, during the execution of the model. Each time this model is executed, a different user (or users) can be selected as authorized to execute activity C. One of the possibilities is to authorize both users to execute activity C after executing activities A and B, as shown in Figure 3.22(b).

Figure 3.22: Flexibility by underspecification and the resource perspective ((a) a model, (b) an execution scenario).

The data perspective. Figure 3.23(a) shows underspecification of the data perspective in a process model that uses two data elements. The first data element is output for activity A and input for activity B, while the second data element is output for activity B. The data access for activity C is intentionally left unspecified, i.e., a reference to the right data element can be specified each time the process is executed. For example, it might be the case that, after activities A and B were executed, it is specified that both data elements are input but only the second data element is output for activity C, as shown in Figure 3.23(b).
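The late-binding idea behind underspecification can be sketched as follows. This hypothetical Python fragment treats an unspecified block as a placeholder that every instance resolves at run time, a much simplified analogue of the worklet mechanism:

```python
# Hypothetical sketch of flexibility by underspecification: a model
# contains a placeholder ("black box") that each instance binds at run
# time (in the spirit of YAWL's worklet service, heavily simplified).

UNSPECIFIED = object()                 # the "black box" placeholder

model = ["A", "B", UNSPECIFIED]        # cf. the model of Figure 3.21(a)

def run(model, bind):
    """Execute the model, asking `bind` for every unspecified block."""
    trace = []
    for step in model:
        if step is UNSPECIFIED:
            step = bind()              # late binding, per instance
        trace.append(step)
    return trace

# Instance 1 selects activity D for the block; instance 2 selects G.
assert run(model, lambda: "D") == ["A", "B", "D"]
assert run(model, lambda: "G") == ["A", "B", "G"]
```

In a real system `bind` would return a sub-process rather than a single activity, but the principle is the same: the model stays fixed while each instance fills in the black box.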
Figure 3.23: Flexibility by underspecification and the data perspective ((a) a model, (b) an execution scenario).

Flexibility by Change

While an instance of a process model is being executed, it may become necessary to change the set of execution alternatives of the model, e.g., by adding alternative(s) that were not initially foreseen when the model was developed. Flexibility by change [ ] or flexibility by instance adaptation [125] allows for adding execution alternatives to the model while executing the model, as Figure 3.24 shows. Ad-hoc (or run-time) change of models is an important property of so-called adaptive workflow systems like, e.g., ADEPT [189, ,202]. Systems like ADEPT are equipped with powerful mechanisms that enable ad-hoc change of one or more instances by allowing users to add, delete and move activities in instances that are already being executed (cf. Section 2.1.5). Moreover, it is possible to apply the ad-hoc change (a) to one instance or (b) to all instances of the referring model, i.e., the so-called migration.

Figure 3.24: Flexibility by change [125, ].

The control-flow perspective. An example of ad-hoc change in the control-flow perspective is shown in Figure 3.25. As shown in Figure 3.25(a), the control-flow perspective of the model is defined as a sequence of activities A, B and C. Thus, according to the control-flow specification, this model will be executed by first executing activity A, then activity B and finally activity C. However, it is possible to add execution alternatives by ad-hoc change of the control-flow perspective. Figure 3.25(b) shows an instance of this model where activities A and B are executed and then, instead of executing activity C, activity D is inserted before activity C in the control-flow specification. After this ad-hoc
change, the instance continues with an execution of activity D followed by an execution of activity C.

Figure 3.25: Flexibility by change and the control-flow perspective ((a) the original model, (b) an ad-hoc change).

The resource perspective. Figure 3.26 shows an example of ad-hoc change in the resource perspective. In the process model shown in Figure 3.26(a) the first user is authorized to execute activities A and B and the second user is authorized to execute activity C. However, it is possible that, in an execution scenario where activities A and B were already executed, the authorization for activity C is removed from the second user and assigned to the first user, as shown in Figure 3.26(b). After this ad-hoc change, the first user will execute activity C, instead of the originally authorized second user.

Figure 3.26: Flexibility by change and the resource perspective ((a) the original model, (b) an ad-hoc change).

The data perspective. Flexibility by change can also be applied to the data perspective, as shown in Figure 3.27. In the process model shown in Figure 3.27(a) the first data element is output for activity A and input for activity B, while the second data element is output for activity B and input for activity C. However, flexibility by change allows the data perspective to be changed in instances of this model in an ad-hoc manner. For example, it is possible that, after activities A and B are executed in an instance, this specification is changed so that the first data element is added as an input and output element for activity B, as shown in Figure 3.27(b).
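The ad-hoc change of Figure 3.25 can be sketched as an edit on the remaining part of a running instance. The following Python fragment is an illustrative encoding only; the instance representation is our own, not ADEPT's:

```python
# Hypothetical sketch of flexibility by change: the schema of a running
# instance is adapted ad hoc, here by inserting an activity (cf. Figure
# 3.25(b)). Applying the same edit to all running instances of a model
# would correspond to the so-called migration.

def insert_before(schema, new, anchor):
    """Return a copy of the schema with `new` inserted before `anchor`."""
    i = schema.index(anchor)
    return schema[:i] + [new] + schema[i:]

instance = {"schema": ["A", "B", "C"], "done": ["A", "B"]}

# Ad-hoc change of this one instance: insert D before C.
instance["schema"] = insert_before(instance["schema"], "D", "C")
remaining = [a for a in instance["schema"] if a not in instance["done"]]

assert instance["schema"] == ["A", "B", "D", "C"]
assert remaining == ["D", "C"]       # the instance continues with D, then C
```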
Figure 3.27: Flexibility by change and the data perspective

Flexibility by Deviation

Flexibility by deviation is the ability of a process instance to deviate from the execution alternatives prescribed in the instance's process model, without changing the model. A deviation from the specified execution alternatives is illustrated with a thick line in Figure 3.28. FLOWer [180] is an example of a system that allows for deviation from the process model by allowing users to skip an activity that should be executed and to redo or undo an activity that was already executed before (cf. Section 2.2).

Figure 3.28: Flexibility by deviation [ ]

The control-flow perspective. Figure 3.29 shows an example of deviation from the control-flow perspective specified in a process model. The control-flow perspective of the model presented in Figure 3.29(a) is specified as a sequence of activities A, B and C. However, if deviation is applied during the execution of this model, it becomes possible to execute the model in ways other than specified (i.e., other than executing A, B and C in a sequence). For example, it is possible to, after executing activity A, skip activity B and directly execute activity C, as shown in Figure 3.29(b) by a thick line. Note that the corresponding model in Figure 3.29(b) is not changed. Instead, the thick line represents the deviation from the model, i.e., it represents the actual execution of the instance.

The resource perspective. An application of flexibility by deviation to the resource perspective is presented in Figure 3.30. The resource perspective of
Figure 3.29: Flexibility by deviation and the control-flow perspective

a process model is shown in Figure 3.30(a): one user is authorized to execute activities A and B and the other user is authorized to execute activity C. This specification remains the same in the execution scenario (i.e., instance) shown in Figure 3.30(b). However, this instance deviates from the original specification because the first user executes activity C, although he/she is not authorized to execute this activity. Thick lines represent the actual execution of the instance shown in Figure 3.30(b), i.e., the first user executed all three activities in this instance.

Figure 3.30: Flexibility by deviation and the resource perspective

The data perspective. Figure 3.31 shows how flexibility by deviation can be achieved in the data perspective. The original specification of the data perspective, where the first data element is output for activity A and input for activity B, is shown in Figure 3.31(a). Further on, the second data element is output for activity B and input for activity C. An example of a deviating execution (i.e., instance) is shown in Figure 3.31(b). Here the actual execution is shown by thick lines, i.e., the value of the first data element is not provided while executing activity A and, despite the original specification, it is left unspecified.

3.2.5 Summary

As already discussed in Section 3.1, the control-flow perspective remains the focus of research and industry, while the resource and data perspectives tend to get less attention. This is also the case with respect to the flexibility of workflow
Figure 3.31: Flexibility by deviation and the data perspective

management systems. Note that the examples we presented in this section for the resource and data perspectives are hypothetical illustrative examples and, so far, flexible workflow management systems do not focus on these two perspectives. Instead, the control-flow perspective is dominant in these systems. For example, the YAWL system with worklets [23,44], ADEPT [189] and FLOWer [180] directly support flexibility by underspecification, change and deviation in the control-flow perspective, respectively (cf. Section 2.2).

In sections 3.2.1, 3.2.2, 3.2.3 and 3.2.4 we deliberately use very simple examples of the control-flow, resource and data perspectives of process models to illustrate how the different types of flexibility can be applied for each of the three workflow perspectives. In real-life process models and workflow management systems these three perspectives require much more advanced forms of flexibility, resulting in much more complicated models. These models often use many workflow patterns [208, 211, ] in order to offer multiple execution alternatives, which often increases the complexity of models, as discussed in Section 3.1.

The control-flow perspective of current workflow management systems is of a procedural nature (cf. Section 3.1.4). Thus, all execution alternatives must be explicitly specified in the process model. This has several consequences. Procedural models with multiple execution alternatives tend to be large and complex, which makes it hard to understand and maintain these models. As discussed before, multiple execution alternatives with respect to the control-flow perspective require the usage of many complex control-flow patterns [213].
In a procedural approach all execution alternatives must be anticipated in advance [77, 109, 125, 143, 153, 166, 188, 233]. With respect to flexibility by design, this means that all execution alternatives must already be anticipated in the development phase. With respect to flexibility by underspecification, this means that it must be anticipated in the development phase exactly when the unspecified block should be executed. Flexibility by change and deviation might lead to frequent ad-hoc changes and deviations, i.e., each time a new alternative is identified, a new ad-hoc change or
deviation must be applied.

Explicitly specifying the procedure in the model can result in overspecifying the process [181]. To illustrate this, consider the case where it is necessary to specify the requirement that in one instance of a process model activities A and B cannot both be executed, i.e., it is possible to execute A one or more times as long as B is not executed, and vice versa. It is also possible that neither A nor B is executed at all. In a procedural approach one tends to over-specify this, as shown in Figure 3.32. A decision activity X is introduced as an activity that needs to be executed at a particular time and requires conditions c1 and c2 to make this decision. Note that, although the requirement "A and B exclude one another" is very simple, all kinds of questions are introduced: When should X be executed? How many times should X be executed? Who executes X? What are c1 and c2? Etc.

Figure 3.32: Over-specification in procedural models [181]

In this chapter we used a few languages/systems (i.e., CPNs, Staffware, FLOWer, and FileNet) to describe how contemporary approaches specify the control-flow perspective of business processes. However, a procedural approach is dominant in contemporary workflow technology, despite the variety of existing process modeling languages. On the one hand, formal languages like, for example, Petri nets [87,177,199], process algebras [57,61,127,131,173] and state charts [267] use various formalisms to explicitly specify the procedure of a process. On the other hand, the procedural approach is also adopted by languages that are frequently used in practice and by many available tools, like, for example, Event-Driven Process Chains (EPCs) [146,147,225], Business Process Execution Language for Web Services (BPEL) [54], and Business Process Modeling Notation (BPMN) [179].
3.3 A New Approach for Full Flexibility

As discussed in Section 3.2.5, the fact that the procedural approach requires all execution alternatives to be explicitly specified in the model causes some problems with respect to the flexibility of workflow management systems. Descriptive or declarative languages are considered to be more suitable for achieving a higher degree of flexibility because they do not require an explicit specification of execution
alternatives [48, 49, 55, 56, 80, 92, 115, 256, 269]. Instead, a declarative approach allows for the implicit specification of execution alternatives. In this thesis we propose a declarative approach for achieving a higher degree of flexibility in workflow technology. The proposed approach is based on using activities and constraints for the declarative specification of the control-flow perspective of process models. Constraints are rules that should be followed during the execution. Note that the constraints of a model implicitly specify the possible execution alternatives: everything that does not violate the constraints is allowed.

Figure 3.33 shows an example of a constraint. This constraint involves two activities (i.e., A and B) and it specifies that these two activities cannot both be executed in the same process instance. By using this constraint in a process model, we implicitly specify its execution alternatives as all alternatives where (1) A is executed at least once and B is never executed, (2) B is executed at least once and A is never executed, and (3) neither A nor B is executed. Execution of other activities does not influence this constraint, e.g., any other activity can be executed at any point in time, as long as activities A and B are not both executed. Note that, although the constraint in Figure 3.33 represents a very simple and useful rule, there is no control-flow pattern that represents it [213]. Instead, implementing such a rule in a procedural manner requires over-specification, as shown in Figure 3.32. Moreover, by over-specifying this rule in the procedural approach many of the intended execution alternatives are discarded, i.e., either activity A or activity B must be executed exactly once after activity X.
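The declarative reading of this constraint is easy to make concrete. The following sketch (our own illustration, not part of the thesis) evaluates the rule "A and B cannot both be executed" directly on execution traces, here represented simply as lists of activity names:

```python
# Hypothetical sketch: evaluating the constraint "activities A and B cannot
# both be executed in one instance" directly on a trace, instead of
# encoding it procedurally with a decision activity X.

def not_coexistent(trace, a="A", b="B"):
    """True iff the trace does not contain both activity a and activity b."""
    return not (a in trace and b in trace)

# Every alternative that avoids executing both A and B is allowed:
assert not_coexistent(["A", "C", "A"])      # A repeated, B absent: allowed
assert not_coexistent(["C", "B", "B"])      # B repeated, A absent: allowed
assert not_coexistent(["C", "C"])           # neither A nor B: allowed
assert not not_coexistent(["A", "C", "B"])  # both A and B: violation
```

Note how the questions raised by the procedural encoding (when and how often X executes, what c1 and c2 are) simply do not arise: the rule is checked against whatever trace actually occurred.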
Figure 3.33: A constraint: activities A and B should not both be executed

The difference between the procedural and our constraint-based declarative approach to process modeling is shown in Figure 3.34. Procedural models take an inside-to-outside approach: all execution alternatives are explicitly specified in the model and new alternatives must be explicitly added to the model. Declarative models take an outside-to-inside approach: constraints implicitly specify execution alternatives as all alternatives that satisfy the constraints, and adding new constraints usually means discarding some execution alternatives.

Figure 3.34: Declarative vs. procedural approach

Our approach focuses on the dominant workflow perspective, i.e., the control-flow perspective, and it can support all types of flexibility, as Figure 3.35 shows. In this approach we advocate using constraints for the specification of the control-flow perspective of process models. Possible execution alternatives of a model are implicitly derived from the constraints: any alternative that satisfies all constraints is possible. Thus, even simple constraint-based process models can offer many execution alternatives, i.e., our approach is appropriate for achieving flexibility by design. Moreover, other types of flexibility can also be supported. With the help of the YAWL system and its worklets [44] it is possible to create arbitrary decompositions of procedural and declarative models and achieve flexibility by underspecification. Flexibility by change can be achieved by ad-hoc change, which can easily be applied to the constraint-based approach. The so-called optional constraints allow for flexibility by deviation.

Figure 3.35: A new approach for all types of flexibility

Note that it is possible to develop procedural models that allow for flexibility by design. Consider, for example, a process model consisting of activities A, B and C, where any execution alternative is possible. Figure 3.36 shows a CPN representing this model. Between the initiation and termination of the process (represented by transitions start and end, respectively) it is possible that (1) each of activities A, B and C is executed zero or more times, (2) activities A, B and C are executed concurrently and (3) activities A, B and C are executed in any order. Thus, this model allows for infinitely many execution alternatives and offers a high degree of flexibility by design (cf. Section 3.2.1). However, adding some simple rules that should be followed during execution is often too costly
in the procedural approach because models become too complex and large, a lot of time and human effort is needed, etc. Adding a simple rule, e.g., like the one presented in Figure 3.33, would result in a very complex model. Adding several similar rules might easily result in a model that is too large and complex to understand and maintain.

Figure 3.36: Any execution alternative of activities A, B, and C is possible between activities start and end

The remainder of this thesis is organized as follows. In Chapter 4 we provide a formal foundation for a constraint-based language on an abstract level. In Chapter 5 we present a language that can be used to specify constraints in process models, and in Chapter 6 we present a prototype of a workflow management system implemented based on the concepts presented in chapters 4 and 5. In Chapter 7 we show how popular techniques for process mining [8] can be applied to the constraint-based approach, both for the analysis of past executions and for generating run-time recommendations for users. Finally, in Chapter 8 we discuss and conclude the thesis.
Chapter 4 Constraint-Based Approach

A truly flexible approach to workflow management systems must provide for several aspects of workflow flexibility [125, ], as discussed in Chapter 2. In Chapter 3 we showed that contemporary commercial tools use imperative process models that explicitly specify how to execute the process. This requires process model developers to predict all possible execution scenarios in advance and explicitly include them in the model. However, it is often not the case that all scenarios can be foreseen in advance [125]. Therefore, the procedural nature of process models makes it very difficult for contemporary workflow management systems to provide for a high degree of flexibility by design [125, ].

In this chapter we present a formal foundation for a constraint-based approach for business processes. Instead of explicitly specifying the control flow, constraint-based process models focus on constraints as rules that have to be followed during the process execution. Possible executions of constraint models are specified implicitly as all executions that satisfy the model constraints, which makes it unnecessary to explicitly predict all possible executions in advance. Due to the declarative nature of constraint models, which offer a variety of possibilities for execution, the constraint-based approach is flexible by definition [ ]. Moreover, all other types of flexibility that were discussed in [ ] can also be achieved using this approach. First, while executing instances, people can violate one type of constraint and achieve flexibility by deviation [ ]. Second, it is possible to change models of already running instances, which allows for flexibility by change [ ].
Although flexibility by underspecification [ ] is not explicitly built into the approach presented in this chapter, in Section 6.11 we will show how this type of flexibility can be achieved with the help of the YAWL system [23, 32, 210, 212] and its worklets [41, 44, 45]. We start this chapter by describing the notion of constraints in Section 4.1 and constraint models in Section 4.2. An illustrative example is presented in Section 4.3. In Section 4.4 we describe how instances of constraint models are executed. Ad-hoc change of already running instances is described in Section 4.5.
Section 4.6 presents how verification of constraint models can detect serious errors and help develop correct constraint models. A summarized overview of the chapter is given in Section 4.7.

4.1 Activities, Events, Traces and Constraints

Constraint-based models consist of activities and constraints. An activity is a piece of work that is executed as a whole by a resource (e.g., one person, a computer, etc.). A constraint specifies a certain rule that should hold in any execution of the model. Consider, for example, a model without constraints that contains only activities perform surgery and prescribe rehabilitation. Medical staff members that execute this model have the ultimate freedom to execute the two activities an arbitrary number of times and in any order. However, if the constraint "if perform surgery, then eventually prescribe rehabilitation" were added to the model, this would limit (i.e., constrain) the possibilities that resources have: if activity perform surgery is executed, this constraint demands that activity prescribe rehabilitation also be executed afterwards.

The time needed for a resource to execute one activity can vary depending on the activity's complexity, the resource's capabilities, etc. For example, one activity might take a few minutes to execute (e.g., prescribe rehabilitation) and another might take several hours (e.g., perform surgery). One can imagine that the execution of activities can overlap, that some executions might fail and others might complete successfully. Resources execute models by triggering events involving activities, i.e., transferring activities through various states in their life cycles [29,91,93,136,160,175]. Figure 4.1 shows an example of a simple activity life cycle. In this life cycle, an activity can be in one of four states: initial, execution, execution successful, and execution failed. At the beginning of its life cycle, the activity is in the initial state.
As shown in Figure 4.1, each event triggers a state change. A resource starts to actively work on an activity by triggering the event started. After the started event, the activity can be completed (i.e., successfully executed) or cancelled (i.e., its execution has failed).

Figure 4.1: Three event types: started (t_s), completed (t_c) and cancelled (t_x)
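The life cycle of Figure 4.1 can be sketched as a small state machine. The representation below is our own illustrative assumption; the state and event names follow the figure:

```python
# Minimal sketch (an illustration, not the thesis' implementation) of the
# activity life cycle of Figure 4.1: events trigger state changes.

TRANSITIONS = {
    ("initial", "started"): "execution",
    ("execution", "completed"): "execution successful",
    ("execution", "cancelled"): "execution failed",
}

def fire(state, event):
    """Return the next state, or raise if the event is illegal in this state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")

state = "initial"
state = fire(state, "started")    # resource starts working on the activity
state = fire(state, "completed")  # the execution succeeds
assert state == "execution successful"
```

Any other event/state combination (e.g., completing an activity that was never started) is rejected, mirroring the fact that the life cycle of Figure 4.1 admits exactly these three transitions.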
In the remainder, we will use the following terms (concepts): A is the set of all activities, i.e., a universe of activity identifiers, T is the set of all event types, and E = A × T is the set of all events. Although the set of event types can by definition contain arbitrary types, for the purpose of simplicity we adopt the three event types from the life cycle in Figure 4.1. In other words, in the remainder of this thesis we will assume that T contains three event types, T = {t_s, t_c, t_x}, such that t_s = started, t_c = completed and t_x = cancelled. Note, however, that the set of event types T is customizable, i.e., the proposed constraint-based approach can be applied to any set of event types T.

One execution of a model is defined as one trace, i.e., a sequence of events that represents the chronological order of events that occurred and were recorded during the execution (cf. Definition 4.1.1).

Definition 4.1.1 (Trace) A trace σ ∈ E* is a finite sequence of events, where E* is the set of all traces composed of zero or more elements of E.

- We use σ = ⟨e_1, e_2, ..., e_n⟩ to denote a trace. |σ| = n represents the length of the trace. The empty trace is denoted by ⟨⟩, i.e., |⟨⟩| = 0.
- σ[i] denotes the i-th element of the trace, i.e., σ[i] = e_i.
- e ∈ σ denotes ∃ 1 ≤ i ≤ |σ| : σ[i] = e.
- σ_i denotes the suffix of σ starting at σ[i], i.e., σ_i = ⟨σ[i], σ[i+1], ..., σ[n]⟩.
- We use + to concatenate traces into a new trace, i.e., ⟨e_1, e_2, ..., e_n⟩ + ⟨f_1, f_2, ..., f_m⟩ = ⟨e_1, e_2, ..., e_n, f_1, f_2, ..., f_m⟩.
- We use = to denote equal traces, i.e., if σ = γ then |σ| = |γ| and ∀ 1 ≤ i ≤ |σ| : σ[i] = γ[i]. We use ≠ to denote non-equal traces, i.e., σ ≠ γ denotes that σ = γ does not hold.

Example 4.1.2 illustrates the concepts of activities, events and traces in a medical department.
Example 4.1.2 (Activities, events and traces) Consider an example of a medical department where staff members can execute activities a_e, a_s, a_r, a_m, a_x ∈ A, where a_e = examine patient, a_s = perform surgery, a_r = prescribe rehabilitation, a_m = prescribe medication, a_x = perform X ray. These activities are executed by triggering events that can be of the three types t_s, t_c, t_x ∈ T, where t_s = started, t_c = completed and t_x = cancelled.
Staff of this department can trigger any of the event types on any of the activities. Therefore, possible events in this example are: e_es = (a_e, t_s), e_ec = (a_e, t_c), ..., e_xc = (a_x, t_c), where e_ij = (a_i, t_j) and e_ij ∈ E for all i ∈ {e, s, r, m, x} and j ∈ {s, c, x}. By triggering events, staff creates a trace for each patient as a sequence of triggered events. Treatments of three patients refer to three traces σ_1, σ_2, σ_3 ∈ E*, such that: σ_1 = ⟨e_es, e_ec, e_ms, e_mc, e_ss, e_sc⟩, σ_2 = ⟨e_es, e_ec, e_ss, e_sc, e_ms, e_mc, e_rs, e_rc⟩, and σ_3 = ⟨e_ms, e_mc, e_es, e_ec, e_ms, e_mc⟩.

We introduce the trace projection as a preliminary operation on traces. The projection σ↓E' of a trace σ ∈ E* on a set of events E' ⊆ E is specified in Definition 4.1.3. The projection σ↓E' is a set of traces such that for each projection trace γ ∈ σ↓E' it holds that (1) γ is of the same length as σ, (2) events from E' are the same and at exactly the same positions in γ and σ, and (3) events not contained in E' do not have to be the same in γ and σ, but need to fill the same positions in γ and σ. In other words, the projection of a trace σ on a set of events E' depends on both occurrences and positions of events e ∈ E' and only positions of events e ∉ E' in trace σ.

Definition 4.1.3 (Trace projection σ↓E') Let σ ∈ E* be a trace and E' ⊆ E be a set of events. The projection of trace σ on the set of events E' is the set of traces defined as follows:

σ↓E' = {⟨⟩} if σ = ⟨⟩; otherwise σ↓E' = { ⟨e_1, e_2, ..., e_|σ|⟩ ∈ E* | ∀ 1 ≤ i ≤ |σ| : (σ[i] ∈ E' ⇒ e_i = σ[i]) ∧ (σ[i] ∉ E' ⇒ e_i ∉ E') }.

Note that a trace is always an element of its projection, i.e., σ ∈ σ↓E'.
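Definition 4.1.3 can be turned into a membership test: given traces γ and σ and an event set E', decide whether γ ∈ σ↓E'. The sketch below is our own illustration; the encoding of events as (activity, event type) pairs is an assumption:

```python
# Hedged sketch of the trace projection of Definition 4.1.3: a membership
# test "is trace gamma in the projection of trace sigma on event set E?".

def in_projection(gamma, sigma, E):
    if len(gamma) != len(sigma):
        return False                 # condition (1): equal length
    for g, s in zip(gamma, sigma):
        if s in E:
            if g != s:               # (2): namespace events match position-wise
                return False
        elif g in E:                 # (3): other positions hold non-E events
            return False
    return True

# A trace is always an element of its own projection: sigma in sigma|E.
sigma = [("enroll", "tc"), ("courseA", "tc"), ("diploma", "ts")]
E = {("enroll", "tc"), ("diploma", "ts")}
assert in_projection(sigma, sigma, E)
# Replacing the unconstrained event keeps us inside the projection:
assert in_projection([("enroll", "tc"), ("courseB", "tc"), ("diploma", "ts")],
                     sigma, E)
```

The two asserts mirror the two properties used later in the text: membership of σ in its own projection, and insensitivity of the projection to which non-namespace events fill the free positions.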
Trace projection is used as a kind of trace equivalence regarding constraint satisfaction, and it is used to decide whether a trace satisfies a constraint, as will be specified later in Definition 4.1.4. Consider, for example, the three traces in Figure 4.2, where we want to check whether they satisfy the rule "Before the diploma is issued, the student has to enroll and to pass one course by choice.", i.e., events (enroll, t_c) and (diploma, t_s) cannot appear next to each other. In this case, both (1) appearances and positions of events (enroll, t_c) and (diploma, t_s) in the trace and (2) positions of the other events (i.e., events (enroll, t_s), (courseA, t_s), (courseA, t_c), (courseB, t_s), (courseB, t_c) and (diploma, t_c)) in the trace are important when deciding if the trace satisfies the constraint. Consider, for example, trace σ_1 = ⟨(enroll, t_s), (enroll, t_c), (courseA, t_s), (courseA, t_c), (diploma, t_s), (diploma, t_c)⟩, where the student enrolled and passed courseA
Figure 4.2: Example illustrating trace projection: σ_1 ∈ σ_2↓E', σ_2 ∈ σ_1↓E', but σ_3 ∉ σ_1↓E', σ_3 ∉ σ_2↓E'

before the diploma was issued. The projection of this trace on the set of events E' = {(enroll, t_c), (diploma, t_s)} is a set σ_1↓E' containing all traces from E* that have the form ⟨e_1, (enroll, t_c), e_2, e_3, (diploma, t_s), e_4⟩, such that events e_1, ..., e_4 ∈ E are not contained in E' (i.e., e_1, ..., e_4 ∉ E'). For example, trace σ_2 is in the projection σ_1↓E' because events (enroll, t_c) and (diploma, t_s) have exactly the same positions relative to each other and to events e ∉ {(enroll, t_c), (diploma, t_s)} in σ_1 and σ_2. On the other hand, trace σ_3 is not in the trace projection σ_1↓E' because the positions of events e ∉ {(enroll, t_c), (diploma, t_s)} do not match the positions of such events in σ_1. In other words, in σ_1↓E' it is not important which course the student took, as long as the diploma was not issued directly after enrolling.

A constraint is a rule that should be followed during the execution. For example, the constraint "if perform surgery is completed, then afterwards eventually prescribe rehabilitation is completed" ensures that every patient who had a surgery successfully completes the rehabilitation. Another example is the constraint "cannot start perform surgery before completing perform X ray", which makes sure that X rays of a patient are taken before surgery. As specified in Definition 4.1.4, a constraint is defined by its namespace and a function that evaluates to true or false for a given execution trace. The notion of namespace plays an important role when deciding if a trace satisfies the constraint.
Definition 4.1.4 (Constraint) A constraint is a pair c = (E', f), where:

- E' = (A' × T') is the namespace of c, such that A' ⊆ A and T' ⊆ T. We say that c is a constraint over E'.
- f is a function f : E* → {true, false}. Let σ ∈ E* be a trace. We denote f(σ) = true by σ ⊨ f and f(σ) = false by σ ⊭ f. If f(σ) = true, then we say that σ satisfies c, denoted by σ ⊨ c. If f(σ) = false, then we say that σ violates c, denoted by σ ⊭ c.
- It holds that ∀ γ ∈ σ↓E' : f(γ) = f(σ), i.e., the satisfaction of a trace is decidable on the appearances of elements from the namespace in that trace.
- We say that E_c = {σ ∈ E* | σ ⊨ c} is the set of all traces that satisfy constraint c.
- Further on, we use the following shorthand notation: Π_A(c) = A', Π_T(c) = T' and Π_f(c) = f. We use C to denote the set of all constraints.

A constraint specifies a relation between events contained in its namespace. Using events instead of plain activities in the constraint namespace enables defining more sophisticated rules. For example, constraints c_1 = "cannot start perform surgery before completing perform X ray" and c_2 = "cannot start perform surgery before starting perform X ray" are semantically different. In the case of the first constraint c_1, perform surgery can start only after completing perform X ray. In the case of the second constraint c_2, perform surgery can start immediately after starting perform X ray, i.e., activity perform surgery can be started and completed even if the X ray photo is not available yet or if the activity perform X ray fails (does not complete).

Since an execution is represented by a trace, a constraint is a boolean expression that evaluates to true or false for every trace σ ∈ E*. Because a constraint defines a relation between elements in its namespace, trace satisfaction should be decidable only on the occurrences and positions of namespace elements in the trace, i.e., if a trace satisfies the constraint, then all traces in the projection of the trace on the namespace also satisfy the constraint, and vice versa. Example 4.1.5 illustrates the notions of a constraint, namespace and satisfaction of traces.
Example 4.1.5 (Constraints) The medical department from Example 4.1.2 needs to follow two constraints c_1, c_2 ∈ C, where:

- c_1 = (E_1, f_1) is a constraint over E_1 = {e_sc, e_rc} specifying that f_1 = "If perform surgery is completed then, afterwards at some point in time, prescribe rehabilitation is also completed."(1), i.e., f_1 = "event e_sc is eventually followed by event e_rc", and
- c_2 = (E_2, f_2) is a constraint over E_2 = {e_es, e_ec, e_ex} specifying that f_2 = "Cannot execute any other activity until completing examine patient.", i.e., f_2 = "only events e_es and e_ex are possible before event e_ec".

For the three traces (patients) σ_1, σ_2 and σ_3 from Example 4.1.2 it holds that:

1. trace σ_1 = ⟨e_es, e_ec, e_ms, e_mc, e_ss, e_sc⟩ violates c_1 (i.e., σ_1 ⊭ c_1) because event e_sc is not followed by event e_rc in σ_1, i.e., this patient had a surgery but did not have a rehabilitation after the surgery; for all traces γ ∈ σ_1↓E_1 (i.e., all traces of the form ⟨e_1, e_2, e_3, e_4, e_5, e_sc⟩, with e_i ∉ E_1 for 1 ≤ i ≤ 5) it holds that γ ⊭ c_1,

(1) The exact formal definition of constraints is not relevant at this point.
satisfies c_2 (i.e., σ_1 ⊨ c_2) because event e_ec is preceded only by event e_es in σ_1, i.e., this patient was examined at the beginning of the treatment; for all traces γ ∈ σ_1↓E_2 (i.e., all traces of the form ⟨e_es, e_ec, e_1, e_2, e_3, e_4⟩, with e_i ∉ E_2 for 1 ≤ i ≤ 4) it holds that γ ⊨ c_2,

2. trace σ_2 = ⟨e_es, e_ec, e_ss, e_sc, e_ms, e_mc, e_rs, e_rc⟩ satisfies c_1 (i.e., σ_2 ⊨ c_1) because event e_sc is followed by event e_rc in σ_2, i.e., this patient had a surgery and a rehabilitation after the surgery; for all traces γ ∈ σ_2↓E_1 (i.e., all traces of the form ⟨e_1, e_2, e_3, e_sc, e_4, e_5, e_6, e_rc⟩, with e_i ∉ E_1 for 1 ≤ i ≤ 6) it holds that γ ⊨ c_1, and satisfies c_2 (i.e., σ_2 ⊨ c_2); for all traces γ ∈ σ_2↓E_2 (i.e., all traces of the form ⟨e_es, e_ec, e_1, e_2, e_3, e_4, e_5, e_6⟩, with e_i ∉ E_2 for 1 ≤ i ≤ 6) it holds that γ ⊨ c_2,

3. trace σ_3 = ⟨e_ms, e_mc, e_es, e_ec, e_ms, e_mc⟩ satisfies c_1 (i.e., σ_3 ⊨ c_1) because there is no event e_sc in trace σ_3 to be followed by event e_rc, i.e., this patient did not have a surgery; for all traces γ ∈ σ_3↓E_1 (i.e., all traces of the form ⟨e_1, e_2, e_3, e_4, e_5, e_6⟩, with e_i ∉ E_1 for 1 ≤ i ≤ 6) it holds that γ ⊨ c_1, and violates c_2 (i.e., σ_3 ⊭ c_2) because there are events other than e_es and e_ex in trace σ_3 before event e_ec, i.e., medications were prescribed before the examination for this patient; for all traces γ ∈ σ_3↓E_2 (i.e., all traces of the form ⟨e_1, e_2, e_es, e_ec, e_3, e_4⟩, with e_i ∉ E_2 for 1 ≤ i ≤ 4) it holds that γ ⊭ c_2.

Note that the sets of satisfying traces E_c1 and E_c2 of constraints c_1 and c_2, respectively, are infinite sets, e.g., prescribe rehabilitation can be repeated an arbitrary number of times. Also, the set of satisfying traces E_c of a constraint c = (E', f) contains all sequences (traces) of all events e ∈ E that satisfy constraint c, i.e., traces in the set of satisfying traces contain both events from the namespace E' and events not in the namespace E'.
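Constraints c_1 and c_2 of Example 4.1.5 can be encoded as boolean functions over traces. The sketch below is our own illustrative encoding (the thesis formalizes constraint semantics only in Chapter 5); the activity names and event tuples are abbreviations we introduce here:

```python
# Sketch of constraints c1 and c2 from Example 4.1.5 as boolean functions
# over traces. Events are (activity, type) pairs; names are our assumption.

e_es, e_ec, e_ex = ("examine", "ts"), ("examine", "tc"), ("examine", "tx")
e_ss, e_sc = ("surgery", "ts"), ("surgery", "tc")
e_rs, e_rc = ("rehab", "ts"), ("rehab", "tc")
e_ms, e_mc = ("medication", "ts"), ("medication", "tc")

def c1(trace):
    """Every completed surgery (e_sc) is eventually followed by e_rc."""
    return all(e_rc in trace[i + 1:] for i, e in enumerate(trace) if e == e_sc)

def c2(trace):
    """Only e_es and e_ex may occur before the first e_ec."""
    head = trace[:trace.index(e_ec)] if e_ec in trace else trace
    return all(e in (e_es, e_ex) for e in head)

s1 = [e_es, e_ec, e_ms, e_mc, e_ss, e_sc]
s2 = [e_es, e_ec, e_ss, e_sc, e_ms, e_mc, e_rs, e_rc]
s3 = [e_ms, e_mc, e_es, e_ec, e_ms, e_mc]
assert (c1(s1), c2(s1)) == (False, True)  # surgery without rehabilitation
assert (c1(s2), c2(s2)) == (True, True)
assert (c1(s3), c2(s3)) == (True, False)  # medication before examination
```

The three traces reproduce the verdicts of the example: σ_1 violates c_1, σ_2 satisfies both constraints, and σ_3 violates c_2.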
In this chapter we only provide an informal constraint specification, i.e., a natural language (i.e., English) is used to specify the semantics of constraints. In Chapter 5 we propose a formal language that can be used (1) to specify the semantics of each constraint and (2) to retrieve a finite representation of the set of all traces that satisfy such a constraint.

4.2 Constraint Models

As specified in Definition 4.2.1, a constraint model consists of activities, mandatory constraints and optional constraints. Each of the constraints is over namespace (A × T), i.e., constraints can only define relationships between events involving activities from the model. If a model activity is contained in the namespace of any of the constraints, then we say that this activity is constrained by the model.

Definition 4.2.1 (Constraint model cm) A constraint model cm is defined as a triple cm = (A, C_M, C_O), where:

- A ⊆ A is a set of activities in the model (drawn from the universe of activities),
- C_M ⊆ C is a set of mandatory constraints, where every element (E', f) ∈ C_M is a constraint over E', such that E' ⊆ (A × T),
- C_O ⊆ C is a set of optional constraints, where every element (E', f) ∈ C_O is a constraint over E', such that E' ⊆ (A × T).

The set of constrained activities in cm is defined as Π_CA(cm) = ⋃_{c ∈ C_M ∪ C_O} Π_A(c). We use U_cm to denote the set of all constraint models.

The set of satisfying traces of a model contains all traces that satisfy the model, i.e., traces that satisfy all mandatory constraints in the model (cf. Definition 4.2.2). If a trace is not in this set, the trace violates at least one mandatory constraint in the model. According to Definition 4.2.2, any trace satisfies a constraint model without mandatory constraints. This means that users can execute such a model in any way: they can execute any of the activities from the model an arbitrary number of times (including zero times) and they can execute these activities in an arbitrary order. For example, every trace σ ∈ E* satisfies the constraint model cm = ({A, B, C}, ∅, C_O), i.e., E_cm = E*. Note that Figure 3.36 on page 82 shows a procedural version of model ({A, B, C}, ∅, ∅).

Definition 4.2.2 (Constraint model satisfying traces E_cm) Let cm ∈ U_cm be a constraint model, where cm = (A, C_M, C_O). The set of satisfying traces for model cm is defined as

E_cm = E* if C_M = ∅; otherwise E_cm = ⋂_{c ∈ C_M} E_c.

If for a trace σ ∈ E* it holds that σ ∈ E_cm, then we say that σ satisfies model cm. If for a trace σ ∈ E* it holds that σ ∉ E_cm, then we say that σ violates model cm.
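Definition 4.2.2 suggests a direct satisfaction check: a trace satisfies a model iff it satisfies every mandatory constraint, while optional constraints are only reported. A minimal sketch, with all names our own:

```python
# Hedged sketch of Definition 4.2.2: a trace satisfies a constraint model
# (A, C_M, C_O) iff it satisfies every mandatory constraint in C_M;
# optional constraints are flagged but never decide model satisfaction.

def satisfies_model(trace, mandatory, optional=()):
    ok = all(c(trace) for c in mandatory)    # decides model satisfaction
    violated = [i for i, c in enumerate(optional) if not c(trace)]
    return ok, violated                      # violated optional constraints

no_a_and_b = lambda t: not ("A" in t and "B" in t)

# A model without mandatory constraints is satisfied by any trace:
assert satisfies_model(["A", "B", "C"], mandatory=[]) == (True, [])
# With the constraint mandatory, the same trace violates the model:
assert satisfies_model(["A", "B", "C"], mandatory=[no_a_and_b])[0] is False
# As an optional constraint, the violation is only flagged:
assert satisfies_model(["A", "B", "C"], mandatory=[], optional=[no_a_and_b]) == (True, [0])
```

The empty-mandatory case corresponds to E_cm = E* in the definition; with mandatory constraints, the check mirrors membership in the intersection of the constraints' satisfying-trace sets.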
As specified in Definition 4.2.2, the set of traces that satisfy a constraint model is compositional [73, 205] with respect to the traces that satisfy each of the constraints in the model, i.e., if a trace satisfies each of the mandatory constraints, then the trace also satisfies the model. Note that the set E_cm of traces that satisfy a constraint model can contain many traces, which makes constraint models flexible by design [ ]. The set E_cm can even contain infinitely many traces. To enable computer-supported
execution of constraint models, a finite representation of E_cm is needed. In Chapter 5 we present how this finite representation can be obtained and used in the constraint-based approach.

While traces that violate mandatory constraints cannot satisfy the model, traces that violate optional constraints can still be satisfying traces of the model (cf. Definition 4.2.2). Optional constraints are used as guidelines for execution. Their satisfaction can be measured and presented to users as additional information, but it does not determine the satisfaction of the model. As their name says, it is optional whether an execution trace satisfies them or not. In other words, when it comes to optional constraints, users can deviate from the model if they prefer to do so. Therefore, optional constraints enable flexibility by deviation [ ] in constraint models.

Note that the set of traces that satisfy a constraint model contains all traces that satisfy the mandatory constraints, i.e., all sequences of events e ∈ E that satisfy the mandatory constraints of the model. For example, a trace σ ∈ E_cm that satisfies model cm = (A, C_M, C_O) can contain events involving activities that are not in the model itself, i.e., it is possible that (a,t) ∈ σ where a ∉ A and t ∈ T. This is due to the fact that traces that satisfy constraints can contain events that are not in the namespace of the constraint (e.g., Example 4.1.5). This might seem odd, but it is an important property that enables changes of models during execution. As will be described in Section 4.5, it is possible to change constraint models while they are being executed (the so-called ad-hoc instance change). For example, consider a model cm = (A, C_M, C_O) and a trace σ. It is possible, after executing activity a ∈ A (i.e., (a,t_s),(a,t_c) ∈ σ), to remove this activity from the model.
Although model cm does not contain activity a anymore and trace σ does (i.e., a ∉ A and (a,t_s),(a,t_c) ∈ σ), this trace can still satisfy the model (i.e., σ ∈ E_cm) if it satisfies all mandatory constraints of the model. However, after removing a from the model cm it is no longer possible to execute this activity, i.e., events involving activity a cannot be added to the trace σ in the future. The execution of instances of constraint models and instance change are out of the scope of this section and will be described in detail in Sections 4.4 and 4.5.

A constraint-based approach to process modeling allows developing models that offer a large number of possible executions to users, while still requiring users to follow a set of basic rules, i.e., constraints. It is often the case that constraint models have an infinite set of satisfying traces, which allows people to choose from a large number of traces that satisfy the model and to select the trace that is the most appropriate for the specific situation they are working in. Consider, for example, the simple constraint model presented in Example 4.2.3. This model consists of three activities and one constraint. Due to this constraint, only a trace where every event (curse, completed) is eventually followed by at least one event (pray, completed) satisfies this model.
Example 4.2.3 (A constraint model) Let cm_R ∈ U_cm be a constraint model where cm_R = (A_R, C_M^R, C_O^R) such that:
- A_R = {pray, curse, bless} is a set of activities,
- C_M^R = {c1} is a set of mandatory constraints where c1 = (E1, f1) such that E1 = {(curse, completed), (pray, completed)} and f1 = "Every occurrence of event (curse, completed) must eventually be followed by at least one occurrence of event (pray, completed).", and
- C_O^R = ∅ is the (empty) set of optional constraints.
The set of satisfying traces for this model is E_cm_R = ⋂_{c ∈ C_M^R} E_c = E_c1. The constrained activities in the model are Π_CA(cm_R) = {curse, pray} because Π_A(c1) = {curse, pray}.

The model cm_R from Example 4.2.3 allows people who are executing this model to choose from an infinite number of traces that satisfy it. Table 4.1 shows only a few examples of traces (i.e., σ1, σ2, ..., σ7) that satisfy the model cm_R and one example of a trace (i.e., σ8) that does not satisfy this model. Each column refers to a trace and the rows refer to events in the particular traces. For simplicity, we assume that activities do not overlap in these traces, i.e., each listed activity refers to a successive starting and completing of the activity. For example, each bless in the trace σ2 refers to a sequence of two events (bless, started) and (bless, completed). The first seven traces satisfy the model. First, the empty trace satisfies the model, i.e., σ1 ∈ E_cm_R, because there is no event (curse, completed) in the empty trace σ1 = ⟨⟩. Second, any of the activities can occur an arbitrary number of times in a satisfying trace (e.g., traces σ2, σ3, σ4, σ5, σ6 and σ7). Third, it is possible that activities curse and pray do not occur at all (e.g., traces σ1 and σ2).
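Constraint c1 of this example is a classic "response" rule. A small Python sketch (our own illustration; the function name and the trace encoding as lists of (activity, event type) pairs are assumptions) shows how its satisfaction can be checked on the kinds of traces listed in Table 4.1:

```python
def response(trace, trigger, target):
    # f1: every occurrence of `trigger` must eventually be followed by
    # at least one occurrence of `target`.
    for i, event in enumerate(trace):
        if event == trigger and target not in trace[i + 1:]:
            return False
    return True

curse = ("curse", "completed")
pray = ("pray", "completed")
bless = ("bless", "completed")

assert response([], curse, pray)                        # the empty trace satisfies c1
assert response([bless, bless], curse, pray)            # no curse at all
assert response([curse, curse, pray], curse, pray)      # one pray answers both curses
assert not response([curse, pray, curse], curse, pray)  # the last curse is unanswered
```

Note that the check is agnostic to events outside the constraint's namespace: occurrences of bless never affect the outcome, matching the compositionality discussed above.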
Fourth, it is possible that activity pray occurs an arbitrary number of times (1) without an occurrence of activity curse (e.g., traces σ3 and σ6) and (2) before an occurrence of activity curse (e.g., traces σ4 and σ7). Fifth, it is possible that activity curse occurs multiple times followed by one occurrence of activity pray in order to satisfy constraint c1 (e.g., trace σ5). Finally, it is also possible to first curse, then pray, and later again curse and pray (e.g., trace σ7). The last trace, i.e., trace σ8, does not satisfy model cm_R because there is no occurrence of activity pray after the second occurrence of activity curse in this trace.

Note that some of the traces in Table 4.1 satisfy the model cm_R from Example 4.2.3 although they contain events that involve activities that are not in the model. Traces σ2 and σ6 satisfy model cm_R although they contain events on activity become holy, which is not in the model cm_R (i.e., become holy ∉ A_R). This property of satisfying traces enables easy adding and removing of activities in models that are already being executed (ad-hoc instance change), which will be described in detail in Section 4.5. For illustration, assume that the original model cm_R′ = (A_R ∪ {become holy}, C_M^R, C_O^R) contained activity become holy and that
users executed activities bless and become holy in the trace σ2 and then decided to remove activity become holy from the model. At this point, the model did not contain activity become holy anymore (model cm_R), but the trace σ2 still satisfies the new model cm_R. However, after the activity become holy was removed from the model, it was no longer possible to trigger new events on activity become holy.

Table 4.1: Examples of traces that (do not) satisfy the model cm_R from Example 4.2.3, where σ_i ∈ E_cm_R for i ∈ {1,...,7} and σ8 ∉ E_cm_R. Trace σ1 is the empty trace ⟨⟩ and therefore has no entries; shorter traces leave the remaining rows empty.

    1  bless  pray  pray  curse  bless  pray  pray
    2  become holy  curse  pray  bless  curse  curse
    3  bless  curse  bless  become holy  bless  bless
    4  bless  curse  pray  pray  pray
    5  pray  bless  bless  curse  curse
    6  bless  curse  bless  bless
    7  pray  pray
    8  bless

Adding and removing activities and optional constraints to and from constraint models does not have any effect on the set of traces that satisfy the model, because this set does not depend on the activities in the model (cf. Definition 4.2.2). Property 4.2.4 shows that the set of traces that satisfy the model does not change when activities or optional constraints are added to or removed from the model. Consider, for example, the model cm_R from Example 4.2.3 and the model cm_R′ = (A_R ∪ {become holy}, C_M^R, C_O^R ∪ {c}) where become holy ∉ A_R and c ∈ C. Although model cm_R′ has more activities and optional constraints, the same traces satisfy both models cm_R and cm_R′, because they have the same set of mandatory constraints C_M^R. For example, traces σ1,...,σ7 from Table 4.1 satisfy both cm_R and cm_R′, while trace σ8 satisfies neither cm_R nor cm_R′.

Property 4.2.4 (Activities and optional constraints have no effect on E_cm) Let cm, cm′ ∈ U_cm be constraint models where cm = (A, C_M, C_O) and cm′ = (A′, C_M, C_O′). Then E_cm = E_cm′.

Proof. If C_M = ∅, then E_cm = E_cm′ = E* (cf. Definition 4.2.2). If C_M ≠ ∅, the set of traces that satisfy cm is E_cm = ⋂_{c ∈ C_M} E_c (cf.
Definition 4.2.2) and the set of traces that satisfy cm′ is E_cm′ = ⋂_{c ∈ C_M} E_c, i.e., E_cm = E_cm′. □

Property 4.2.5 shows that the more mandatory constraints a model has, the fewer traces can satisfy the model. In other words, the more mandatory constraints a model has, the less flexibility the users will have while executing the model.
Property 4.2.5 (Mandatory constraints can influence E_cm) Let cm, cm′ ∈ U_cm be constraint models where cm = (A, C_M, C_O) and cm′ = (A′, C_M′, C_O′) such that C_M ⊆ C_M′. Then E_cm′ ⊆ E_cm.

Proof. If C_M = ∅, then E_cm = E* and E_cm′ ⊆ E* = E_cm. If C_M ≠ ∅, then E_cm = ⋂_{c ∈ C_M} E_c and C_M′ ≠ ∅. Further, it holds that E_cm′ = ⋂_{c ∈ (C_M ∪ (C_M′ \ C_M))} E_c = ⋂_{c ∈ C_M} E_c ∩ ⋂_{c ∈ C_M′ \ C_M} E_c = E_cm ∩ ⋂_{c ∈ C_M′ \ C_M} E_c, i.e., E_cm′ ⊆ E_cm. □

Consider, for example, the model cm_R from Example 4.2.3 and a constraint c2 ∈ C where c2 = (E2, f2) such that E2 = {(pray, completed)} and f2 = "Complete activity pray at least once.". Let cm_R′ be the constraint model obtained by adding c2 as a mandatory constraint to model cm_R, i.e., cm_R′ = (A_R, C_M^R ∪ {c2}, C_O^R). The set of satisfying traces of model cm_R contains all traces σ ∈ E* that satisfy constraint c1, while the set of satisfying traces of model cm_R′ contains all traces σ ∈ E* that satisfy both constraint c1 and constraint c2. In other words, the set of traces that satisfy cm_R′ is obtained by removing all traces that do not satisfy constraint c2 from the set of traces that satisfy model cm_R. For example, although they satisfy model cm_R, traces σ1 and σ2 from Table 4.1 do not satisfy model cm_R′ because they do not satisfy constraint c2.

Understanding the effect of adding/removing mandatory constraints to/from a constraint model is important for several reasons. First, while creating a model it is important to understand that the more mandatory constraints the model has, the less freedom (fewer options) the users will have while executing the model, and vice versa, i.e., fewer mandatory constraints mean more freedom for users. Second, in the case of ad-hoc instance change, only adding mandatory constraints can cause errors and thus cause the instance change to fail, as will be discussed in Section 4.5.
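The effect described by Property 4.2.5 can be illustrated on a finite pool of candidate traces: filtering the pool through the mandatory constraints {c1} and {c1, c2} (the helper functions below are our own sketch, not the thesis notation) shows that the extra constraint only removes traces, never adds them.

```python
curse = ("curse", "completed")
pray = ("pray", "completed")
bless = ("bless", "completed")

def c1(trace):
    # every (curse, completed) is eventually followed by a (pray, completed)
    return all(pray in trace[i + 1:] for i, e in enumerate(trace) if e == curse)

def c2(trace):
    # complete activity pray at least once
    return pray in trace

pool = [[], [bless], [pray], [curse], [curse, pray], [pray, curse]]
sat_cm = [t for t in pool if c1(t)]              # mandatory constraints {c1}
sat_cm2 = [t for t in pool if c1(t) and c2(t)]   # mandatory constraints {c1, c2}

# The larger mandatory-constraint set yields a subset of satisfying traces.
assert all(t in sat_cm for t in sat_cm2)
assert [] in sat_cm and [] not in sat_cm2  # the empty trace no longer qualifies
```

Within this pool, {c1} admits [], [bless], [pray] and [curse, pray], while {c1, c2} keeps only [pray] and [curse, pray], mirroring E_cm_R′ ⊆ E_cm_R.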
In order to successfully perform such a runtime change after all, it is necessary to resolve (eliminate) this error, which can be achieved only by removing mandatory constraints. Third, errors can be introduced into models only by adding mandatory constraints and, vice versa, errors can be eliminated only by removing mandatory constraints, as will be described later in this chapter.

4.3 Illustrative Example: The Fractures Treatment Process

As opposed to traditional models, where all possible executions have to be predicted in detail in advance (i.e., during modeling), developers of constraint models need to specify a number of constraints (rules) that should be followed during the execution. This set of constraints indirectly determines the set of possible
executions of the constraint model: any execution that satisfies the mandatory constraints is possible. Moreover, it is often the case that users can choose from an infinite number of possible executions of the constraint model and can, therefore, adjust each execution to the specific situation, as long as all mandatory constraints are satisfied. Consider the medical Fractures Treatment process described in Example 4.3.1. Although it consists of several important rules that have to be satisfied in order to prevent mistakes, the medical staff must make situation-dependent decisions and treat each patient in the way that is the most suitable for the specific patient's fracture.

Example 4.3.1 (Fractures Treatment process definition) A team of medical staff is in charge of the treatment of patients with fractures. The treatment of every patient begins with the examination of the patient, although the patient can be examined (again) at multiple stages during the treatment. Depending on the type(s) of the fracture(s), the staff can apply different types of treatments:
- Applying a cast is the most common treatment for fractures. The cast is removed after the fracture has healed.
- Dislocations are treated by repositioning.
- A surgery can be performed for complex injuries.
- When none of the above-mentioned treatments can be applied, the patient uses a sling for a prescribed period of time.
In cases of complex fractures, the staff can decide to use an arbitrary combination of the above-mentioned treatment procedures. It is obligatory to take an X ray of the fracture(s) before applying a cast, performing a reposition or performing a surgery. If needed, X ray photos can be taken several times during the treatment. Due to the danger of X rays, it is important to avoid any mistakes: the staff needs to check the risks of X rays (e.g., pregnancy) and to take these risks into consideration each time before taking an X ray.
The staff can prescribe necessary medication or prescribe rehabilitation an arbitrary number of times during the treatment. In general, the hospital has the policy to send patients to rehabilitation at least once after a surgery.

When developing a constraint model for the process described in Example 4.3.1, it is not necessary to predict all possible treatment tracks in advance. Indeed, the medical staff that treats fractures can perform many different treatments for different patients. Instead of predicting all possible treatments, it is sufficient to identify the activities and the constraints that should be followed during treatments. The constraint model for the Fractures Treatment process is given below.
Example 4.3.2 (Fractures Treatment constraint model) Recall that t_s, t_c, t_x ∈ T are event types where t_s = started, t_c = completed and t_x = cancelled. Model cm_FT ∈ U_cm, cm_FT = (A_FT, C_M^FT, C_O^FT), is a constraint model for the Fractures Treatment process described in Example 4.3.1 such that:

A_FT = {a1, a2, a3, a4, a5, a6, a7, a8, a9, a10} is a set of activities where:
- a1 = examine patient,
- a2 = apply cast,
- a3 = remove cast,
- a4 = perform reposition,
- a5 = perform surgery,
- a6 = prescribe sling,
- a7 = prescribe medication,
- a8 = check X ray risk,
- a9 = perform X ray, and
- a10 = prescribe rehabilitation;

C_M^FT = {c1, c2, c3, c4, c5} is a set of mandatory constraints where:
- c1 = (E1, f1) where E1 = {(a1,t_s), (a1,t_c), (a1,t_x)} and f1 = "Start every treatment with an occurrence of the activity examine patient", i.e., f1 = "Before the first occurrence of event (a1,t_c), only occurrences of events (a1,t_s) or (a1,t_x) are possible",
- c2 = (E2, f2) where E2 = {(a2,t_c), (a4,t_c), (a5,t_c), (a6,t_c)} and f2 = "Perform at least one of the procedures apply cast, perform reposition, perform surgery or prescribe sling", i.e., f2 = "At least one of the events (a2,t_c), (a4,t_c), (a5,t_c) or (a6,t_c) must occur",
- c3 = (E3, f3) where E3 = {(a2,t_c), (a3,t_s), (a3,t_c)} and f3 = "Remove cast only after applying cast, and always remove cast after applying cast", i.e., f3 = "Event (a3,t_s) can occur only after an occurrence of event (a2,t_c), and every occurrence of event (a2,t_c) must eventually be followed by at least one occurrence of event (a3,t_c)",
- c4 = (E4, f4) where E4 = {(a2,t_s), (a4,t_s), (a5,t_s), (a9,t_c)} and f4 = "An X ray must be performed before applying a cast, performing a reposition or performing a surgery", i.e., f4 = "Events (a2,t_s), (a4,t_s) and (a5,t_s) can occur only after an occurrence of event (a9,t_c)",
- c5 = (E5, f5) where E5 = {(a8,t_c), (a9,t_s)} and f5 = "Check X ray risk
before each new occurrence of activity perform X ray", i.e., f5 = "Each new occurrence of event (a9,t_s) must be preceded by at least one new occurrence of event (a8,t_c)"; and

C_O^FT = {c6} is a set of optional constraints where c6 = (E6, f6) such that E6 = {(a5,t_c), (a10,t_c)} and f6 = "Eventually prescribe rehabilitation after performing surgery", i.e., f6 = "Every occurrence of event (a5,t_c) must eventually be followed by at least one occurrence of event (a10,t_c)".

The Fractures Treatment model allows for an infinite number of different treatments of patients, as long as all mandatory constraints are satisfied. For
example, although every treatment has to start with the execution of the examine patient activity (constraint c1), the staff can execute this activity multiple times at later stages of the treatment. At least one of the four procedures (apply cast, perform reposition, perform surgery, prescribe sling) has to be completed (constraint c2), but it is always possible to combine several procedures during the treatment of one patient. The activity remove cast can be started only after the activity apply cast has been completed at least once, and after activity apply cast has been completed, activity remove cast must be completed at least once (constraint c3). This constraint still makes it possible to handle situations where a patient has two fractures, both treated with a cast, for different periods of time. Only after completing activity perform X ray can the staff start applying a cast, performing a reposition or performing a surgery (constraint c4). However, it is still possible to execute the perform X ray activity several times during the treatment if necessary, as long as the check X ray risk activity is completed before starting each new perform X ray activity (constraint c5). Although it is hospital policy to send all patients to rehabilitation after a completed surgery, it is possible to perform multiple surgeries and only after them one or more activities prescribe rehabilitation (optional constraint c6). Also, it is possible to execute the activity prescribe rehabilitation for patients who did not undergo a surgery. Moreover, because c6 is an optional constraint, it is also possible not to prescribe rehabilitation after a surgery.
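Two of the rule patterns just discussed can be sketched in Python (our own simplified illustration; the function names are assumptions, and c5 is approximated as "each new X ray start consumes a fresh risk check"):

```python
def precedence_ok(trace, pre, events):
    # in the spirit of c4: `events` may occur only after `pre` has occurred
    seen = False
    for e in trace:
        if e == pre:
            seen = True
        elif e in events and not seen:
            return False
    return True

def alternate_precedence_ok(trace, pre, event):
    # in the spirit of c5: each new `event` needs a new preceding `pre`
    fresh = False
    for e in trace:
        if e == pre:
            fresh = True
        elif e == event:
            if not fresh:
                return False
            fresh = False  # the next occurrence needs a new `pre` again
    return True

risk = ("check X ray risk", "completed")
xray_s = ("perform X ray", "started")
xray_c = ("perform X ray", "completed")
cast_s = ("apply cast", "started")

assert precedence_ok([risk, xray_s, xray_c, cast_s], xray_c, {cast_s})
assert not precedence_ok([cast_s], xray_c, {cast_s})
assert alternate_precedence_ok([risk, xray_s, risk, xray_s], risk, xray_s)
assert not alternate_precedence_ok([risk, xray_s, xray_s], risk, xray_s)
```

The last assertion corresponds to the situation the model forbids: a second X ray started without a new risk check in between.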
Table 4.2 shows examples of six execution traces for different patients undergoing the Fractures Treatment process according to the cm_FT model from Example 4.3.2. For reasons of simplicity, we assume that executions of activities do not overlap, i.e., each of the listed activities a ∈ A_FT refers to the execution of two successive events: (a, started) and (a, completed). Each of the first five traces satisfies the cm_FT model (i.e., σ1, σ2, σ3, σ4, σ5 ∈ E_cm_FT) because each of these five traces satisfies all mandatory constraints in the model cm_FT. This means that the first five patients were handled correctly according to the Fractures Treatment model cm_FT. Note that despite the fact that trace σ3 violates the optional constraint c6 from the model (because the staff did not prescribe rehabilitation after the activity perform surgery was executed), this trace satisfies the model cm_FT because it satisfies all its mandatory constraints. The last trace, trace σ6, violates the constraint model (i.e., σ6 ∉ E_cm_FT) because it violates the mandatory constraint c5: for this patient the activity check X ray risk was not executed before the activity perform X ray. (For the purpose of simplicity, we give examples of traces that contain only activities from the cm_FT model.)
Table 4.2: Examples of traces for six patients in the Fractures Treatment model from Example 4.3.2

        σ1 - patient 1          σ2 - patient 2             σ3 - patient 3
    1   examine patient         examine patient            examine patient
    2   prescribe sling         check X ray risk           check X ray risk
    3   check X ray risk        prescribe medication       perform X ray
    4   perform X ray           perform X ray              prescribe sling
    5   apply cast              perform reposition         examine patient
    6   prescribe medication    prescribe medication       perform surgery
    7   remove cast             check X ray risk           examine patient
    8   examine patient         perform X ray              perform surgery
    9                           examine patient            prescribe medication
    10                          prescribe rehabilitation

        σ4 - patient 4          σ5 - patient 5             σ6 - patient 6
    1   examine patient         examine patient            examine patient
    2   prescribe sling         check X ray risk           perform X ray
    3   prescribe rehabilitation  perform X ray
    4                           perform surgery
    5                           examine patient
    6                           prescribe medication

4.4 Execution of Constraint Model Instances

A constraint model can be executed an arbitrary number of times. We refer to one execution of a constraint model as a constraint model instance. The actions that resources take while executing one instance form the execution trace of the instance. A constraint model instance consists of a constraint model and the instance trace, as specified in Definition 4.4.1. Consider, for example, the traces presented in Table 4.2. These traces could belong to six instances of the Fractures Treatment model cm_FT: instance ci1 = (σ1, cm_FT) relates to the treatment of patient 1, ci2 = (σ2, cm_FT) to patient 2, etc.

Definition 4.4.1 (Constraint model instance ci) A constraint model instance ci is defined as a pair ci = (σ, cm), where:
- σ ∈ E* is the instance's trace, and
- cm ∈ U_cm is a constraint model.
We use U_ci to denote the set of all constraint model instances.

Note that not all constraint models need to have related instances and that there can be an arbitrary number of instances having the same constraint model, where each instance has its own trace.
For example, Figure 4.3 shows four instances and four models. There are two instances of model cm1 (i.e., instances ci1 = (σ1, cm1) and ci2 = (σ2, cm1)), one instance of model cm3 (i.e., instance
ci3 = (σ3, cm3)) and one instance of model cm4 (i.e., instance ci4 = (σ4, cm4)). There are no instances of model cm2.

Figure 4.3: Constraint instances

In the remainder of this section we describe three important issues concerning the execution of instances of constraint-based models. First, in Section 4.4.1 we describe how the state of an instance changes during execution. Second, in Section 4.4.2 we describe how instances are executed by triggering so-called enabled events. Finally, in Section 4.4.3 we describe the relationship between the state of an instance and the states of its constraints.

4.4.1 Instance State

During execution, the state of an instance can change depending on the instance trace. As specified in Definition 4.4.2, an instance is satisfied if the instance trace satisfies the model of the instance. If the instance trace does not satisfy the instance model, but there is a suffix that could be added to the trace such that it satisfies the model, the instance is classified as temporarily violated. Finally, if the trace does not satisfy the model and cannot satisfy the model whatever suffix would be added, then the instance is classified as violated.

Definition 4.4.2 (Instance state ω) Let ci ∈ U_ci be an instance where ci = (σ, cm). The function ω : U_ci → {satisfied, temporarily violated, violated} is defined as:

    ω(ci) = satisfied             if σ ∈ E_cm;
            temporarily violated  if (σ ∉ E_cm) ∧ (∃γ ∈ E* : σ + γ ∈ E_cm);
            violated              otherwise.

Table 4.3 shows four possible instances of the Fractures Treatment model from Example 4.3.2 and their states. For each instance, the sequence of events from the instance trace is given. In addition, for each instance a state change is given for each event in the instance trace.
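Definition 4.4.2 can be sketched as follows (our own Python illustration, under the assumption that each mandatory constraint can report its own three-valued state for a trace; the two constraint kinds shown are simplified stand-ins for an existence rule like c2 and a safety rule like c5):

```python
def existence_state(event):
    # "event must eventually occur": never permanently violated, because
    # any suffix containing the event can still repair the trace
    return lambda trace: "satisfied" if event in trace else "temporarily violated"

def precedence_state(pre, event):
    # "event may occur only after pre": a past violation can never be repaired
    def state(trace):
        seen = False
        for e in trace:
            if e == pre:
                seen = True
            elif e == event and not seen:
                return "violated"
        return "satisfied"
    return state

def instance_state(trace, mandatory):
    # Definition 4.4.2, derived from per-constraint states
    states = [s(trace) for s in mandatory]
    if "violated" in states:
        return "violated"
    if all(s == "satisfied" for s in states):
        return "satisfied"
    return "temporarily violated"

risk = ("check X ray risk", "completed")
xray = ("perform X ray", "started")
sling = ("prescribe sling", "completed")

mandatory = [existence_state(sling), precedence_state(risk, xray)]
assert instance_state([], mandatory) == "temporarily violated"
assert instance_state([sling], mandatory) == "satisfied"
assert instance_state([sling, risk, xray], mandatory) == "satisfied"
assert instance_state([xray], mandatory) == "violated"
```

The distinction between the two constraint kinds mirrors the tables below: existence-style expectations keep an instance temporarily violated until met, while a broken safety rule drives it to violated permanently.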
For reasons of simplicity, we assume that executions of activities do not overlap, i.e., each of the listed activities a ∈ A_FT refers to the execution of two successive events: (a, started) and (a, completed). Due to constraints c1 and c2, each instance is temporarily violated at the beginning
of the execution. After the execution of activities examine patient and prescribe sling in the first instance ci1, this instance is satisfied because its trace at that moment satisfies all mandatory constraints. The state of the first instance remained satisfied after performing the next two activities: check X ray risk and perform X ray. However, after the activity apply cast was executed, the state of the instance again became temporarily violated. This is because this partial trace does not satisfy the model (it does not satisfy mandatory constraint c3), but it was still possible to eventually execute activity remove cast and return the instance to the state satisfied. Optional constraints do not influence the state of the instance: the state of the third instance is satisfied although the optional constraint c6 is not satisfied, i.e., there is no prescribe rehabilitation after perform surgery. The last instance (ci4) ends in the state violated, and it is not possible to extend this trace with a suffix that would return the instance to the state satisfied, because perform X ray was executed before check X ray risk despite the constraint c5.
Table 4.3: States of four instances of the Fractures Treatment model from Example 4.3.2 (sat = satisfied, tv = temporarily violated, v = violated)

        ci1 = (σ1, cm_FT) - patient 1        ci2 = (σ2, cm_FT) - patient 2
        σ1                      ω(ci1)       σ2                      ω(ci2)
        start                   tv           start                   tv
    1   examine patient         tv           examine patient         tv
    2   prescribe sling         sat          check X ray risk        tv
    3   check X ray risk        sat          perform X ray           tv
    4   perform X ray           sat          perform surgery         sat
    5   apply cast              tv           examine patient         sat
    6   prescribe medication    tv           prescribe medication    sat
    7   remove cast             sat
    8   examine patient         sat

        ci3 = (σ3, cm_FT) - patient 3        ci4 = (σ4, cm_FT) - patient 4
        σ3                      ω(ci3)       σ4                      ω(ci4)
        start                   tv           start                   tv
    1   examine patient         tv           examine patient         tv
    2   prescribe sling         sat          perform X ray           v
    3   prescribe rehabilitation  sat

An instance can be successfully closed (i.e., users can stop working on an instance) if and only if the instance state is satisfied. For example, the medical staff can successfully close the first instance from Table 4.3 only after activities number two, three, four, seven or eight, because the instance state is satisfied after the execution of these activities. Similarly, the second instance ci2 can first be closed after the fourth activity, the third instance ci3 can be closed after the second activity, while the fourth instance ci4 can never be successfully closed.
4.4.2 Enabled Events

An instance changes state by adding events to its partial trace. If added to the instance trace, some events would put the instance into the satisfied state, some into the temporarily violated state and some into the violated state. As specified in Definition 4.4.3, an event is enabled if and only if it refers to an activity contained in the instance model and adding this event to the instance trace does not put the instance in the state violated. Hence, the declare prototype presented in Chapter 6 does not allow instance ci4.

Definition 4.4.3 (Enabled event ci[e⟩) Let ci ∈ U_ci be an instance where ci = (σ, (A, C_M, C_O)) and let e ∈ E be an event where e = (a,t). Event e is enabled, denoted as ci[e⟩, if and only if a ∈ A and ω((σ + e, (A, C_M, C_O))) ≠ violated.

Users execute instances by executing events. Each executed event is added to the instance trace. By allowing users to execute only enabled events, the execution rule from Definition 4.4.4 makes sure that users cannot execute events that would bring the instance into the state violated. In other words, this rule makes sure that users can eventually bring the instance to the state satisfied. Also, by allowing users to execute only enabled events, the execution rule makes sure that only activities belonging to the instance model can be executed and added to the instance trace. Note that it is possible that the instance trace already contains events involving activities that once were in the model, but no longer are. The execution rule allows this, as long as the new event that is being added to the trace refers to an activity that is in the model at the moment of execution.

Definition 4.4.4 (Execution rule [⟩) We define the execution relation [⟩ ⊆ U_ci × E × U_ci as the smallest relation satisfying, for all (σ, cm) ∈ U_ci and e ∈ E: (σ, cm)[e⟩ ⟹ (σ, cm) [e⟩ (σ + e, cm).
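Definition 4.4.3 suggests a direct "look-ahead" implementation: tentatively append the event and inspect the resulting state. A minimal Python sketch of this idea (our own illustration, with a single c5-like safety rule standing in for the full state function ω):

```python
def state(trace):
    # one safety rule in the spirit of constraint c5: perform X ray may
    # start only after check X ray risk has completed
    seen_risk = False
    for event in trace:
        if event == ("check X ray risk", "completed"):
            seen_risk = True
        elif event == ("perform X ray", "started") and not seen_risk:
            return "violated"
    return "satisfied"

def enabled(trace, activities, event):
    # Definition 4.4.3: the activity must be in the model and the
    # extended trace must not drive the instance to `violated`
    activity, _event_type = event
    return activity in activities and state(trace + [event]) != "violated"

A = {"examine patient", "check X ray risk", "perform X ray"}
t = [("examine patient", "completed")]
assert not enabled(t, A, ("perform X ray", "started"))    # risk not yet checked
t += [("check X ray risk", "completed")]
assert enabled(t, A, ("perform X ray", "started"))
assert not enabled(t, A, ("perform surgery", "started"))  # activity not in the model
```

The look-ahead never mutates the real trace: the candidate event is appended to a copy (trace + [event]), so a disabled event leaves the instance unchanged, exactly as the execution rule requires.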
Table 4.4 shows enabled and executed activities for one instance of the Fractures Treatment model cm_FT from Example 4.3.2. For simplicity, we assume that executions of activities do not overlap, i.e., each of the listed activities a ∈ A_FT in Table 4.4 refers to the execution of two successive events: (a, started) and (a, completed). (This assumption is made in order to simplify the examples of traces presented in Table 4.4; the same principles hold for traces that contain overlapping activities.) At every moment during the execution of the instance, some activities were enabled. For example, the first row in Table 4.4 shows that only events related to activity examine patient are enabled at the beginning of the execution. This is because the constraint c1 from the model cm_FT specifies that each instance has to start with examine patient. At this point, events involving other activities are disabled because adding them to the execution trace would put the instance in the state violated. The execution
rule (cf. Definition 4.4.4) makes sure that users cannot bring instances to the state violated, by only allowing the execution of enabled events. Table 4.4 shows that users can indeed execute only enabled events. At the beginning of the instance, examine patient was executed, then check X ray risk, etc.

Table 4.4: Enabled and executed events for instance ci = (σ, cm_FT), over the activities examine patient, apply cast, remove cast, perform reposition, perform surgery, prescribe sling, prescribe medication, check X ray risk, perform X ray and prescribe rehabilitation. The executed instance trace σ is: 1 examine patient, 2 check X ray risk, 3 prescribe medication, 4 perform X ray, 5 perform reposition, 6 prescribe medication, 7 check X ray risk, 8 perform X ray, 9 examine patient.

Consider the violated instance ci4 from Table 4.3. If users were to execute activities examine patient and perform X ray, this would bring the instance into the state violated because it would no longer be possible to satisfy the mandatory constraint c5 from the model cm_FT. The execution rule prevents this: the event (perform X ray, started) is not enabled after the activity examine patient is executed (cf. Table 4.4), so users are not able to execute activity perform X ray at this stage. Therefore, the violated instance (ci4) in Table 4.3 is an example of an instance of the Fractures Treatment model that cannot be executed in the declare prototype (cf. Chapter 6). Note that some constraints from the Fractures Treatment model are so-called safety constraints, i.e., they prevent users from executing certain events that would bring the instance to the violated state. For example, constraint c4 specifies that events (apply cast,t_s), (perform surgery,t_s) and (perform reposition,t_s) can occur only after event (perform X ray,t_c) has occurred.
A direct consequence of this is that events (apply cast,t_s), (perform surgery,t_s) and (perform reposition,t_s) are disabled before the execution of event (perform X ray,t_c), as shown in Table 4.4. In other words, it is not possible to execute activities apply cast, perform reposition and perform surgery until activity perform X ray is executed. Other types of constraints represent some expectation that has to be met in order to bring the instance to the state satisfied. For example, constraint c2 specifies that "At least one of the events (perform reposition,t_c), (apply cast,t_c), (prescribe sling,t_c) or (perform surgery,t_c) must occur". This type of constraint does not influence the enabled events, but it prevents closing an instance before the expectation is met, because the instance becomes satisfied only after the first occurrence of one of the four events.

An event is enabled even if adding the event to the trace violates an optional constraint, i.e., the execution rule allows triggering events that permanently violate optional constraints (cf. Definitions 4.4.3 and 4.4.4). Although optional constraints do not directly influence the enabled events of constraint instances, they play an important role in the execution of instances. Optional constraints represent light rules that are used as guidance for users, but users are not forced to follow them. In Chapter 6 we will present the declare prototype where, if the user is about to violate an optional constraint (by triggering an event), a special warning is issued and the user can choose whether to proceed (i.e., trigger the event and violate the optional constraint) or to abort (i.e., the event is not triggered and the optional constraint is not violated). In other words, optional constraints allow users to deviate from the constraint model and thus enable flexibility by deviation [ ] (cf. Section 3.2.4).

4.4.3 States of Constraints

Like the instance itself, the instance's constraints also change state during execution. Definition 4.4.5 specifies that, given an execution trace, a constraint can be in one of three states: satisfied, temporarily violated and violated.
During the execution of an instance, the state of each of its constraints can be presented to users as additional information that helps them understand the instance they are working on.

Definition (Constraint state ν). Let c ∈ C be a constraint and σ ∈ E* an execution trace. The function ν : E* × C → {satisfied, temporarily violated, violated} is defined as:

ν(σ,c) =
  satisfied,            if σ ∈ E_c;
  temporarily violated, if (σ ∉ E_c) ∧ (∃γ ∈ E* : σ + γ ∈ E_c);
  violated,             otherwise.

Table 4.5 shows the states of constraints during the execution of one instance of the Fractures Treatment model. At the beginning of the execution all constraints are in the state satisfied, except constraint c_2, which is temporarily violated because it specifies that at least one of the activities apply
cast, perform reposition, perform surgery or prescribe sling has to be executed. Indeed, constraint c_2 becomes satisfied only after the activity prescribe sling is executed. After the activity apply cast is executed, constraint c_3 becomes temporarily violated because this constraint requires that activity remove cast be executed after activity apply cast. Therefore, this constraint remains in the temporarily violated state until activity remove cast is executed. Execution of the activity perform surgery brings optional constraint c_6 into the state temporarily violated. This constraint remains temporarily violated because activity prescribe rehabilitation is never executed after activity perform surgery.

Table 4.5: Constraint states in an instance (σ, cm_FT) of the Fractures Treatment model ("sat" = satisfied, "tv" = temporarily violated)

    instance trace σ        | state | c_1 | c_2 | c_3 | c_4 | c_5 | c_6
    ------------------------+-------+-----+-----+-----+-----+-----+-----
    start                   |  tv   | sat | tv  | sat | sat | sat | sat
    1  examine patient      |  tv   | sat | tv  | sat | sat | sat | sat
    2  prescribe sling      |  sat  | sat | sat | sat | sat | sat | sat
    3  check X ray risk     |  sat  | sat | sat | sat | sat | sat | sat
    4  perform X ray        |  sat  | sat | sat | sat | sat | sat | sat
    5  perform surgery      |  sat  | sat | sat | sat | sat | sat | tv
    6  examine patient      |  sat  | sat | sat | sat | sat | sat | tv
    7  apply cast           |  tv   | sat | sat | tv  | sat | sat | tv
    8  prescribe medication |  tv   | sat | sat | tv  | sat | sat | tv
    9  remove cast          |  sat  | sat | sat | sat | sat | sat | tv
    10 examine patient      |  sat  | sat | sat | sat | sat | sat | tv

Table 4.5 shows that there is a relation between the state of the instance and the states of its mandatory constraints, and that optional constraint c_6 has no influence on the state of the instance. This is because the state of the instance depends only on the sets of satisfying traces of all mandatory constraints (cf. Definition 4.4.2).
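As a concrete illustration of the state function ν and of Table 4.5, the following Python sketch evaluates a single constraint's state over a trace. The counter-based encoding of c_3 and the bounded suffix search (a stand-in for the exact LTL/automata check of Chapter 5) are assumptions made purely for illustration:

```python
from itertools import product

def constraint_state(trace, ok, alphabet, horizon=4):
    """The three-valued state nu(sigma, c) of a single constraint.

    ok(trace) -> bool decides whether a trace belongs to E_c.  The
    existential quantifier over suffixes gamma is approximated by a
    bounded search up to `horizon` further events -- an assumption;
    the exact check would use automata over infinite suffixes.
    """
    if ok(trace):
        return "satisfied"
    for n in range(1, horizon + 1):
        for gamma in product(alphabet, repeat=n):
            if ok(trace + list(gamma)):
                return "temporarily violated"
    return "violated"

# Hypothetical encoding of constraint c_3 of the Fractures Treatment
# model: every completed "apply cast" must later be followed by a
# completed "remove cast".
def c3_ok(trace):
    pending = 0
    for event in trace:
        if event == ("apply cast", "completed"):
            pending += 1
        elif event == ("remove cast", "completed") and pending > 0:
            pending -= 1
    return pending == 0

alphabet = [("apply cast", "completed"), ("remove cast", "completed")]
print(constraint_state([("apply cast", "completed")], c3_ok, alphabet))
# -> temporarily violated (as for c_3 in row 7 of Table 4.5)
```

The bounded horizon is sound here only because c_3 can always be repaired within a few events; richer constraints would need the exact automata-based evaluation.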
In some cases the instance state can be determined based on the states of mandatory constraints: (1) all mandatory constraints are satisfied in a satisfied instance, and (2) if at least one of the mandatory constraints in an instance is violated, then the instance is also violated. In all other cases, as we will show later, the instance state cannot always be derived from the states of individual mandatory constraints. As specified in Definition 4.2.2, the set of traces that satisfy a constraint model is composed of traces that satisfy all constraints in the model. Compositionality in this context means that it is enough to prove that a trace satisfies each of the constraints in order to prove that the trace satisfies the model [73,205].
The first property below shows that, if an instance trace satisfies each of the mandatory constraints in the instance, then this trace also satisfies the instance and vice versa, i.e., if a trace satisfies an instance, then it also satisfies all mandatory constraints in the instance.

Property (All mandatory constraints are satisfied in a satisfied instance). Let ci ∈ U_ci be an instance where ci = (σ, (A, C_M, C_O)). Then (∀c ∈ C_M : ν(σ,c) = satisfied) ⇔ (ω(ci) = satisfied).

Proof. (∀c ∈ C_M : ν(σ,c) = satisfied) ⇔ (∀c ∈ C_M : σ ∈ E_c) ⇔ (σ ∈ ⋂_{c ∈ C_M} E_c) ⇔ (σ ∈ E_cm) ⇔ (ω(ci) = satisfied).

The second property shows that, if at least one of the mandatory constraints is violated, then the instance is also violated. The proof shows that, if the instance trace does not and cannot (in the future) satisfy at least one mandatory constraint in the instance, then this trace also does not and cannot (in the future) satisfy the instance.

Property (If at least one mandatory constraint in an instance is violated then the instance is violated). Let ci ∈ U_ci be an instance where ci = (σ, (A, C_M, C_O)). Then (∃c ∈ C_M : ν(σ,c) = violated) ⇒ (ω(ci) = violated).

Proof. (∃c ∈ C_M : ν(σ,c) = violated) ⇒ (∃c ∈ C_M : (σ ∉ E_c ∧ ¬∃γ ∈ E* : σ + γ ∈ E_c)) ⇒ ((σ ∉ ⋂_{c ∈ C_M} E_c) ∧ (¬∃γ ∈ E* : σ + γ ∈ ⋂_{c ∈ C_M} E_c)) ⇒ ((σ ∉ E_cm) ∧ (¬∃γ ∈ E* : σ + γ ∈ E_cm)) ⇒ ω(ci) = violated.

Note that it might be the case that, although none of the mandatory constraints in an instance is violated but some are temporarily violated, the instance itself is violated. The following example shows how an instance can be violated although none of its mandatory constraints is violated.

Example (A violated instance without violated constraints). Recall that t_s, t_c ∈ T are two event types such that t_s = started and t_c = completed.
Let ci ∈ U_ci, ci = (σ, cm) be a constraint instance where cm = (A, C_M, C_O) such that:

- A = {a_1, a_2} is a set of two activities a_1, a_2 ∈ A,
- C_M = {c_1, c_2} is a set of two mandatory constraints c_1, c_2 ∈ C such that c_1 = ({(a_1, t_c)}, "a_1 has to be completed exactly once") and c_2 = ({(a_1, t_c)}, "a_1 has to be completed exactly twice"),
- C_O = ∅ is an (empty) set of optional constraints, and
- σ = ⟨(a_1, t_s), (a_1, t_c)⟩ is the execution trace of the instance ci.
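The conflict in the example above can be reproduced mechanically. The sketch below uses a bounded suffix search as an approximation of the exact check (an assumption for illustration), and all helper names are hypothetical:

```python
from itertools import product

def count_a1_completed(trace):
    return sum(1 for e in trace if e == ("a1", "completed"))

c1 = lambda t: count_a1_completed(t) == 1   # "a1 completed exactly once"
c2 = lambda t: count_a1_completed(t) == 2   # "a1 completed exactly twice"

def can_reach(trace, ok, alphabet, horizon=4):
    """Is there a suffix gamma of up to `horizon` events with ok(trace + gamma)?"""
    return any(ok(trace + list(g))
               for n in range(horizon + 1)
               for g in product(alphabet, repeat=n))

sigma = [("a1", "started"), ("a1", "completed")]
alphabet = [("a1", "started"), ("a1", "completed")]

# c1 is satisfied and c2 is only temporarily violated ...
print(c1(sigma), can_reach(sigma, c2, alphabet))   # -> True True
# ... yet no single suffix satisfies both, so the instance is violated.
both = lambda t: c1(t) and c2(t)
print(can_reach(sigma, both, alphabet))            # -> False
```

This makes the non-compositionality concrete: the instance state cannot be read off the individual constraint states, because the suffixes that repair c_2 destroy c_1.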
The states of the constraints in instance ci from the example above (given the instance execution trace σ) are: ν(σ, c_1) = satisfied because σ ∈ E_{c_1}, and ν(σ, c_2) = temporarily violated because c_2 can still be satisfied by executing a_1 once more. Constraint c_1 is satisfied because trace σ satisfies c_1, and constraint c_2 is temporarily violated because trace σ does not satisfy c_2 but there is a suffix (e.g., ⟨(a_1, t_s), (a_1, t_c)⟩) that, when added to σ, satisfies c_2. Although c_1 is satisfied and c_2 is temporarily violated, there is no suffix that, when added to σ, satisfies both constraints c_1 and c_2, i.e., it is not possible to execute activity a_1 exactly once and exactly twice at the same time. This means that the instance is violated, i.e., ((σ ∉ E_cm) ∧ (¬∃γ ∈ E* : σ + γ ∈ E_cm)) ⇒ ω(ci) = violated. The instance presented in this example cannot be satisfied because the instance model contains an error. The error is caused by two constraints that can never be satisfied at the same time, i.e., there is no trace that satisfies both constraints. This type of error can occur in constraint models and is called a conflict. Verification of constraint models and how to detect errors like conflicts will be described in Section 4.6. The formal language for constraint specification that will be presented in Chapter 5 enables execution of constraint instances because it is able to (1) determine the state of an instance (cf. Definition 4.4.2) and (2) determine which events are enabled (cf. Definition 4.4.3). Moreover, it is possible to monitor the state of each constraint in the model.
Because the declare prototype uses this language for constraint specification, it enables execution of constraint models, as will be described in Chapter 6.

4.5 Ad-hoc Instance Change

In some cases it is necessary to change the model of an instance (i.e., to add and remove activities and mandatory and optional constraints) although the instance is already executing and the instance trace might not be empty. We refer to such a change as an ad-hoc instance change, or simply instance change. Workflow management systems that support ad-hoc change are called adaptive systems. For example, ADEPT [189, 202] is a workflow management system that uses powerful mechanisms to support ad-hoc change of procedural process models by allowing activities to be added to, removed from, and moved within the control-flow at run-time (cf. Section 2.2). On the one hand, as discussed in Section 2.1.5, various problems can occur when it comes to ad-hoc change of imperative process models (e.g., the dynamic change bug [101] and other problems described in [201]). On the other hand, the constraint-based approach offers a simple method for ad-hoc
change that is based on a single requirement: the instance should not become violated after the change. An instance of a constraint model refers to one execution of this model, i.e., the instance assigns one execution trace to the model. Because in ad-hoc change the trace remains the same while the model changes, it might happen that the instance state changes according to the new model (cf. Definition 4.4.2). For example, it is possible that a satisfied instance would become violated if a mandatory constraint were added to the instance, which is an undesired state of constraint instances (cf. Definition 4.4.4). Therefore, an instance change can be applied (i.e., is successful) if and only if the change does not bring the instance into the violated state. After a successful change, the instance continues execution with the original trace. In the definition below, ad-hoc instance change is defined as a function Δ that changes the instance model and assigns the changed model to the instance without changing the instance trace, if and only if this does not bring the instance into the state violated.

Definition (Ad-hoc instance change Δ). Let Δ : U_ci × U_cm → U_ci be a partial function with domain dom(Δ) = {((σ,cm), cm′) ∈ U_ci × U_cm | ω((σ,cm′)) ≠ violated}. For all ((σ,cm), cm′) ∈ dom(Δ) it holds that Δ((σ,cm), cm′) = (σ, cm′).

Figure 4.4 shows examples of a successful and an unsuccessful ad-hoc instance change. Originally, there are four instances in Figure 4.4(a): instances ci_1 = (σ_1, cm_1), ci_2 = (σ_2, cm_1), ci_3 = (σ_3, cm_3), and ci_4 = (σ_4, cm_4). The state of every instance is represented by a special line: instances ci_1, ci_2 and ci_4 are satisfied and instance ci_3 is temporarily violated. Figure 4.4(b) shows that it is possible to change the model of instance ci_2 to model cm_2, because ω((σ_2, cm_2)) = temporarily violated.
On the other hand, Figure 4.4(c) shows that it is not possible to change the model of instance ci_2 to model cm_3 (indicated with the Ø symbol on the arrow), because ω((σ_2, cm_3)) = violated. Consider, for example, an instance ci = (σ, cm_FT) of the Fractures Treatment model cm_FT where the patient was first examined, then the sling was prescribed and finally rehabilitation was prescribed. The state of this instance is ω(ci) = satisfied because the trace satisfies all mandatory constraints, i.e., σ ∈ E_{cm_FT}. Assume now that users want to add a mandatory constraint c ∈ C to instance ci where c = (E, f) such that E = {(prescribe sling, t_c), (perform X ray, t_c)} and f = "cannot complete prescribe sling before completing perform X ray". This cannot be done because adding c as a mandatory constraint to the instance would bring the instance into the state violated, i.e., the sling was already prescribed before perform X ray was completed. On the other hand, adding activity consult external to the instance is allowed because it leaves the instance in the satisfied state. Ad-hoc adding or removing activities and optional constraints to a satisfied or temporarily violated instance will always be successful, i.e., this change always
results in a changed instance, as shown in the property below. This is because these types of model changes do not influence the set of traces that satisfy the model (cf. Property 4.2.4) and, therefore, do not change the state of the instance (cf. Definition 4.4.2). On the one hand, the only effect of adding an activity to an instance model is that this activity becomes available for execution (cf. Definition 4.4.4). On the other hand, after an activity has been removed from an instance, users can no longer execute events involving this activity, but the activity might still be in the instance trace if it was executed prior to the change.

Figure 4.4: Instance change. (a) Four constraint instances ci_1, ci_2, ci_3 and ci_4 with their traces σ_1, ..., σ_4 and models cm_1, ..., cm_4; the legend distinguishes satisfied, temporarily violated and violated instances. (b) A successful instance change. (c) An unsuccessful instance change.

Property (Ad-hoc adding and removing activities and optional constraints cannot cause failure of Δ). Let ci ∈ U_ci be a constraint instance such that ω(ci) ≠ violated where ci = (σ, (A, C_M, C_O)) and let cm′ ∈ U_cm be a constraint model where cm′ = (A′, C_M, C_O′); then it holds that Δ(ci, cm′) = (σ, cm′), i.e., the change is successful.

Proof. It holds that E_cm = E_cm′ (cf. Property 4.2.4). Therefore, it also holds that ω(ci) = ω((σ, cm′)) (cf. Definition 4.4.2) and (ω(ci) ≠ violated) ⇒ (ω((σ, cm′)) ≠ violated), i.e., (ci, cm′) ∈ dom(Δ) and Δ(ci, cm′) = (σ, cm′).

Since mandatory constraints influence the set of traces that satisfy the model (cf. Property 4.2.5) and the state of an instance (cf. Definition 4.4.2), adding and removing these constraints can have an effect on the ad-hoc instance change.
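The acceptance test behind ad-hoc instance change (reject the new model exactly when it would leave the instance violated) can be sketched as follows. The names `instance_state` and `adhoc_change`, the constraint encoding, and the bounded suffix search are all illustrative assumptions, not the prototype's actual implementation:

```python
from itertools import product

def instance_state(trace, mandatory, alphabet, horizon=3):
    """omega: satisfied / temporarily violated / violated (bounded search)."""
    sat = lambda t: all(ok(t) for ok in mandatory)
    if sat(trace):
        return "satisfied"
    if any(sat(trace + list(g))
           for n in range(1, horizon + 1)
           for g in product(alphabet, repeat=n)):
        return "temporarily violated"
    return "violated"

def adhoc_change(instance, new_mandatory, alphabet):
    """Replace the instance model only if this does not violate the instance."""
    trace, _old_model = instance
    if instance_state(trace, new_mandatory, alphabet) == "violated":
        return None          # change rejected; instance keeps its old model
    return (trace, new_mandatory)

# Hypothetical scenario from the text: the sling was already prescribed,
# and a new constraint forbids completing it before perform X ray.
trace = [("prescribe sling", "completed")]
alphabet = [("prescribe sling", "completed"), ("perform X ray", "completed")]
c_new = lambda t: not any(
    e == ("prescribe sling", "completed")
    and ("perform X ray", "completed") not in t[:i]
    for i, e in enumerate(t))
print(adhoc_change((trace, []), [c_new], alphabet))  # -> None (change rejected)
```

The rejection happens because no future suffix can change the already-recorded prefix of the trace, which is exactly why the constraint is permanently violated.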
A mandatory constraint can always be successfully removed from a satisfied or temporarily violated instance, because this type of ad-hoc change will never cause a violated state of the instance, as the following property shows.
Property (Ad-hoc removing mandatory constraints cannot cause failure of Δ). Let ci ∈ U_ci be a constraint instance such that ω(ci) ≠ violated where ci = (σ, (A, C_M, C_O)) and let cm′ ∈ U_cm be a constraint model where cm′ = (A, C_M′, C_O) such that C_M′ ⊆ C_M; then it holds that Δ(ci, cm′) = (σ, cm′), i.e., the change is successful.

Proof. Because ω((σ,cm)) ≠ violated it holds that (σ ∈ E_cm) ∨ (∃γ ∈ E* : σ + γ ∈ E_cm) (cf. Definition 4.4.2). Because E_cm ⊆ E_cm′ (cf. Property 4.2.5), it also holds that ((σ ∈ E_cm) ∨ (∃γ ∈ E* : σ + γ ∈ E_cm)) ⇒ ((σ ∈ E_cm′) ∨ (∃γ ∈ E* : σ + γ ∈ E_cm′)), i.e., it holds that ω((σ, cm′)) ≠ violated. Therefore, it holds that (ci, cm′) ∈ dom(Δ) and Δ(ci, cm′) = (σ, cm′).

As we showed in this section, the only requirement of ad-hoc change in constraint models is that the instance does not become violated after the change. In Chapter 5 we will show a formal language for constraint specification that offers a simple method for determining the state of a changed instance. Using this language, the declare prototype supports ad-hoc change by accepting it if the new state of the instance is not violated. If it is, the prototype reports this error and the instance continues its execution based on the original model. Ad-hoc change in declare will be described in Chapter 6.

4.6 Verification of Constraint Models

Verification of process models enables automated detection of possible errors in models [89]. Many verification techniques for identifying possible errors in procedural process models aim at detecting syntactical errors (e.g., deadlock and livelock) [221] and semantical errors (e.g., "can a process always lead to an acceptable final state?") [89]. On the one hand, verification of models specified in languages with a formal mathematical definition (e.g., Petri nets [87,177,199]) classifies the model as correct or incorrect [19,31,83,89,124,254,255].
On the other hand, various verification techniques have been proposed for informal process modeling languages that use graph reduction [30,86,161,170,177,222,223,268] or translation of an informal model to a formal one, which is then verified [17,82,88,157,170]. Just like procedural models, constraint models can contain errors. In this section we describe verification of constraint models. While executing an instance of a constraint model cm = (A, C_M, C_O), users execute activities a ∈ A by triggering events e ∈ A × T on these activities (cf. the execution rule in Definition 4.4.4). During the execution the instance can change state depending on the events that were triggered. At the end of the execution, an instance must be satisfied, in order to satisfy all mandatory constraints (cf. Section 4.4.1). Certain combinations of constraints can cause two types of errors in constraint models that affect later executions of model instances: (1) dead events and (2)
conflicts. An event is dead in a model if none of the traces that satisfy the model contains this event, i.e., a dead event cannot be executed in any instance of the model. A model contains a conflict if there is no trace that satisfies the model, i.e., instances of the model can never be satisfied.

Dead Events

An event is dead in a model if none of the traces that satisfy the model contains this event, as specified in the definition below. In other words, if a dead event were triggered and added to the execution trace of a model instance, the instance would become violated. For example, it is possible that, due to the combination of two mandatory constraints in model cm = (A, C_M, C_O), event (a, started) (where a ∈ A) is dead in the model, i.e., none of the traces that satisfy the model contains event (a, started). The consequence of event (a, started) being dead is that, during the execution of instances of this model, event (a, started) will never be enabled (cf. Definition 4.4.3) and it will never be possible to start (and execute) activity a from the model, because adding this event to the instance trace would cause the instance to become violated (cf. Definition 4.4.4). Although users can execute instances of the model in a way that satisfies them by never executing activity a, it is important to verify the model against this error and to issue a warning that, although activity a is in the model, it will never be possible to execute it.

Definition (Dead event). Let cm ∈ U_cm be a constraint model. Event e ∈ E is dead in model cm if and only if ∀σ ∈ E_cm : e ∉ σ. The set of dead events of model cm is defined as Π_DE(cm) = {e ∈ E | ∀σ ∈ E_cm : e ∉ σ}.

The composition of all mandatory constraints in the model determines which traces satisfy the model (cf. Definition 4.2.2). Therefore, a combination of mandatory constraints may cause an event to be dead.
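Π_DE can be approximated by brute force, enumerating the satisfying traces up to a bounded length. This sketch is only an illustration, and is exact only up to the chosen horizon; Chapter 5 computes dead events exactly via the LTL-based language:

```python
from itertools import product

def dead_events(mandatory, alphabet, horizon=3):
    """Approximate Pi_DE(cm) using satisfying traces of length <= horizon."""
    sat_traces = [list(t)
                  for n in range(horizon + 1)
                  for t in product(alphabet, repeat=n)
                  if all(ok(list(t)) for ok in mandatory)]
    # An event is dead if it occurs in no satisfying trace.
    return {e for e in alphabet
            if all(e not in t for t in sat_traces)}

# The cm_R example below: c2 requires "become holy", c3 excludes it
# with "curse" (constraint encodings are assumptions for illustration).
bh, cu = ("become holy", "completed"), ("curse", "completed")
c2 = lambda t: bh in t                    # must complete become holy
c3 = lambda t: not (bh in t and cu in t)  # curse and become holy exclude each other
print(dead_events([c2, c3], [bh, cu]))    # -> {('curse', 'completed')}
```

Note that if no trace satisfies the model at all, every event is reported dead, which anticipates the property about conflicts given later in this section.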
The smallest subset of mandatory constraints for which an event is dead is called the cause of the dead event.

Definition 4.6.2 (Cause of dead event). Let cm ∈ U_cm be a constraint model where cm = (A, C_M, C_O) and let e ∈ Π_DE(cm) be a dead event in cm. The set of constraints C ⊆ C_M is a cause of the dead event e if and only if it holds that (e ∈ Π_DE((A, C, C_O))) ∧ (∀C′ ⊂ C : e ∉ Π_DE((A, C′, C_O))). We use C_e^dead(cm) to denote the set of all causes of dead event e in model cm, i.e., C_e^dead(cm) = {C ⊆ C_M | (e ∈ Π_DE((A, C, C_O))) ∧ (∀C′ ⊂ C : e ∉ Π_DE((A, C′, C_O)))}.

Consider the constraint model cm_R from Example 4.6.3 below, containing four activities and three constraints. Event (curse, completed) is dead in constraint
model cm_R. Due to constraint c_2, all traces that satisfy model cm_R must contain event (become holy, completed). On the other hand, due to constraint c_3, traces that satisfy model cm_R and contain event (become holy, completed) cannot contain event (curse, completed). Therefore, due to the combination of constraints c_2 and c_3, event (curse, completed) is a dead event in this model, i.e., (curse, completed) ∈ Π_DE(cm_R), and constraints {c_2, c_3} are the only cause of this dead event (i.e., C_e^dead(cm_R) = {{c_2, c_3}}).

Example 4.6.3 (A constraint model with a dead event). Recall that t_c ∈ T is an event type where t_c = completed. Let cm_R ∈ U_cm, cm_R = (A_R, C_M^R, C_O^R) be a constraint model such that:

- A_R = {pray, curse, bless, become holy} is a set of activities,
- C_M^R = {c_1, c_2, c_3} is a set of mandatory constraints, and
- C_O^R = ∅ is an (empty) set of optional constraints.

If there is a dead event in a model, then there must be at least one cause of the dead event, i.e., there must be at least one combination of mandatory constraints that causes this event to be dead. A dead event can be removed from a model if all sets that cause the dead event are removed from the model. In other words, it is necessary to remove at least one constraint from each of the sets that cause the dead event in order for the dead event to become alive again.

Property (A dead event is removed from a constraint model if and only if at least one mandatory constraint is removed from each of the constraint sets that cause the dead event). Let cm ∈ U_cm be a constraint model where cm = (A, C_M, C_O) and let e ∈ E be an event such that e ∈ Π_DE(cm). Let cm′ ∈ U_cm be a constraint model where cm′ = (A, C_M′, C_O) such that C_M′ ⊆ C_M; then it holds that (∀C ∈ C_e^dead(cm) : C ⊈ C_M′) ⇔ e ∉ Π_DE(cm′).

Proof. First, we prove that (∀C ∈ C_e^dead(cm) : C ⊈ C_M′) ⇒ e ∉ Π_DE(cm′). We prove this by showing that it cannot hold that
(∀C ∈ C_e^dead(cm) : C ⊈ C_M′) ∧ e ∈ Π_DE(cm′). If e ∈ Π_DE(cm′), then ∃C ∈ C_e^dead(cm′) : C ⊆ C_M′. Because ∃C ∈ C_e^dead(cm′) : C ⊆ C_M′ (cf. Definition 4.6.2) and C_M′ ⊆ C_M, it also holds that ∃C ∈ C_e^dead(cm′) : C ⊆ C_M and, therefore, ∃C ∈ C_e^dead(cm′) : C ∈ C_e^dead(cm). In other words, if there is a cause C ⊆ C_M′ of dead event e in model cm′, then this cause also exists in the original model cm. Therefore, it holds that ∃C ∈ C_e^dead(cm) : C ⊆ C_M′, which contradicts the statement ∀C ∈ C_e^dead(cm) : C ⊈ C_M′, and hence it cannot hold that (∀C ∈ C_e^dead(cm) : C ⊈ C_M′) ∧ e ∈ Π_DE(cm′).

Second, we prove that (∃C ∈ C_e^dead(cm) : C ⊆ C_M′) ⇒ e ∈ Π_DE(cm′). If ∃C ∈ C_e^dead(cm) : C ⊆ C_M′, then ∃C ⊆ C_M′ : e ∈ Π_DE((A, C, C_O)), i.e., ∃C ⊆ C_M′ : ∀σ ∈ E_(A,C,C_O) : e ∉ σ. Because C ⊆ C_M′, it holds that E_(A,C_M′,C_O) ⊆ E_(A,C,C_O) (cf. Property 4.2.5) and, therefore, ∀σ ∈ E_(A,C_M′,C_O) : e ∉ σ, i.e., e is dead in cm′ (i.e., e ∈ Π_DE(cm′)).

A formal language for constraint specification will be presented in Chapter 5. This LTL-based language enables a simple method for verification of constraint models against dead events. This method is used in the declare prototype (presented in Chapter 6) to verify constraint models against dead events and to detect so-called dead activities. When verifying a model cm = (A, C_M, C_O), declare tests for each model activity a ∈ A whether events (a, started) and (a, completed) are dead. If at least one of these events is dead, the dead activity verification error is reported, together with the smallest set(s) of constraints that cause it. In this way, dead events can easily be detected and eliminated in declare by removing at least one constraint from each of the sets of constraints that cause this error.

Conflicts

If there exists no trace that satisfies a constraint model, then this model has a conflict.
Unlike models with dead activities, models with conflicts are not executable because they can never be satisfied by any trace, i.e., instances of models with conflicts are always violated.

Definition (Conflict). Model cm ∈ U_cm has a conflict if and only if E_cm = ∅.

The composition of all mandatory constraints in a model determines which traces satisfy the model (cf. Definition 4.2.2). A certain combination of mandatory constraints may cause a conflict in the model. The smallest subset of mandatory constraints that contains a conflict is called the cause of the conflict.

Definition 4.6.6 (Cause of conflict). Let cm ∈ U_cm be a constraint model with a conflict, where cm = (A, C_M, C_O).
The set of constraints C ⊆ C_M is a cause of the conflict if and only if it holds that (E_(A,C,C_O) = ∅) ∧ (∀C′ ⊂ C : E_(A,C′,C_O) ≠ ∅). We use C^conf(cm) to denote the set of all causes of conflict in model cm, i.e., C^conf(cm) = {C ⊆ C_M | (E_(A,C,C_O) = ∅) ∧ (∀C′ ⊂ C : E_(A,C′,C_O) ≠ ∅)}.

Constraint model cm_R from the example below has a conflict. Due to constraints c_2 and c_4, all traces that satisfy model cm_R must contain events (become holy, completed) and (curse, completed), respectively. On the other hand, due to constraint c_3, no trace that satisfies model cm_R may contain both events (become holy, completed) and (curse, completed). Therefore, due to the combination of constraints c_2, c_3 and c_4, this model has a conflict, i.e., E_cm_R = ∅, and constraints {c_2, c_3, c_4} are the cause of this conflict (C^conf(cm_R) = {{c_2, c_3, c_4}}).

Example (A constraint model with a conflict). Recall that t_c ∈ T is an event type where t_c = completed. (For simplicity, we assume in this example that an activity is successfully executed when an event of the type completed is triggered on it.) Let cm_R ∈ U_cm, cm_R = (A_R, C_M^R, C_O^R) be a constraint model such that:

- A_R = {pray, curse, bless, become holy} is a set of activities,
- C_M^R = {c_1, c_2, c_3, c_4} is a set of mandatory constraints, where c_4 = (E_4, f_4) such that E_4 = {(curse, t_c)} and f_4 = "must complete at least one occurrence of activity curse", and
- C_O^R = ∅ is an (empty) set of optional constraints.

If a model has a conflict, then there must be at least one cause of the conflict, i.e., there must be at least one combination of mandatory constraints that causes the conflict. A conflict can be removed from a model if all sets that cause the conflict are removed from the model, i.e., it is necessary to remove at least one constraint from each of the sets that cause the conflict in order to remove the conflict from the model.
Property (A conflict is removed from a constraint model if and only if at least one mandatory constraint is removed from each of the constraint sets that cause the conflict). Let cm ∈ U_cm be a constraint model where cm = (A, C_M, C_O) such that E_cm = ∅. Let cm′ ∈ U_cm be a constraint model where cm′ = (A, C_M′, C_O) such that C_M′ ⊆ C_M; then it holds that (∀C ∈ C^conf(cm) : C ⊈ C_M′) ⇔ E_cm′ ≠ ∅.

Proof. First, we prove that (∀C ∈ C^conf(cm) : C ⊈ C_M′) ⇒ E_cm′ ≠ ∅. We prove that E_cm′ ≠ ∅ by showing that it cannot hold that (∀C ∈ C^conf(cm) : C ⊈ C_M′) ∧ E_cm′ = ∅. If E_cm′ = ∅, then ∃C ∈ C^conf(cm′) : C ⊆ C_M′. Because ∃C ∈ C^conf(cm′) : C ⊆ C_M′ (cf. Definition 4.6.6) and C_M′ ⊆ C_M, it also holds that ∃C ∈ C^conf(cm′) : C ⊆ C_M and, therefore, ∃C ∈ C^conf(cm′) : C ∈ C^conf(cm). In other words, if there is a cause C ⊆ C_M′ of the conflict in model cm′, then this cause also exists in the original model cm. Therefore, it holds that ∃C ∈ C^conf(cm) : C ⊆ C_M′, which contradicts the statement ∀C ∈ C^conf(cm) : C ⊈ C_M′, and hence it cannot hold that (∀C ∈ C^conf(cm) : C ⊈ C_M′) ∧ E_cm′ = ∅.

Second, we prove that (∃C ∈ C^conf(cm) : C ⊆ C_M′) ⇒ E_cm′ = ∅. If ∃C ∈ C^conf(cm) : C ⊆ C_M′, then ∃C ⊆ C_M′ : E_(A,C,C_O) = ∅. Because C ⊆ C_M′, it holds that E_(A,C_M′,C_O) ⊆ E_(A,C,C_O) (cf. Property 4.2.5) and, therefore, E_cm′ = ∅.

If a model contains a conflict, then its set of satisfying traces is empty and therefore all events are dead in the model.

Property (All events are dead in a model with a conflict). Let cm ∈ U_cm be a constraint model such that E_cm = ∅; then it holds that Π_DE(cm) = E.

Proof. If E_cm = ∅, then ∀e ∈ E : ∀σ ∈ E_cm : e ∉ σ holds vacuously, i.e., Π_DE(cm) = E.

A constraint model cm ∈ U_cm does not necessarily have a conflict even if all events are dead in cm because, depending on the model, the empty trace could satisfy the model.
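Both conflict detection and the search for minimal causes can be sketched with the same bounded enumeration used earlier. `has_conflict` and `conflict_causes` are hypothetical helpers, and the bounded horizon means a reported conflict is exact only for models whose shortest satisfying trace fits within it:

```python
from itertools import product, combinations

def has_conflict(mandatory, alphabet, horizon=3):
    """E_cm is empty, approximated over traces of length <= horizon."""
    return not any(all(ok(list(t)) for ok in mandatory)
                   for n in range(horizon + 1)
                   for t in product(alphabet, repeat=n))

def conflict_causes(mandatory, alphabet, horizon=3):
    """Minimal index sets of constraints that already conflict on their own."""
    causes = []
    for k in range(1, len(mandatory) + 1):
        for subset in combinations(range(len(mandatory)), k):
            if any(set(c) <= set(subset) for c in causes):
                continue  # a smaller cause is already contained in this subset
            if has_conflict([mandatory[i] for i in subset], alphabet, horizon):
                causes.append(subset)
    return causes

# The conflicting cm_R example: c2 and c4 demand events that c3 excludes.
bh, cu, pr = ("become holy", "completed"), ("curse", "completed"), ("pray", "completed")
c1 = lambda t: all(pr in t[i + 1:] for i, e in enumerate(t) if e == cu)
c2 = lambda t: bh in t
c3 = lambda t: not (bh in t and cu in t)
c4 = lambda t: cu in t
print(conflict_causes([c1, c2, c3, c4], [bh, cu, pr]))  # -> [(1, 2, 3)]
```

The reported index set (1, 2, 3) corresponds to {c_2, c_3, c_4}, matching the cause of the conflict described in the text.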
In Chapter 5 we will show a formal language for constraint specification with a simple method for detecting conflicts. This method is also used in the declare prototype to verify constraint models against conflicts and to detect the smallest set(s) of constraints that cause this error, which enables easy detection and elimination of conflicts.

Compatibility of Models

In some cases it is necessary to check whether two or more processes are compatible with each other. This can be checked by first merging the process (constraint)
models and then verifying the merged model for errors. Constraint models can be merged to create a new constraint model containing all activities and constraints from the two or more original models.

Definition (Merging constraint models ⊕). Let cm_1, cm_2, ..., cm_n ∈ U_cm be n constraint models, where cm_i = (A_i, C_M^i, C_O^i) for 1 ≤ i ≤ n. The merged model of cm_1, cm_2, ..., cm_n is a constraint model cm = cm_1 ⊕ cm_2 ⊕ ... ⊕ cm_n where cm = (A, C_M, C_O) such that A = A_1 ∪ A_2 ∪ ... ∪ A_n, C_M = C_M^1 ∪ C_M^2 ∪ ... ∪ C_M^n and C_O = C_O^1 ∪ C_O^2 ∪ ... ∪ C_O^n. Note that by definition E_cm = E_cm_1 ∩ E_cm_2 ∩ ... ∩ E_cm_n.

The example below shows three constraint models cm_1, cm_2 and cm_3 and how these models can be merged into cm_1 ⊕ cm_2, cm_1 ⊕ cm_3, cm_2 ⊕ cm_3 and cm_1 ⊕ cm_2 ⊕ cm_3.

Example (Merging three constraint models). Recall that t_c ∈ T is an event type where t_c = completed. Let cm_1, cm_2, cm_3 ∈ U_cm be three constraint models where:

cm_1 = (A_1, C_M^1, C_O^1) such that:
- A_1 = {pray, curse, bless, become holy} is a set of activities,
- C_M^1 = {c_1, c_2} is a set of mandatory constraints, where c_1 = (E_1, f_1) such that E_1 = {(curse, t_c), (pray, t_c)} and f_1 = "must complete at least one occurrence of activity pray after every completed occurrence of activity curse", and c_2 = (E_2, f_2) such that E_2 = {(become holy, t_c)} and f_2 = "must complete at least one occurrence of activity become holy", and
- C_O^1 = ∅ is an (empty) set of optional constraints.

cm_2 = (A_2, C_M^2, C_O^2) such that:
- A_2 = {curse, become holy} is a set of activities,
- C_M^2 = {c_3} is a set of mandatory constraints, where c_3 = (E_3, f_3) such that E_3 = {(curse, t_c), (become holy, t_c)} and f_3 = "if a completed occurrence of activity curse occurs then no occurrence of activity become holy can be completed, and if a completed occurrence of activity become holy occurs then no occurrence of activity curse can be completed", and
- C_O^2 = ∅ is an (empty) set of optional constraints.
cm_3 = (A_3, C_M^3, C_O^3) such that:
- A_3 = {curse} is a set of activities,
- C_M^3 = {c_4} is a set of mandatory constraints, where c_4 = (E_4, f_4) such that E_4 = {(curse, t_c)} and f_4 = "must complete at least one occurrence of activity curse", and
- C_O^3 = ∅ is an (empty) set of optional constraints.
Chapter 4 Constraint-Based Approach

Merging models cm_1 and cm_2 yields the constraint model cm_{1⊎2} = cm_1 ⊎ cm_2 where cm_{1⊎2} = (A_{1⊎2}, C_M^{1⊎2}, C_O^{1⊎2}) such that:
- A_{1⊎2} = A_1 ∪ A_2 = {pray, curse, bless, become holy},
- C_M^{1⊎2} = C_M^1 ∪ C_M^2 = {c_1, c_2, c_3}, and
- C_O^{1⊎2} = C_O^1 ∪ C_O^2 = ∅.

Merging models cm_1 and cm_3 yields the constraint model cm_{1⊎3} = cm_1 ⊎ cm_3 where cm_{1⊎3} = (A_{1⊎3}, C_M^{1⊎3}, C_O^{1⊎3}) such that:
- A_{1⊎3} = A_1 ∪ A_3 = {pray, curse, bless, become holy},
- C_M^{1⊎3} = C_M^1 ∪ C_M^3 = {c_1, c_2, c_4}, and
- C_O^{1⊎3} = C_O^1 ∪ C_O^3 = ∅.

Merging models cm_2 and cm_3 yields the constraint model cm_{2⊎3} = cm_2 ⊎ cm_3 where cm_{2⊎3} = (A_{2⊎3}, C_M^{2⊎3}, C_O^{2⊎3}) such that:
- A_{2⊎3} = A_2 ∪ A_3 = {curse, become holy},
- C_M^{2⊎3} = C_M^2 ∪ C_M^3 = {c_3, c_4}, and
- C_O^{2⊎3} = C_O^2 ∪ C_O^3 = ∅.

Merging models cm_1, cm_2 and cm_3 yields the constraint model cm_{1⊎2⊎3} = cm_1 ⊎ cm_2 ⊎ cm_3 where cm_{1⊎2⊎3} = (A_{1⊎2⊎3}, C_M^{1⊎2⊎3}, C_O^{1⊎2⊎3}) such that:
- A_{1⊎2⊎3} = A_1 ∪ A_2 ∪ A_3 = {pray, curse, bless, become holy},
- C_M^{1⊎2⊎3} = C_M^1 ∪ C_M^2 ∪ C_M^3 = {c_1, c_2, c_3, c_4}, and
- C_O^{1⊎2⊎3} = C_O^1 ∪ C_O^2 ∪ C_O^3 = ∅.

Verification of a merged model can detect that the merging causes some events to be dead. This type of incompatibility indicates that, even though an event can be executed in the individual models, the event would be dead in the model obtained by merging the two models (i.e., processes).

Definition (Dead event incompatibility) Let cm_1, cm_2, ..., cm_n ∈ U_cm be n constraint models and let e ∈ E be an event that is not dead in any of these models, i.e., ∀ 1≤i≤n : e ∉ Π_DE(cm_i). If e ∈ Π_DE(cm_1 ⊎ cm_2 ⊎ ... ⊎ cm_n), then we say that e is dead due to the incompatibility of models cm_1, cm_2, ..., cm_n.

Consider the original models cm_1, cm_2 and cm_3 from the example above. There are no dead events in the original models, i.e., ∀ 1≤i≤3 : Π_DE(cm_i) = ∅.
However, event (curse,completed) is dead in the merged model cm_{1⊎2} = cm_1 ⊎ cm_2, i.e., Π_DE(cm_{1⊎2}) = {(curse,completed)}, and the cause of this dead event is the constraint set C_dead^e(cm_{1⊎2}) = {{c_2, c_3}} (note that the model cm_{1⊎2} is the same as the model from Example 4.6.3). Therefore, event (curse,completed) is dead due to the incompatibility of models cm_1 and cm_2.
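The merge-then-verify procedure above can be sketched in a few lines: models are merged by componentwise set union, and an event is dead due to incompatibility when it is dead in the merged model but in none of the individual models. The following is a hypothetical Python sketch; the tuple representation and the `dead` oracle (standing in for the dead-event verification function, which the thesis computes from the models' semantics) are illustrative assumptions, not the thesis' actual data structures.

```python
def merge(*models):
    """Merge constraint models (A, C_M, C_O) by componentwise set union."""
    A, CM, CO = set(), set(), set()
    for a, cm, co in models:
        A |= a
        CM |= cm
        CO |= co
    return (A, CM, CO)

def dead_due_to_incompatibility(models, dead):
    """Events that are dead in the merged model but in no individual model.

    `dead` is an oracle mapping a model to its set of dead events."""
    dead_individually = set().union(*(dead(m) for m in models))
    return {e for e in dead(merge(*models)) if e not in dead_individually}

# Models from the example, with constraints kept as opaque labels.
cm1 = ({"pray", "curse", "bless", "become holy"}, {"c1", "c2"}, set())
cm2 = ({"curse", "become holy"}, {"c3"}, set())

# Toy oracle mirroring the example: c2 and c3 together kill (curse, completed).
def dead(model):
    _, CM, _ = model
    return {("curse", "completed")} if {"c2", "c3"} <= CM else set()

print(dead_due_to_incompatibility([cm1, cm2], dead))
# -> {('curse', 'completed')}
```

The oracle returns the empty set for cm_1 and cm_2 alone, so the dead event in the merged model is attributed to their incompatibility, as in the example.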
An even more problematic type of incompatibility is conflict incompatibility, where the merged processes are fully incompatible with each other.

Definition (Conflict incompatibility) Let cm_1, cm_2, ..., cm_n ∈ U_cm be n constraint models without conflicts, i.e., ∀ 1≤i≤n : E_{cm_i} ≠ ∅. If E_{cm_1⊎cm_2⊎...⊎cm_n} = ∅, then we say that cm_1 ⊎ cm_2 ⊎ ... ⊎ cm_n has a conflict due to the incompatibility of cm_1, cm_2, ..., cm_n.

Consider the original models cm_1, cm_2 and cm_3 from the example above. There are no conflicts in any of the original models, i.e., ∀ 1≤i≤3 : E_{cm_i} ≠ ∅. However, the merged model cm_{1⊎2⊎3} = cm_1 ⊎ cm_2 ⊎ cm_3 has a conflict, i.e., E_{cm_{1⊎2⊎3}} = ∅, and the cause of this conflict is the constraint set C_conf(cm_{1⊎2⊎3}) = {{c_2, c_3, c_4}} (note that the model cm_{1⊎2⊎3} is the same as the model from Example 4.6.7). Therefore, the merged model cm_{1⊎2⊎3} has a conflict due to the full incompatibility of models cm_1, cm_2 and cm_3.

4.7 Summary

In this chapter we have presented a formal foundation for a constraint-based approach to process models. The approach is based on models that consist of activities, optional constraints and mandatory constraints. Mandatory constraints in the model determine the possible executions of the model's instances, i.e., they determine the set of satisfying traces for the model⁵. The more mandatory constraints the model has, the fewer possible executions there are likely to be; the fewer mandatory constraints the model has, the more possible executions there are likely to be. As an extreme example, any execution is possible if a model does not have any mandatory constraints.

People execute instances of constraint models by triggering events on activities from the instances (e.g., started, completed, etc.). Events that users trigger by executing activities are added to the instance trace. Each change of the instance trace causes the instance to change state: given the instance trace, the instance can be satisfied, temporarily violated or violated.
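The instance states just described (satisfied, temporarily violated, violated) can be illustrated with a brute-force sketch: a trace that currently violates some mandatory constraint is only temporarily violated if some finite extension of it can still satisfy every constraint. This is a hypothetical Python illustration with a bounded search horizon; the thesis' approach evaluates states from the constraint semantics rather than by enumeration, and the constraint predicate below is a simplified stand-in.

```python
from itertools import product

def instance_state(trace, constraints, activities, horizon=3):
    """Classify a trace as satisfied / temporarily violated / violated.

    `constraints` are predicates over traces (mandatory constraints); a trace
    is temporarily violated if appending at most `horizon` further events can
    still satisfy every constraint (bounded-search sketch, not an exact test)."""
    def ok(t):
        return all(c(t) for c in constraints)
    if ok(trace):
        return "satisfied"
    for k in range(1, horizon + 1):
        for ext in product(activities, repeat=k):
            if ok(trace + list(ext)):
                return "temporarily violated"
    return "violated"

# Simplified response constraint: every 'curse' is eventually followed by 'pray'.
def response_curse_pray(t):
    pending = False
    for a in t:
        if a == "curse":
            pending = True
        elif a == "pray":
            pending = False
    return not pending
```

With this constraint, the trace `["curse"]` is temporarily violated (appending `"pray"` repairs it), while `["curse", "pray"]` is satisfied.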
The execution rule ensures that users execute each instance in a way that does not violate the instance model. Users can decide to finish executing a constraint instance (i.e., to close the instance) only if the instance state is satisfied.

The constraint-based approach presented in this chapter offers all types of flexibility identified in [ ] (cf. Section 2.1.2). By allowing everything that does not violate mandatory constraints, constraint models offer many possibilities for the execution of model instances while enforcing a number of basic rules expressed as constraints. Constraint models are, therefore, flexible by design [ ], unlike traditional approaches, which require much more effort in order to support
⁵ Chapter 5 describes a formal specification of constraints and of the sets of satisfying traces for constraint models.
this type of flexibility (cf. Chapter 3). Optional constraints make constraint models flexible by deviation [ ], because these constraints are used as guidance for execution and people can decide whether or not to violate them.

Although constraint models allow for many possible executions, it might sometimes be necessary to change the model of an already running instance (i.e., an instance that is being executed). This type of change is referred to as ad-hoc change, and it enables constraint models to offer flexibility by change [ ]. An ad-hoc change can be applied to an instance if and only if the change does not result in a violated instance. Finally, in Section 6.11 we describe how the DECLARE prototype offers flexibility by underspecification [ ] by allowing for model decomposition.

Two types of errors can occur in constraint models. First, some events can be dead in the model, i.e., users can never trigger these events while executing model instances. This can lead to situations where users can never execute an activity, although the activity is in the model. Second, it might happen that some mandatory constraints in the model are conflicting. Due to this error, it is never possible for any model instance to become satisfied, and users cannot execute such instances in a correct way. Verification of constraint models against dead activities and conflicts is necessary in order to support the development of correct models. Moreover, constraint model verification needs to detect the smallest group of mandatory constraints that causes the error, in order to help developers understand and eliminate the error.

In the remainder of this thesis we present a formal language that can be used for constraint specification (in Chapter 5) and the DECLARE system, a prototype of a workflow management system based on this language and the constraint-based approach (in Chapter 6).
Note that the proposed language is one example of a formal language suitable for a constraint-based approach; other languages could be used for the same purpose. The DECLARE prototype was developed to support easy implementation of any other LTL-based language that can be used for this purpose.
Chapter 5

Constraint Specification with Linear Temporal Logic

In Chapter 4 we presented a formal foundation for constraint-based process models. However, we have not yet proposed a formal language for constraint specification, and in all examples in Chapter 4 we used a natural language (i.e., English) to specify constraints (e.g., the example on page 96). In this chapter we propose Linear Temporal Logic (LTL) [74] as a formalism for constraint specification. We will show how LTL can be used to specify constraints and to retrieve a finite representation of the set of traces that satisfy a constraint and a model (cf. Sections 4.1 and 4.2). Using LTL makes it easy to determine the states of instances and their constraints (cf. Section 4.4), to change instances in an ad-hoc manner (cf. Section 4.5) and to verify constraint models (cf. Section 4.6).

In this chapter we present one particular example of an LTL-based language: the ConDec language. ConDec constraints and models are represented graphically to users, while the underlying LTL formulas are hidden. Therefore, users of ConDec models do not need to have knowledge of LTL. Note that, although we will use the ConDec language throughout this chapter, the principles presented here can be applied to any other language based on LTL or similar temporal logics. All principles presented in this chapter are implemented in the DECLARE prototype, which is described in Chapter 6.

5.1 LTL for Business Process Models

LTL is a special kind of logic used for describing sequences of transitions between states in reactive systems [74]. In LTL formulas, time is not considered explicitly. Instead, LTL can specify properties like "eventually some state is reached" (cf. the so-called expectation properties in Section 4.4) or "some error state is never reached" (cf. the so-called safety properties in Section 4.4). This type of logic is extensively used in the field of model checking: systems can be
checked against properties specified in LTL [74]. For this purpose, many algorithms that generate automata from LTL formulas have emerged in the model-checking field [74,111,112]. Automata generated from properties specified as LTL formulas can be used to check whether a system satisfies the properties. For example, the SPIN tool [132] can check a system model specified in PROMELA (Process Meta-Language) against properties specified in LTL. In order to check a property against a system modeled in PROMELA, the SPIN tool generates two automata: one representing the system model and one representing the negation of the LTL formula representing the property. If the intersection of the two automata is empty, then the system satisfies the property.

Due to its declarative nature, LTL can be used for the formal specification of constraints in constraint-based process models, and automata generated from LTL formulas can be used for the automated execution and verification of such models. Automata generated from LTL are used to represent the constraint-based model itself and to regulate the execution of the model in a way that satisfies the constraints specified as LTL formulas. However, there are two differences between regular LTL and LTL applied to business process models. The two special properties of LTL for business processes are illustrated in Figure 5.1.

Figure 5.1: LTL for business processes: (a) a regular LTL trace, in which each element may satisfy several properties (e.g., A, B, both, or neither); (b) a business process LTL trace e1, e2, ..., e7, in which each element is a single event.

The first difference between regular LTL and business process LTL is the length of traces. On the one hand, the model-checking field is mostly concerned with complex systems that are designed not to halt, e.g., schedulers, telephone switches, power plants, traffic lights, etc. Therefore, regular LTL considers infinite traces (cf. Figure 5.1(a)).
On the other hand, the execution of each business process model eventually terminates (cf. Section 4). Because of this, the infinite semantics of regular LTL [74,111,112] cannot be applied to constraint models, and we need to adjust LTL to consider finite traces (cf. Figure 5.1(b)). Note that it is possible to check a finite system in the SPIN tool by using the stuttering extension: the system is modeled in PROMELA in such a way that the last state is repeated infinitely. However, in the constraint-based approach the LTL formula represents the system itself, and the finite-trace semantics needs to be applied to LTL itself. In order to apply LTL to the finite semantics of execution traces of constraint models, we use a simple and efficient approach for applying LTL to finite traces presented by Giannakopoulou
et al. in [112]. Further in this section, we consider the LTL for finite traces presented in [112] in more detail.

The second difference between regular LTL and business process LTL is the semantics of elements in a trace. Regular LTL assumes that one element of the trace can refer to more than one property. For example, it is possible to monitor two properties: (A) the motor temperature is higher than 80 degrees, and (B) the speed of the turbine is higher than 150 km/h. As Figure 5.1(a) shows, each element of the trace could then refer to: (1) none of the two properties, i.e., neither A nor B holds; (2) only property A, i.e., A holds and B does not; (3) only property B, i.e., B holds and A does not; or (4) both properties, i.e., both A and B hold. In the case of execution traces of constraint models (cf. Definition on page 85) we assume that only one property holds at any moment, i.e., each element of the trace refers to exactly one event, as shown in Figure 5.1(b). In the remainder of this section we present how we adjust LTL itself, and the automata generated from LTL, to finite traces consisting of single events, as shown in Figure 5.1(b).

A well-formed LTL formula consists of classical logical operators and several temporal operators, and it evaluates to true or false given an execution trace (cf. Definition 5.1.1).

Definition 5.1.1 (Well-Formed LTL Formula) Recall that E is the set of all possible events (cf. Section 4.1). Let E ⊆ E be a set of events. Every e ∈ E is a well-formed formula over E. If p and q are well-formed formulas, then true, false, ¬p, p ∧ q, p ∨ q, ○p, ◊p, □p, p U q and p W q are also well-formed formulas over E.

From a semantic point of view, a well-formed LTL formula p over E is a function p : E* → {true, false}. Let σ ∈ E* be a trace. If p is a well-formed formula and it holds that p(σ) = true, then we say that σ satisfies p, denoted by σ ⊨ p.
If p(σ) = false, then we say that σ does not satisfy p, denoted by σ ⊭ p. Recall that σ^i denotes the suffix of σ starting at σ[i] (cf. Definition on page 85), and let n = |σ|. The semantics of LTL are defined as follows:
- proposition: σ ⊨ e if and only if e = σ[1], for e ∈ E,
- not (¬): σ ⊨ ¬p if and only if σ ⊭ p,
- and (∧): σ ⊨ p ∧ q if and only if σ ⊨ p and σ ⊨ q,
- or (∨): σ ⊨ p ∨ q if and only if σ ⊨ p or σ ⊨ q,
- next (○): σ ⊨ ○p if and only if σ² ⊨ p,
- until (U): σ ⊨ p U q if and only if ∃ 1≤i≤n : (σ^i ⊨ q ∧ (∀ 1≤j<i : σ^j ⊨ p)).

Also, the following abbreviations are used:
- implication (p → q): for ¬p ∨ q,
- equivalence (p ↔ q): for (p → q) ∧ (¬p → ¬q),
- true (true): for p ∨ ¬p,
- false (false): for ¬true,
- eventually (◊): ◊p = true U p,
- always (□): □p = ¬◊¬p, and
- weak until (W): p W q = (p U q) ∨ (□p).

As specified in Definition 5.1.1, a well-formed LTL formula can use the classical logical operators (¬, ∧ and ∨) and several additional temporal operators (○, ◊, □, U and W). The semantics of the operators ¬, ∧ and ∨ are the same as in classical logic, while the operators ○, ◊, □, U and W have a special, temporal, semantics:
- Operator always (□p) specifies that p holds at every position in the trace (cf. Figure 5.2(a)),
- Operator eventually (◊p) specifies that p will hold at least once in the trace (cf. Figure 5.2(b)),
- Operator next (○p) specifies that p holds in the next element of the trace (cf. Figure 5.2(c)),
- Operator until (p U q) specifies that there is a position where q holds and p holds at all preceding positions in the trace (cf. Figure 5.2(d)),
- Operator weak until (p W q) is similar to operator until (U), but it does not require that q ever becomes true.

Figure 5.2: Semantics of some LTL operators: (a) always □p, (b) eventually ◊p, (c) next ○p, (d) until p U q.

The difference between regular LTL and the LTL of Definition 5.1.1 is twofold. First, the finite semantics is expressed in the until (U) operator [112]. In regular LTL the until operator is defined as follows: σ ⊨ p U q if and only if ∃ i≥1 : (σ^i ⊨ q ∧ (∀ 1≤j<i : σ^j ⊨ p)). The fact that i does not have an upper bound (i.e., ∃ i≥1 : ...) reflects the infinite semantics of regular LTL. The finite semantics of traces is reflected in the upper bound n on i in the until (U) operator of Definition 5.1.1 (i.e., ∃ 1≤i≤n : ..., because |σ| = n). Second, the fact that one element of a trace can refer to multiple properties in regular LTL and to one
property in LTL for business processes is expressed in the way the proposition is defined. On the one hand, in regular LTL it is defined that σ ⊨ e if and only if e ∈ σ[1], for e ∈ E, i.e., each element σ[i] of a trace is a set of properties. On the other hand, in Definition 5.1.1 we consider a special case of this requirement where each element of a trace refers to exactly one event, i.e., we check whether the event is the first element of the trace: e = σ[1].

Because LTL formulas evaluate execution traces to true or false (cf. Definition 5.1.1), LTL can be used to formally specify the semantics of a constraint c = (E, f) (cf. Definition on page 87), i.e., LTL can be used to formally specify f. We refer to any constraint for which f is a well-formed LTL formula over E as an LTL constraint. Similarly, if all mandatory and optional constraints in a constraint model are LTL constraints, then we say that the constraint model is an LTL constraint model.

Note that LTL is not the only language that can be used for the formal specification of constraints; other declarative languages can also be used. For example, Computation Tree Logic (CTL) [74] is another logic that can be used to specify the semantics of constraints. Although LTL and CTL are similar languages, each of them has some advantages over the other. However, so far, the debate about which of these two languages is better remains unresolved [132,253]. For example, there are some properties that can be specified only in LTL or only in CTL, but not in both languages [132]. On the one hand, the fairness property (for each execution, there is some state at which a property starts to hold forever) can be expressed in LTL, but not in CTL. On the other hand, the reset property (from every state there exists at least one execution that can return the system to its initial state) can be expressed in CTL, but not in LTL.
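The finite-trace semantics of Definition 5.1.1 is direct to implement. The following is a hypothetical Python sketch (tuple-encoded formulas, lists of events as traces) that mirrors the definition's clauses, including the bounded until; it is an illustration of the semantics, not the automata-based implementation used by DECLARE.

```python
def holds(f, trace):
    """Evaluate a finite-trace LTL formula (Definition 5.1.1) on a trace.

    Formulas are nested tuples; each trace element holds exactly one event
    (the business-process reading of a trace)."""
    op = f[0]
    if op == "true":
        return True
    if op == "event":                 # sigma |= e  iff  e = sigma[1]
        return len(trace) > 0 and trace[0] == f[1]
    if op == "not":
        return not holds(f[1], trace)
    if op == "and":
        return holds(f[1], trace) and holds(f[2], trace)
    if op == "or":
        return holds(f[1], trace) or holds(f[2], trace)
    if op == "next":                  # sigma |= O p  iff  sigma^2 |= p
        return len(trace) > 1 and holds(f[1], trace[1:])
    if op == "until":                 # exists i <= n: sigma^i |= q and
        p, q = f[1], f[2]             # for all j < i: sigma^j |= p
        for i in range(len(trace)):
            if holds(q, trace[i:]):
                return True
            if not holds(p, trace[i:]):
                return False
        return False
    raise ValueError(f"unknown operator {op!r}")

# Derived operators (abbreviations from Definition 5.1.1).
def eventually(p): return ("until", ("true",), p)
def always(p): return ("not", eventually(("not", p)))
def implies(p, q): return ("or", ("not", p), q)

# The response constraint [] ((curse,completed) -> <> (pray,completed)):
curse = ("event", ("curse", "completed"))
pray = ("event", ("pray", "completed"))
resp = always(implies(curse, eventually(pray)))
```

On the finite trace [(curse,completed), (pray,completed)] the response formula holds; on [(pray,completed), (curse,completed)] it does not, since no (pray,completed) follows the last (curse,completed).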
We chose LTL for the specification of constraints for two reasons. First, we were inspired by the so-called LTL Checker plug-in [25] of the process mining tool ProM [8,91], which can be used for the verification of past executions against properties specified in LTL (the LTL Checker is described in more detail in Chapter 7). Second, as a simple and straightforward language, LTL seemed to be a good starting point for the constraint-based approach.

5.2 ConDec: An Example of an LTL-Based Constraint Language

ConDec is a constraint-based language that uses LTL to formally specify the semantics of constraints. Because LTL formulas can be difficult for non-experts to understand, ConDec associates a graphical representation with each constraint. With this approach, users of ConDec models do not need to have knowledge of LTL; instead, they can learn the intuitive meanings of the names and graphical representations of constraints.

Traditional process modeling languages offer a small set of constructs that
can be used to model relationships between process activities (e.g., sequence, choice branching, parallel branching and loops). Because it uses LTL for constraint specification, ConDec offers many constraint templates, i.e., types of constraints that can be used to create constraints in ConDec models. Consider, for example, the following two constraints: (1) constraint c_1 from the model cm_R in the example on page 92, specifying that f_1 = "Every occurrence of event (curse,completed) must eventually be followed by at least one occurrence of event (pray,completed)", and (2) constraint c_6 from the model cm_FT in the example on page 96, which specifies that f_6 = "Every occurrence of event (perform surgery,completed) must eventually be followed by at least one occurrence of event (prescribe rehabilitation,completed)". The first constraint can be specified with the formula □((curse,completed) → ◊(pray,completed)) and the second one with the similar formula □((perform surgery,completed) → ◊(prescribe rehabilitation,completed)). Both constraints impose the same type of relation, i.e., one event is a response to the other event, and their LTL specifications share the pattern □((A,completed) → ◊(B,completed)). Instead of having to individually specify the formulas for these two constraints in their models, the constraints can be created from the constraint template called response. Each template in ConDec has (1) a name, (2) an LTL formula and (3) a graphical representation. Figure 5.3 shows how templates are used to create constraints in ConDec models.
Figure 5.3: ConDec templates, constraints and models. A template (e.g., response, with name, LTL formula []((A,completed) -> <>(B,completed)) and a graphical notation) is instantiated into concrete constraints (e.g., []((curse,completed) -> <>(pray,completed)) between curse and pray, and []((perform surgery,completed) -> <>(prescribe rehabilitation,completed)) between perform surgery and prescribe rehabilitation), which appear graphically in the Religion and Fractures Treatment models.
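The template-to-constraint step in Figure 5.3 amounts to parameter substitution. A minimal sketch, assuming a textual LTL syntax with `[]`, `<>` and `->`; the class and placeholder names are illustrative, not DECLARE's actual API:

```python
class Template:
    """A ConDec template: a name, a parameterized LTL formula and, in the
    real tool, a graphical notation. A constraint inherits all of these."""

    def __init__(self, name, ltl_pattern):
        self.name = name
        self.ltl_pattern = ltl_pattern  # placeholders {A}, {B} for parameters

    def instantiate(self, **activities):
        """Create a concrete constraint formula from the template."""
        return self.ltl_pattern.format(**activities)

response = Template("response", "[]( ({A},completed) -> <>({B},completed) )")

print(response.instantiate(A="curse", B="pray"))
# -> []( (curse,completed) -> <>(pray,completed) )
```

The same template instantiated with A="perform surgery" and B="prescribe rehabilitation" yields the second constraint of the running example, which is exactly the reuse that templates provide.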
Instead of specifying the LTL formula and graphical representation separately for each constraint, a constraint is created based on a template: the constraint inherits the name, graphical representation and LTL formula from its template. Constraints are presented graphically in ConDec models, while the underlying LTL formulas remain hidden. For example, the activities and constraints in the two ConDec models in Figure 5.3 are represented graphically: activities are presented as rectangles and constraints as special lines between activities. The model cm_R from the example on page 92 has three activities and one constraint, and the Fractures Treatment model cm_FT from the example on page 96 has nine activities and six constraints. ConDec models for these two examples are shown in Figure 5.3. Each of the constraints in these two models is created from a ConDec template, i.e., the graphical representation of the template represents the constraint while the underlying LTL formula remains hidden. Because of this, users do not need to be LTL experts in order to develop and work with ConDec models. Instead, it is enough to learn the graphical representation and semantics of ConDec templates; e.g., the response template between activities A and B is represented by a special line, as in Figure 5.3, and it specifies that every occurrence of A has to be eventually followed by at least one occurrence of B. We refer to the imaginary activities of a template as the template's parameters. For example, the response template has two parameters, A and B. The ConDec language is just one example of a language for constraint specification. Other languages can use different types of constraints depending on the application area, e.g., the DecSerFlow language [37,38] for the web services domain, the CIGDec language [176] for medical processes, etc.
We used the property specification patterns presented in [95] as an inspiration for developing ConDec templates. ConDec has more than twenty templates, which are structured into four groups. First, there are existence templates, which specify how many times or when one activity can be executed. For example, the 1..* template specifies that an activity must be executed at least once, and the init template can be used to specify that the execution of each instance has to start with a specific activity. Second, relation templates define some relation between two (or more) activities. For example, response is a relation template. Third, negation templates define a negative relation between activities. For example, the responded absence template defines that two activities cannot both be executed within the same instance. Finally, choice templates can be used to specify that one must choose between activities. An example of such a template is the 1 of 4 template, which is used to specify that at least one of the four given activities has to be executed.

5.2.1 Existence Templates

Figure 5.4 shows the so-called existence templates. These templates involve only one activity and they define the cardinality or the position of the activity in a trace. For example, the first template is called existence and it is graphically
represented with the annotation 1..* above the activity. This indicates that A is executed at least once. Templates existence2, existence3, and existence_N all specify a lower bound for the number of occurrences of A. It is also possible to specify an upper bound for the number of occurrences of A: templates absence, absence2, absence3, and absence_N are also visualized by showing the range above the activity, e.g., 0..N for the requirement absence_{N+1}. Similarly, it is possible to specify the exact number of occurrences as shown in Figure 5.4, e.g., template exactly_N(A) is denoted by an N above the activity and specifies that A should be executed exactly N times. Finally, the template init(A) can be used to specify that activity A must be the first executed activity in an instance.

Figure 5.4: Notations for the existence templates (existence, existence2, existence3, existence_N, absence2, absence3, absence_{N+1}, exactly1, exactly2, exactly_N and init, annotated above the activity as 1..*, 2..*, 3..*, N..*, 0..1, 0..2, 0..N, 1, 2, N and init, respectively).

Table 5.1 provides the semantics for each of the notations shown in Figure 5.4, i.e., each template is expressed in terms of an LTL formula. The formula for template existence(A) defines that event (A,t_c) eventually has to hold, which implies that in any instance A has to be executed at least once¹. The formula for template existence_N(A) shows how a lower bound N for the number of occurrences of A can be expressed in a recursive manner, i.e., existence_N(A) = ◊((A,t_c) ∧ ○(existence_{N-1}(A))). The formula for template absence_N(A) can be defined as the negation of existence_N(A). Together they can be combined to express that, for any full execution, A should be executed a pre-specified number of times N, i.e., exactly_N(A) = existence_N(A) ∧ absence_{N+1}(A).
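On a finished trace, the existence family reduces to counting completed occurrences, and init to inspecting the prefix. A hedged Python sketch of this reading (the function names are ours; DECLARE evaluates the LTL formulas of Table 5.1 via automata rather than by counting):

```python
def count_completed(trace, a):
    """Completed occurrences of activity a in a finite trace of
    (activity, event_type) pairs."""
    return sum(1 for act, typ in trace if act == a and typ == "completed")

def existence_n(trace, a, n):  # existence_N(A): at least N completions
    return count_completed(trace, a) >= n

def absence_n(trace, a, n):    # absence_N(A) = not existence_N(A)
    return count_completed(trace, a) < n

def exactly_n(trace, a, n):    # exactly_N(A) = existence_N and absence_{N+1}
    return existence_n(trace, a, n) and absence_n(trace, a, n + 1)

def init(trace, a):
    """init(A): only (A,started)/(A,cancelled) may precede (A,completed)."""
    for act, typ in trace:
        if act == a and typ == "completed":
            return True
        if act != a:
            return False
    return False
```

For the trace [(A,started), (A,completed), (B,completed), (A,completed)], exactly_n holds for N = 2 and init holds for A, since the first completed event belongs to A.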
The formula for template init(A) defines that the only possible events before event (A,t_c) are events (A,t_s) and (A,t_x), i.e., events involving activities other than A can be executed only after event (A,t_c).

5.2.2 Relation Templates

Figure 5.5 shows the so-called relation templates. While an existence template describes the cardinality of one activity, a relation template defines a dependency between multiple activities. Figure 5.5 only shows binary relationships (i.e., between two activities); however, ConDec also contains templates that involve generalizations to three or more activities. For simplicity, however, we
¹ Recall that an event consists of an activity (e.g., A) and an event type (e.g., t_c).
Table 5.1: LTL formulas for existence templates (recall that t_s, t_c, t_x ∈ T are event types such that t_s = started, t_c = completed and t_x = cancelled)

name of formula | LTL expression
existence(A) | ◊(A,t_c)
existence2(A) | ◊((A,t_c) ∧ ○(existence(A)))
existence3(A) | ◊((A,t_c) ∧ ○(existence2(A)))
existence_N(A) | ◊((A,t_c) ∧ ○(existence_{N-1}(A)))
absence2(A) | ¬existence2(A)
absence3(A) | ¬existence3(A)
absence_N(A) | ¬existence_N(A)
exactly1(A) | existence(A) ∧ absence2(A)
exactly2(A) | existence2(A) ∧ absence3(A)
exactly_N(A) | existence_N(A) ∧ absence_{N+1}(A)
init(A) | ((A,t_s) ∨ (A,t_x)) W (A,t_c)

first focus on the binary relationships shown in Figure 5.5. All relation templates have activities A and B as parameters. The line between the two activities in the graphical representation is unique for the formula and reflects the semantics of the relation. The responded existence template specifies that if activity A is executed, activity B also has to be executed (at any time, i.e., either before or after activity A is executed). According to the co-existence template, if one of the activities A or B is executed, the other one has to be executed as well.

Figure 5.5: Notations for the relation templates (responded existence, co-existence, response, precedence, succession, alternate response, alternate precedence, alternate succession, chain response, chain precedence and chain succession).

While the first two templates do not consider the order of activities, the templates response, precedence and succession do consider the ordering of activities. The template response requires that every time activity A executes, activity B has to be executed after it. Note that this is a very relaxed form of response, because
B does not have to execute straight after A, and another A can be executed between the first A and the subsequent B. The template precedence requires that activity B is preceded by activity A, i.e., it specifies that activity B can be executed only after activity A has been executed. Just as in the response template, other activities can be executed between activities A and B. The combination of the response and precedence templates defines a bi-directional execution order of two activities and is called succession. In this template, both the response and precedence relations have to hold between the activities A and B.

The templates alternate response, alternate precedence and alternate succession strengthen the response, precedence and succession templates, respectively. In the alternate templates, activities A and B have to alternate. If activity B is an alternate response of activity A, then after the execution of activity A, activity B has to be executed, and activity A can be executed again only after activity B has been executed. Similarly, in the alternate precedence template every occurrence of activity B has to be preceded by an occurrence of activity A, and activity B cannot be executed again before activity A is executed again. The alternate succession is the combination of alternate response and alternate precedence.

Even stricter ordering relations are specified by the last three templates shown in Figure 5.5: chain response, chain precedence and chain succession. These templates require that the executions of the two activities (A and B) are next to each other. In the chain response template, activity B has to be executed directly after activity A. The chain precedence template requires that activity A directly precedes each B.
Since the chain succession template is the combination of the chain response and chain precedence templates, it requires that activities A and B are always executed next to each other.

Table 5.2 shows the LTL formulas for the templates shown in Figure 5.5. The formula for responded existence(A,B) indicates that an occurrence of (A,t_c) should always imply an occurrence of event (B,t_c), either before or after (A,t_c). The formula for co-existence(A,B) means that the existence of (A,t_c) implies the existence of (B,t_c), and vice versa. The formula for response(A,B) specifies that at any point in time when event (A,t_c) occurs, there should eventually be an occurrence of event (B,t_c). The formula for precedence(A,B) is similar to response but looking backwards, i.e., (B,t_s), (B,t_c) and (B,t_x) cannot occur before an occurrence of event (A,t_c). The formula for succession(A,B) is defined by combining both into response(A,B) ∧ precedence(A,B). The alternate response(A,B) formula specifies that any occurrence of (A,t_c) implies that, from the next position onwards, no (A,t_c) may occur until a (B,t_c) occurs. The formula for alternate precedence(A,B) is a bit more complicated: it implies that at any point in time where (B,t_c) occurs and at least one other occurrence of (B,t_c) follows, an (A,t_c) should occur before the following occurrence of (B,t_s), (B,t_c) or (B,t_x). The formula for alternate succession(A,B) combines both into alternate response(A,B) ∧ alternate precedence(A,B). The formula for chain response(A,B) indicates that any occurrence of (A,t_c) should be directly followed by (B,t_s). The formula for chain precedence(A,B) is the logical counterpart: it specifies that any occurrence of (B,t_s) should be directly preceded by (A,t_c).

Table 5.2: LTL formulas for relation templates (recall that t_s, t_c, t_x ∈ T are event types such that t_s = started, t_c = completed and t_x = cancelled)

name of formula | LTL expression
responded existence(A,B) | ◊(A,t_c) → ◊(B,t_c)
co-existence(A,B) | ◊(A,t_c) ↔ ◊(B,t_c)
response(A,B) | □((A,t_c) → ◊(B,t_c))
precedence(A,B) | (¬((B,t_s) ∨ (B,t_c) ∨ (B,t_x))) W (A,t_c)
succession(A,B) | response(A,B) ∧ precedence(A,B)
alternate response(A,B) | response(A,B) ∧ □((A,t_c) → ○(precedence(B,A)))
alternate precedence(A,B) | precedence(A,B) ∧ □((B,t_c) → ○(precedence(A,B)))
alternate succession(A,B) | alternate response(A,B) ∧ alternate precedence(A,B)
chain response(A,B) | response(A,B) ∧ □((A,t_c) → ○(B,t_s))
chain precedence(A,B) | precedence(A,B) ∧ □(○(B,t_s) → (A,t_c))
chain succession(A,B) | chain response(A,B) ∧ chain precedence(A,B)

5.2.3 Negation Templates

Figure 5.6 shows the negation templates, which are the negated versions of the relation templates. (Ignore the grouping of constraints on the right-hand side of Figure 5.6 for the moment; later we will show that the eight constraints can be reduced to three equivalence classes.) The first two templates negate the responded existence and co-existence templates. The not responded existence template specifies that if activity A is executed, activity B must never be executed (neither before nor after activity A). The not co-existence template applies not responded existence from A to B and from B to A. It is important to note that the term negation should not be interpreted as logical negation; e.g., if activity A never occurs, then both responded existence(A,B) and not responded existence(A,B) hold (i.e., one does not exclude the other).
The not response template specifies that after the execution of activity A, activity B cannot be executed any more. According to the not precedence template, activity B cannot be preceded by activity A. The last three templates are negations of the templates chain response, chain precedence and chain succession. The not chain response template specifies that A should never be followed directly by B, and the not chain precedence template specifies that B should never be preceded directly by A. The templates not chain response and not chain precedence are combined in the not chain succession template. Note that Figure 5.6 does not show negation templates for the alternating variants of response, precedence, and succession.
The reason is that there is no straightforward and intuitive interpretation of the converse of an alternating response, precedence, or succession.

Figure 5.6: Notations for the negation templates

Table 5.3 shows the LTL expressions of the templates shown in Figure 5.6. Table 5.3 also shows that some of the templates are equivalent: not co-existence and not responded existence are equivalent, and similarly each of the next two groups of three formulas is equivalent. Note that a similar grouping is shown in Figure 5.6, where a single representation for each group is suggested. The formula for not responded existence(A,B) specifies that event (B,tc) cannot occur if event (A,tc) occurs. However, since the ordering here does not matter, not responded existence(A,B) = not responded existence(B,A), and hence it coincides with not co existence(A,B). The formula for not response(A,B) defines that after any occurrence of (A,tc), (B,ts) may never happen (or, formulated alternatively: any occurrence of (B,tc) should take place before the first (A,tc)). The formula for not precedence(A,B) defines that, if (B,ts) may occur in some future state, then (A,tc) cannot occur in the current state. It is easy to see that not precedence(A,B) = not response(A,B), because both state that no (B,tc) should take place after the first (A,tc) (if any). Since the formula for not succession(A,B) combines both not response(A,B) and not precedence(A,B), also not succession(A,B) = not response(A,B). The last three formulas are negations of the formulas chain response, chain precedence and chain succession.
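The equivalence argument above for not response and not precedence can be checked by brute force over short traces of single events. This is an illustrative sketch only; the function names, the event encoding, and the exhaustive enumeration are my own:

```python
from itertools import product

def not_response(trace, a, b):
    """After any (a, completed), no b-event may ever occur."""
    seen_a = False
    for (act, typ) in trace:
        if seen_a and act == b:
            return False
        if (act, typ) == (a, "completed"):
            seen_a = True
    return True

def not_precedence(trace, a, b):
    """No b-event may be preceded by an (a, completed)."""
    for j, (act, _typ) in enumerate(trace):
        if act == b and (a, "completed") in trace[:j]:
            return False
    return True

# Both predicates agree on every trace of length <= 3 over a small alphabet,
# illustrating the claimed equivalence class.
events = [("a", "started"), ("a", "completed"), ("b", "started"), ("b", "completed")]
for n in range(4):
    for trace in product(events, repeat=n):
        assert not_response(trace, "a", "b") == not_precedence(trace, "a", "b")
```

Both functions reject a trace exactly when some b-event occurs after the first completion of a, which is the shared content of the whole equivalence group.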
It is easy to see that they are equivalent: not chain response(A,B) = not chain precedence(A,B) = not chain succession(A,B).

Choice Templates

Figure 5.7 shows the so-called choice templates, which specify that it is necessary to choose between several activities. The 1 of 2 template specifies that at least one of the two activities A and B has to be executed; both can be executed, and each of them can be executed an arbitrary number of times. The 1 of 3 template specifies that at least one of the three activities A, B and C has to be executed, but all three activities can be executed an arbitrary number of times as long as at least one of them occurs at least once. Similarly, a template 1 of N can
specify that at least one of the N activities has to be executed, but all activities can be executed an arbitrary number of times. It is also possible to specify that M of N activities have to be executed, but this has to be done explicitly, i.e., it is not possible to specify this in a recursive way as in the existence templates. For example, the 2 of 3 template specifies that at least two of the three activities A, B and C have to be executed, but all three can be executed an arbitrary number of times.

Table 5.3: LTL formulas for negation templates (templates grouped together are equivalent; recall that ts, tc ∈ T are event types such that ts = started and tc = completed)

not responded existence(A,B):  ◇(A,tc) ⇒ ¬(◇(B,tc))
not co existence(A,B):         not responded existence(A,B) ∧ not responded existence(B,A)
not response(A,B):             □((A,tc) ⇒ ¬(◇((B,ts) ∨ (B,tc))))
not precedence(A,B):           □(◇(B,ts) ⇒ ¬(A,tc))
not succession(A,B):           not response(A,B) ∧ not precedence(A,B)
not chain response(A,B):       □((A,tc) ⇒ ○(¬(B,ts)))
not chain precedence(A,B):     □(○(B,ts) ⇒ ¬(A,tc))
not chain succession(A,B):     not chain response(A,B) ∧ not chain precedence(A,B)

Figure 5.7: Notations for the choice templates

The exclusive choice templates are stronger than the templates described above. The exclusive 1 of 2 template specifies that one of the two activities A and B has to be executed, while the other one cannot be executed at all. The exclusive 1 of 3 template specifies that one of the three activities A, B and C has to be executed one or more times, while the other two cannot be executed at
all. The exclusive 1 of N template specifies that one of the N activities has to be executed one or more times, while the other N-1 cannot be executed at all. More general exclusive templates must be specified explicitly, e.g., the exclusive 2 of 3 template specifies that two of the three activities A, B and C have to be executed one or more times, while the remaining one cannot be executed at all.

Table 5.4 shows the LTL formulas for each choice template from Figure 5.7. The formula for 1 of 2(A,B) specifies that either (A,tc) or (B,tc) has to occur eventually. The formula 1 of 3(A,B,C) specifies that either (A,tc), (B,tc) or (C,tc) has to eventually occur. Similarly, the formula 1 of N(A1,...,AN) specifies that at least one of the events (A1,tc),...,(AN,tc) has to eventually occur. The formula for 2 of 3(A,B,C) specifies that at least two of the events (A,tc), (B,tc) and (C,tc) have to eventually occur. The formulas for the exclusive templates strengthen the choice. The formula for exclusive 1 of 2(A,B) specifies that either (A,tc) or (B,tc) has to eventually occur, but not both. The formula for exclusive 1 of 3(A,B,C) specifies that one of the events (A,tc), (B,tc) or (C,tc) has to eventually occur, while the other two cannot occur at all. The formula for exclusive 1 of N(A1,...,AN) can be specified similarly, so that it defines that one of the events (A1,tc),...,(AN,tc) has to eventually occur, while the other N-1 events cannot occur at all. Finally, the formula exclusive 2 of 3(A,B,C) specifies that two of the events (A,tc), (B,tc) and (C,tc) have to eventually occur while the third one cannot occur at all.

Table 5.4: LTL formulas for choice templates (recall that ts, tc ∈ T are event types such that ts = started and tc = completed)

1 of 2(A,B):             ◇(A,tc) ∨ ◇(B,tc)
1 of 3(A,B,C):           ◇(A,tc) ∨ ◇(B,tc) ∨ ◇(C,tc)
1 of N(A1,...,AN):       ◇(A1,tc) ∨ ... ∨ ◇(AN,tc)
2 of 3(A,B,C):           (◇(A,tc) ∧ ◇(B,tc)) ∨ (◇(B,tc) ∧ ◇(C,tc)) ∨ (◇(A,tc) ∧ ◇(C,tc))
exclusive 1 of 2(A,B):   (◇(A,tc) ∧ ¬◇(B,tc)) ∨ (¬◇(A,tc) ∧ ◇(B,tc))
exclusive 1 of 3(A,B,C): (◇(A,tc) ∧ ¬◇(B,tc) ∧ ¬◇(C,tc)) ∨ (¬◇(A,tc) ∧ ◇(B,tc) ∧ ¬◇(C,tc)) ∨ (¬◇(A,tc) ∧ ¬◇(B,tc) ∧ ◇(C,tc))
exclusive 2 of 3(A,B,C): (◇(A,tc) ∧ ◇(B,tc) ∧ ¬◇(C,tc)) ∨ (◇(A,tc) ∧ ¬◇(B,tc) ∧ ◇(C,tc)) ∨ (¬◇(A,tc) ∧ ◇(B,tc) ∧ ◇(C,tc))

Graphical representations of the ConDec templates presented in Figures 5.5, 5.6 and 5.7 are used to hide the (potentially) complex LTL formulas. Special symbols in the graphical representation of a template should illustrate the template's semantics. Figure 5.8 shows explanations of the intuition behind the graphical notation of ConDec templates.
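Before turning to the notation, the choice formulas in Table 5.4 can be mirrored as simple trace checks: only which of the listed activities ever complete matters, not how often or in which order. A minimal sketch, with function names of my own:

```python
def one_of(trace, activities):
    """1 of N: at least one of the given activities completes at least once."""
    done = {a for (a, t) in trace if t == "completed"}
    return any(a in done for a in activities)

def exclusive_one_of(trace, activities):
    """exclusive 1 of N: exactly one of the given activities ever completes."""
    done = {a for (a, t) in trace if t == "completed"}
    return sum(a in done for a in activities) == 1

t = [("A", "completed"), ("A", "completed"), ("C", "completed")]
assert one_of(t, ["A", "B"])                      # A completed at least once
assert exclusive_one_of(t, ["A", "B"])            # A completes, B never does
assert not exclusive_one_of(t + [("B", "completed")], ["A", "B"])
```

Note that activity C, which is outside the choice, does not affect either check, in line with the templates constraining only the listed activities.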
The dot shows how to read the constraint, i.e., it means "suppose that A occurs". The type of connection describes the type of constraint (in this case "existence response") and should be interpreted depending on the location of the dot. On top, the lower bound (N) and upper bound (M) are specified. A single dot means: if A occurs, then B should also occur (at any time), i.e., ◇(A,tc) ⇒ ◇(B,tc). Two dots mean that the existence response constraint is read from both sides, i.e., ◇(A,tc) ⇔ ◇(B,tc). The arrow should be interpreted as "is followed by" or "is preceded by" (in this case both). The negation symbol inverts the meaning of the connection, i.e., in this case "is NOT followed by" and "is NOT preceded by". The empty diamond symbol represents the choice and its label ("N of M") the kind of choice; the filled diamond symbol represents the exclusive choice and its label the kind of choice.

Figure 5.8: Explanation of the graphical notation

Branching of Templates

Each of the ConDec templates involves a specific number of activities. For example, the templates existence(A), response(A,B) and 1 of 3(A,B,C) involve one, two, and three activities, respectively. This means that, when a constraint is created from a template, the constraint will involve as many real activities as predefined in the template. In Figure 5.3 we showed how a real activity replaces each of the template's parameters in a constraint. However, each constraint can easily be extended to deal with more activities than defined by its template. Consider, for example, the response template that involves two activities A and B in Figure 5.9. In the simplest case, a response constraint will involve two activities, each of which replaces one of the template's parameters.
For example, activities curse and pray simply replace parameters A and B, respectively, in the graphical representation and the LTL formula of the response template (cf. the plain constraint in Figure 5.9). In some cases it might be necessary to assign more than one activity to a parameter in a template. When a template parameter is replaced by more than one activity in a constraint, we say that this parameter branches. An example of a branched response constraint is shown in Figure 5.9: parameter A is replaced by the activity curse, and parameter B is branched on the activities pray and confess. In case of branching, the parameter is replaced (1) by multiple arcs to all branched activities in the graphical representation and (2) by a disjunction of the branched activities in the LTL formula, as shown in the branched constraint in Figure 5.9. The semantics of branching can vary from template to template, depending on the LTL formula of the template. For
example, the branched constraint in Figure 5.9 specifies that each occurrence of curse should eventually be followed by at least one occurrence of activity pray or activity confess. Note that it is possible to branch all parameters, one parameter, or none of the parameters.

Figure 5.9: Branching the response template. Template: □((A,tc) ⇒ ◇(B,tc)). Plain constraint: □((curse,tc) ⇒ ◇(pray,tc)). Branched constraint: □((curse,tc) ⇒ ◇((pray,tc) ∨ (confess,tc))).

The number of possible branches in ConDec constraints is unlimited. For example, it is possible to branch the parameter B in the response template to N alternatives, as shown in Figure 5.10: □((actA,tc) ⇒ ◇((actB1,tc) ∨ (actB2,tc) ∨ ... ∨ (actBN,tc))).

Figure 5.10: Branching the response template to multiple activities

5.3 ConDec Constraints

ConDec constraints are created from ConDec templates, such that the LTL formula and the graphical representation of the template are associated with the constraint, as shown in Figure 5.11. Template parameters (i.e., A and B) are replaced with real activities (i.e., perform surgery and prescribe rehabilitation) both in the graphical and the LTL specification of the constraint. A finite representation of the set of traces that satisfy a ConDec constraint is retrieved from the LTL formula associated with the constraint. Every LTL formula can be translated into a non-deterministic finite state automaton (cf.
Definition 5.3.1) that represents exactly all traces that satisfy the LTL formula [74, 111, 112, 158].

Figure 5.11: A ConDec constraint and its template. (a) The response template: □((A,completed) ⇒ ◇(B,completed)). (b) Semantics of the constraint: c = (E, f) with E = {(perform surgery,completed), (prescribe rehabilitation,completed)} and f = □((perform surgery,completed) ⇒ ◇(prescribe rehabilitation,completed)). (c) Graphical representation of the constraint.

Definition 5.3.1 (Finite State Automaton, FSA) A finite state automaton FSA is a five-tuple ⟨E, S, T, S0, SF⟩ such that E is the alphabet, S is the finite set of states, T ⊆ S × E × S is the transition relation, S0 ⊆ S is the set of initial states, and SF ⊆ S is the set of accepting states.

Figure 5.12 shows the graphical representation of the automaton created for the constraint in Figure 5.11. This automaton has two states, s0 and s1, where s0 is both an initial state (marked with an incoming arrow without a source) and an accepting state (marked with a double border). Labeled directed arrows between states represent transitions. For example, the transition (s0, (prescribe rehabilitation,tc), s0) ∈ T denotes that, if event (prescribe rehabilitation,tc) occurs when the automaton is in state s0, then the automaton stays in state s0. In other words, we say that event (prescribe rehabilitation,tc) triggers this transition. The special label "-" denotes a transition that is triggered by any event e ∈ E. Each transition is an output transition for the state from which it is triggered and an input transition for the state to which it leads. For example, the transition (s0, -, s1) ∈ T is an output transition for state s0 and an input transition for state s1. We also say that s0 is the source and s1 is the target of this transition.
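The five-tuple of Definition 5.3.1 can be encoded directly in code. This is a minimal sketch with data-structure and method names of my own; it only fixes the representation, not any particular construction algorithm:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FSA:
    alphabet: frozenset      # E
    states: frozenset        # S
    transitions: frozenset   # T: a set of (source, label, target) triples
    initial: frozenset       # S0, a subset of S
    accepting: frozenset     # SF, a subset of S

    def outputs(self, state):
        """All output transitions of `state` (cf. source/target terminology)."""
        return {t for t in self.transitions if t[0] == state}

a = FSA(alphabet=frozenset({"e"}),
        states=frozenset({"s0", "s1"}),
        transitions=frozenset({("s0", "e", "s1"), ("s1", "e", "s1")}),
        initial=frozenset({"s0"}),
        accepting=frozenset({"s0"}))
assert a.outputs("s0") == {("s0", "e", "s1")}
```

Representing T as explicit (source, label, target) triples keeps the later notions of output transition, reachability and run straightforward to express as set operations.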
Figure 5.12: A finite state automaton FSA_f for the constraint in Figure 5.11 (recall that tc ∈ T is an event type such that tc = completed). The automaton has transitions (s0, ¬(perform surgery,tc), s0), (s0, (prescribe rehabilitation,tc), s0), (s0, -, s1), (s1, -, s1) and (s1, (prescribe rehabilitation,tc), s0).

Labels on automata transitions are considered to be boolean values of propositions [112, 158]. Consider, for example, three propositions a, b and c. The label
¬a ∧ ¬b ∧ c represents that (i.e., this transition will fire when) a and b do not hold and c holds, regardless of whether other propositions hold or not. A transition labeled with a ∧ ¬b will fire if a holds and b does not hold, regardless of whether other propositions hold or not. The label ¬a ∧ ¬b represents a transition that will fire if a and b do not hold, regardless of whether other propositions hold or not.

The remainder of this section is structured as follows. First, in Section 5.3.1 we describe how we adjust the automata generated from LTL to the specific properties of business processes, which were described in Section 5.1. Second, in Section 5.3.2 we describe how we deal with the fact that the generated automata are non-deterministic. Finally, in Section 5.3.3, we describe how we retrieve the set of all traces that satisfy a constraint (cf. the definition on page 87) from the automaton generated for the constraint.

5.3.1 Adjusting to Properties of Business Processes

In order to apply LTL to the execution of business processes, we adjust regular LTL in two ways, as described in Section 5.1: (1) we consider LTL for finite traces, and (2) we consider LTL for traces of single events. These two adjustments must also be reflected in the automata generated from LTL. First, due to the finite semantics of execution traces of constraint models, we use the algorithm presented in [112, 113] to create a finite state automaton FSA from a ConDec constraint c = (E, f). We will use FSA_f to denote an automaton (1) created for the LTL formula f using the algorithm presented in [113], and (2) adjusted for finite traces using the method presented in [112]. Second, due to the fact that classical LTL considers traces that can contain an arbitrary number of propositions at each trace element, the automata generated for LTL formulas in [112, 113] might contain transitions that refer to such trace elements.
For example, it is possible that a transition has the label (perform surgery,ts) ∧ (perform reposition,ts), because events are treated as propositions. This transition is blocked (i.e., it will never fire) in the setting of traces of business processes, because events are triggered one at a time. In other words, a transition is blocked if and only if its label requires more than one event to hold. Naturally, blocked transitions are ignored in the context of execution traces of business processes, because they cannot be triggered by any element of a trace (i.e., by any event). In order to use these automata for our approach, we unblock them in such a way that: (1) all blocked transitions are removed from the automaton, (2) all states that are unreachable from the initial states are removed from the automaton, and (3) all states from which an accepting state is not reachable are removed from the automaton. Further in this thesis, we assume that all automata are unblocked, unless denoted differently. Definition 5.3.2 gives a formal specification of reachable states: the set of states reachable from a state s of the automaton contains all states that can be reached via an arbitrary sequence of transitions from s. The unblocked version of an automaton
is specified in Definition 5.3.3.

Definition 5.3.2 (Reachable states R_s^FSA) Let FSA = ⟨E, S, T, S0, SF⟩ be a finite state automaton and let s ∈ S be a state of FSA. A state s' ∈ S is reachable from state s if and only if there is a trace σ ∈ E* such that (s, σ[1], s1) ∈ T ∧ (s1, σ[2], s2) ∈ T ∧ ... ∧ (s_{|σ|-1}, σ[|σ|], s') ∈ T. The set of states that are reachable from s is defined as R_s^FSA = {s' ∈ S | ∃σ ∈ E* : (s, σ[1], s1) ∈ T ∧ (s1, σ[2], s2) ∈ T ∧ ... ∧ (s_{|σ|-1}, σ[|σ|], s') ∈ T}.

Definition 5.3.3 (Unblocked finite state automaton) Let FSA = ⟨E, S, T, S0, SF⟩ be a finite state automaton and let T_B ⊆ T be the set of all blocked transitions in T. The automaton FSA^UB = ⟨E, S^UB, T^UB, S0^UB, SF^UB⟩, where S^UB ⊆ S, T^UB ⊆ T, S0^UB ⊆ S0 and SF^UB ⊆ SF, is the unblocked version of FSA if and only if:
- all blocked transitions are removed, i.e., T^UB = T \ T_B;
- all states are reachable from the initial state(s), i.e., S^UB and S0^UB are the biggest subsets of S such that ∀s ∈ S^UB \ S0^UB : ∃s0 ∈ S0^UB : s ∈ R_{s0}^{FSA^UB}; and
- from all states an accepting state is reachable, i.e., S^UB and SF^UB are the biggest subsets of S such that ∀s ∈ S^UB \ SF^UB : R_s^{FSA^UB} ∩ SF^UB ≠ ∅.

In addition to unblocking automata, when presenting automata in figures in this thesis, we will simplify the labels on transitions in the following manner: if, due to its label, a transition can be triggered only by a single event, then its label will be replaced by a label containing only that event. For example, a transition with the label e1 ∧ ¬e2 can be triggered only by event e1 and, therefore, in this thesis we will label it only with e1. On the other hand, a transition with the label ¬e1 ∧ ¬e2 can be triggered by multiple events (i.e., all events except e1 and e2) and, therefore, its label will remain the same.

5.3.2 Dealing with the Non-Determinism

Automata created for LTL formulas are non-deterministic, i.e., one state can have multiple output transitions that are triggered by the same event but that have different target states.
Consider, for example, state s0 in the automaton in Figure 5.12. If event (prescribe rehabilitation,tc) ∈ E occurs in this state, all three output transitions (s0, (prescribe rehabilitation,tc), s0), (s0, ¬(perform surgery,tc), s0) and (s0, -, s1) can be triggered. A run of a finite trace on a non-deterministic automaton represents the execution of the trace on the automaton, and it transfers the automaton from one set of possible states to another set of possible states, as specified in Definition 5.3.4. We call such a run a non-deterministic run or nd-run.
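Such a run can be sketched in code: from each set of possible states, an event moves the automaton to the set of all targets of transitions whose label matches the event. The function names are my own, labels are modeled as predicates, and the example automaton approximates Figure 5.12 (the self-loop "anything but surgery completed" subsumes the separate rehabilitation self-loop on s0):

```python
def nd_run(transitions, initial, trace):
    """transitions: list of (source, predicate, target) triples.
    Returns the sequence of possible-state sets, or None if the run dies."""
    possible = set(initial)
    history = [possible]
    for event in trace:
        possible = {tgt for (src, pred, tgt) in transitions
                    if src in possible and pred(event)}
        if not possible:
            return None          # no transition can fire: no nd-run exists
        history.append(possible)
    return history

def accepts(transitions, initial, accepting, trace):
    run = nd_run(transitions, initial, trace)
    return run is not None and bool(run[-1] & set(accepting))

# Response constraint of Figure 5.11: surgery completed => eventually rehab.
surgery_c = ("perform surgery", "completed")
rehab_c = ("prescribe rehabilitation", "completed")
T = [("s0", lambda e: e != surgery_c, "s0"),   # no new obligation
     ("s0", lambda e: True, "s1"),             # guess: obligation pending
     ("s1", lambda e: True, "s1"),
     ("s1", lambda e: e == rehab_c, "s0")]     # rehabilitation discharges it

assert accepts(T, ["s0"], ["s0"], [surgery_c, rehab_c])
assert not accepts(T, ["s0"], ["s0"], [surgery_c])   # obligation left open
```

Tracking a set of possible states sidesteps the non-determinism without determinizing the automaton first.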
Definition 5.3.4 (Non-deterministic run (nd-run)) Let FSA = ⟨E, S, T, S0, SF⟩ be a finite state automaton and let σ ∈ E* be a trace. If the finite sequence FSA(σ) = (S0, σ[1], S1), (S1, σ[2], S2), ..., (S_{|σ|-1}, σ[|σ|], S_{|σ|}), such that ∀1 ≤ i ≤ |σ| : S_i = {s' ∈ S | ∃s ∈ S_{i-1} : (s, σ[i], s') ∈ T} and ∀1 ≤ i ≤ |σ| : S_i ≠ ∅, exists, then we say that FSA(σ) is the nd-run of trace σ on the automaton FSA. We use ⟦FSA⟧ to denote the set of all traces σ ∈ E* such that FSA(σ) exists. If σ ∈ ⟦FSA⟧, then we use S_σ^FSA to denote the last set of states in FSA(σ), i.e., S_σ^FSA = S_{|σ|}.

Table 5.5 shows the nd-run of a trace σ ∈ E* containing eight events on the automaton FSA shown in Figure 5.12. The first row refers to the initial state of the automaton, i.e., the empty trace, and each row underneath refers to one element (i.e., event) of the trace. The first column contains the events from trace σ. The second column shows all transitions that can be triggered by the event given the current set of possible states (this set is shown in the third column of the previous row). Finally, the third column shows the new set of possible states after the event has occurred. As Table 5.5 shows, the nd-run of trace σ starts in the set of initial states S0 = {s0} and ends in the set of possible states S_σ^FSA = {s0, s1}.

Table 5.5: A non-deterministic run on the automaton FSA shown in Figure 5.12 (recall that ts, tc ∈ T are event types such that ts = started and tc = completed); examine = examine patient, surgery = perform surgery, rehabilitation = prescribe rehabilitation.
σ[i]                        triggered transitions                                 new states
initial state               —                                                     S0 = {s0}
σ[1] = (examine,ts)         (s0, ¬(surgery,tc), s0), (s0, -, s1)                  S1 = {s0, s1}
σ[2] = (examine,tc)         (s0, ¬(surgery,tc), s0), (s0, -, s1), (s1, -, s1)     S2 = {s0, s1}
σ[3] = (surgery,ts)         (s0, ¬(surgery,tc), s0), (s0, -, s1), (s1, -, s1)     S3 = {s0, s1}
σ[4] = (surgery,tc)         (s0, -, s1), (s1, -, s1)                              S4 = {s1}
σ[5] = (examine,ts)         (s1, -, s1)                                           S5 = {s1}
σ[6] = (examine,tc)         (s1, -, s1)                                           S6 = {s1}
σ[7] = (rehabilitation,ts)  (s1, -, s1)                                           S7 = {s1}
σ[8] = (rehabilitation,tc)  (s1, -, s1), (s1, (rehabilitation,tc), s0)            S_σ^FSA = {s0, s1}

If a trace brings the automaton to an accepting state, then the automaton accepts this trace. In the case of a non-deterministic run, we say that the automaton FSA accepts a trace σ ∈ E* if and only if the non-deterministic run of σ leaves
FSA in a set of possible states S such that at least one of the possible states is accepting, as specified in Definition 5.3.5.

Definition 5.3.5 (Acceptance) Let FSA = ⟨E, S, T, S0, SF⟩ be a finite state automaton and let σ ∈ E* be a trace. We say that FSA accepts trace σ if and only if it holds that (σ ∈ ⟦FSA⟧) ∧ (S_σ^FSA ∩ SF ≠ ∅). The language of FSA, L(FSA) ⊆ E*, consists of all traces accepted by FSA.

5.3.3 Retrieving the Set of Satisfying Traces

If FSA_f is an automaton created for some well-formed LTL formula f, then the language of this automaton, L(FSA_f), represents exactly all traces σ ∈ E* that satisfy the formula f (i.e., σ ⊨ f) [74, 111, 112, 158]. Hence, all traces that satisfy the LTL formula f are represented by the automaton FSA_f generated from f (cf. Property 5.3.6).

Property 5.3.6 (∀σ ∈ E* : (σ ⊨ f) ⟺ (σ ∈ L(FSA_f))) Let f be a well-formed LTL formula and let FSA_f = ⟨E, S, T, S0, SF⟩ be the automaton generated for formula f; then it holds that ∀σ ∈ E* : (σ ⊨ f) ⟺ (σ ∈ L(FSA_f)).

Proof. Follows from the FSA_f construction in [111, 112, 158].

Due to the fact that the language L(FSA_f) of the automaton FSA_f generated from the LTL formula f of a constraint (E, f) ∈ C accepts exactly the traces that satisfy the LTL formula f, the automaton FSA_f represents the set of traces that satisfy the constraint (E, f), as shown in Property 5.3.7.

Property 5.3.7 (E_(E,f) = L(FSA_f)) Let c ∈ C be a constraint where c = (E, f) such that f is a well-formed LTL formula over E. If FSA_f = ⟨E, S, T, S0, SF⟩ is an automaton generated for formula f, then it holds that E_c = L(FSA_f).

Proof. It holds that ∀σ ∈ E* : (σ ⊨ f) ⟺ (σ ∈ L(FSA_f)) (cf. the automaton construction algorithm in [111, 112, 158]), i.e., ∀σ ∈ E* : (f(σ) = true) ⟺ (σ ∈ L(FSA_f)) (cf. Definition 5.1.1). It also holds that E_c = {σ ∈ E* | σ ⊨ c}, where (σ ⊨ c) ⟺ f(σ) = true (cf. Definition 4.1.4). Therefore, it holds that E_(E,f) = L(FSA_f).

Automata generated from LTL formulas are consistent with the two requirements on sets of accepting traces for constraints (and constraint models).
First, note that, although it is built for a constraint on two particular activities (i.e., E = {(perform surgery,tc), (prescribe rehabilitation,tc)}), the automaton in Figure 5.12 can run traces that contain events involving activities other than perform surgery and prescribe rehabilitation, i.e., a trace can contain events e ∉ E
that are not in the namespace. The transitions (s0, ¬(surgery,tc), s0), (s0, -, s1) and (s1, -, s1) can be triggered by other events, for example, by the events (examine,ts) and (examine,tc). Indeed, a trace containing these two events can run on the automaton, as shown in Table 5.5. Moreover, these two events can be replaced by any other events not involving the activities perform surgery and prescribe rehabilitation, e.g., by the events (drink coffee,ts) and (drink coffee,tc). As explained in Chapter 4, this is an important property of sets of accepting traces that enables easy ad-hoc change (cf. Section 4.5). Second, trace σ from Table 5.5 is accepted by the automaton FSA_f in Figure 5.12 for the constraint c = (E, f) in Figure 5.11, because it leaves the automaton in the set of possible states S_σ^FSA = {s0, s1}, where s0 is an accepting state. Trace σ contains many events that are not in the namespace E of the constraint. In fact, if any of the events in σ that are not in the namespace (e ∉ E) were replaced by other events not in the namespace, the automaton would still accept such a changed trace. This is in line with the requirement of the definition on page 87, which says that if a trace σ satisfies the constraint, then all traces from the projection of σ on E satisfy the constraint and vice versa. Figure 5.13 shows another example of a ConDec constraint c = (E, f). This constraint is created from the precedence template (cf. Section 5.2.2) and, therefore, it has the template's graphical representation and LTL formula. This constraint specifies that activity perform surgery cannot be executed before activity perform X ray, i.e., formula f specifies that the events (perform surgery,ts) and (perform surgery,tc) cannot occur before the event (perform X ray,tc).
Figure 5.13: An example of a precedence constraint (recall that ts, tc ∈ T are event types such that ts = started and tc = completed). (a) The precedence template: (¬((B,ts) ∨ (B,tc))) W (A,tc). (b) Semantics of the constraint: c = (E, f) with E = {(perform X ray,tc), (perform surgery,ts), (perform surgery,tc)} and f = (¬((perform surgery,ts) ∨ (perform surgery,tc))) W (perform X ray,tc). (c) Graphical representation of the constraint.

Figure 5.14 shows the finite state automaton FSA_f generated from the LTL formula of the precedence constraint c in Figure 5.13. The language L(FSA_f) of the automaton represents exactly all traces that satisfy constraint c. Indeed, the automaton prevents the occurrence of the events (perform surgery,ts) and (perform surgery,tc) before the event (perform X ray,tc), since the transition (s1, -, s1) is the only transition that can be triggered by (perform surgery,ts) or (perform surgery,tc), and state s1 can be reached only via the transition (s0, (perform X ray,tc), s1). For illustration, in Table 5.6 we present the non-deterministic runs (nd-runs) of two traces on the finite state automaton FSA_f in Figure 5.14, which is
generated for the constraint c = (E, f) in Figure 5.13. Each trace starts at the initial state of the automaton. Further, for each event in the trace, the new set of possible states of the automaton is given. Trace σ1 is accepted by the automaton FSA_f and, therefore, σ1 satisfies the constraint from Figure 5.13, i.e., the events (perform surgery,ts) and (perform surgery,tc) are preceded by the event (perform X ray,tc) in σ1. Trace σ2 is not accepted by the automaton FSA_f (i.e., it is not possible to trigger event (perform surgery,ts) from state s0) and, therefore, σ2 does not satisfy the constraint from Figure 5.13, i.e., the event (perform surgery,ts) is not preceded by the event (perform X ray,tc) in σ2.

Figure 5.14: A finite state automaton FSA_f for the constraint in Figure 5.13 (recall that ts, tc ∈ T are event types such that ts = started and tc = completed). State s0 has a self-loop labeled ¬(perform surgery,ts) ∧ ¬(perform surgery,tc) and a transition labeled (perform X ray,tc) to state s1, which has a self-loop labeled "-".

Table 5.6: Non-deterministic runs of two traces on the automaton in Figure 5.14, generated for the constraint c = (E, f) in Figure 5.13 (recall that ts, tc ∈ T are event types such that ts = started and tc = completed); examine = examine patient, surgery = perform surgery, X ray = perform X ray.

i   σ1[i] (σ1 ∈ E_c)   new states        σ2[i] (σ2 ∉ E_c)   new states
-   initial state      S0 = {s0}         initial state      S0 = {s0}
1   (examine,ts)       S1 = {s0}         (examine,ts)       S1 = {s0}
2   (examine,tc)       S2 = {s0}         (examine,tc)       S2 = {s0}
3   (X ray,ts)         S3 = {s0}         (surgery,ts)       {}
4   (X ray,tc)         S4 = {s0, s1}
5   (examine,ts)       S5 = {s0, s1}
6   (examine,tc)       S6 = {s0, s1}
7   (surgery,ts)       S7 = {s1}
8   (surgery,tc)       S_σ1^FSA = {s1}

5.4 ConDec Models

ConDec constraint models are presented graphically to users: (1) activities are presented as labeled rectangles, and (2) ConDec constraints are presented as graphical representations of their templates, i.e., as special lines and symbols between activities.
Figure 5.15 shows a ConDec model consisting of three activities (i.e., curse, pray and bless) and two mandatory constraints. The constraint between activities curse and pray is based on the response template, i.e., it specifies that each occurrence of event (curse,t_c) has to be eventually followed by at least one occurrence of event (pray,t_c). The constraint on the activity pray is based on the 1..* template, i.e., it specifies that there has to be at least one occurrence of event (pray,t_c). As Figure 5.15 shows, LTL formulas are associated with constraints but they are hidden in the model, i.e., each constraint is represented as the graphical representation of its template. Note that both constraints in Figure 5.15 are mandatory, i.e., they are represented as full lines. Optional constraints are also represented as graphical representations of their templates, but as dashed lines. Further in this chapter, we will show a ConDec model with an optional constraint.

Figure 5.15: A ConDec model with activities curse, pray and bless, the response constraint [] ((curse,tc) -> <> (pray,tc)) between curse and pray, and the 1..* constraint <> (pray,tc) on pray (recall that t_c ∈ T is an event type such that t_c = completed).

Due to the fact that all mandatory and optional constraints in a ConDec model are ConDec constraints, i.e., LTL constraints, every ConDec model is an LTL constraint model. The set of satisfying traces of any LTL constraint model (e.g., a ConDec model) is determined based on the LTL formulas associated with the mandatory constraints of the model. The mandatory formula of such a model is defined as the conjunction of the LTL formulas of all mandatory constraints, as specified in Definition 5.4.1. For example, the mandatory formula for the ConDec model cm in Figure 5.15 is f_cm = ([]((curse,t_c) -> <>(pray,t_c))) /\ (<>(pray,t_c)).

Definition 5.4.1 (Mandatory formula f_cm) Let cm ∈ U_cm be a constraint model where cm = (A, C_M, C_O) such that all mandatory and optional constraints are LTL constraints. The mandatory formula for model cm is defined as

  f_cm = true                        if C_M = ∅;
  f_cm = /\_{(E,f) ∈ C_M} f          otherwise.
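On a finite trace, the two mandatory constraints of Figure 5.15 have a direct reading, and the mandatory formula is just their conjunction. A minimal sketch (ours; the helper names `response`, `existence` and `mandatory_formula` are invented, not thesis notation):

```python
# Finite-trace reading of the two mandatory constraints of Figure 5.15.

def response(trace, a, b):
    """[] (a -> <> b): every occurrence of a is eventually followed by b."""
    return all(b in trace[i + 1:] for i, e in enumerate(trace) if e == a)

def existence(trace, a):
    """<> a: a occurs at least once (the 1..* template)."""
    return a in trace

def mandatory_formula(trace):
    """f_cm = ([]((curse,tc) -> <>(pray,tc))) /\ (<>(pray,tc))."""
    return (response(trace, ("curse", "tc"), ("pray", "tc"))
            and existence(trace, ("pray", "tc")))
```

For instance, a trace in which curse is completed and pray is completed afterwards satisfies f_cm, while a trace where the last curse is never followed by a pray does not.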
Because the mandatory formula of a model corresponds to the conjunction of all mandatory constraints, the language of the automaton generated for this formula represents the set of satisfying traces for the model, as shown by Property 5.4.2.
Property 5.4.2 (E_cm = L(FSA_f_cm)) Let cm ∈ U_cm be a ConDec constraint model such that cm = (A, C_M, C_O) and let FSA_f_cm be a finite state automaton where f_cm is the mandatory formula for cm. Then E_cm = L(FSA_f_cm).

Proof. If C_M = ∅, then f_cm = true (cf. Definition 5.4.1). Therefore, ∀σ ∈ E: σ ⊨ f_cm (cf. operator true in Definition 5.1.1), i.e., L(FSA_f_cm) = E (cf. Property 5.3.6). Further, because C_M = ∅, it also holds that E_cm = E (cf. Definition on page 90). Therefore, E_cm = L(FSA_f_cm) = E. If C_M ≠ ∅, then, on the one hand, f_cm = /\_{(E,f) ∈ C_M} f (cf. Definition 5.4.1). Therefore, it also holds that (σ ∈ L(FSA_f_cm)) ⟺ (∀(E,f) ∈ C_M: f(σ) = true) (cf. operator /\ in Definition 5.1.1 and Definition 5.3.5). On the other hand, it holds that (σ ∈ E_cm) ⟺ (∀(E,f) ∈ C_M: f(σ) = true) (cf. definitions on page 87 and on page 90). Therefore, (σ ∈ L(FSA_f_cm)) ⟺ (σ ∈ E_cm), i.e., E_cm = L(FSA_f_cm).

The automaton FSA_f_cm generated for the mandatory formula f_cm of an LTL constraint model cm ∈ U_cm (e.g., a ConDec model) is a finite representation of the set of satisfying traces E_cm of the model cm. This approach makes checking whether some trace σ ∈ E satisfies model cm a trivial operation: it is enough to check whether automaton FSA_f_cm accepts trace σ. Figure 5.16 shows the automaton FSA_f_cm generated for the mandatory formula f_cm = ([]((curse,t_c) -> <>(pray,t_c))) /\ (<>(pray,t_c)) of the ConDec model cm in Figure 5.15.

Figure 5.16: Automaton FSA_f_cm that accepts the satisfying traces E_cm for the ConDec model cm in Figure 5.15, with transitions labeled !(curse,tc), (pray,tc) and - between states s_0 and s_1 (recall that t_c ∈ T is an event type such that t_c = completed).

The automaton in Figure 5.16 has one initial (i.e., s_0) and one accepting (i.e., s_1) state.
The language of this automaton represents all traces that satisfy the ConDec model cm in Figure 5.15, i.e., it represents all traces that satisfy both mandatory constraints of the model. First, it is clear that only traces that contain at least one occurrence of event (pray,t_c) can satisfy cm, because the accepting set of states {s_0, s_1} can be reached if and only if event (pray,t_c) occurs. Therefore, only traces that satisfy constraint 1..* can satisfy this model. Second, if event (curse,t_c) occurs, the automaton's state is a non-accepting set of states (i.e., {s_0}), and the automaton will remain in this set of states until
the first occurrence of event (pray,t_c). In other words, only traces that satisfy constraint response can satisfy this model. Table 5.7 shows three non-deterministic runs on the automaton FSA_f_cm shown in Figure 5.16, which is generated for the mandatory formula of the model cm in Figure 5.15. On the one hand, traces σ_1 and σ_2 are accepted by this automaton and, therefore, these traces satisfy the model cm. In these two traces each occurrence of event (curse,t_c) is followed by an occurrence of event (pray,t_c) (i.e., constraint response is satisfied) and there is one occurrence of event (pray,t_c) (i.e., constraint 1..* is satisfied). Therefore, traces σ_1 and σ_2 satisfy the model cm. On the other hand, trace σ_3 is not accepted by the automaton and, therefore, this trace does not satisfy the model cm. The occurrence of event (curse,t_c) is not followed by an occurrence of event (pray,t_c) in trace σ_3 (i.e., constraint response is not satisfied), although there is one occurrence of event (pray,t_c) (i.e., constraint 1..* is satisfied). Therefore, trace σ_3 does not satisfy the model cm.
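The accept/reject verdicts for these traces can also be reproduced with a small deterministic automaton that has the same language as FSA_f_cm in Figure 5.16. The sketch below is ours, with an invented two-state encoding ("need_pray" vs. "ok") rather than the generated automaton itself:

```python
# Deterministic automaton with the same language as FSA_f_cm of Figure 5.16
# (our own state encoding; only acceptance is compared to Table 5.7).

def run_dfa(trace):
    state = "need_pray"            # 1..* not yet satisfied, or a pending curse
    for event in trace:
        if event == ("pray", "tc"):
            state = "ok"           # response discharged and 1..* satisfied
        elif event == ("curse", "tc"):
            state = "need_pray"    # a new pending response obligation
    return state

def satisfies_cm(trace):
    return run_dfa(trace) == "ok"

sigma1 = [("bless", "ts"), ("bless", "tc"), ("curse", "ts"), ("curse", "tc"),
          ("bless", "ts"), ("bless", "tc"), ("curse", "ts"), ("curse", "tc"),
          ("pray", "ts"), ("pray", "tc")]
sigma3 = [("pray", "ts"), ("pray", "tc"), ("curse", "ts"), ("curse", "tc"),
          ("bless", "ts"), ("bless", "tc")]
```

Events on activities outside the model (e.g., become holy) fall through both branches and do not change the state, which is exactly the property that makes trace σ_2 acceptable.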
Table 5.7: Examples of (non-)accepting traces for the automaton FSA_f_cm in Figure 5.16 generated for model cm in Figure 5.15 (recall that t_s, t_c ∈ T are event types such that t_s = started and t_c = completed).

       σ_1 ∈ E_cm                     σ_2 ∈ E_cm                            σ_3 ∉ E_cm
  i    σ_1[i]          new states     σ_2[i]               new states       σ_3[i]        new states
       initial         {s_0}          initial              {s_0}            initial       {s_0}
  1    (bless,t_s)     {s_0}          (bless,t_s)          {s_0}            (pray,t_s)    {s_0, s_1}
  2    (bless,t_c)     {s_0}          (bless,t_c)          {s_0}            (pray,t_c)    {s_0, s_1}
  3    (curse,t_s)     {s_0}          (curse,t_s)          {s_0}            (curse,t_s)   {s_0}
  4    (curse,t_c)     {s_0}          (curse,t_c)          {s_0}            (curse,t_c)   {s_0}
  5    (bless,t_s)     {s_0}          (become holy,t_s)    {s_0}            (bless,t_s)   {s_0}
  6    (bless,t_c)     {s_0}          (become holy,t_c)    {s_0}            (bless,t_c)   {s_0}
  7    (curse,t_s)     {s_0}          (curse,t_s)          {s_0}
  8    (curse,t_c)     {s_0}          (curse,t_c)          {s_0}
  9    (pray,t_s)      {s_0}          (pray,t_s)           {s_0}
  10   (pray,t_c)      {s_0, s_1}     (pray,t_c)           {s_0, s_1}

Note that trace σ_2 is accepted by FSA_f_cm (and satisfies cm) although σ_2 contains events (become holy,t_s) and (become holy,t_c), i.e., it contains events on activities that are not in the model cm (cf. Figure 5.15). As discussed in Chapter 4, this is a desirable property of constraint languages, which facilitates ad-hoc change. For example, if activity become holy was originally in the model, it is possible that it was executed and later removed from the model.

5.5 ConDec Model: Fractures Treatment Process

By using graphical notations of templates for the representation of constraints, ConDec models can be easily presented to people not familiar with LTL. Figure 5.17
shows a ConDec model cm_FT for the Fractures Treatment process (cf. Example on page 96). Labels c_1,...,c_6 mark the constraints of the model cm_FT. All six constraints are created from ConDec templates. The first five constraints are mandatory, i.e., presented as full lines. The sixth constraint is optional and, therefore, presented as a dashed line. Constraint c_4 is created from the precedence template, but it is branched over several activities. This means that, although the precedence template has two parameters A and B, in this model we replace parameter A with activity perform X ray and we branch parameter B over activities apply cast, perform reposition and perform surgery. By doing this, we specify that none of the activities apply cast, perform reposition or perform surgery can be executed before activity perform X ray is executed.

Figure 5.17: ConDec model for the Fractures Treatment process, with constraints c_1 (init on examine patient), c_2 (1 of 4 over apply cast, prescribe sling, perform reposition and perform surgery), c_3 (apply cast succession remove cast), c_4 (perform X ray precedence apply cast, perform reposition and perform surgery), c_5 (check X ray risk alternate precedence perform X ray) and c_6 (perform surgery response prescribe rehabilitation).

Although the ConDec model in Figure 5.17 presents constraints graphically, each of these constraints has a hidden LTL formula that is inherited from the constraint's template. As we explained before, when a constraint is created between activities in a ConDec model, these activities replace the formal parameters of the template. Constraint c_1 is created from the init existence template (cf. Section 5.2.1) and constraint c_2 is based on the 1 of 4 choice template (cf. Section 5.2.4). Constraints c_3, c_4, c_5 and c_6 are created from the succession, precedence, alternate precedence and response relation templates, respectively (cf. Section 5.2.2).
Table 5.8 shows the LTL formulas for all constraints in the ConDec model in Figure 5.17. Constraint c_4 is a special case: here the second template parameter B is branched over activities apply cast, perform reposition and perform surgery, i.e., event (B,t_s) is replaced with the disjunction of events (apply cast,t_s), (perform reposition,t_s) and (perform surgery,t_s), and (B,t_c) is replaced with the disjunction of events (apply cast,t_c), (perform reposition,t_c) and (perform surgery,t_c).
Table 5.8: LTL formulas for the constraints in the ConDec model in Figure 5.17 (recall that t_s, t_c, t_x ∈ T are event types such that t_s = started, t_c = completed and t_x = cancelled).

  constraint c_i      LTL formula f_i
  c_1 = (E_1, f_1)    f_1 = ((examine patient,t_s) \/ (examine patient,t_x)) U (examine patient,t_c)
  c_2 = (E_2, f_2)    f_2 = (<>(apply cast,t_c)) \/ (<>(prescribe sling,t_c)) \/ (<>(perform reposition,t_c)) \/ (<>(perform surgery,t_c))
  c_3 = (E_3, f_3)    f_3 = ([]((apply cast,t_c) -> <>(remove cast,t_c))) /\ (!((remove cast,t_s) \/ (remove cast,t_c)) W (apply cast,t_c))
  c_4 = (E_4, f_4)    f_4 = !(((apply cast,t_s) \/ (perform reposition,t_s) \/ (perform surgery,t_s)) \/ ((apply cast,t_c) \/ (perform reposition,t_c) \/ (perform surgery,t_c))) W (perform X ray,t_c)
  c_5 = (E_5, f_5)    f_5 = (!((perform X ray,t_s) \/ (perform X ray,t_c)) W (check X ray risk,t_c)) /\ ([]((perform X ray,t_c) -> o(!((perform X ray,t_s) \/ (perform X ray,t_c)) W (check X ray risk,t_c))))
  c_6 = (E_6, f_6)    f_6 = []((perform surgery,t_c) -> <>(prescribe rehabilitation,t_c))

5.6 Execution of ConDec Instances

Execution of instances of LTL constraint models, and therefore of ConDec instances, is based on the LTL specifications of constraints and, more precisely, on the automaton generated from the mandatory formula of the model. In sections 5.6.1, 5.6.2 and 5.6.3 we show how this automaton is used to determine the state of the instance (cf. Section 4.4.1), the enabled events (cf. Section 4.4.2), and the states of constraints (cf. Section 4.4.3), respectively.

5.6.1 Instance State

The automaton FSA_f_cm generated for the mandatory formula of the instance's model, together with the instance trace σ, is used to easily determine the state of an instance ci = (σ,cm) at any moment of execution. Property 5.6.1 shows that the instance is satisfied if and only if the trace σ is accepted by the automaton FSA_f_cm.
In other words, if the nd-run of σ on FSA_f_cm leaves FSA_f_cm in a set of possible states that contains an accepting state, then the instance state is satisfied. Recall that ω(ci) = satisfied if σ ∈ E_cm for some instance ci = (σ,cm) (cf. Definition 4.4.2).

Property 5.6.1 (ω((σ,cm)) = satisfied if and only if σ ∈ L(FSA_f_cm)) Let ci ∈ U_ci be an instance of an LTL constraint model cm ∈ U_cm where ci = (σ,cm) and let FSA_f_cm be the automaton generated for the mandatory formula f_cm. It holds that ω(ci) = satisfied if and only if σ ∈ L(FSA_f_cm).

Proof. Since (ω(ci) = satisfied) ⟺ (σ ∈ E_cm) (cf. Definition 4.4.2) and E_cm = L(FSA_f_cm) (cf. Property 5.4.2), we know that (ω(ci) = satisfied) ⟺ (σ ∈ L(FSA_f_cm)).
An instance ci = (σ,cm) is in the temporarily violated state if σ is not accepted by the automaton FSA_f_cm generated for the mandatory formula, but the trace σ can be run on FSA_f_cm, as Property 5.6.2 shows. In other words, if the nd-run of σ on FSA_f_cm leaves FSA_f_cm in a set of possible states that does not contain an accepting state, then the instance state is temporarily violated.

Property 5.6.2 (ω((σ,cm)) = temporarily violated if and only if σ ∉ L(FSA_f_cm) and σ ∈ FSA_f_cm) Let ci ∈ U_ci be an instance of an LTL constraint model cm ∈ U_cm where ci = (σ,cm). Let FSA_f_cm = (E,S,T,S_0,S_F) be the automaton generated for f_cm and FSA_f_cm(σ) be the non-deterministic run of σ on FSA_f_cm. It holds that ω(ci) = temporarily violated if and only if σ ∉ L(FSA_f_cm) and σ ∈ FSA_f_cm.²

Proof. If σ ∉ L(FSA_f_cm), then σ ∉ E_cm because E_cm = L(FSA_f_cm) (cf. Property 5.4.2). If σ ∈ FSA_f_cm, then the set of possible states after the run of σ is not empty and contains a state from which an accepting state in S_F is reachable (cf. Definition 5.3.3). Therefore, there exists γ ∈ E such that automaton FSA_f_cm accepts trace σ + γ (cf. Definitions 5.3.4 and 5.3.5), i.e., ∃γ ∈ E: σ + γ ∈ L(FSA_f_cm), and hence ∃γ ∈ E: σ + γ ∈ E_cm (cf. Property 5.4.2). Because σ ∉ E_cm and ∃γ ∈ E: σ + γ ∈ E_cm, it holds that ω(ci) = temporarily violated.

An instance ci = (σ,cm) is in the violated state if σ cannot be run on the automaton FSA_f_cm generated for the mandatory formula, as shown in Property 5.6.3. In other words, if the nd-run of σ on FSA_f_cm does not exist, then the instance state is violated.

Property 5.6.3 (ω((σ,cm)) = violated if and only if σ ∉ FSA_f_cm) Let ci ∈ U_ci be an instance of an LTL constraint model cm ∈ U_cm where ci = (σ,cm). Let FSA_f_cm be the automaton generated for f_cm and FSA_f_cm(σ) be the non-deterministic run of σ on FSA_f_cm. It holds that ω(ci) = violated if and only if σ ∉ FSA_f_cm.

Proof.
If σ ∉ FSA_f_cm, then σ ∉ L(FSA_f_cm) and there is no γ ∈ E such that σ + γ ∈ L(FSA_f_cm) (cf. Definitions 5.3.4 and 5.3.5). Therefore, it also holds that σ ∉ E_cm and there is no γ ∈ E such that σ + γ ∈ E_cm, i.e., ω(ci) = violated.

Figure 5.18 shows (a) a ConDec model cm and (b) the automaton created for the mandatory formula f_cm of the model cm. The model has four

² Recall that FSA_f_cm uses the unblocked automaton from which dead ends are removed (cf. definitions and 5.3.3).
activities and three constraints. Constraint response specifies that every occurrence of event (curse,t_c) has to be eventually followed by at least one occurrence of event (pray,t_c). Constraint 1..* specifies that (pray,t_c) has to occur at least once. Constraint precedence specifies that events (become holy,t_s) and (become holy,t_c) cannot occur before the first occurrence of event (pray,t_c). In other words, the constraints in this model specify that (1) if one curses, then one has to pray at least once afterwards, (2) one has to pray at least once, and (3) one cannot become holy until one has prayed. Note that the automaton in Figure 5.18(b) is a simplified version of the original automaton generated for f_cm using the algorithm presented in [112, 113]: for simplicity we show a smaller automaton with the same language as the language of the original automaton.

Figure 5.18: A ConDec model with activities curse, pray, become holy and bless, and constraints [] ((curse,tc) -> <> (pray,tc)) (response), <> (pray,tc) (1..*) and !(become holy,ts) W (pray,tc) (precedence), shown as (a) the ConDec model and (b) the automaton for the ConDec model with states s_0, s_1 and s_2 (recall that t_s, t_c ∈ T are event types such that t_s = started and t_c = completed).

Figure 5.19 shows how a satisfied instance ci = (σ,cm) of the ConDec model cm in Figure 5.18(a) changes states depending on its execution on the automaton FSA_f_cm (cf. Figure 5.18(b)). This instance is correct because its state is never violated. Initially, the automaton is in the set of possible states {s_0} and, therefore, the instance is temporarily violated. The state of the instance is initially temporarily violated because none of the possible states is accepting (i.e., s_0 is not accepting) but an accepting state (i.e., s_1) is reachable from one of the possible states (i.e., s_1 is reachable from s_0).
On the other hand, the instance state is satisfied every time the set of possible states contains at least one accepting state (i.e., s_1). The states of the instance in Figure 5.19 reflect the mandatory constraints of the model in Figure 5.18. The instance is temporarily violated until the first occurrence of event (pray,t_c) (the 1..* constraint). The instance becomes temporarily violated again after the occurrence of event (curse,t_c) and becomes satisfied only after a new occurrence of event (pray,t_c) (the response constraint). The precedence constraint is fulfilled because events (become holy,t_s) and (become holy,t_c) occur only after event (pray,t_c). Figure 5.20 shows how a violated instance ci = (σ,cm) of the ConDec model cm in Figure 5.18(a) changes states depending on its execution on the automaton
Figure 5.19: A satisfied instance of the model in Figure 5.18, showing, after each event of the trace, the set of possible states of the automaton and the corresponding instance state (temporarily violated or satisfied) (recall that t_s, t_c ∈ T are event types such that t_s = started and t_c = completed).

Figure 5.20: A violated instance of the model in Figure 5.18: after the trace (curse,t_s), (curse,t_c), (bless,t_s), (bless,t_c), the event (become holy,t_s) cannot be replayed on the automaton and the instance becomes violated (recall that t_s, t_c ∈ T are event types such that t_s = started and t_c = completed).

FSA_f_cm (cf. Figure 5.18(b)). Execution of event (become holy,t_s) brings this instance to the violated state, because there is no non-deterministic run of this trace on FSA_f_cm. Therefore, the trace is not accepted by FSA_f_cm and an accepting state is not reachable anymore.
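The three-way classification of Properties 5.6.1, 5.6.2 and 5.6.3 can be sketched directly on a run. The code below is ours: it uses a hand-built deterministic automaton with the same language as the one of Figure 5.18(b) (state names q0, q1, q2 are invented), and it assumes, as in the thesis, that dead ends have already been removed so that any surviving run can still reach an accepting state:

```python
# Sketch: deciding the instance state (satisfied / temporarily violated /
# violated) from the run of a trace, per Properties 5.6.1-5.6.3.

def instance_state(trace, transitions, initial, accepting):
    possible = set(initial)
    for event in trace:
        possible = {dst for (src, guard, dst) in transitions
                    if src in possible and guard(event)}
        if not possible:
            return "violated"          # no nd-run exists (Property 5.6.3)
    if possible & set(accepting):
        return "satisfied"             # trace accepted (Property 5.6.1)
    return "temporarily violated"      # run exists, not accepting (Property 5.6.2)

HOLY = {("become holy", "ts"), ("become holy", "tc")}
# q0: not yet prayed, q1: prayed with no pending curse, q2: pending curse.
fsa = [
    ("q0", lambda e: e not in HOLY and e != ("pray", "tc"), "q0"),
    ("q0", lambda e: e == ("pray", "tc"), "q1"),
    ("q1", lambda e: e != ("curse", "tc"), "q1"),
    ("q1", lambda e: e == ("curse", "tc"), "q2"),
    ("q2", lambda e: e != ("pray", "tc"), "q2"),
    ("q2", lambda e: e == ("pray", "tc"), "q1"),
]
INIT, ACC = {"q0"}, {"q1"}
```

As in Figure 5.20, attempting to become holy before praying kills the run and yields "violated", while cursing after praying only downgrades the instance to "temporarily violated".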
The reason for the violation is the fact that the precedence constraint cannot be fulfilled in the future, i.e., become holy is executed before the activity pray.

5.6.2 Enabled Events

Enabled events are events that can be triggered during the execution of an instance of a constraint model such that the instance does not become violated, as described in Section 4.4.2. Event e ∈ A × T is enabled in instance ci = (σ,cm) of an LTL constraint model (e.g., a ConDec model) if, in the current set of possible states of the automaton generated for the mandatory formula f_cm, there exists an outgoing transition that can be triggered by the event e, as the following property shows. Consider, for example, the model and its automaton in Figure 5.18. Event (become holy,t_s) is not enabled in instances of this model as long as the instance's automaton stays in the set of possible states {s_0}, because neither of the transitions !(become holy,t_s) /\ !(become holy,t_c) and (pray,t_c) can trigger event (become holy,t_s). On the other hand, all other events involving activities in the model can be triggered via one or both transitions.
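This check can be sketched directly: an event is enabled exactly when some transition leaving a currently possible state accepts it. The sketch is ours, reusing an invented deterministic encoding of the Figure 5.18 model (states q0, q1, q2), and checking candidates over the finite set of events on the model's activities:

```python
# Sketch: enabled events of an instance, computed from the current set of
# possible states of an explicitly listed automaton.

def enabled_events(possible_states, candidates, transitions):
    return {e for e in candidates
            if any(src in possible_states and guard(e)
                   for (src, guard, dst) in transitions)}

HOLY = {("become holy", "ts"), ("become holy", "tc")}
fsa = [  # deterministic equivalent for Figure 5.18; state names are ours
    ("q0", lambda e: e not in HOLY and e != ("pray", "tc"), "q0"),
    ("q0", lambda e: e == ("pray", "tc"), "q1"),
    ("q1", lambda e: e != ("curse", "tc"), "q1"),
    ("q1", lambda e: e == ("curse", "tc"), "q2"),
    ("q2", lambda e: e != ("pray", "tc"), "q2"),
    ("q2", lambda e: e == ("pray", "tc"), "q1"),
]

candidates = {(a, t) for a in ("curse", "pray", "become holy", "bless")
              for t in ("ts", "tc")}
at_start = enabled_events({"q0"}, candidates, fsa)
```

At the start of execution (possible states {q0}), every event except the two become-holy events is enabled, matching the discussion above.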
Property (Enabled event) Let ci ∈ U_ci be an instance of an LTL constraint model cm ∈ U_cm where ci = (σ,cm) and cm = (A, C_M, C_O). Let FSA_f_cm = (E,S,T,S_0,S_F) be the automaton generated for f_cm. Event e ∈ A × T is enabled in ci (denoted by ci[e⟩) if and only if σ ∈ FSA_f_cm and there are a possible state s after the run of σ and a state s' ∈ S such that (s,e,s') ∈ T.

Proof. If (s,e,s') ∈ T, then σ + e ∈ FSA_f_cm (cf. definitions and 5.3.4). Then either σ + e ∈ L(FSA_f_cm), i.e., ω((σ + e,cm)) = satisfied (cf. Property 5.6.1), or σ + e ∉ L(FSA_f_cm), i.e., ω((σ + e,cm)) = temporarily violated (cf. Property 5.6.2). In other words, ω((σ + e,cm)) ≠ violated and it holds that ci[e⟩.

5.6.3 States of Constraints

States of constraints in an instance can provide useful information for users who are executing the instance (cf. Section 4.4.3). Properties 5.6.1, 5.6.2 and 5.6.3 can be used to monitor the states of all constraints in an instance ci = (σ,cm): for the LTL formula of each constraint, an automaton is created and analyzed given the trace σ to determine the state of the constraint. This method for monitoring states of constraints is used in our Declare system (presented in a later chapter).

5.7 Ad-hoc Change of ConDec Instances

As we discussed in Section 4.5, an ad-hoc change of a constraint instance is successful if the change does not bring the instance into the violated state (cf. Figure 4.4). Automata generated for ConDec models enable an easy implementation of ad-hoc change of ConDec instances. As shown in Property 5.7.1, an ad-hoc change of a ConDec instance is successful if the instance trace can be replayed on the mandatory automaton of the new model. Recall the ad-hoc instance change function (cf. Definition on page 107): an ad-hoc change of an instance ci to model cm' is successful (i.e., (ci,cm') is in the domain of the change function) if and only if the changed model does not bring the instance into the violated state.
Property 5.7.1 (Instance (σ,cm) is successfully changed to (σ,cm') if and only if σ ∈ FSA_f_cm') Let ci ∈ U_ci be an instance of an LTL constraint model where ci = (σ,cm) and let cm' ∈ U_cm be a constraint model. Let FSA_f_cm' be the automaton generated for f_cm'. The change of (σ,cm) to cm' is successful if and only if σ ∈ FSA_f_cm'.

Proof. If σ ∈ FSA_f_cm', then ω((σ,cm')) ≠ violated (cf. Property 5.6.3). Therefore, the change is successful (cf. Definition 4.5.1).
Consider, for example, the temporarily violated instance ci = (σ,cm) in Figure 5.21. The instance model cm is shown in Figure 5.21(a). This model consists of several activities and a response constraint between the activities curse and pray. This constraint specifies that, if curse is completed, then pray also has to be completed afterwards. The automaton FSA_f_cm generated for the mandatory formula of this model is shown in Figure 5.21(b). Figure 5.21(c) shows the instance trace σ, i.e., it shows the nd-run of the trace on the automaton in Figure 5.21(b) and the corresponding instance states. Next, we will show two examples of ad-hoc change of the instance in Figure 5.21: one example of a successful ad-hoc change and one example of an impossible (i.e., unsuccessful) ad-hoc change.

Figure 5.21: A ConDec instance ci = (σ,cm), with (a) the ConDec model (activities curse, pray, bless and become holy, and the response constraint [] ((curse,tc) -> <> (pray,tc))), (b) the automaton for the ConDec model, and (c) the states for the instance trace (curse,t_s), (curse,t_c), (become holy,t_s), (become holy,t_c), (bless,t_s), (bless,t_c), which leave the instance temporarily violated (recall that t_s, t_c ∈ T are event types such that t_s = started and t_c = completed).

Figure 5.22 shows an example of a successful ad-hoc change of the instance ci = (σ,cm) in Figure 5.21. Figure 5.22(a) shows the new model cm_S for the instance ci: activity bless is removed from the original instance model and constraint 1..* is added to it (cf. Figure 5.21(a)). The new 1..* constraint specifies that activity pray has to be executed at least once. Figure 5.22(b) shows the automaton FSA_f_cm_S generated for the mandatory formula of the new model cm_S. Figure 5.22(c) shows the nd-run of σ on the automaton FSA_f_cm_S.
In other words, the instance trace σ can be replayed on the new automaton FSA_f_cm_S in Figure 5.22(b) and, although the instance remains temporarily violated, this is a valid ad-hoc change. Note that, even though the activity bless is removed from the model after it was executed (trace σ contains events (bless,t_s) and (bless,t_c)), the ad-hoc change in Figure 5.22 is successful. This is due to the property that the set of satisfying traces of a model can contain activities that are not in the model (cf.
Figure 5.22: Ad-hoc change of the ConDec instance ci in Figure 5.21 is successful, with (a) the new ConDec model cm_S (constraints [] ((curse,tc) -> <> (pray,tc)) and <> (pray,tc)), (b) the automaton for the new model, and (c) the states for the instance trace, which leave the instance temporarily violated (recall that t_s, t_c ∈ T are event types such that t_s = started and t_c = completed).

Chapter 4). The only consequence of removing activity bless from the model is the fact that it will not be possible to execute this activity in the future (cf. enabled events in Definition and execution rule in Definition 4.4.4). Figure 5.23 shows an example of an impossible (unsuccessful) ad-hoc change of the instance ci = (σ,cm) in Figure 5.21. Figure 5.23(a) shows the new model cm_F for the instance ci: activity bless is removed from the original instance model and constraints 1..* and precedence are added to it (cf. Figure 5.21(a)). Constraint 1..* specifies that activity pray has to be executed at least once, while precedence specifies that activity become holy can be executed only after the activity pray. Figure 5.23(b) shows the automaton FSA_f_cm_F generated for the mandatory formula of the new model cm_F. Figure 5.23(c) shows that the nd-run of σ on the automaton FSA_f_cm_F does not exist, i.e., this change would violate the instance. In other words, the instance trace σ cannot be replayed on the new automaton FSA_f_cm_F in Figure 5.23(b) because it is not possible to trigger event (become holy,t_s) from the state s_0. Therefore, the ad-hoc change from Figure 5.23 is not possible.
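The replay test of Property 5.7.1 is a one-liner over the run function. The sketch below is ours: hand-built deterministic equivalents of the automata for cm_S (Figure 5.22) and cm_F (Figure 5.23), with invented state names:

```python
# Sketch: an ad-hoc change of instance (sigma, cm) to a new model cm' is
# allowed iff sigma can be replayed on the automaton of cm' (Property 5.7.1).

def replayable(trace, transitions, initial):
    possible = set(initial)
    for event in trace:
        possible = {dst for (src, guard, dst) in transitions
                    if src in possible and guard(event)}
        if not possible:
            return False  # run dies: the change would violate the instance
    return True

HOLY = {("become holy", "ts"), ("become holy", "tc")}
# cm_S of Figure 5.22: response + 1..* (every event can be replayed).
fsa_S = [("q0", lambda e: e != ("pray", "tc"), "q0"),
         ("q0", lambda e: e == ("pray", "tc"), "q1"),
         ("q1", lambda e: e != ("curse", "tc"), "q1"),
         ("q1", lambda e: e == ("curse", "tc"), "q0")]
# cm_F of Figure 5.23: additionally, become holy is forbidden before praying.
fsa_F = [("q0", lambda e: e not in HOLY and e != ("pray", "tc"), "q0"),
         ("q0", lambda e: e == ("pray", "tc"), "q1"),
         ("q1", lambda e: e != ("curse", "tc"), "q1"),
         ("q1", lambda e: e == ("curse", "tc"), "q2"),
         ("q2", lambda e: e != ("pray", "tc"), "q2"),
         ("q2", lambda e: e == ("pray", "tc"), "q1")]

sigma = [("curse", "ts"), ("curse", "tc"), ("become holy", "ts"),
         ("become holy", "tc"), ("bless", "ts"), ("bless", "tc")]
```

The trace of Figure 5.21 replays on fsa_S (successful change, Figure 5.22) but dies at (become holy,ts) on fsa_F (unsuccessful change, Figure 5.23).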
5.8 Verification of ConDec Models

In Section 4.6 we described two errors that can occur in constraint models: dead events and conflicts. These errors can be detected in ConDec and any other LTL-based models by analyzing the automata generated from LTL formulas. An event is dead in a ConDec model if none of the transitions of the automaton generated for the mandatory formula of the model can trigger the event, as shown in Property 5.8.1.
Figure 5.23: Ad-hoc change of the ConDec instance ci in Figure 5.21 is not successful, with (a) the new ConDec model cm_F (constraints [] ((curse,tc) -> <> (pray,tc)), <> (pray,tc) and !((become holy,ts) \/ (become holy,tc)) W (pray,tc)), (b) the automaton for the new model with states s_0, s_1 and s_2, and (c) the states for the instance trace, where event (become holy,t_s) cannot be replayed and the change would violate the instance (recall that t_s, t_c ∈ T are event types such that t_s = started and t_c = completed).

Property 5.8.1 (A dead event cannot be triggered by any transition) Let cm ∈ U_cm be a constraint model, FSA_f_cm = (E,S,T,S_0,S_F) be the automaton generated for f_cm, and e ∈ E be an event. It holds that e ∈ Π_DE(cm) (i.e., e is dead in model cm, cf. Definition 4.6.1) if and only if there are no s,s' ∈ S such that (s,e,s') ∈ T.

Proof. If there are no s,s' ∈ S such that (s,e,s') ∈ T, then ∀σ ∈ FSA_f_cm: e ∉ σ (cf. Definition 5.3.4). Therefore, ∀σ ∈ L(FSA_f_cm): e ∉ σ (cf. Definition 5.3.5). Because E_cm = L(FSA_f_cm) (cf. Property 5.4.2), it further holds that ∀σ ∈ E_cm: e ∉ σ, i.e., e ∈ Π_DE(cm) (cf. Definition 4.6.1).

Figure 5.24(a) shows the ConDec model for the model in Example on page 111. Due to the fact that event (become holy,t_c) has to occur at least once (i.e., constraint 1..* on become holy) and events (become holy,t_c) and (curse,t_c) cannot both occur (i.e., constraint not co-existence), event (curse,t_c) is dead in this model. This dead event can easily be detected by analyzing the automaton in Figure 5.24(b), which is generated for the mandatory formula of the model: none of the transitions in the automaton can trigger event (curse,t_c). A ConDec model has a conflict if the automaton generated for the mandatory formula of the model is empty.
Property 5.8.2 shows that the automaton generated for the mandatory formula of a model with a conflict has no states.

Property 5.8.2 (The automaton is empty for a model with a conflict) Let cm ∈ U_cm be a constraint model and FSA_f_cm = (E,S,T,S_0,S_F) be the automaton generated for f_cm. It holds that E_cm = ∅ (i.e., cm has a conflict) if and only if S = ∅.

Proof. If S = ∅, then L(FSA_f_cm) = ∅ (cf. the definition of the accepted language and the algorithm to generate the automata [111, 112, 158]). Because
E_cm = L(FSA_f_cm) (cf. Property 5.4.2), it further holds that E_cm = ∅ (cf. Definition 4.6.1).

Figure 5.24: A ConDec model where event (curse,t_c) is dead, with (a) the ConDec model (constraints [] ((curse,tc) -> <> (pray,tc)), <> (become holy,tc) and !((<> (curse,tc)) /\ (<> (become holy,tc)))) and (b) the automaton for the ConDec model, whose transitions are labeled !(curse,tc) and !(curse,tc) /\ (become holy,tc) (recall that t_s, t_c ∈ T are event types such that t_s = started and t_c = completed).

Figure 5.25(a) shows the ConDec model for the conflicting model discussed in Section 4.6. Due to the fact that each of the events (become holy,t_c) and (curse,t_c) has to occur at least once (i.e., constraints 1..* on become holy and curse) and events (become holy,t_c) and (curse,t_c) cannot both occur (i.e., constraint not co-existence), this model has a conflict. In other words, there is no trace that can satisfy this model (i.e., even the empty trace does not satisfy the model). The conflict in this model can easily be detected by analyzing the automaton generated for the mandatory formula of the model: the automaton has no states (cf. Figure 5.25(b)).

Figure 5.25: A ConDec model with a conflict, with (a) the ConDec model (constraints [] ((curse,tc) -> <> (pray,tc)), <> (become holy,tc), <> (curse,tc) and !((<> (curse,tc)) /\ (<> (become holy,tc)))) and (b) the (empty) automaton for the ConDec model (recall that t_s, t_c ∈ T are event types such that t_s = started and t_c = completed).

The cause of an error (a dead event or a conflict) in a ConDec model cm = (A, C_M, C_O) can be found by searching through the powerset (all subsets) of the mandatory constraints. For each subset C ⊆ C_M, an automaton FSA_f is generated for the conjunction of all constraints in the subset, i.e., f = /\_{(E,f) ∈ C} f, and this automaton is analyzed for errors in the same way as for the whole model (cf. Properties 5.8.1 and 5.8.2).
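Both checks operate purely on the generated automaton. A sketch (ours) over an explicitly listed automaton mirroring Figure 5.24(b); since guards are predicates, dead events are checked against a finite candidate set of events:

```python
# Sketch: verification against dead events (Property 5.8.1) and conflicts
# (Property 5.8.2) on an explicitly listed automaton.

def dead_events(candidates, transitions):
    """Events that no transition of the automaton can trigger."""
    return {e for e in candidates
            if not any(guard(e) for (_, guard, _) in transitions)}

def has_conflict(states):
    """A model has a conflict iff its (dead-end-free) automaton has no states."""
    return len(states) == 0

# Automaton of Figure 5.24(b): s0 --!(curse,tc)--> s0,
# s0 --(become holy,tc)--> s1, s1 --!(curse,tc)--> s1.
fsa_24 = [("s0", lambda e: e != ("curse", "tc"), "s0"),
          ("s0", lambda e: e == ("become holy", "tc"), "s1"),
          ("s1", lambda e: e != ("curse", "tc"), "s1")]

candidates = {("curse", "tc"), ("pray", "tc"),
              ("bless", "tc"), ("become holy", "tc")}
```

On this automaton only (curse,tc) is dead, while the model of Figure 5.25 would yield an empty state set and hence a conflict. The powerset search for the cause of an error simply reruns these two checks on the automaton of each subset of mandatory constraints.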
The smallest subset of mandatory constraints for which the error is detected is the cause of the error. Clearly, all verification
and ad-hoc change concepts presented in Chapter 4 can be realized for ConDec and any other LTL-based language. Sometimes it is necessary to check whether models are compatible with each other. For example, if two or more models share some activities but have different constraints, it might happen that the composition of these models contains inconsistencies, i.e., the models are incompatible. We showed in Section 4.6 that constraint models can be incompatible with respect to a dead activity and with respect to a conflict. Compatibility analysis is performed by verifying the original and the merged (combined) models against dead activities and conflicts. Compatibility of ConDec or any other LTL-based models can be analyzed by using the merging procedure defined in Chapter 4 and applying the verification techniques for ConDec models presented in Section 5.8.

5.9 Activity Life Cycle and ConDec

ConDec templates do not consider any particular model of the life cycle of activities. Although the three event types (i.e., started, completed and cancelled) are used in the templates, the actual life cycle model (cf. Figure 4.1 on page 84), where activities are first started and afterwards completed or cancelled, is not considered in the templates. In other words, LTL formulas for ConDec templates allow an arbitrary order of event types. Consequently, mandatory formulas (cf. Definition 5.4.1) do not consider the activity life cycle. This means that the language of the automaton FSA_f_cm generated for the mandatory formula of a model cm contains traces where activities can be, e.g., completed before they are started or even without ever being started. For example, trace σ = (curse,started), (curse,completed), (pray,completed) is accepted by the automaton in Figure 5.16 on page 143.
This means that, even though trace σ is not possible because activity pray is completed without being started before, this trace will be contained in the set of traces that satisfy the ConDec model presented in Figure 5.15.

Possible Problems

The absence of an explicit activity life cycle in ConDec can lead to serious errors in the execution and verification of ConDec models. In particular, errors might occur when determining enabled events, constraint states and the instance state during execution, and when discovering dead activities and conflicts during verification. Moreover, the same holds for any LTL-based language because LTL itself does not impose an activity life cycle. Figure 5.26 shows an illustrative example of an LTL-based constraint model that can cause errors related to the activity life cycle. This model consists of activities x and y and a constraint error specifying that activity x cannot be started but must be completed (i.e., formula
(! <> (x,started)) /\ (<> (x,completed))). The automaton generated for the mandatory formula of this model is presented in Figure 5.26(b). None of the transitions in the automaton can be triggered by event (x,started), while the accepting state can be reached only by triggering event (x,completed).

(a) a model with activities x and y and constraint error: (! <> (x,started)) /\ (<> (x,completed)). (b) automaton for this formula: states s0 (initial) and s1 (accepting), self-loops labelled !(x,started) on both, and transition (s0,(x,completed),s1).
Figure 5.26: A ConDec model with a conflict

Four types of errors might occur in LTL-based languages: (1) while determining which events are enabled in an instance, (2) while determining instance and constraint states, (3) during model verification against dead events, and (4) during model verification against conflicts. Each of these problems is described below using the example shown in Figure 5.26.

Enabled events: Events referring to the completion or cancellation of an activity might be enabled (and available for execution) although the activity has not been started yet. For example, as long as the automaton in Figure 5.26(b) remains in its initial state s0, events (x,completed), (y,started) and (y,completed) will be enabled. In other words, events (x,completed) and (y,completed) will be enabled at the beginning of the instance execution, i.e., before any of the two activities is started.

Constraint and instance state: The state of an instance or constraint might be incorrect because an accepting state is reachable only by violating the activity life cycle. For example, as long as an instance remains in the initial state s0 of the automaton in Figure 5.26(b), the instance state is temporarily violated, i.e., the accepting state s1 is reachable via transition (s0,(x,completed),s1) (cf. Property 5.6.2).
In other words, this automaton can reach an accepting state only by triggering event (x,completed), while event (x,started) can never be triggered. However, due to the activity life cycle, an occurrence of event (x,completed) must be preceded by an occurrence of event (x,started). Therefore, transition (s0,(x,completed),s1) will never be triggered and the accepting state s1 will never be reached, i.e., the actual state of the instance is violated. Note that the same holds for the state of the error constraint.

Dead events: An existing dead event might not be discovered because the event can be triggered only by violating the activity life cycle. Only event (x,started) will be discovered as a dead event in the model in Figure 5.26(a)
because this is the only event that cannot be triggered by any of the transitions of the automaton in Figure 5.26(b) (cf. Property 5.8.1). However, due to the activity life cycle and the fact that event (x,started) is dead, event (x,completed) can also never occur in traces that satisfy the model. Thus, although none of the traces that satisfy the model and the activity life cycle contains event (x,completed), the verification procedure is not able to discover this event as a dead event (cf. Definition 4.6.1).

Conflicts: An existing conflict might not be discovered because an accepting state is reachable only by violating the activity life cycle. According to Property 5.8.2, there is no conflict in the model in Figure 5.26(a) because the automaton in Figure 5.26(b) is not empty. However, each trace in the language of the automaton contains event (x,completed) and does not contain event (x,started). Therefore, none of these traces actually complies with the activity life cycle model, i.e., there is no trace that satisfies the model, and the model actually has a conflict.

The first problem, i.e., enabling wrong events, can easily be solved by a correctly implemented workflow management system. For example, while executing an instance ci = (σ,cm) of a model cm = (A,C M,C O), the declare prototype (cf. Chapter 6) will enable events (a,completed) and (a,cancelled) if and only if (1) a ∈ A, (2) there exists at least one occurrence of event (a,started) that has not been completed or cancelled yet, and (3) events (a,completed) and (a,cancelled) are enabled according to the automaton (cf. Section 5.6). Also, event (a,started) will be enabled only if a transition that can be triggered by (a,completed) is reachable from the current set of possible states of the mandatory automaton.
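The enabling rule described above can be sketched as follows. Rule (3) is simplified here to a single `automaton_allows` callback standing in for the check against the current set of possible automaton states; all names are illustrative, not declare's API.

```python
def enabled_events(activities, trace, automaton_allows):
    """Life-cycle-aware event enabling (sketch of the rule described above):
    (a,completed)/(a,cancelled) are offered only when an occurrence of
    (a,started) is still open AND the mandatory automaton allows the event."""
    # Count occurrences of 'started' not yet closed by 'completed'/'cancelled'.
    open_runs = {}
    for activity, etype in trace:
        if etype == 'started':
            open_runs[activity] = open_runs.get(activity, 0) + 1
        elif open_runs.get(activity, 0) > 0:
            open_runs[activity] -= 1
    enabled = set()
    for a in activities:
        # 'started' is offered whenever the automaton allows it ...
        if automaton_allows((a, 'started')):
            enabled.add((a, 'started'))
        # ... while 'completed'/'cancelled' additionally need an open run.
        if open_runs.get(a, 0) > 0:
            for etype in ('completed', 'cancelled'):
                if automaton_allows((a, etype)):
                    enabled.add((a, etype))
    return enabled
```

With this filter in place, (x,completed) and (y,completed) are no longer offered in the initial state of Figure 5.26(b), because neither activity has an open started occurrence.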
Problems referring to instance and constraint states, dead events and conflicts can, to some extent, be softened using one of the three approaches described in the next section.

Available Solutions

In order to prevent violation of the activity life cycle in ConDec, for each activity a ∈ A in a model cm = (A,C M,C O) we would need to find a way to specify in the mandatory automaton of the model that each occurrence of events (a,completed) or (a,cancelled) must be preceded by a unique occurrence of event (a,started). In other words, we would need to specify that an activity can be completed and cancelled exactly as many times as it has been started before. For example, it is not possible to start activity a once and then complete it twice, i.e., it is not possible that one occurrence of event (a,started) is followed by two occurrences of event (a,completed). Unfortunately, it is not possible to specify in LTL that an event can occur exactly as many times as another event occurred before, because it is not possible to count how many times an event (in this case event (a,started)) occurred [74]. Therefore, the activity life cycle cannot be fully integrated into
the ConDec language. Instead, three alternative partial solutions can be used to minimize possible problems. The first two solutions aim at imposing the precedence or alternate precedence activity life cycle requirements. These two solutions integrate the activity life cycle to some extent into the model cm = (A,C M,C O) itself by (1) creating an additional partial life cycle LTL formula for each activity a ∈ A in the model, (2) adding these formulas to the mandatory formula (as a conjunction) and (3) then generating the automaton for the model. The same procedure must be applied when creating the automata used to determine the state of each of the constraints. In the third solution we make sure that LTL formulas for templates are specified carefully, so that errors are avoided as much as possible. However, these three solutions are only partial, i.e., they do minimize errors but do not guarantee that errors will be completely avoided. Moreover, these solutions come at a cost: they use larger LTL formulas and, therefore, the automata generation becomes less efficient [74,111,112,158].

The precedence activity life cycle requirement. For each activity we can specify a precedence life cycle requirement: the activity cannot be completed or cancelled before it is started. For each activity a ∈ A in a model cm = (A,C M,C O) an LTL formula similar to the precedence template can be generated: (! ((a,tc) \/ (a,tx))) W (a,ts). These formulas are added as a conjunction to the mandatory formula and to the formulas used to monitor states of constraints. Figure 5.27 shows an example of the LTL formula for the precedence life cycle requirement generated for some activity a ∈ A and the automaton generated for this formula.
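The construction of the precedence life cycle conjuncts, and their conjunction with the mandatory formula, can be sketched as simple string building in the ASCII LTL notation used above (function names are mine, not declare's):

```python
def precedence_lifecycle(a):
    """Life-cycle conjunct for one activity (precedence variant):
    the activity cannot be completed (tc) or cancelled (tx) before it is
    started (ts)."""
    return f"(!(({a},tc) \\/ ({a},tx))) W ({a},ts)"

def with_lifecycle(mandatory_formula, activities):
    """Conjoin the mandatory formula with one life-cycle conjunct per
    activity, as in steps (1) and (2) described above."""
    conjuncts = [mandatory_formula] + [precedence_lifecycle(a) for a in activities]
    return " /\\ ".join(f"({c})" for c in conjuncts)
```

The automaton is then generated for the resulting larger formula, which is exactly where the efficiency penalty mentioned above comes from.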
It is clear that this solution only partially imposes the activity life cycle, i.e., an activity must be started at least once before it can be completed or cancelled, but once it was started it can be completed and cancelled an arbitrary number of times. For example, it is possible to start the activity once (i.e., one occurrence of event (a,ts)) and then complete it twice (i.e., two occurrences of event (a,tc)).

Formula (! ((a,tc) \/ (a,tx))) W (a,ts); automaton with states s0 and s1, a self-loop !(a,tc) /\ !(a,tx) on s0, transition (s0,(a,ts),s1) and an unrestricted self-loop on s1.
Figure 5.27: The precedence activity life cycle requirement (recall that ts,tc,tx ∈ T are event types such that ts = started, tc = completed and tx = cancelled)

The alternate precedence activity life cycle requirement. For each activity we can specify an alternate precedence life cycle requirement: the activity
cannot be completed or cancelled before it is started, and after the activity is completed it cannot be completed or cancelled again until it is started again. This requirement can be represented with an LTL formula similar to the alternate precedence template: ((! ((a,tc) \/ (a,tx))) W (a,ts)) /\ [] ((a,tc) -> O ((! ((a,tc) \/ (a,tx))) W (a,ts))). Such formulas are generated for all activities in the model and added as a conjunction to the mandatory formula and the formulas used to monitor states of constraints. Figure 5.28 shows an example of the LTL formula for the alternate precedence life cycle requirement generated for some activity a ∈ A and the automaton generated for this formula. This solution overcomes the shortcoming of the previous one (cf. Figure 5.27), i.e., it is not possible to complete an activity more times than it was started. However, this solution introduces another shortcoming: now it is not possible to concurrently execute one activity multiple times for the same instance. For example, it is not possible to first start an activity twice and then complete it twice (e.g., represented with trace (a,ts),(a,ts),(a,tc),(a,tc)).

Formula (! ((a,tc) \/ (a,tx))) W (a,ts) /\ [] ((a,tc) -> O ((! ((a,tc) \/ (a,tx))) W (a,ts))); automaton for this formula with states s0 and s1.
Figure 5.28: The alternate precedence activity life cycle requirement (recall that ts,tc,tx ∈ T are event types such that ts = started, tc = completed and tx = cancelled)

Carefully defining templates. As the third alternative solution we propose carefully defining templates. For example, the precedence template can be defined with formula (! (b,ts)) W (a,tc) to specify that activity b cannot be started until activity a is completed.
This formula is simple, but it can cause problems because its automaton is generated such that completing and cancelling activity b is possible while starting the same activity is prohibited, as Figure 5.29(a) shows. To avoid this problem, we can use a safer (but larger) formula for the precedence template that prevents starting, completing and cancelling activity b before completing activity a, i.e., we can use formula (! ((b,ts) \/ (b,tc) \/ (b,tx))) W (a,tc) (cf. Figure 5.29(b)). On the other hand, the existence template has formula <> (a,tc) and it specifies that activity a has to be completed at least once. This allows for situations where a is completed before being started (i.e., event (a,ts) does not precede event (a,tc)), which also violates the activity life cycle. Clearly, a formula that requires a to be started and afterwards completed, i.e., <> ((a,ts) /\ <> (a,tc)), would be more appropriate for the existence template. In
general, it is advisable that all templates are defined such that (1) if starting an activity is prohibited, then completing and cancelling the activity is also prohibited, and (2) if completing or cancelling an activity is expected, then starting the activity is expected before.

(a) simple version: (! (b,ts)) W (a,tc); automaton with a self-loop !(b,ts) on s0, transition (s0,(a,tc),s1) and an unrestricted self-loop on s1. (b) safer version: (! ((b,ts) \/ (b,tc) \/ (b,tx))) W (a,tc); automaton with a self-loop !(b,ts) /\ !(b,tc) /\ !(b,tx) on s0, transition (s0,(a,tc),s1) and an unrestricted self-loop on s1.
Figure 5.29: Two formulas for the precedence template (recall that ts,tc,tx ∈ T are event types such that ts = started, tc = completed and tx = cancelled)

If the precedence or alternate precedence requirement is applied to a constraint model, then the requirement is also enforced in all constraints in the model. However, relying on carefully defined templates does not enforce the (alternate) precedence requirement. Consider, for example, the precedence template presented in Figure 5.29 and the precedence requirement presented in Figure 5.27. The safer formula indeed prevents starting, completing and cancelling activity b before the completion of activity a. However, it still allows for completing activities a and b before starting them, i.e., the precedence requirement for activities a and b is not imposed. For example, trace (a,tc),(b,tc) is accepted by the automaton in Figure 5.29(b) and, thus, satisfies this formula. Carefully defining templates indeed requires attention when defining every template. For example, using the safer version of the precedence template only makes sense if, in all other templates, whenever event (a,ts) is prohibited, event (a,tc) is also prohibited. Having this in mind, the safer version of the not response template would be [] ((a,tc) -> ! (<> ((b,ts) \/ (b,tc) \/ (b,tx)))).

5.10 Summary

In this chapter we showed how LTL can be used to specify constraints in constraint models.
We presented an example of an LTL-based language called ConDec. This language consists of a set of constraint templates, i.e., constructs that are used to create constraints in ConDec models. Because templates are based on LTL formulas and a graphical representation, it is easy to change or remove existing templates and add new templates to the ConDec language. (Footnote 3: The same holds for the comparison of the precedence template and the alternate precedence requirement.) This makes
ConDec an open language that can evolve over time. Similar languages can be created using different templates, which are specific to an application area (e.g., DecSerFlow [37,38] for process models in the web services domain and CIGDec [176] for medical processes). The graphical representation of templates and constraints hides the underlying LTL formula and makes ConDec models easier to understand. The set of satisfying traces of ConDec constraints and ConDec models has a finite representation: the automaton generated for the underlying LTL formula(s). These automata enable the execution of ConDec instances, i.e., the state of the instance and the enabled events are determined based on the run of the instance trace on the automaton. The generated automata are also used for ad-hoc change of ConDec instances. If the trace can be replayed on the automaton for the new model, the change is successful. Otherwise, the change is rejected. Note that many problems described in the literature (e.g., the dynamic change bug and other problems for procedural languages [101,201]) can be avoided, thus making change easy [184]. Finally, the generated automata are used for verification of ConDec models. If an event cannot be triggered by any of the transitions, then this event is dead. If the automaton is empty, i.e., it does not have any state, then the model has a conflict. These verification techniques can also be used for the compatibility analysis of ConDec models. Unfortunately, we were not able to find a way to fully incorporate the activity life cycle model in ConDec and other LTL-based languages. This can cause serious problems during the execution of instances and the verification of models. Three available partial solutions can minimize the problems, but impose larger LTL formulas. Larger LTL formulas decrease the efficiency of automata generation [74,111,112,158] and negatively influence the performance of a workflow management system supporting the LTL approach.
Although we presented just one LTL-based language (i.e., ConDec), all the techniques presented in this chapter rely on the LTL specification of constraints and, therefore, can be applied to any LTL-based language. In fact, the same ideas could be applied to other temporal logics. This illustrates that the framework presented in the previous chapter is truly generic. The declare prototype presented in Chapter 6 supports LTL-based constraint languages (e.g., ConDec). All principles presented in this chapter are implemented in declare. Note that LTL is just one example of a suitable language for the specification of constraints. For example, other types of logics can also be used. As discussed in Section 5.1, CTL can also be used for the constraint specification. Another alternative is to use operators from Interval Algebra (IA) (i.e., before, meets, during, overlaps, starts, finishes, after, etc.) [50] for constraint specification, and the translation of IA networks to Point Algebra (PA) networks [60] for verification and execution of instances [162,163]. Note that LTL, CTL and IA consider time
implicitly (i.e., via their temporal operators) [50,60,74]. Using languages like, e.g., Extended Timed Temporal Logic [63] and LogLogics [123], would enable the usage of explicit time in constraints (e.g., the response template can be extended with a deadline: activity A must be executed after activity B within N time units). While timed automata can be used for verification and execution of constraints specified in Extended Timed Temporal Logic, mature and efficient software support for a LogLogics-based language would still need to be developed.
Chapter 6 DECLARE: Prototype of a Constraint-Based System

In this chapter we introduce declare, a prototype based on the constraint-based approach presented in Chapter 4. Although declare currently supports LTL-based languages like, for example, the ConDec language (cf. Chapter 5), it is possible to extend the prototype with other suitable constraint-based languages. In fact, declare also supports DecSerFlow [37,38] and CIGDec [176]. declare is an open-source tool implemented in Java [171], distributed under the GNU General Public License [14], and it can be downloaded from sf.net.

The remainder of this chapter is organized as follows. In Section 6.1 the prototype's architecture is described. How to define constraint-based languages and templates is described in Section 6.2 and the development of models in Section 6.3. Section 6.4 describes instance execution and Section 6.5 ad-hoc change in declare. Verification of declare models is described in Section 6.6. The tool's simple resource and data perspectives are described in Sections 6.7 and 6.8, respectively. Using data elements for defining conditions on constraints is described in Section 6.9, and extending the tool to support other languages in Section 6.10. Section 6.11 shows how declare can be combined with other approaches. Finally, Section 6.12 summarizes this chapter.

6.1 System Architecture

The architecture of the declare system is shown in Figure 6.1. The core of the system consists of the following basic components: Designer, Framework and Worklist. declare can be used in combination with two other tools. First, by combining declare and the workflow management system YAWL [11,23,32,210,212], constraint-based models can be combined with procedural models. Second, the ProM tool [8,27] can be used for process mining [28] and run-time
recommendations for users [258].

The figure shows the declare components Designer (language and model development and verification; constraint templates, constraint models, organizational structure), Framework (instance enactment and ad-hoc change) and Worklist (access to instances, execution of activities), together with their connections to YAWL (process decomposition, procedural models) and ProM (event logs of executed instances, recommendations).
Figure 6.1: The architecture of declare

The Designer component is used for setting up the system level by creating constraint templates and a simple organizational structure consisting of users and roles representing their qualifications, as will be explained in Section 6.7. Constraint models are created and verified in the Designer, as described in Sections 6.3 and 6.6, respectively. Instances of constraint models are enacted by the Framework tool. This tool also allows selecting the people that will work on each instance. One module of this tool is the workflow engine that manages the execution of instances and creates execution logs for all instances. Framework also contains a module for ad-hoc change of running instances, which includes the main change operations: ad-hoc change of one instance, ad-hoc change of all instances of a given model (i.e., the so-called migration) and change of the model itself [202]. While Framework centrally manages the execution of all instances, each user uses his/her Worklist component to access active instances. Also, a user can execute activities in active instances in his/her Worklist. The Worklist component will be described in Section 6.4.
declare can be used together with the YAWL system [11,23,32,210,212] for combining declare constraint models and YAWL procedural models, as we will describe in Section 6.11. With declare and YAWL, it is possible to define arbitrary decompositions of constraint and procedural models, i.e., various constraint and procedural models can be sub-processes of each other. Second, declare can be used in combination with the process mining tool ProM [8,27] for process
mining and recommendations. The Framework component creates event logs that contain information about instances (e.g., which activities were executed, by whom, when, etc.). Constraint templates and constraint models can be exported to ProM and used for the verification of past executions recorded in these event logs. Also, ProM can provide run-time recommendations for users (e.g., which activity should be executed next) based on past executions. In Chapter 7 we will describe in more detail how declare and ProM can be used for process mining and recommendations.

6.2 Constraint Templates

An arbitrary number of constraint-based languages can be defined in declare. For example, Figure 6.2 shows how the ConDec language (cf. Chapter 5) is defined in the Designer tool. A tree with the language's templates is shown under the selected language. On the panel on the right side of the screen the selected template is presented graphically.

Figure 6.2: Defining a language

An arbitrary number of templates can be created for each language. Figure 6.3 shows a screenshot of the Designer while defining the response template. First, the template name and additional display information are entered. Next, it is possible to define an arbitrary number of parameters in the template. The response template has two parameters: A and B. For each parameter it is specified whether it can be branched or not. When creating a constraint from a template in a model, an activity replaces each of the template's parameters. If a parameter is branchable, then it is possible to replace the parameter with multiple activities. In this case, the parameter will be replaced by a disjunction of the selected activities
in the formula (cf. page 133). The graphical representation of the template is defined by selecting the kind of symbol that will be drawn next to each parameter and the style of the line. Figure 6.3 shows that the response template is graphically represented by a single line with a filled circle next to the first activity (A) and a filled arrow symbol next to the second activity (B). Furthermore, a textual description and an LTL formula are given.

Figure 6.3: Constraint template response

declare uses the life cycle of activities presented in Figure 4.1 in Chapter 4. The formula for the response template in Figure 6.3 is [] (A.completed -> <> (B.completed)). Events (A,started), (A,completed) and (A,cancelled) are denoted as A.started, A.completed and A.cancelled in declare templates, respectively. A shorter way to denote event (A,completed) in declare is by using only A. As we described in Chapter 4, a constraint model consists of mandatory and optional constraints. Optional constraints are not obligatory, i.e., users can violate them. However, allowing users to violate an optional constraint without any warning would totally hide the existence of the constraint. Therefore, when the user is about to violate an optional constraint, declare first issues a warning about the violation to the user. Then, the user can decide to proceed and violate the optional constraint, or to abort and not violate the constraint. The violation warning contains some information about the constraint. Part of this information is the group to which the optional constraint belongs. Groups that can be used for optional constraints are defined on the system level in the Designer. Each group has a name and a short description, as illustrated by Figure 6.4. For example, it should be easier to decide to violate a constraint belonging to the Hospital Policy than a constraint that belongs to the Medical Policy.
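Instantiating a template by binding its parameters, including the disjunction introduced for a branched parameter, can be sketched as follows. This is a simplification of what declare does: the helper name and its string-rewriting approach are my assumptions, and only the `.completed` notation is handled.

```python
def instantiate(template_formula, binding):
    """Instantiate a declare-style template: each parameter is replaced by
    its activity, or by a disjunction of activities when the parameter is
    branched. Only the `Param.completed` notation is handled here."""
    out = template_formula
    for param, activities in binding.items():
        if len(activities) == 1:
            repl = f"{activities[0]}.completed"
        else:
            # Branched parameter: disjunction of the selected activities.
            repl = "(" + " \\/ ".join(f"{a}.completed" for a in activities) + ")"
        out = out.replace(f"{param}.completed", repl)
    return out
```

For instance, binding A to perform surgery and B to prescribe rehabilitation in the response template yields the concrete constraint formula used in the Fractures Treatment model.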
Figure 6.4: Constraint groups

6.3 Constraint Models

Constraint models can be developed in the Designer tool for each of the languages defined in the system. For example, we want to use the ConDec templates described in Section 5.2 for the constraints in the Fractures Treatment model presented in Figure 5.17 on page 145 and Table 5.7 on page 145, and, therefore, we create this model in the declare Designer as a ConDec model. Figure 6.5 shows the Fractures Treatment model in declare. Activities are presented as labeled rectangles and constraints as special lines between activities.

Figure 6.5: The Fractures Treatment model in declare

Each constraint in the model in Figure 6.5 is created using a ConDec template. For example, the constraint between activities perform surgery and prescribe rehabilitation is created by applying the response template, as shown in Figure 6.6. The template is selected in the top left corner of the screen. Underneath the template all its parameters are shown. Activities are assigned to parameters by selecting one or more (in case of branching) activities from the
178 168 Chapter 6 DECLARE: Prototype of a Constraint-Based System model. On the right side of the screen some additional information can be given. First, constraint can have an arbitrary name, although the constraint initially gets its name from the template. Second, a constraint can have a condition involving some data element from the model (the handling of data elements and conditional constraints in declare will be explained in sections 6.8 and 6.9, respectively). For example, condition age < 80 on a constraint would mean that the constraint should hold only if the data element age has a value less than 80. Third, for each constraint it must be specified if the constraint is mandatory or optional. If a constraint is optional, then some additional information has to be provided: (1) a group (cf. Section 6.2), (2) the importance level on a scale from 1 (for low importance) to 10 (for high importance), and (3) some local message that should be displayed. This additional information for an optional constraint is presented in warnings when a user is about to violate this constraint. Figure 6.6: Defining a constraint in declare 6.4 Execution of Instances declare determines enabled events, the state of an instance and states of constraints using the approach presented in Section 5.6. Each instance is launched (i.e., created) in the Framework tool and users can work on instances via their Worklists (cf. Figure 6.1). All instances that a user can work on are presented in the user s Worklist. Figure 6.7 shows several screen-shots of a Worklist. All available instances are shown in the list on the left side of the screen. In Figure 6.7, there are two instances of the Fractures Treatment model presented in Figure 6.5 (the list with header instances in the upper left corner): 2: Fractures Treatment and 3: Fractures Treatment. The model of the selected instance is shown on the right side of the screen. 
After the user starts an activity by double-clicking it, the activity will be opened in the activity panel under the model (cf. Figure 6.7(b)).
(a) the initial state, (b) after starting examine patient, (c) after completing examine patient, (d) after starting and completing prescribe sling.
Figure 6.7: Execution of a Fractures Treatment instance
Although the structure of the process model is the same as in the Designer (cf. Figure 6.5), the Worklist uses some additional symbols and colors to present to users the current state of the instance (cf. Section 5.6.1), enabled events (cf. Section 5.6.2), and states of constraints (cf. Section 5.6.3). First, each instance in Figure 6.7 has a color, which represents the state of the instance: green for a satisfied instance, orange for a temporarily violated instance and red for a violated instance. Second, each activity contains start (play) and complete (stop) icons that indicate whether users can start/complete the activity at the moment by triggering the events started or completed. The initial state of the process instance in Figure 6.7(a) shows that it is only possible to start activity examine patient, because the corresponding symbol is enabled. Starting and completing any of the other activities is not possible, as indicated by the disabled icons. In addition, all currently disabled activities are colored grey (cf. footnote 1). This initial state of the process instance is influenced by the init constraint on the activity examine patient, i.e., this activity is the first activity to be executed and, therefore, the only enabled event is (examine patient,started). Third, each constraint is colored to indicate its state: (1) a satisfied constraint is represented by a green color, (2) a temporarily violated constraint by an orange color, and (3) a violated constraint by a red color. Figure 6.7(a) shows the initial states of constraints in a Fractures Treatment instance. Constraints init and 1 of 4 are temporarily violated (i.e., orange), while all other constraints in the instance are satisfied (i.e., green). Figure 6.7(b) shows the instance after starting activity examine patient. This activity is now open in the activity panel on the bottom of the screen.
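The colour of an instance follows directly from the run of its trace on the mandatory automaton: green if some current state is accepting, orange if an accepting state is still reachable, red otherwise (cf. Property 5.6.2). A small sketch, with the automaton encoded as a toy transition map (the data layout and names are my assumptions):

```python
from collections import deque

def instance_state(current_states, accepting, transitions):
    """Map the current set of possible automaton states to the instance
    colour used in the Worklist: 'satisfied' (green), 'temporarily violated'
    (orange) or 'violated' (red). transitions: {(state, event): next_state}."""
    if any(s in accepting for s in current_states):
        return "satisfied"
    # Breadth-first search: is an accepting state still reachable?
    seen, queue = set(current_states), deque(current_states)
    while queue:
        s = queue.popleft()
        for (src, _event), dst in transitions.items():
            if src == s and dst not in seen:
                if dst in accepting:
                    return "temporarily violated"
                seen.add(dst)
                queue.append(dst)
    return "violated"
```

The same mapping applies per constraint, using the automaton of that constraint's own formula instead of the mandatory automaton.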
Data elements that are used in this activity are presented in the activity panel (data elements will be explained in Section 6.8). In this case, three data elements are available: patient name, age and diagnosis. In this way, users can manipulate data elements while executing activities. The activity can be completed or cancelled by clicking on the buttons complete or cancel on the activity panel, respectively. The state of the instance after completing activity examine patient is shown in Figure 6.7(c). After the occurrence of the event (examine patient, completed) the init constraint is satisfied. This has two consequences: (1) this constraint becomes green and (2) it is now possible to start activities check X ray risk, prescribe rehabilitation, prescribe medication and prescribe sling. Activity perform X ray is still disabled due to the alternate precedence constraint, activities perform reposition, perform surgery and apply cast are disabled due to the precedence constraint and activity remove cast is disabled due to the succession constraint. Figure 6.7(d) shows the state of the instance after starting and completing activity prescribe sling. Execution of event (prescribe sling, completed) results in a state that satisfies the 1 of 4 constraint and, therefore, this constraint becomes

Footnote 1: Note that we say that an activity a ∈ A is enabled if and only if event (a,started) is enabled.
181 Section 6.5 Ad-hoc Change of Instances 171 green. Because all mandatory constraints in the instance are now satisfied, the instance itself also becomes satisfied (i.e., green). In some cases, triggering an event or closing the instance can violate optional constraint(s). For example, consider an instance of the Fractures Treatment model where activity perform surgery was executed and the user tries to close the instance without executing activity prescribe rehabilitation. Closing the instance at this point would violate the optional constraint response (specification of this constraint is shown in Figure 6.6). Instead of automatically closing the instance, declare first issues a warning associated with the optional constraint, as shown in Figure 6.8. The user can now decide based on the information presented in the warning whether to close the instance and violate the constraint or not. Note that, in case of mandatory constraints, this is not possible, i.e., if a mandatory constraint would not be satisfied, the whole instance would not be satisfied, and it would not be possible to close the instance. Figure 6.8: Warning: closing the instance violates the optional constraint response 6.5 Ad-hoc Change of Instances Instances in declare can be changed in an ad-hoc manner (cf. Section 4.5) by adding and removing activities and constraints. declare fully supports the approach for ad-hoc change described in Section 5.7. In other words, after the change, declare creates an automaton for the mandatory formula of the new model. If the instance trace can be replayed on this automaton, the ad-hoc change is accepted. If not, the error is reported and the instance continues its execution based on the old model. Naturally, when applying an ad-hoc change, it is also possible to verify the new model against dead activities and conflicts
but these errors will not prevent the change. Actually, the procedure for ad-hoc change is very similar to the procedure for starting instances in declare, as Figure 6.9 shows. In both cases it is possible to perform the basic model verification. The only difference is in the execution of the automaton. When an instance is started, the execution of the automaton begins from the initial state. In the case of an ad-hoc change, declare first makes an attempt to replay the current trace of the instance on the new automaton, i.e., the new model is verified against the current trace. If this is possible, the ad-hoc change is successful and the execution continues from the current set of possible states of the new automaton, i.e., the instance state, enabled events, and states of constraints are determined based on the new automaton and the current trace (cf. Section 5.6). If this is not possible, the ad-hoc change fails and the instance must continue using the old model.

Figure 6.9: Procedure for starting and changing instances in declare

Besides changing an instance, declare offers two additional options: migration of all instances and changing the original constraint model (cf. Figure 6.9). First, it is possible to request a migration of all instances, i.e., that the ad-hoc change is applied to all running instances of the same constraint model [202]. declare performs migration by applying the same procedure for ad-hoc change to all instances of the same constraint model, i.e., only instances with traces that can be replayed on the new automaton are changed. Second, it is possible to also change the original constraint model.
In this case, all instances created in the future will be based on the new model. Consider, for example, two Fractures Treatment instances ci1 = (σ1, cmFT) and ci2 = (σ2, cmFT), where σ1 = ⟨(examine patient, ts), (examine patient, tc)⟩ and σ2 = ⟨(examine patient, ts), (examine patient, tc), (prescribe sling, ts), (prescribe sling, tc)⟩. Figure 6.10(a) shows a declare screenshot of an ad-hoc change of instance ci1 where activity prescribe sling is added as a new branch in the precedence constraint. As a consequence of adding this branch, events (prescribe sling, ts), (prescribe sling, tc), and (prescribe sling, tx) can now be executed only after the event (perform X ray, tc) (cf. Table 5.2 on page 129). In addition, both the migration and the change of the model are requested. Figure 6.10(b) shows the declare report for the requested ad-hoc change. The migration is applied to the two currently running instances of the Fractures Treatment model, i.e., to instances ci1 and ci2. The change is successfully applied to instance ci1, while it failed for instance ci2 due to the violation of the precedence constraint (because event (prescribe sling, tc) already occurs before event (perform X ray, tc) in trace σ2).

Figure 6.10: Ad-hoc change in declare — (a) changed instance with the added branch, (b) declare report for the ad-hoc change

6.6 Verification of Constraint Models

declare uses the methods presented in Section 5.8 to detect dead events and conflicts (cf. Section 4.6) and their causes in models. Information about dead events is used to detect so-called dead activities. An activity in a model is dead if this activity can never be started and/or completed. Recall the example of a model with a dead event (curse, tc) from Figure 5.24 on page 154. Figure 6.11(a) shows this model in declare and Figure 6.11(b)
shows the declare verification report for the model: activity curse is dead due to constraints 1..* and not co-existence.

Figure 6.11: A declare model with a dead activity curse — (a) the model, (b) activity curse is dead

The example in Figure 5.25 on page 154 has a conflict. Figure 6.12(a) shows this model in declare. In Section 5.8 we presented one error in this model, i.e., the conflict. However, this model also contains two dead events. Verification in declare detects all errors in a model. Figures 6.12(b), 6.12(c) and 6.12(d) show the verification report in declare for the model in Figure 6.12, where three errors are detected. First, Figure 6.12(b) shows that the conflict is caused by constraints 1..* on curse, 1..* on become holy and not co-existence. Second, Figure 6.12(c) shows that activity become holy is dead due to constraints 1..* on curse and not co-existence. Third, Figure 6.12(d) shows that activity curse is dead due to constraints 1..* on become holy and not co-existence. The conflict in this model is, actually, caused by forcing the execution of the two dead activities, i.e., activities curse and become holy are dead and there is a 1..* constraint on each of these activities. Detailed verification reports in declare aim to help model developers understand the error(s) in the model. The goal is to assist the resolution of such problems. As discussed in Section 4.6, errors can be eliminated from a model by removing at least one constraint from the group of constraints that together cause the error. For example, if one of the constraints 1..* on become holy or not co-existence were removed from the declare model in Figure 6.11(a), activity curse would no longer be dead. Also, removing at least one of the
constraints 1..* on curse, 1..* on become holy or not co-existence from the model in Figure 6.12(a) would remove the conflict in this model.

Figure 6.12: A declare model with a conflict — (a) the model, (b) a conflict, (c) activity become holy is dead, (d) activity curse is dead

6.7 The Resource Perspective

The resource perspective specifies which users can execute which activities in instances. The resource perspective of declare is intentionally not designed to resemble the resource perspective of any of the existing workflow management systems. Instead, it is inspired by self-managed work teams, which are described in Section 2.3. A self-managed work team is
responsible for a meaningful piece of work, i.e., for the team's assignment. This style of work assumes that team members have a high degree of knowledge about and responsibility for their assignment. Therefore, the team is an autonomous unit and its members are able to fully make their own (local) decisions about how to execute the assignment. The resource perspective is defined in four steps in declare. First, on the system level, users and their system roles can be specified in the Designer component. Figure 6.13(a) shows eight system roles and Figure 6.13(b) shows six users. An arbitrary number of system roles can be assigned to each user. In Figure 6.13(b) we can see that the system roles nurse and anesthesiologist are assigned to user Marry Stone.

Figure 6.13: Setting up the system level in declare — (a) system roles, (b) users and their roles

Second, in addition to system roles, model roles can be defined for each model. Figure 6.14 shows five model roles in the Fractures Treatment model. Each of the model roles is associated with one of the system roles. For example, an orthopedist on the system level can have a leading role in one model, and an advisory role in another model.

Figure 6.14: Roles in the Fractures Treatment model from Figure 6.5

Third, for each activity in a model, one or more model roles authorized to execute this activity can be specified. For example, it might be the case that the activity examine patient in the Fractures Treatment model should be executed
only by a leader, as shown in Figure. If several model roles are authorized to execute an activity, then the activity can be executed by any user that has at least one of the authorized roles. If authorized model roles are not defined for an activity, then this activity can be executed by any instance participant.

Figure 6.15: Assigning model roles to activities

Finally, when an instance is launched (created) in declare, the users that will participate in the instance execution are selected, as shown in Figure. For each model role an arbitrary number of users that have the system role associated with the model role can be selected. For example, a surgery assistant can be any user who has the nurse role on the system level (cf. Figure 6.14). Because users Marry Stone and Lia Walters have the nurse system role, they can be assigned as surgery assistants in this instance. Users assigned to model roles in an instance are called instance participants. An instance can be accessed only by its participants, i.e., an instance is shown only in the Worklists of its participants. Moreover, an activity in an instance can be executed only by participants that have the role(s) authorized to execute this activity. Figure 6.17 shows an illustrative example of how the resource perspective of the Fractures Treatment model is specified in declare. For simplicity, we use only a part of the Fractures Treatment model, i.e., only two model roles (leader and radiologist) and two activities (examine patient and perform X ray). The first three steps of defining the resource perspective are shown in Figure 6.17(a). First, two system roles (orthopedist and radiologist) are assigned to four users, i.e., John Smith and Jimmy Travolta are orthopedists and Jane Travic and Nick Bush are radiologists. Second, model roles leader and radiologist are defined in the model, such that the model role leader can be assigned
only to users with the system role orthopedist and the model role radiologist only to users with the system role radiologist. Third, only model leaders are authorized to execute activity examine patient and only model radiologists are authorized to execute activity perform X ray.

Figure 6.16: Selecting participants for an instance

Figures 6.17(b), (c) and (d) show three examples of instances with different users allocated to them. Only users with the corresponding system roles can be assigned to model roles in an instance as participants. For example, the participants of the instance in Figure 6.17(b) are orthopedists John Smith and Jimmy Travolta as instance leaders and radiologists Jane Travic and Nick Bush as instance radiologists. An instance is accessible only by its participants. For example, the instance in Figure 6.17(b) will be shown in the Worklists of all four users, the instance in Figure 6.17(c) will be shown in the Worklists of John Smith, Jane Travic and Nick Bush, and the instance in Figure 6.17(d) will be shown in the Worklists of John Smith and Jane Travic. An activity can be executed only by instance participants with the model role authorized for this activity. For example, only orthopedists John Smith and Jimmy Travolta are allowed to execute activity examine patient in the instance in Figure 6.17(b) because they are leaders in this instance. This is not the case for the instances in Figures 6.17(c) and (d): only John Smith is a leader in these instances and, therefore, only he can execute activity examine patient in these two instances.

6.8 The Data Perspective

The data perspective defines the way in which information is handled while executing instances of process models.
It defines which data elements are available and how these data elements can be accessed. Therefore, defining the data perspective in declare involves (1) defining the available data elements on the model level and (2) defining the available data elements and their accessibility on the activity level. Each instance in declare carries its own data. For example, an instance of the Fractures Treatment model should contain the patient's name, age and diagnosis. This kind of instance-related information is stored in data elements of
the instance's process model and is clearly instance-specific.

Figure 6.17: The resource perspective in three instances — (a) users, roles and a model, (b) instance ci1, (c) instance ci2, (d) instance ci3

An arbitrary number of data elements can be defined in each model in declare. A data element has a name, a type (e.g., string, integer, etc.) and possibly an initial value. Figure 6.18 shows three data elements in the Fractures Treatment model: patient name, age and diagnosis. These data elements are exclusively owned by instances of the model, i.e., each instance of the Fractures Treatment model will have its own patient name, patient age and diagnosis.

Figure 6.18: Data elements in the Fractures Treatment model from Figure 6.5

As described above, each instance owns its own set of data elements defined in the instance's model. Users can access the values of an instance's data elements while executing the instance's activities. Therefore, an arbitrary number of a model's data elements can be made available in each activity of the model. Figure 6.19 (the bottom
of the screen) shows how available data elements are assigned to an activity in the Designer tool, while developing a process model. First, any of the data elements defined in the model can be assigned to an activity. Figure 6.19 shows that data elements name, age and diagnosis are available in activity examine patient. Second, for each of the available data elements the type of accessibility must be defined. Recall that, in Section 3.1, we presented the two types of data accessibility: (1) the value of an input data element can be accessed but not edited in the activity and (2) the value of an output data element can be edited in the activity. declare uses a similar approach: data elements can have input, output or input-output accessibility in activities. The first two types of accessibility correspond to the two types described in Section 3.1, while an input-output data element is both input and output in the activity. Note that the definition of available data elements and their accessibility influences the way users can manipulate data in instances. Users can manipulate the activity's data elements while executing the activity in a Worklist, as shown in Figure 6.7(b) on page 169.

Figure 6.19: Adding an optional constraint

6.9 Conditional Constraints

A condition can be defined for any constraint in the special field on the screen for defining a constraint, as shown in Figure. A condition is an expression involving data elements from a process model. Consider, for example, the data elements in the Fractures Treatment process presented in Figure. These data elements can be used to define conditions on any of the constraints in the
Fractures Treatment model. For example, it might be the case that the hospital changed its policy about prescribing rehabilitation after performing surgery, and that, from now on, rehabilitation must be prescribed after surgery to all patients who are less than eighty years old 2. This would require the response constraint between activities perform surgery and prescribe rehabilitation to be a mandatory constraint with condition age < 80, as shown in Figure. If a constraint has a condition, then this condition is displayed in the model above the constraint. We refer to a constraint with a condition as a conditional constraint.

2 Note that, originally, the response constraint between perform surgery and prescribe rehabilitation was defined as optional (cf. Figure 6.5 on page 167).

Figure 6.20: Condition on a constraint

At any point during instance execution a condition value is either true (i.e., the related constraint is applicable) or false (i.e., the related constraint is not applicable). The value of a condition depends on the values of the data elements involved in the condition. For example, if data element age has value 30 in an instance, then the value of condition age < 80 is true, and the response constraint between activities perform surgery and prescribe rehabilitation is applicable in the instance (cf. Figure 6.20). If, on the other hand, age has value 82 in an instance, then the value of this condition is false, and this constraint is not applicable in the instance. If the condition on a constraint is false, then this constraint is presented with a light gray color in Worklists (instead of the color representing the constraint's state). declare handles a conditional constraint in an instance in a special way, depending on the value of the condition, as Table 6.1 shows. If the value of the
condition is true, then the related constraint is applicable, i.e., the state of the constraint is monitored and presented with the right color in Worklists and, if the constraint is mandatory, it is included in the mandatory formula/automaton (cf. Definition on page 142). If the value of the condition is false, then the related constraint is not applicable, i.e., the state of the constraint is not monitored, the constraint is grayed-out in Worklists and, even if the constraint is mandatory, it is discarded from the mandatory formula.

Table 6.1: Conditional constraints
  condition | constraint     | constraint state              | mandatory constraint
  true      | applicable     | monitor (present state color) | consider
  false     | not applicable | do not monitor (gray-out)     | discard

As users execute activities in their Worklists, the instance's data elements change values (cf. Figure 6.7 on page 169). Thus, it might happen that a condition changes its value several times during the execution of the instance. As Table 6.2 shows, declare handles a change of the value of a condition as a special type of ad-hoc change (cf. Section 6.5). We refer to this kind of ad-hoc instance change as a conditional change. If the value of a condition changes from false to true, then the related constraint is added to the instance. If the value of a condition changes from true to false, then the related constraint is removed from the instance.

Table 6.2: Conditional change as an ad-hoc change
  old condition value | new condition value | ad-hoc change
  false               | true                | add constraint
  true                | false               | remove constraint

A conditional change can be handled in a similar manner to an ad-hoc change, as Figure 6.21 shows 3. However, a conditional change can be handled using two strategies: the unsafe or the safe strategy. Both strategies start with creating a new mandatory automaton, setting it to its initial state and then replaying the trace of the instance on the new automaton.
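This shared replay step, and the point at which the two strategies diverge, can be sketched as follows. The sketch is illustrative only (it assumes a deterministic automaton represented as a transition dictionary; function names and return values are ours, not DECLARE's API):

```python
def replay(transitions, initial, trace):
    """Replay a trace on the automaton for the new mandatory formula.
    Returns the reached state, or None if some event has no transition
    (i.e., the trace cannot be replayed on the new automaton)."""
    state = initial
    for event in trace:
        if (state, event) not in transitions:
            return None
        state = transitions[(state, event)]
    return state

def conditional_change(transitions, initial, trace, safe):
    """Safe strategy: reject the triggering activity if the trace no longer
    replays. Unsafe strategy: accept it and mark the instance violated."""
    state = replay(transitions, initial, trace)
    if state is not None:
        return ("ok", state)
    return ("rejected", None) if safe else ("violated", None)
```

The ad-hoc change procedure of Figure 6.9 corresponds to the safe branch here: a failed replay is reported as an error and the instance keeps its old model.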
If the unsafe strategy is used, then the execution of the activity that causes the conditional change is accepted, regardless of the new state of the instance. In other words, the unsafe strategy allows conditional changes that bring the instance into the state violated. A violation of the instance can occur when the value of the condition on a mandatory constraint changes from false to true.

3 See Figure 6.9 for a similar diagram explaining ad-hoc change.

This is because the related mandatory constraint is added to the instance,
which might discard the current trace from the set of traces that satisfy the new model of the instance (cf. Property on page 94).

Figure 6.21: Two strategies for conditional change

The safe strategy resembles the ad-hoc change more than the unsafe strategy does, because this strategy does not allow a conditional change that would bring the instance into the violated state. If executing an activity would change the value(s) of data element(s) in such a way that the instance becomes violated, then this error is reported (just like in the ad-hoc change) and the execution of this activity is rejected. Currently, declare uses the unsafe strategy for conditional change. However, the safe strategy can also be easily implemented using the existing techniques for ad-hoc change presented in Section 6.5. Conditional mandatory constraints can cause problems during verification of models in declare. This is because (1) conditions are independent from the formal specifications of constraints and (2) the verification procedure in declare does not take conditions on mandatory constraints into account, i.e., all constraints are treated as unconditional during the verification. This can cause declare to detect and report a verification error even if, due to condition(s), the error does not exist. For example, consider a model where a set of mandatory constraints causes a verification error (i.e., a dead event or a conflict), and two constraints c1 and c2 from this set with conditions age < 20 and age > 50, respectively. declare will not take these conditions into account during verification, i.e., it will detect this error and the set of mandatory constraints that causes it.
However, because these two conditions can never evaluate to true at the same time, at any point in time at least one of these two constraints will be discarded from the instance. Therefore, the detected verification error does not actually exist (cf. Property on page 111 and Property on page 114). In order to overcome this problem to some extent, declare presents the conditions of all constraints that cause a detected error (cf. Figures 6.11 and 6.12 on pages 174 and 175, respectively).
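The condition handling summarized in Table 6.1 — evaluate each condition against the instance's current data elements and keep only the applicable constraints for the mandatory formula — can be illustrated as follows. The constraint names and the lambda-based condition encoding are our own illustration (including the age < 20 / age > 50 example above), not DECLARE's internal representation:

```python
# Hypothetical sketch: a condition is a predicate over the instance's data
# elements; a constraint without a condition is always applicable.

def applicable_constraints(constraints, data):
    """Names of constraints whose condition holds for the current data."""
    return [c["name"] for c in constraints
            if c["condition"] is None or c["condition"](data)]

constraints = [
    {"name": "response(perform surgery, prescribe rehabilitation)",
     "condition": lambda d: d["age"] < 80},
    {"name": "c1", "condition": lambda d: d["age"] < 20},
    {"name": "c2", "condition": lambda d: d["age"] > 50},
]

# c1 and c2 are never applicable at the same time, so a conflict between
# them reported by condition-blind verification never materialises.
for age in (10, 30, 82):
    names = applicable_constraints(constraints, {"age": age})
    assert not ("c1" in names and "c2" in names)
```

A condition-aware verifier would build the mandatory formula only from the result of such a filter for each reachable data assignment, which is exactly what DECLARE's current verification procedure omits.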
6.10 Defining Other Languages

The ConDec language (cf. Section 5.2) is not the only language that can be used in declare. In this section we show how other languages can be supported by declare: first how other LTL-based languages can be defined (cf. DecSerFlow [37, 38], CIGDec [176], etc.), and then how the prototype can be extended to support languages based on other formalizations. For illustration purposes we use a simple, hypothetical language called the Simple Language. This language uses several procedural control-flow patterns [35, 213]. As Figure 6.22 shows, the Simple Language has five templates, i.e., sequence, parallel split, synchronization, exclusive choice and simple merge, such that each of them illustrates a control-flow pattern [35, 213].

Figure 6.22: Defining the Simple Language

Templates from the Simple Language can be used to specify relations between activities in models. For example, the Handle Complaint process (cf. Figure 3.3 on page 52) can be modeled in the Simple Language, as Figure 6.23 shows. Note that, due to the usage of identical control-flow patterns/templates, the Handle Complaint models shown in Figures 3.3 and 6.23 have identical semantics, i.e., they represent the same process. Parameters, a graphical representation and a formal specification must be specified for each of the templates in the Simple Language. In the following sections we describe how the templates of the Simple Language can be defined using LTL or some other formalism, respectively.

Languages Based on LTL

Because declare is by default able to support LTL-based languages, defining such a language is trivial in the prototype. This is done in two steps. First, the name for the new language must be given. Second, all templates of the new language must be added. Further, it is important to define the appropriate
graphical representation and LTL formula for each template.

Figure 6.23: A Simple Language model

Figure 6.24 shows how the exclusive choice template of the Simple Language can be defined in LTL. Note that the given LTL formula specifies the semantics of the template. In this case, the exclusive choice template specifies that (1) activities B and C cannot be started before A is completed, (2) B or C must be completed after A is completed and (3) B and C cannot both be completed. Note that we selected a particular semantics for each of the five patterns. For simplicity, we did not consider the more advanced use of these five patterns (loops, etc.).

Figure 6.24: The exclusive choice template in LTL

Languages Based on Other Formalizations

Although it currently uses LTL for constraint specification (cf. Chapter 8), the declare prototype can be extended to support other languages. In this case, defining the language, its templates and models remains the same as in the LTL-based languages, e.g., ConDec. The only difference is that the new formalization must be used when defining the formulas of templates. This can be an arbitrary
textual formalization. For example, Figure 6.25 shows how the exclusive choice template of the Simple Language can be defined using a hypothetical formal specification in the formula field.

Figure 6.25: The exclusive choice template in a hypothetical formalization

As shown in Figures 6.22, 6.25 and 6.23, declare already supports developing non-LTL-based languages and their models. However, in order to be able to verify models and execute instances of a new language, some extensions of the prototype are necessary. Implementing a non-LTL-based language in declare requires extending the prototype with several Java classes [171], as shown in Figure 6.26. Figure 6.26(a) shows classes LTLInstanceExecutionHandler and SLInstanceExecutionHandler, which are responsible for the execution of LTL and Simple Language instances, respectively. These classes must implement the methods of IInstanceExecutionHandler, which are responsible for (1) execution of the next event (method next), (2) retrieving enabled events (method enabled), (3) information about possible violation of optional constraints (method violatesconstraints), (4) retrieving the state of constraints (method constraintstate), (5) retrieving the instance state (method instancestate), and (6) performing ad-hoc instance change (method reset). Figure 6.26(b) shows the classes that are used for the verification of models and of traces in ad-hoc change of LTL and Simple Language instances. Classes LTLVerification and SLVerification extend ModelVerification and are responsible for the verification of LTL and Simple Language models. Classes LTLHistoryVerification and SLHistoryVerification, which extend HistoryVerification, verify the new model against the current trace during ad-hoc change of LTL and Simple Language instances.
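A rough Python analogue of this plug-in structure is sketched below. It is illustrative only — DECLARE itself is implemented in Java, and the method names here merely mirror a subset of the interface just described; the automaton-based Simple Language handler is our own minimal stand-in:

```python
from abc import ABC, abstractmethod

class InstanceExecutionHandler(ABC):
    """Illustrative analogue of the role of IInstanceExecutionHandler."""

    @abstractmethod
    def enabled(self, event):
        """Can the event occur in the current state?"""

    @abstractmethod
    def next(self, event):
        """Execute the event; return False if it is not enabled."""

    @abstractmethod
    def instance_state(self):
        """Current state of the instance."""

class SLInstanceExecutionHandler(InstanceExecutionHandler):
    """Executes instances of the hypothetical Simple Language on an automaton."""

    def __init__(self, transitions, initial, accepting):
        self.transitions = transitions  # {(state, event): next_state}
        self.state = initial
        self.accepting = accepting      # states in which the instance is satisfied

    def enabled(self, event):
        return (self.state, event) in self.transitions

    def next(self, event):
        if not self.enabled(event):
            return False
        self.state = self.transitions[(self.state, event)]
        return True

    def instance_state(self):
        return "satisfied" if self.state in self.accepting else "temporarily violated"
```

An LTL handler would play the same role but derive its automaton from the templates' LTL formulas; the engine only ever talks to the abstract interface, which is what makes the language pluggable.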
As Figure 6.26 shows, in order to implement a new language in declare, it is necessary to create three classes: (1) a class that implements IInstanceExecutionHandler and is responsible for the execution of instances, (2) a class that extends
ModelVerification for the verification of models and (3) a class that extends HistoryVerification that is responsible for the verification of ad-hoc instance change.

Figure 6.26: Implementation of a new language in declare — (a) execution: IInstanceExecutionHandler, implemented by LTLInstanceExecutionHandler and SLInstanceExecutionHandler; (b) verification: ModelVerification, extended by LTLVerification and SLVerification, and HistoryVerification, extended by LTLHistoryVerification and SLHistoryVerification

For example, classes SLInstanceExecutionHandler, SLVerification, and SLHistoryVerification handle models and instances of the Simple Language. In addition to the classes shown in Figure 6.26, it is advisable to also implement classes that check the syntax of the formal specification of templates in the new language, i.e., the content of the formula field shown in Figure.

6.11 Combining the Constraint-Based and Procedural Approach

The nature of a business process determines the appropriate approach with respect to workflow technology. On the one hand, flexible approaches (e.g., the constraint-based approach), where users control the work, are appropriate for turbulent business processes. For example, the right treatment for each patient in the Fractures Treatment process (cf. Figure 6.5) depends on the specific injury. Therefore, this process must be flexible enough to allow the medical staff to use their expertise and experience and provide the best treatment for each patient.
On the other hand, some business processes might require that a strict procedure is followed for each instance. For example, a blood analysis in a medical laboratory must always be conducted following a prescribed procedure, in order to deliver reliable results. The procedural approach, where the system controls the work, is more appropriate for business processes that must strictly follow some prescribed procedure. As discussed in Chapter 1, neither of the two approaches is sufficient on its own. On the contrary, organizations often need to combine flexible and procedural approaches, because procedural processes (e.g., laboratory analysis) and flexible processes (e.g., performing surgeries) are often integrated in organizations. In
other words, a business process is often composed of both flexible and procedural subprocesses. Therefore, it is remarkable that commercial workflow management systems tend to support either one or the other approach. Moreover, there are many ICT approaches that use sophisticated methods for developing interfaces between different systems (e.g., Service Oriented Architectures [52]). Similar concepts can be applied to workflow technology in order to allow for arbitrary decompositions of process models developed in various languages and enacted by various systems. Figure 6.27 shows an illustrative example of how various processes can be combined in arbitrary decompositions. Instead of being executed as a simple (manual or automatic) unit of work, an activity can be decomposed into a process modeled in an arbitrary language.

Figure 6.27: Decomposing processes using various modeling languages: A, B, ..., Z

Decomposition of declare and YAWL Processes

YAWL is a workflow management system supporting the procedural approach, developed in a collaboration between the Eindhoven University of Technology and the University of Queensland [11,23,32,210,212]. YAWL aims at supporting most of the workflow patterns [10,35,208] (cf. Section 3.1). The YAWL system consists of the three typical components of a workflow management system (cf. Figure 3.1 on page 48), as Figure 6.28 shows. First, the YAWL Editor can be used to create process models. Second, the YAWL engine manages the execution of instances. Finally, each user uses his or her YAWL Worklist to execute activities in running instances.

Figure 6.28: Workflow management system YAWL (the YAWL Editor creates process models for the YAWL engine, which offers work to users via their YAWL Worklists)

The architecture of YAWL enables easy interaction between YAWL and other applications, i.e., the so-called custom YAWL services. Figure 6.29(a) shows the
interface between YAWL and a custom service. First, YAWL can delegate an activity to the service, instead of to the YAWL Worklist (cf. the activity marked with the letter S in Figure 6.29(a)). Second, a custom service can request the launching of a new instance in YAWL. In both cases, relevant instance and activity data elements are exchanged between YAWL and the service. The YAWL Worklist itself is a custom YAWL service. By default, all activities in YAWL models are delegated to the YAWL Worklist, i.e., all activities are by default executed by users in their YAWL worklists. If an activity should be delegated to another custom service, then this must be explicitly specified in the YAWL model.

Figure 6.29: declare as a custom YAWL service ((a) the custom service interface: start activity, completed activity, start instance, completed instance; (b) declare as a service)

The declare Framework can act as a custom YAWL service. This enables arbitrary decompositions of declare and YAWL models, as shown in Figure 6.29(b). First, it is possible that a YAWL activity triggers the execution of a declare instance: when the YAWL activity becomes enabled, declare will launch its instance. YAWL will consider the completion of the launched declare instance as the completion of its activity. Second, a declare activity can trigger the execution of a YAWL instance. Note that users execute standard activities of declare and YAWL instances in the default manner in the declare and YAWL worklists, as Figure 6.30 shows.

Figure 6.30: Interface between declare and YAWL

Arbitrary decompositions of declare and YAWL models allow for integrating the constraint-based and the procedural approach on different abstraction levels within one business process. This way the designer is not forced to make
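The delegation protocol of Figure 6.29 can be sketched in a few lines. The class and method names below (start_activity, complete_activity, handle) are invented for illustration and do not correspond to the real YAWL or declare interfaces; the sketch only captures the essential hand-off, namely that the completion of the launched subprocess is reported back as the completion of the delegated activity.

```python
class Engine:
    """Minimal stand-in for a workflow engine that delegates activities."""

    def __init__(self, service):
        self.service = service
        self.completed = []

    def start_activity(self, activity, data):
        # Delegate the activity to the custom service instead of a worklist.
        self.service.handle(self, activity, data)

    def complete_activity(self, activity, data):
        # Called back by the service once the subprocess has finished.
        self.completed.append((activity, data))

class DeclareService:
    """Launches a (mock) declare instance for each delegated activity and
    treats its completion as the completion of the delegated activity."""

    def handle(self, engine, activity, data):
        result = self.run_declare_instance(activity, data)
        engine.complete_activity(activity, result)

    def run_declare_instance(self, activity, data):
        # Placeholder for actually executing a declare instance.
        return {**data, "subprocess": f"declare instance for {activity}"}

engine = Engine(DeclareService())
engine.start_activity("perform surgery", {"patient": "Joe Smith"})
assert engine.completed[0][0] == "perform surgery"
```

The same shape works in the opposite direction, with declare requesting the launch of a YAWL instance and waiting for its completion.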
a binary choice between flexible and inflexible processes. Instead, an integration can be achieved, where the parts of the process that need a high degree of flexibility are supported by constraint-based declare models and the parts that need centralized control by the system are supported by YAWL models. Consider, for example, the decomposition of a part of a health-care process shown in Figure. On the highest decomposition level, the main process is modeled using a procedural YAWL model. Each visit starts with opening the file and a quick preliminary examination. If the preliminary examination shows the existence of an injury, the patient is accepted for the fractures treatment. Finally, each visit must be archived. On the second level, the declare Fractures Treatment model (cf. Section 6.3) offers a high degree of flexibility to the medical experts who treat the injury. Finally, activity perform surgery is decomposed into another YAWL model. Note that further decomposition can also be achieved. For example, activities prepare and schedule could consist of several smaller steps and, thus, also need to be decomposed into YAWL or declare subprocesses.

Figure 6.31: An example: decomposition into declare and YAWL process models in the health care domain

In both YAWL and declare models, activities are by default offered to users to execute manually. If an activity should be delegated to an external application, then this must be explicitly defined in the process model. Figure 6.32 shows the definition of the perform surgery activity from the Fractures Treatment model shown in Figure 6.31: this activity is decomposed into the Surgery Procedure
YAWL model. Therefore, activity perform surgery is graphically presented with a special YAWL symbol in the Fractures Treatment model in Figure. Note that, although activity perform surgery will not be executed manually by a user, the model role leader is authorized for it. This means that instance participants with the role leader are authorized to decide when this activity (i.e., the referring YAWL model) can be executed. However, when a participant starts the activity, it will not be opened in the activity panel in the Worklist. Instead, it will be automatically delegated to YAWL, which will launch a new instance of its Surgery Procedure model.

Figure 6.32: declare activity perform surgery launches a YAWL instance

Similarly, in a YAWL process model it can be specified that an activity should be delegated to a custom YAWL service, e.g., declare. Figure 6.33 shows a YAWL instance where activities D1, D2 and D3 are delegated to declare. In the general scenario, declare users must manually select which declare model should be executed for each YAWL request. For example, declare users can select to execute Model B for activity D1. If the decomposed YAWL activity contains an input data element with the name declaremodel (cf. sections 3.1 and 6.8), then declare automatically launches a new instance of the referring model. For example, activity D2 launches a new instance of Model A in declare. If the specified model cannot be found, declare users must manually select a declare process model to be executed. For example, users can select to execute an instance of Model C for activity D3. Because the declaremodel data element is also an output data element in activity D3, declare will return the name of the executed model to YAWL. In this manner YAWL users are informed
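The declaremodel convention described above amounts to a small dispatch rule. The following sketch is a hypothetical rendering of that rule (the function name and the manual-selection callback are invented); returning the chosen name is what allows it to be written back when declaremodel is also an output data element.

```python
def dispatch(models, input_data, select_manually):
    """Auto-launch the model named in the 'declaremodel' input data element
    if it exists; otherwise fall back to a manual choice. Returns the name
    of the model that was actually executed (cf. Figure 6.33)."""
    name = input_data.get("declaremodel")  # may be absent or unknown
    if name not in models:
        name = select_manually()  # e.g. a user picking from a list
    return name

models = {"Model A", "Model C"}
# Known model name as input: launched automatically.
assert dispatch(models, {"declaremodel": "Model A"}, lambda: "Model C") == "Model A"
# Unknown model name: manual selection, and the chosen name is reported back.
assert dispatch(models, {"declaremodel": "unknown"}, lambda: "Model C") == "Model C"
```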
about the subprocess that was executed for the decomposed YAWL activity.

Figure 6.33: YAWL activities D1, D2 and D3 launch declare instances

Dynamic Decompositions

Decompositions of declare and YAWL models are dynamic, i.e., the decomposition structure can be changed at run-time. To some extent, this enables building process models/instances on the fly. As described in Section 6.5, declare models can be changed during execution. During the so-called ad-hoc change, constraints and activities can be added to or removed from an instance. Moreover, because altering the definitions of constraints and activities can be part of the change, this may influence the instance decomposition in multiple ways. Examples of some possible scenarios are shown in Figure. Manipulating constraints (cf. Figure 6.34(a)) may influence the execution of activities in several ways: (1) the execution of a decomposed activity is no longer possible, (2) the execution of a decomposed activity becomes a necessity, or (3) the moment at which a decomposed activity can be executed changes. Changes involving activities can also influence the decomposition of an instance: (1) the YAWL subprocess can be changed for a decomposed activity (cf. Figure 6.34(b)), (2) a decomposed activity can be removed from the instance (cf. Figure 6.34(c)), (3) a decomposed activity can be changed into a standard one (cf. Figure 6.34(d)), (4) a standard activity can be changed into a decomposed one (cf. Figure 6.34(e)), and (5) a new decomposed activity can be added to the instance (cf. Figure 6.34(f)). YAWL also contributes to dynamic decompositions because the decomposition of a YAWL model might vary between instances of this model.
As shown in Figure 6.35, it is possible to request execution of an instance of a specific declare
model, by using the special declaremodel data element.

Figure 6.34: Ad-hoc change of decompositions in declare ((a) constraint removed, (b) subprocess changed, (c) decomposed activity removed, (d) decomposed activity changed into standard, (e) standard activity changed into decomposed, (f) decomposed activity added)

Depending on the value of this data element, the activity can be decomposed into different declare models, as Figure 6.35 shows. Although all three instances refer to the same YAWL model, the activity D is decomposed into Model A, Model B or a manually selected Model C. Thus, by assigning different values to the declaremodel data element, YAWL users can determine at run-time into which declare model an activity should be decomposed.

Figure 6.35: Decompositions of three instances of one process model in YAWL
Integration of Even More Approaches

YAWL is a service-oriented system and allows for the integration of an arbitrary number of custom services [11,23,32]. For example, the Worklet Dynamic Process Selection Service (Worklet Service) is also a custom YAWL service [41,44,45]. The Worklet Service dynamically substitutes a YAWL activity with a new instance of a contextually selected YAWL process, i.e., a worklet [41,44,45]. Figure 6.36 shows the architecture of the Worklet Service [41,44,45]. The decision about which worklet to select for a YAWL activity is made based on the activity data and the worklet repository. The worklet repository consists of existing YAWL process models that can be selected as worklets, ripple-down rules (RDRs) used to select a worklet based on the activity data, and process and audit logs. The RDR Editor is used to create new or alter existing RDRs based on the logs. The RDR Editor may communicate with the Worklet Service in order to override worklet selections based on rule-set additions. In this manner, the Worklet Service allows for dynamic compositions of YAWL instances from various YAWL process models, based on the instance data and the rules induced from past executions.

Figure 6.36: The Worklet Service in YAWL

The service-oriented architecture of the YAWL system enables the integration of multiple workflow approaches. Any application that can act as a YAWL custom service can join the integration. For example, the approaches of the procedural YAWL, the constraint-based declare and the dynamic Worklet Service can easily be combined, as Figure 6.37 shows. Thick lines represent activities delegated to external applications and thin lines activities offered to users for execution. Due to this integration, the selection of the appropriate approach is not an exclusive choice.
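Ripple-down rules can be pictured as a binary tree of condition/conclusion pairs, where a satisfied rule may be refined by an exception branch. The sketch below is a strongly simplified, hypothetical illustration of such a rule tree selecting a worklet from activity data; the rule contents and worklet names are invented, and the actual Worklet Service rule format is considerably richer.

```python
class RDRNode:
    """Minimal ripple-down rule node: if the condition holds, the conclusion
    applies unless a 'true' (exception) child refines it; otherwise the
    'false' child is tried."""

    def __init__(self, condition, conclusion, on_true=None, on_false=None):
        self.condition, self.conclusion = condition, conclusion
        self.on_true, self.on_false = on_true, on_false

    def select(self, data, default=None):
        if self.condition(data):
            # A satisfied exception branch overrides this node's conclusion.
            refined = self.on_true.select(data, None) if self.on_true else None
            return refined if refined is not None else self.conclusion
        return self.on_false.select(data, default) if self.on_false else default

# Hypothetical worklet selection on activity data.
rules = RDRNode(
    lambda d: d["severity"] == "high", "EmergencyWorklet",
    on_true=RDRNode(lambda d: d["age"] < 12, "PaediatricEmergencyWorklet"),
    on_false=RDRNode(lambda d: d["severity"] == "low", "RoutineWorklet"),
)
assert rules.select({"severity": "high", "age": 40}) == "EmergencyWorklet"
assert rules.select({"severity": "high", "age": 8}) == "PaediatricEmergencyWorklet"
assert rules.select({"severity": "low", "age": 30}) == "RoutineWorklet"
```

Adding a rule never rewrites existing ones: a wrong selection is corrected by attaching a new exception node, which mirrors how the RDR Editor extends the rule set based on the logs.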
Instead, an organization can combine multiple approaches on various abstraction (i.e., decomposition) levels. This way the initial idea expressed in Figure 6.27 is realized.
Figure 6.37: The worklet service in YAWL

6.12 Summary

declare is a prototype of a workflow management system based on the constraint-based approach presented in Chapter 4. An arbitrary constraint-based language can be defined by creating constraint templates. These templates are used in declare models to create constraints. Instances of declare models can be launched and executed. declare supports advanced features like model verification and ad-hoc change of instances. Although it focuses on the control-flow perspective (cf. Section 3.1.1), declare also provides basic support for the resource and data perspectives. Currently, declare supports LTL-based constraint languages (cf. Chapter 5). However, the tool is implemented in such a way that it can be extended to support other languages suitable for the constraint-based approach. This was demonstrated in Section.

Integration of declare and the YAWL system allows for combining various approaches. For example, procedural YAWL processes can be combined with constraint-based declare processes. In this manner, organizations do not need to make a binary choice and can combine different approaches. As shown, various approaches can be integrated in a single business process. Dynamic compositions of declare and YAWL processes allow for defining the exact execution of an instance at run-time. This provides flexibility by underspecification [125, ] (cf. Section 3.2.2). declare has more to offer than the features presented in this chapter. Chapter 7 describes how declare templates and models can be used for process mining [28] and the generation of run-time recommendations [258] for users with the help of the ProM tool [8,27].
Chapter 7

Using Process Mining for the Constraint-Based Approach

The idea of process mining is to discover, monitor and improve real business processes (i.e., not assumed processes) by extracting knowledge from event logs [28]. Figure 7.1 shows the role of process mining in the context of Business Process Management. Process mining can provide different types of analysis, as shown with thick lines in Figure 7.1. First, there are many process mining techniques that enable the discovery of process models from event logs. Second, techniques for conformance checking aim at comparing a given process model with a given event log and judging to what extent the two conform to each other. Third, process mining techniques can be used for log-based verification of processes against some (un)wanted properties by analyzing event logs. Fourth, the execution of processes can be supported with recommendations generated on the basis of event logs.

Figure 7.1: Process mining [89]
In this chapter we will describe how process mining can be used in the context of our constraint-based approach (cf. Chapter 4). In Section 7.1 we present the ProM framework [8,91], a process mining tool that includes a wide variety of plug-ins that support various types of mining techniques. In Section 7.2 we present the LTL Checker, a ProM plug-in able to verify event logs against properties specified in LTL [74]. In Section 7.3 we present the SCIFF language as a powerful declarative language that can be used for specification, verification, monitoring, discovery, etc. Two ProM plug-ins use the SCIFF language: the SCIFF Checker and the DecMiner. First, the SCIFF Checker for the verification of event logs against properties specified in the SCIFF language is presented in Section. Second, Section presents the DecMiner, which is able to discover constraint-based SCIFF models. In Section 7.4 we present how process mining techniques can be used for generating recommendations that support users during the execution of process instances. We also describe how declare uses the Log-based Recommendations plug-in in ProM to provide run-time recommendations for users. Finally, Section 7.5 summarizes this chapter.

7.1 Process Mining with the ProM Framework

The ProM framework [91] is an open-source infrastructure for process mining techniques. ProM is available as open-source software (under the Common Public License, CPL [13]) and can be downloaded from [8]. It has been applied to various real-life processes, ranging from administrative processes and health-care processes to the logs of complex machines and service processes. ProM is pluggable, i.e., people can plug in new pieces of functionality. Some of the plug-ins are related to model transformations and various forms of model analysis (e.g., verification of soundness, analysis of deadlocks, invariants, reductions, etc.).
Most of the plug-ins, however, focus on a particular process mining technique. Currently, there are more than 200 plug-ins of which about half are mining and analysis plug-ins. Event logs in MXML format are a starting point for ProM [90]. The MXML format is system-independent and by using ProMimport it is possible to extract logs from a wide variety of systems, i.e., systems based on products such as Staffware [238], FLOWer [180], YAWL [11,23,32,210,212], etc. declare creates event logs in MXML format, i.e., the execution of instances is recorded in the standard format for ProM. As shown in Figure 7.2, the Framework tool logs all events that users trigger while executing instances via their Worklists. Table 7.1 shows a part of the MXML log file created by declare. This MXML file contains information about executed instances of the Fractures Treatment model (cf. Figure 6.5 on page 167). Each event that occurred in an instance is recorded as an AuditTrailEntry, together with the time stamp, possible data elements and the user who triggered the event. For example, the entry in line 16
specifies that administrator completed activity examine patient at time T11:04: :00 (cf. the fields Originator, EventType, WorkflowModelElement and Timestamp, respectively). Values of data elements referring to patient information (i.e., age, diagnosis and name) are also recorded.

Figure 7.2: Event logs, declare and ProM

Table 7.1: Part of an MXML log file created by declare

 1 <?xml version="1.0" encoding="UTF-8"?>
 2 <WorkflowLog xmlns:...>
 3   <Source program="mxmlib"/>
 4   <Process id="Fractures Treatment">
 5     <ProcessInstance id="1">
 6       <AuditTrailEntry>
 7         <Data>
 8           <Attribute name="workitemid">1</Attribute>
 9         </Data>
10         <WorkflowModelElement>examine patient</WorkflowModelElement>
11         <EventType>start</EventType>
12         <Timestamp>T11:04: :00</Timestamp>
13         <Originator>administrator</Originator>
14       </AuditTrailEntry>
15       <AuditTrailEntry>
16         <Data>
17           <Attribute name="age">9</Attribute>
18           <Attribute name="diagnosis">broken arm</Attribute>
19           <Attribute name="name">Joe Smith</Attribute>
20           <Attribute name="workitemid">1</Attribute>
21         </Data>
22         <WorkflowModelElement>examine patient</WorkflowModelElement>
23         <EventType>complete</EventType>
24         <Timestamp>T11:04: :00</Timestamp>
25         <Originator>administrator</Originator>
26       </AuditTrailEntry>
27       <AuditTrailEntry>
28         <Data>
29           <Attribute name="workitemid">2</Attribute>
30         </Data>
31         <WorkflowModelElement>prescribe sling</WorkflowModelElement>
32         <EventType>start</EventType>
33         <Timestamp>T11:04: :00</Timestamp>
34         <Originator>administrator</Originator>
35       </AuditTrailEntry>
36       ...
37     </ProcessInstance>
38   </Process>
39 </WorkflowLog>
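Logs in this shape are straightforward to process programmatically. The following sketch (a minimal illustration using Python's standard library, not part of ProM) extracts per-instance event traces from an MXML fragment shaped like Table 7.1; the embedded fragment is shortened for the example.

```python
import xml.etree.ElementTree as ET

MXML = """<WorkflowLog><Process id="Fractures Treatment"><ProcessInstance id="1">
  <AuditTrailEntry>
    <WorkflowModelElement>examine patient</WorkflowModelElement>
    <EventType>start</EventType><Originator>administrator</Originator>
  </AuditTrailEntry>
  <AuditTrailEntry>
    <WorkflowModelElement>examine patient</WorkflowModelElement>
    <EventType>complete</EventType><Originator>administrator</Originator>
  </AuditTrailEntry>
</ProcessInstance></Process></WorkflowLog>"""

def traces(xml_text):
    """Map each ProcessInstance id to its list of
    (activity, event type, originator) triples."""
    root = ET.fromstring(xml_text)
    out = {}
    for pi in root.iter("ProcessInstance"):
        out[pi.get("id")] = [
            (e.findtext("WorkflowModelElement"),
             e.findtext("EventType"),
             e.findtext("Originator"))
            for e in pi.iter("AuditTrailEntry")]
    return out

assert traces(MXML)["1"][1] == ("examine patient", "complete", "administrator")
```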
Because declare logs are stored in the MXML format, these logs can be directly accessed by ProM. Figure 7.3 shows an MXML file (part of which is shown in Table 7.1) loaded into ProM. This event log contains three instances of the Fractures Treatment model presented in Figure 6.5 on page 167. For each instance, ProM presents the sequence of executed events, as stored in the referring MXML file shown in Table 7.1.

Figure 7.3: declare event log loaded in ProM

One of the basic features of ProM is process discovery, i.e., deriving a model from some event log. This model is typically a process model. However, ProM offers many more interesting process mining techniques [28,79,89,91,206,207,263]. For example, there are also techniques to discover organizational models, social networks, and more data-oriented models such as decision trees. Figure 7.4 shows the result of three alternative process discovery algorithms: (1) the α miner shows the result in terms of a Petri net, (2) the multi-phase miner shows the result in terms of an EPC, and (3) the heuristics miner shows the result in terms of a heuristics net.
Figure 7.4: The output of three process discovery algorithms supported by ProM when analyzing the event log shown in Figure

7.2 Verification of Event Logs with LTL Checker

One of the plug-ins offered by ProM is the so-called LTL Checker [25]. The LTL Checker offers an environment to check (verify) predefined properties with respect to some event log in MXML format. For each process instance, it is determined whether the given property holds or not, i.e., given a property, all process instances are partitioned into two classes: conforming and non-conforming. If the property holds in an instance, then this instance is classified as conforming. If the property does not hold in an instance, then this instance is classified as non-conforming.

A-posteriori verification of properties can be particularly useful in the constraint-based approach. Because this approach generally allows for more flexibility, and users can execute instances in various ways, a-posteriori verification can provide insights into the exact way the processes are being executed in practice. For example, while executing instances of the Fractures Treatment model presented in Section 5.5, users have a lot of freedom to decide which activity to execute and when to execute it. A-posteriori analysis of the Fractures Treatment instances can provide much useful information. Consider, for example, the verification of the Fractures Treatment instances processed in the past against the following two properties: (1) activity perform surgery was executed in the instance, and (2) activity prescribe rehabilitation was executed after activity perform surgery in the instance. On the one hand, the verification could show that the first property holds in 80% of the instances processed in the previous year, while it holds in only 40% of the instances processed in the year before. This can be an indication that the hospital should hire more surgeons. On the other hand, the verification could show that the second property does not hold in 90% of the instances, i.e., the medical staff violated the optional constraint from the Fractures Treatment model in 90% of the instances. This result might indicate that this constraint should either be removed from the model or made mandatory.

Properties are specified as predefined parameterized LTL [74] expressions in the LTL Checker. If the instance trace satisfies the LTL formula, then this instance is classified as conforming. If the instance trace does not satisfy the LTL formula, then this instance is classified as non-conforming. Recall that we defined an instance as a pair of a trace and a model in Definition on page 98. The LTL Checker considers only the instance trace stored in an MXML log file, while the instance model remains irrelevant (or even unknown). Consider, for example, the two instances ci1 and ci2 presented in Table 7.2. The first of the two properties mentioned in the previous paragraph can be specified with the LTL formula f = ♦(perform surgery, tc), specifying that the event (perform surgery, tc) must occur at least once (cf. Section 5.1).
On the one hand, instance ci 1 is conforming to formula f because its trace σ 1 satisfies formula f, i.e., σ 1 f because σ 1 [4] = (perform surgery,t c ). On the other hand, instance ci 2 is nonconforming to formula f because its trace σ 2 does not satisfy formula f, i.e., σ 2 f because (perform surgery,t c ) / σ 2. Note that instance models cm 1 and cm 2 do not influence the classification and, thus, do not have to be known when determining if traces σ 1 and σ 2 satisfy formula f. Table 7.2: A conforming and a non-conforming instance for formula f = (perform surgery,t c) ci 1 = (σ 1,cm 1 ), ci 1 U ci ci 2 = (σ 2,cm 2 ), ci 2 U ci σ 1 [1] = (examine patient,t s ) σ 2 [1] = (examine patient,t s ) σ 1 [2] = (examine patient,t c ) σ 2 [2] = (examine patient,t c ) σ 1 [3] = (perform surgery,t s ) σ 2 [3] = (apply cast,t s ) σ 1 [4] = (perform surgery,t c ) σ 2 [4] = (apply cast,t c ) σ 1 [5] = (examine patient,t s ) σ 2 [5] = (remove cast,t s ) σ 1 [6] = (examine patient,t c ) σ 2 [6] = (remove cast,t c ) σ 1 [7] = (prescribe rehabilitation,t s ) σ 2 [7] = (examine patient,t s ) σ 1 [8] = (prescribe rehabilitation,t c ) σ 2 [8] = (examine patient,t c ) ci 1 is conforming, i.e., σ 1 f ci 2 is non-conforming, i.e., σ 2 f recall that t s,t c T are event types such that t s = started and t c = completed
The LTL Checker is currently the most important ProM plug-in for the declare prototype (cf. Chapter 6) because both applications use LTL. The LTL Checker can be used for the verification of (declare) instances against properties specified in terms of constraint templates (e.g., the ConDec templates presented in Section 5.2) or constraints (e.g., the six constraints from the Fractures Treatment model presented in Section 5.5). The remainder of this section is organized as follows. In Section we briefly describe the default LTL Checker as a stand-alone application, and in Section we present how the LTL Checker and declare can be used together.

The Default LTL Checker

The default version of the LTL Checker contains 60 predefined typical properties one may want to verify using the LTL Checker (e.g., the so-called 4-eyes principle) [25]. These can be used without any knowledge of the LTL language. In addition, the user can define new sets of properties. Each property is specified in terms of an LTL expression. Formulas may be parameterized, are reusable, and carry explanations in HTML format. This way both experts and novices may use the LTL Checker. The LTL used in the LTL Checker has more expressive power than that in the ConDec language and the declare prototype (cf. Chapters 5 and 6). While ConDec and declare use LTL to specify relations between activities and event types, the LTL Checker allows referring to activities, event types, users, time stamps and data elements. In fact, in the LTL Checker, it is possible to refer to any element of the audit trail entries appearing in MXML logs [25] (see, e.g., the MXML log presented in Table 7.1). Moreover, it is possible to parameterize properties, which enables reusability. Properties can be loaded into the LTL Checker via plain text files. Table 7.3 shows a part of such a file. Each property is specified as a formula.
For example, the formula for the person P executed activity A property in line 11 has two parameters: parameter P represents a user and parameter A represents an activity. After that, some information about the formula (i.e., a short description and a list of parameters) and the LTL formula itself are given. This formula can be used to check whether a user (P) executed an activity (A) in an instance. The LTL Checker presents the description and parameters of loaded formulas, while the LTL expression itself remains hidden from the users. Figure 7.5 shows how the LTL Checker presents the formulas from the file shown in Table 7.3. A value for each parameter of the selected formula must be manually specified. In this manner, formulas can be reused for the verification of event logs using various parameter settings. In the example shown in Figure 7.5, the parameters for the selected person P executed activity A property are set to John Smith and approve transaction. However, any other person and activity could have been selected (e.g., user Mary Jones and activity perform surgery).
Table 7.3: A formula involving users in the LTL Checker

 1 # version : 
 2 # date    : :58:30:112
 3 # author  : Manually
 4 set ate.EventType;
 5 set ate.WorkflowModelElement;
 6 set ate.Originator;
 7 rename ate.EventType as event;
 8 rename ate.WorkflowModelElement as activity;
 9 rename ate.Originator as person;
10 #############################################################################
11 formula person_P_executed_activity_A( P: person, A: activity ) :=
12 { <h2>Did person <b>P</b> execute activity <b>A</b>?</h2>
13   <p> Arguments:<br>
14   <ul>
15   <li><b>P</b> of type set (<i>ate.Originator</i>)</li>
16   <li><b>A</b> of type set (<i>ate.WorkflowModelElement</i>)</li>
17   </ul>
18   </p> }
19   <>((( person == P /\ activity == A ) /\ event == "complete"));
20 #############################################################################
21 formula activity_A_executed( A: activity ) :=
22 { <h2>Was activity <b>A</b> successfully executed?</h2>
23   <p> Arguments:<br>
24   <ul>
25   <li><b>A</b> of type set (<i>ate.WorkflowModelElement</i>)</li>
26   </ul>
27   </p> }
28   <>(( activity == A /\ event == "complete"));

Figure 7.5: LTL Checker: Did John Smith execute activity approve transaction?
215 Section 7.2 Verification of Event Logs with LTL Checker Combining the LTL Checker and declare declare allows for an automatic export of its LTL formulas to the LTL Checker in two ways, as Figure 7.6 shows. Templates belonging to a language or constraints from a model can be exported to a file that can be loaded into the LTL Checker. In this manner, declare templates and constraints can be reused as verification properties in the LTL Checker. Moreover, the LTL Checker can use declare templates and constraints for the verification of either MXML log files created either by declare (and thus containing information about instances executed in declare) or by any other system. DECLARE ProM model templates MXML event logs LTL Checker LTL Checker files Figure 7.6: Combining declare and the LTL Checker Table 7.4 shows a part of the LTL Checker file that declare created for all templates in the ConDec language. When exporting templates, declare creates parameterized formulas that can be loaded into the LTL Checker. One formula is created for each template. Each parameter in the created formula refers to one of the template s parameters (cf. sections 5.2 and 6.2). For example, the response formula in line 22 refers to the response template presented in Section Therefore, this formula contains two parameters referring to activities A and B. After that, some information about the formula (i.e., a short description and a list of parameters) and the template s LTL formula are given. Note that the LTL Checker and declare use slightly different syntaxes for LTL expressions. During the export, declare automatically converts template s formulas into the syntax recognized by the LTL Checker. Figure 7.7 shows how the LTL Checker presents formulas from the file shown in Table 7.4. In the LTL Checker, the parameters for the response template can be set to perform surgery and prescribe rehabilitation, or for any other two activities. 
For example, it can also be used to verify the response formula between activities curse and pray on event logs containing instances of the model presented in Figure 6.11(a) on page 174. Besides constraint templates, declare can also export constraints from models to files that can be loaded into the LTL Checker. Table 7.5 shows a part of the LTL Checker file that declare created from constraints in the Fractures Treatment model presented in Figure 6.5 on page 167.

Table 7.4: A part of the file generated by declare while exporting ConDec templates to ProM

1  # version :
2  # date : :35:46:414
3  # author : DECLARE
4  set ate.eventtype;
5  set ate.workflowmodelelement;
6  rename ate.eventtype as event;
7  rename ate.workflowmodelelement as activity;
8  #############################################################################
9  formula init ( A: activity ) :=
10 {
11   <h2>init</h2>
12   <p> A has to be the first activity.</p>
13   <p> Arguments:<br>
14   <ul>
15   <li><b>a</b> of type set (<i>ate.workflowmodelelement</i>)</li>
16   </ul>
17   </p>
18 }
19 ((( activity==A /\ event=="start") \/ (activity==A /\ event=="ate_abort"))
20 _U (activity==A /\ event=="complete"));
21 #############################################################################
22 formula response ( A: activity, B: activity ) :=
23 {
24   <h2>response</h2>
25   <p> Whenever activity A is executed,
26   activity B has to be eventually executed afterwards.</p>
27   <p> Arguments:<br>
28   <ul>
29   <li><b>a</b> of type set (<i>ate.workflowmodelelement</i>)</li>
30   <li><b>b</b> of type set (<i>ate.workflowmodelelement</i>)</li>
31   </ul>
32   </p>
33 }
34 [](((activity==A /\ event=="complete") -> <>((activity==B /\ event=="complete"))));
35 #############################################################################

While exporting templates creates parameterized formulas, exporting constraints creates formulas without parameters. For example, formula response_performsurgery_prescriberehabilitation in line 19 in Table 7.5 does not have any parameters. Instead of parameters, it uses the actual activities involved in the constraint, i.e., perform surgery and prescribe rehabilitation. Figure 7.8 shows how the LTL Checker presents constraints from the Fractures Treatment model (presented in Figure 6.5 on page 167) in the file presented in Table 7.5.
Formulas in the generated LTL Checker file do not have parameters because the constraints involve actual activities from the model. Therefore, this type of declare export cannot be reused as a set of generic verification properties; it can only be used to verify event logs against these specific constraints.
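To make the semantics of such formulas concrete, the following minimal Python sketch (not the actual LTL Checker implementation) evaluates a response property on a single instance, represented as a list of (activity, event) pairs:

```python
def response_holds(trace, a, b):
    """[]((activity==A /\ event=="complete") -> <>((activity==B /\ event=="complete"))):
    whenever activity `a` completes, activity `b` must complete at or after that point."""
    for i, (act, ev) in enumerate(trace):
        if act == a and ev == "complete":
            # the eventually-operator <> ranges over the suffix starting at position i
            if not any(act2 == b and ev2 == "complete" for act2, ev2 in trace[i:]):
                return False
    return True

conforming = [("perform surgery", "complete"), ("prescribe rehabilitation", "complete")]
violating = [("perform surgery", "complete"), ("prescribe medication", "complete")]

print(response_holds(conforming, "perform surgery", "prescribe rehabilitation"))  # True
print(response_holds(violating, "perform surgery", "prescribe rehabilitation"))   # False
```

An instance is classified as conforming exactly when every mandatory formula evaluates to true on its trace.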
Figure 7.7: ConDec templates in LTL Checker

Figure 7.8: Constraints from the Fractures Treatment model in LTL Checker

7.3 The SCIFF Language

The SCIFF framework [48, 49] is based on abductive logic programming [144]. SCIFF was originally conceived for the specification and verification of global interaction protocols in open multi-agent systems, which share many aspects with the service-oriented computing setting [59]. SCIFF envisages a powerful logic-based language, with a clear declarative semantics, for specifying social interaction, and is equipped with a proof procedure capable of checking at run-time or a-posteriori whether a set of interacting entities behaves in a conforming manner with respect to a given specification. The SCIFF language is another language that
can be used for process mining in the context of the constraint-based approach.

Table 7.5: A part of the file generated by declare while exporting the Fractures Treatment model to ProM

1  # version :
2  # date : :35:46:414
3  # author : DECLARE
4  set ate.eventtype;
5  set ate.workflowmodelelement;
6  rename ate.eventtype as event;
7  rename ate.workflowmodelelement as activity;
8  #############################################################################
9  formula init_examinepatient () :=
10 {
11   <h2>init</h2>
12   <p> A has to be the first activity.</p>
13   <p> parameter(s) [A] ->examine patient</p>
14   <p> type: mandatory </p>
15 }
16 (((activity=="examine patient" /\ event=="start") \/ (activity=="examine patient" /\
17 event=="ate_abort")) _U (activity=="examine patient" /\ event=="complete"));
18 #############################################################################
19 formula response_performsurgery_prescriberehabilitation () :=
20 {
21   <h2>response</h2>
22   <p> Whenever activity A is executed,
23   activity B has to be eventually executed afterwards.</p>
24   <p> parameter(s) [A] ->perform surgery</p>
25   <p> parameter(s) [B] ->prescribe rehabilitation</p>
26   <p> type: optional </p>
27 }
28 [](((activity=="perform surgery" /\ event=="complete") ->
29 <>((activity=="prescribe rehabilitation" /\ event=="complete"))));
30 #############################################################################

SCIFF is used in ProM to verify and discover constraints. It provides an alternative to LTL, with different capabilities. SCIFF abstracts from the notion of an event. Instead of using variables that directly refer to events, it uses the more generic notion of a predicate. For example, in the context of workflow management systems, we can represent the fact that user Originator performed an event of type Type on activity Activity by the predicate perform(Activity, Type, Originator).
Using this notation, the fact that John Smith started activity examine patient is denoted by perform(examine patient, started, John Smith). Note that it is also possible to leave some parameters unspecified, i.e., perform(examine patient, started, Originator) denotes that some user started activity examine patient. There are three basic operators that can be used in SCIFF. First, operator H(Event, Time) denotes that Event happened at time Time. Second, the expectation that Event should happen at time Time is denoted by E(Event, Time). Finally, the negative expectation EN(Event, Time) denotes that Event is expected not to happen at time Time. Note that Event and Time can be variables, or they can be grounded to specific values. For example, the expression H(perform(examine patient, completed, Originator), T_e) /\ T_e > 10 matches any completion of activity examine patient at a time greater than 10 units, performed by any Originator. On the other hand, the expression E(perform(decide, completed, Originator), T_e) /\ T_e < 100 represents that activity decide must be completed within 100 time units. The three basic operators can be used to specify SCIFF integrity constraints, i.e., rules that relate events that already happened to events that are expected to happen in the future. These rules are represented as forward rules of the form Body -> Head [49]. Consider, for example, the response constraint between activities perform surgery and prescribe rehabilitation from the Fractures Treatment model shown in Figure 5.17 on page 145 and Table 5.8 on page 146. This constraint can be denoted as the SCIFF formula H(perform(perform surgery, completed, Originator), T_S) -> E(perform(prescribe rehabilitation, completed, Originator), T_R) /\ T_R > T_S. In fact, all ConDec templates (cf. Section 5.2) can be specified in the SCIFF language [70]. Note that [70] presents the mapping between SCIFF and the DecSerFlow language [37, 38]. DecSerFlow is a constraint-based language based on LTL, is very similar to ConDec, and is also supported by declare¹. Although SCIFF and LTL-based ConDec are both suitable for declarative specifications, there are some differences between these languages. First, while automata generated from LTL [74, 111, 112, 158] provide a deadlock-free execution mechanism for ConDec models (cf. Chapter 5), there exists no method for a deadlock-free execution of SCIFF models. Second, the SCIFF language is more expressive than ConDec with respect to data- and time-related aspects. Consider, for example, the aspect of time.
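The matching of happened events against (partially ground) expectations, including such time conditions, can be sketched as follows. This is a strongly simplified illustration, not the SCIFF proof procedure: None stands for an unbound variable such as Originator, and a log of H facts is checked against E and EN expectations:

```python
def matches(template, event):
    """A perform(Activity, Type, Originator) template matches a concrete event
    if every bound position agrees; None marks an unbound variable."""
    return all(t is None or t == e for t, e in zip(template, event))

def expectation_met(happened, template, time_cond=lambda t: True):
    """E(template, T): some happened fact H(event, t) matches the template and
    satisfies the time condition. EN(template, T) is simply its negation."""
    return any(matches(template, t_event) and time_cond(t) for t_event, t in happened)

# One happened fact: H(perform(examine patient, completed, John Smith), 42)
happened = [(("examine patient", "completed", "John Smith"), 42)]

# E(perform(examine patient, completed, Originator), T_e) /\ T_e > 10
print(expectation_met(happened, ("examine patient", "completed", None), lambda t: t > 10))  # True

# EN(perform(decide, completed, Originator), T_e)
print(not expectation_met(happened, ("decide", "completed", None)))  # True
```

A response-style integrity constraint Body -> Head then amounts to checking, for every happened fact matching the body, that the head expectation is met.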
While LTL implicitly models the concept of time via its temporal operators, SCIFF specifies time by explicitly constraining time variables. This allows for the specification of complex time constraints. Consider, for example, the ConDec response template (cf. Section 5.2.2), which specifies that activity B has to be completed after activity A, without considering the time interval between the executions of activities A and B. The SCIFF formulas in Table 7.6 show several examples of how the response template can be extended with different deadlines, and Figure 7.9 illustrates these deadlines. The first formula represents the classical response template without a deadline, i.e., it is only important that B is completed at some moment after A. The following three formulas extend the plain response template with three different deadlines. The second formula requires that B is completed not earlier than N time units after A is completed. The third formula specifies that B has to be completed not later than N time units after A is completed. Finally, the last formula requires B to be completed not earlier than N and not later than

¹ DecSerFlow is tailored towards the specification of web services and their choreographies.
M time units after A is completed.

Table 7.6: Deadlines in the response template in SCIFF

1. no deadline:
   H(perform(A, t_c, Originator), T_A) -> E(perform(B, t_c, Originator), T_B) /\ T_B > T_A
2. after N time units:
   H(perform(A, t_c, Originator), T_A) -> E(perform(B, t_c, Originator), T_B) /\ T_B > T_A + N
3. within N time units:
   H(perform(A, t_c, Originator), T_A) -> E(perform(B, t_c, Originator), T_B) /\ T_B > T_A /\ T_B < T_A + N
4. between N and M time units:
   H(perform(A, t_c, Originator), T_A) -> E(perform(B, t_c, Originator), T_B) /\ T_B > T_A + N /\ T_B < T_A + M

Recall that t_c ∈ T is an event type such that t_c = completed.

Figure 7.9: Deadlines in SCIFF formulas from Table 7.6

Like LTL, the SCIFF language can be used for process mining in the constraint-based approach. There are two plug-ins in the ProM framework that use SCIFF. First, the SCIFF Checker, presented in Section 7.3.1, can be used to verify event logs against SCIFF formulas. Second, DecMiner, presented in Section 7.3.2, can be used to learn SCIFF formulas from event logs.

7.3.1 Verification of Event Logs with SCIFF Checker

The SCIFF Checker plug-in in ProM is similar to the LTL Checker plug-in in both design and functionality. The SCIFF Checker can verify event logs against properties specified as SCIFF formulas. After an MXML file storing logs of events is loaded into ProM, a SCIFF property that should be verified is selected
in the SCIFF Checker. For example, the formula existence of activity A can be selected, as shown in the middle screen in Figure 7.10. The verification procedure classifies instances from the referred event log into conforming (i.e., correct) and non-conforming (i.e., wrong), as shown in the bottom screen in Figure 7.10. In addition, instances for which exceptions occurred during the verification are classified as exceptional.

Figure 7.10: The SCIFF Checker plug-in in ProM

Manipulation of parameters in the SCIFF Checker is more sophisticated than in the LTL Checker, where for each parameter a specific value must be explicitly given (cf. Figure 7.5). In the SCIFF Checker it is, e.g., possible to set the value of the activity name to be equal to or different from a given value, as shown in the top screen in Figure 7.10. The available options depend on the parameter type. For example, the set-up for time parameters is even more elaborate, i.e., it is possible to specify that a time variable is equal to, greater than, or less than a given (absolute or relative) time stamp value.

7.3.2 Discovering Constraints with DecMiner

DecMiner is a ProM plug-in that is able to learn SCIFF formulas from event logs. This plug-in uses a modified algorithm from the field of inductive logic programming for learning models from examples and background knowledge [ ]. In this approach, SCIFF formulas are learned from event logs in which instances have previously been labeled as conforming or non-conforming. For example, a hospital can label executed instances of patients' treatments as normal or too long and then learn a model that discriminates between these two classes. The learned model must consist of formulas that all hold for conforming instances. Formulas that do not hold in at least one of the conforming instances are discarded from the learned model [ ].
After an MXML event log file is loaded in ProM, DecMiner learns a model from this file in three steps, as Figure 7.11 shows. First, all instances from the MXML file must be classified as either conforming or non-conforming. This classification can be done manually or by first running an analysis with the SCIFF Checker or the LTL Checker. Second, relevant activities are selected. Finally, the templates (i.e., formulas) that should be considered are selected from a predefined collection. Note that the mapping between the SCIFF language and ConDec and DecSerFlow is already integrated in DecMiner. An existing manual mapping between ConDec templates and SCIFF integrity constraints allows automatic generation of learned models in the declare format. In Figure 7.11, DecMiner presents the learned model in the Designer component of the declare prototype (cf. Chapter 6). This shows the true integration of the various approaches.

Figure 7.11: The DecMiner plug-in in ProM

7.4 Recommendations Based on Past Executions

While traditional workflow management systems tend to enforce a particular way of working on users, flexible approaches to workflow management (e.g., adaptive systems (cf. Section 2.1.5), case-handling systems [195], or our constraint-based approach) aim at shifting decision making from the system to the users. As discussed in Chapter 1, a flexible style of work assumes that end users are both allowed to and capable of making good decisions about how to work. However, flexibility usually comes at a cost: the more flexible a workflow management system is, the less support it provides to its users, and hence the more knowledge these users need to have about the process they are a part of.
Also, full support in a workflow management system usually comes at the cost of losing flexibility, as shown in Figure 7.12.

Figure 7.12: Tradeoff: flexibility vs. support [90, 258]

Although users of flexible systems have the option to make their own decisions while working, a certain level of support is still necessary. The reasons for this can be various: inexperienced users, exceptional situations, personal preferences, etc. Traditionally, this problem is solved by educating workers (e.g., by making them more process-aware), or by having the workflow management system restrict the user, thus sacrificing flexibility. Obviously, both options are costly. Moreover, they both require a process specialist to gain insight into the processes supported by the system, either to educate the workers about these processes or to change the processes into more restrictive ones. Process mining is by origin a stand-alone activity performed after execution, i.e., an event log is taken and used to produce a model (e.g., a process model or social network), to check whether reality fits the model (cf. the LTL Checker and the Conformance Checker in ProM [25, 206]), or to extend an existing model (e.g., extending a process model into a simulation model). However, process mining techniques can also be applied to provide recommendations to users while they are executing process instances. A recommendation service more or less applies process mining on-the-fly: by looking at an event log (a set of completed executions) and a current partial execution, predictions can be made about the future of the current (partial) instance [258]. Recommendations can be considered predictions about a current instance, conditioned on the next step that has not been performed yet [258]. For example, given the partial execution of the current Fractures Treatment instance (cf.
Figure 6.5 on page 167) and the completed executions of similar instances of the Fractures Treatment model in an event log, it is predicted that this instance will take 90 days if the next step is activity prescribe sling, whereas if the next step is activity apply cast, it will last only 60 days. Figure 7.13 shows an overview of a recommendation service as it is implemented in the ProM framework and the declare prototype (cf. Chapter 6). However, the same architecture can be realized with any other process mining tool and any other workflow management system providing some degree of flexibility. The workflow engine creates event logs for executed instances. The recommendation service bases its recommendations on the information in this log. At the moment a recommendation for an instance is needed, the workflow engine sends the partial log of the instance, i.e., a record of all events performed in the instance up to that moment. The recommendation service then answers by sending a recommendation to assist users in choosing the next step(s). Such a recommendation consists of a list of advised next steps (e.g., examine patient) combined with a number of quality attributes (e.g., following this recommendation will lead to a quicker recovery).

Figure 7.13: An overview of the recommendation service [258]

To decide what the next step in an instance should be, the recommendation service compares the partial log of the instance to completed instances in the event log and searches for similar instances. As one can imagine, there can be many criteria for the similarity of instances. Figure 7.14 shows several simple instance abstractions that can be used as criteria for the comparison of instances [258]. If the prefix abstraction is used, two instances are similar if the same activities are executed in the same order. The multi-set abstraction considers instances similar if the same activities are executed the same number of times, in any order. Under the set abstraction, instances are similar if the same activities are executed, regardless of how many times and in which order.
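These three abstractions can be sketched directly in Python (a hedged illustration; the actual plug-in [258] is more elaborate). The second trace below is a hypothetical instance with the same events in a different order:

```python
from collections import Counter

# Partial instance from Figure 7.14, and a reordered variant of it.
partial   = ["A", "B", "C", "D", "C", "D", "C", "D", "E"]
reordered = ["B", "A", "D", "C", "D", "C", "D", "C", "E"]

def prefix_abs(trace):    # order and multiplicity both matter
    return tuple(trace)

def multiset_abs(trace):  # multiplicity matters, order does not
    return Counter(trace)

def set_abs(trace):       # only occurrence matters
    return frozenset(trace)

print(prefix_abs(partial) == prefix_abs(reordered))      # False
print(multiset_abs(partial) == multiset_abs(reordered))  # True: {A, B, C^3, D^3, E}
print(set_abs(partial) == set_abs(reordered))            # True: {A, B, C, D, E}
```

The coarser the abstraction, the more historical instances are considered similar, at the price of ignoring ordering (and, for the set abstraction, frequency) information.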
Figure 7.14: Possible abstractions of instances [258]. For the partial instance <A,B,C,D,C,D,C,D,E>, the prefix abstraction is <A,B,C,D,C,D,C,D,E>, the multi-set abstraction is {A,B,C^3,D^3,E}, and the set abstraction is {A,B,C,D,E}.

Another important concept in the recommendation service is the goal of the recommendations. For example, recommendations can be generated to steer the execution towards the shortest throughput time, towards avoiding executions of a critical activity, towards minimal costs, etc. The Log-Based Recommendations plug-in in ProM is such a recommendation service (cf. Figure 7.13). This plug-in allows for the selection of a preferred scale and contributor, as Figure 7.15 shows. The scale refers to the recommendation goal; e.g., the recommendation service in Figure 7.15 generates recommendations that will lead to short execution times of instances because the Duration scale is selected. The contributor refers to the criterion for instance similarity; in this case the prefix abstraction is selected (cf. Figure 7.14). In addition, some settings are available for managing and monitoring the performance of the service.

Figure 7.15: The Log-Based Recommendations plug-in in ProM

The declare prototype uses the Log-Based Recommendations plug-in in ProM for providing recommendations to users. Recommendations are optional in declare, i.e., this option can be turned on or off in the prototype. The Framework component of declare communicates with the Log-Based Recommendations plug-in as its recommendation service. Each time a user triggers a new event in an instance (by starting, completing,
or cancelling an activity), a new recommendation for that instance is requested from ProM and presented to all users working on that instance. The Worklist component presents recommendations in a special panel next to the instance, as Figure 7.16 shows. The recommendation shown in Figure 7.16 is generated for an instance of the Fractures Treatment model (cf. Figure 6.5 on page 167) using the Log-Based Recommendations plug-in and the settings shown in Figure 7.15, i.e., the recommendation service scale is duration (i.e., the goal is to minimize flow time) and the contributor is the prefix abstraction (i.e., the recommendation is based on instances with a similar prefix). The current recommendation suggests that the next step in this instance should be starting activity check X ray risk, and it specifies the expected average execution duration of the instance if activity check X ray risk is started next. It also indicates the expected duration if activities examine patient, prescribe medication, prescribe rehabilitation, and prescribe sling are not started next. In other words, instances with a similar prefix where activity check X ray risk was started at this point had a short average execution time.

Figure 7.16: declare: presenting recommendations to users

Note that declare presents recommendations purely as additional information in the Worklist, i.e., users are not forced to follow recommendations. Instead, they can freely decide what to do, even if this means acting against what is recommended. In this manner, declare offers a significant level of support to its users without sacrificing flexibility. It also nicely illustrates that process mining and flexibility fit well together. Allowing for a lot of freedom, while at the same time monitoring and supporting, seems to combine the best of both worlds.
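The selection of a recommended next step under the duration scale and prefix abstraction can be sketched as follows. This is a simplified illustration with made-up activities and durations, not the actual Log-Based Recommendations implementation:

```python
from collections import defaultdict

def recommend(history, partial):
    """history: list of (trace, total_duration) for completed instances.
    Among instances whose trace starts with `partial`, average the total
    duration per candidate next step; lower averages are better advice."""
    buckets = defaultdict(list)
    k = len(partial)
    for trace, duration in history:
        if trace[:k] == partial and len(trace) > k:
            buckets[trace[k]].append(duration)
    return {step: sum(ds) / len(ds) for step, ds in buckets.items()}

# Hypothetical completed instances with their total durations (in days).
history = [(["examine patient", "apply cast", "remove cast"], 60),
           (["examine patient", "prescribe sling", "check"], 90),
           (["examine patient", "apply cast", "check"], 58)]

advice = recommend(history, ["examine patient"])
best = min(advice, key=advice.get)  # step with the lowest expected duration
print(best, advice[best])  # apply cast 59.0
```

Richer contributors simply replace the `trace[:k] == partial` test with a comparison of multi-set or set abstractions.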
7.5 Summary

Process mining [28] can be applied to the constraint-based approach. The declare prototype, which can be used for the execution and verification of constraint models, stores information about executed instances in ProM-readable MXML files. This is the first necessary step towards the integration of process mining and the constraint-based approach. Moreover, several plug-ins that use techniques tailored towards the constraint-based approach are already available in the ProM framework [91]. While the declare prototype can execute and verify constraint models, these plug-ins offer other useful capabilities, as Table 7.7 shows.

Table 7.7: Capabilities of declare and the four ProM plug-ins (tools: declare, LTL Checker, SCIFF Checker, DecMiner, Log-Based Recommendations; capabilities: enactment, verification, conformance, discovery, support)

The LTL Checker plug-in offers verification of event logs against properties specified in LTL [25]. Moreover, this plug-in can easily be used to verify past executions recorded in event logs against ConDec templates and constraints, because the declare prototype offers automatic export of templates and constraints to LTL Checker files. The SCIFF Checker plug-in uses the powerful SCIFF language [48, 49] for advanced verification of event logs. This plug-in allows for the use of time and data variables in a more sophisticated way than the LTL Checker. Moreover, the DecMiner plug-in is able to learn SCIFF formulas from event logs and to automatically generate a model containing these formulas as constraints [ ]. Thanks to the mapping between ConDec and SCIFF [70], DecMiner automatically generates a ConDec model from the learned formulas. Process mining can also be useful during the execution of instances. Analysis of past executions can serve as a basis for generating recommendations for users who are currently executing process instances [258]. By using the Log-Based Recommendations plug-in in ProM, declare is able to overcome the flexibility vs. support tradeoff, i.e., declare users can get support from the system without sacrificing flexibility.
Process mining techniques provide a powerful complement to the constraint-based approach. Moreover, the flexible style of work supported by declare can benefit from the true integration of the two approaches, as Figure 7.17 shows. Accountability, which is addressed by monitoring executed instances with the LTL Checker, SCIFF Checker, and DecMiner, plays an important role in flexible processes. On the one hand, flexibility allows people to work in various ways, i.e., people working with systems like declare are more likely to be able to work in their own preferred way. The traditional procedural approach, on the other hand, tends to force people to work in a pre-defined way, leaving them much less choice.

Figure 7.17: Integration of various approaches

Thus, monitoring the actual execution of processes can be considered more important in the constraint-based approach than in the procedural one. Consider, for example, the subway systems in Paris and Amsterdam. The Paris subway relates to the procedural approach, i.e., one must have a ticket in order to enter the subway. Therefore, frequent ticket controls are no longer necessary once the passenger is in the subway. Amsterdam's subway is more flexible, i.e., anyone can enter it. Due to this, random ticket controls are often conducted within the subway system in Amsterdam. Flexibility can sometimes come at a cost. When multiple options are available, making the right decision might be difficult (e.g., for inexperienced users, or in unusual situations). Therefore, providing adequate support for users is crucial in flexible processes. The Log-Based Recommendations plug-in in ProM can offer such support to users working with flexible systems like declare, without sacrificing the intended flexibility at all. In addition, the support provided by this plug-in is customizable and adjustable, e.g., based on the information retrieved during the analysis of past executions (with the LTL Checker, SCIFF Checker, and DecMiner). These four plug-ins also enable a-posteriori analysis of the effects of the provided support. For example, if the effects are not satisfactory, the recommendation service (i.e., the Log-Based Recommendations plug-in) can be adjusted.
Chapter 8 Conclusions

In this chapter we summarize our findings: in Section 8.1 we describe how we addressed our research goal, in Section 8.2 we summarize the contributions, in Section 8.3 we describe the limitations of our work, and in Section 8.4 we propose directions for future work.

8.1 Evaluation of the Research Goal

The goal of the research presented in this thesis is to enable companies that use BPM systems to achieve an optimal balance between local and centralized decision making. In order to achieve this goal, we (1) proposed a comprehensive constraint-based approach towards process support and (2) developed the declare prototype of a workflow management system that can offer an optimal ratio between flexibility and support (cf. Section 1.5). declare can be downloaded from. The problem with current systems for the automation of business processes is that they focus on providing either flexibility or support. The drawback of such systems is that users who work with flexible systems (e.g., groupware systems) do not get sufficient support from the system, while users who work with systems that do provide support (e.g., workflow management systems) do not have enough flexibility in their work (cf. Chapter 1). In this thesis, we presented a constraint-based approach to workflow management that is able to combine flexibility and support. Besides the theoretical definition of the language in chapters 4 and 5, in Chapter 6 we also presented the proof-of-concept prototype declare [2, 183]. We hope that enriching workflow management systems with flexibility will encourage organizations to combine workflow technology on the one hand with democratic work regimes and a high degree of localized decision making on the other. This way, organizations can benefit from the automated support that these systems offer.
8.2 Contributions

We believe that our approach is comprehensive in more respects than just combining flexibility and support. First, in Section 6.11, we showed that it is possible to combine multiple approaches into a federated workflow management system. This is a particularly interesting finding from a practical point of view, because contemporary organizations implement multiple business processes, which can have different characteristics. Moreover, it is often the case that some parts of a business process require a high degree of support, while other parts of the same process must be flexible. Second, in Chapter 7, we showed that existing process mining techniques can support our constraint-based approach in the diagnosis phase of the BPM cycle (cf. Figure 1.1 on page 2), which proves that our approach can be applied to all phases of the cycle. This section summarizes our contributions. In Section 8.2.1 we describe how our approach allows for different kinds of flexibility in workflow management systems. In Section 8.2.2 we summarize the different types of support that our approach provides. Section 8.2.3 discusses how our approach can help apply workflow technology in organizations that use democratic regimes of work with a high degree of localized decision making. In Section 8.2.4 we briefly describe the possibility to combine various approaches to business processes. The applicability of our approach to the full BPM life cycle is described in Section 8.2.5.

8.2.1 Flexibility of the Constraint-Based Approach

As pointed out in [ ], there are several types of flexibility when it comes to workflow management systems: (1) flexibility by design, (2) flexibility by underspecification, (3) flexibility by change, and (4) flexibility by deviation. In Chapter 2 we presented many approaches and systems that aim at improving the flexibility of workflow technology, but none of these approaches covers all types of flexibility.
In the context of flexibility, the most important contribution of our approach is that it is able to support all types of flexibility. In fact, although the primary motivation of our research is enabling a high degree of flexibility by design, it is remarkably easy to also provide all other types of flexibility using our constraint-based approach (cf. sections and 3.3).

Flexibility by Design

Flexibility by design is achieved in a workflow management system when its process models cover many execution alternatives [125, ]. In general, constraint-based approaches are obvious candidates for offering a high degree of flexibility by design, because execution alternatives are implicitly specified in constraint models, i.e., all execution alternatives that do not violate constraints are possible [ ]. In our constraint-based approach, we allow for all execution alternatives that do not violate any mandatory constraint.
Moreover, with our approach it is possible to easily develop process models that offer a wide range of execution alternatives. For example, both models presented in figures 5.17 on page 145 and 5.18 on page 148 allow for an infinite number of execution alternatives.

Flexibility by Underspecification

Flexibility by underspecification is achieved in a workflow management system if it is possible to design process models that have unspecified parts, which will be determined at run-time [125, ]. This type of flexibility is offered via the dynamic decomposition of constraint-based models and YAWL models, as described in Section . Flexibility by underspecification in our approach can be achieved in a dynamic way, where the user decides at the latest possible moment during the execution of instances (1) whether to invoke a subprocess, (2) which subprocess to invoke, and (3) with which parameters.

Flexibility by Change

Flexibility by change is achieved in a workflow management system when it is possible to change models of running instances during their execution [125, ]. Our constraint-based approach and the declare prototype offer flexibility by change by allowing for a comprehensive change of running instances: activities and constraints in instances can easily be added, removed, and changed. Considering the problems that traditional approaches face when it comes to run-time change (cf. Section 2.1.5), our constraint-based approach uses a remarkably simple method to handle this kind of change [184]. On the one hand, it is straightforward to check if a specific ad-hoc change is applicable to the state of the current instance, by simply checking whether the change permanently violates a mandatory constraint. On the other hand, in case of an invalid change, it is possible to produce detailed diagnostics pinpointing the reason for the change failure (cf. sections 4.5, 5.7, and 6.5).
Flexibility by Deviation

Flexibility by deviation is achieved in a workflow management system if it is possible to deviate from a process model during execution, without having to change the model [ ]. Our constraint-based approach allows for optional constraints in process models (besides mandatory constraints). While mandatory constraints must be fulfilled during execution, optional constraints can be violated (cf. sections 4.2 and 5.4). Moreover, the declare prototype provides an informative warning to users each time they are about to violate an optional constraint, as shown in Figure 6.8 on page 171. In this manner, users can decide whether or not to violate the constraint and deviate from the constraint-based model.
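The mandatory/optional distinction can be sketched as follows. This is illustrative code only, not declare's actual API: the `Constraint` class, the activity names, and the `try_execute` helper are all hypothetical. An action that violates a mandatory constraint is blocked; an action that violates an optional constraint only triggers a warning, so the user may deviate:

```python
# Hypothetical sketch of mandatory vs. optional constraint handling.

class Constraint:
    def __init__(self, name, holds, mandatory=True):
        self.name = name
        self.holds = holds          # predicate over the trace executed so far
        self.mandatory = mandatory

def precedence_ok(trace):
    # ConDec-style 'precedence': 'pay' may only occur after 'bill'.
    return "bill" in trace[:trace.index("pay")] if "pay" in trace else True

constraints = [
    Constraint("precedence(bill, pay)", precedence_ok, mandatory=True),
    Constraint("absence(rush)", lambda t: "rush" not in t, mandatory=False),
]

def try_execute(trace, activity):
    candidate = trace + [activity]
    for c in constraints:
        if c.mandatory and not c.holds(candidate):
            # Mandatory constraints are enforced: the action is blocked.
            return trace, f"blocked: {c.name}"
    for c in constraints:
        if not c.mandatory and not c.holds(candidate):
            # Optional constraints only produce a warning; deviation is allowed.
            print(f"warning: deviating from optional constraint {c.name}")
    return candidate, "executed"

print(try_execute([], "pay"))        # blocked: mandatory precedence violated
print(try_execute(["bill"], "rush")) # executed, but with a deviation warning
```

In declare itself the warning is interactive (cf. Figure 6.8), but the underlying decision rule is the same: mandatory constraints bound what is possible, optional constraints only describe what is preferred.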
Support of the Constraint-Based Approach

Besides allowing for a high degree of flexibility, our constraint-based approach and the declare prototype support various types of user assistance. In this section we briefly summarize the various types of support that can be provided by our approach and the declare prototype: verification of models to detect errors, monitoring states of constraints and instances, enforcing the correct execution of instances, run-time recommendations based on past executions, and analysis of instances executed in the past.

Verification of Models

Constraint-based models can contain an arbitrary number of constraints that interfere in subtle ways. This can cause errors in models. Verification of constraint-based models provides an automated mechanism for detecting two types of errors, as described in sections 4.6 and 5.8. First, it is possible to automatically detect whether a constraint-based model contains an event/activity that can never be executed, i.e., a so-called dead event/activity. Second, it is possible to detect that instances of a constraint-based model can never be executed correctly because it is not possible to satisfy all mandatory constraints of the model, i.e., there is a conflict in the model. In addition to automated verification of constraint-based models, it is also possible to detect the exact combination of constraints that causes (each of) the error(s). Reporting both the error and its direct cause (e.g., Figure 6.11 on page 174 and Figure 6.12 on page 175) helps developers of constraint-based models to understand the problem and eliminate errors.

Monitoring States of Instances and Constraints

Execution of instances of constraint-based models is driven by constraints. In order to execute an instance in a correct way, it is necessary that, at the end of the execution, all mandatory constraints are satisfied, i.e., that the instance is satisfied.
Executing activities in an instance may change the state of one or more constraints, and of the instance itself (cf. sections 4.4 and 5.6). Therefore, it is important that the instance state and the states of all its constraints are presented to users throughout the execution of the instance. The insight into the state of the instance and its constraints helps users of the declare prototype to understand what is going on and which actions are necessary in order to execute the instance in a correct way (cf. Figure 6.7 on page 169).

Enforcing Correct Execution of Instances

Some actions of users might cause an instance (and its constraints) to leave the satisfied state. In some such cases it is possible to take actions that will eventually lead to a correct, i.e., satisfied, instance. We refer to this type
of violations as temporary violations. In other cases, the instance becomes permanently violated, i.e., it becomes impossible to satisfy the instance in the future. Especially in instances with multiple constraints, it is very difficult for users to be aware of actions that will permanently violate the instance. Therefore, as described in sections 4.4, 5.6, and 6.4, the declare prototype prevents users from taking actions that lead to permanent violation of instances.

Recommending Effective Executions

While executing constraint-based models, users typically have many alternatives available, i.e., the constraint-based approach offers a high degree of flexibility. In some situations, it might be difficult for users to decide which alternative is the most appropriate one. Run-time recommendations about which action leads to which result can help users in these situations. As described in Section 7.4, the declare prototype provides its users with run-time recommendations generated by the ProM tool [8, 91, 258]. The architecture of the recommendation service in ProM allows for the generation of various kinds of recommendations. In general, recommendations are generated based on past executions and are tailored towards a specific goal. For example, one recommendation strategy could be to recommend actions that, in the past, led to quick instance completion. By presenting recommendations as additional information on the screen (cf. Figure 7.16 on page 216), declare allows its users to choose whether or not to follow the recommendations.

Analysis of Past Executions

Workflow management systems support the execution of vast numbers of instances. Often, it is hard to keep track of all instances that were executed in the past. However, the information about past executions can be very useful in practice.
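The monitoring and enforcement mechanisms summarized above can be illustrated with a small automaton sketch. The automata below are hand-written simplifications for two ConDec-style constraints; declare generates such automata from LTL formulas automatically, so the encoding here is an assumption for illustration only. A constraint is satisfied when the automaton is in an accepting state, temporarily violated when an accepting state is still reachable, and permanently violated otherwise:

```python
# Hand-written automata for two ConDec-style constraints (illustrative).

RESPONSE = {                       # response(a, b): every a is eventually followed by b
    "delta": {"sat": {"a": "wait"}, "wait": {"b": "sat"}},
    "accepting": {"sat"},
    "init": "sat",
}
ABSENCE2 = {                       # absence2(a): a occurs at most once
    "delta": {"zero": {"a": "one"}, "one": {"a": "trap"}, "trap": {}},
    "accepting": {"zero", "one"},
    "init": "zero",
}

def run(aut, trace):
    state = aut["init"]
    for event in trace:
        # Events without an outgoing edge leave the state unchanged (self-loop).
        state = aut["delta"][state].get(event, state)
    return state

def can_reach_accepting(aut, state):
    # Simple reachability search: can the constraint still be satisfied?
    seen, frontier = set(), [state]
    while frontier:
        s = frontier.pop()
        if s in aut["accepting"]:
            return True
        if s not in seen:
            seen.add(s)
            frontier.extend(aut["delta"][s].values())
    return False

def constraint_state(aut, trace):
    s = run(aut, trace)
    if s in aut["accepting"]:
        return "satisfied"
    return "temporarily violated" if can_reach_accepting(aut, s) else "permanently violated"

print(constraint_state(RESPONSE, ["a"]))        # temporarily violated
print(constraint_state(RESPONSE, ["a", "b"]))   # satisfied
print(constraint_state(ABSENCE2, ["a", "a"]))   # permanently violated
```

Enforcement then amounts to a look-ahead on this check: before executing an event, the system verifies that no mandatory constraint would end up in a state from which acceptance is unreachable, and blocks the event if one would.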
Process mining techniques focus on various types of analysis of past executions, which can help to improve business processes (e.g., by redesigning process models) [28]. Although initially motivated by traditional approaches, existing process mining techniques can also be applied in the context of our constraint-based approach, as described in Chapter 7. Moreover, several mining techniques tailored towards the constraint-based paradigm are already available (e.g., the LTL Checker, SCIFF Checker and DecMiner presented in sections 7.2, and 7.3.2, respectively). This is promising, as processes that require a lot of flexibility typically benefit most from the results of process mining.

The Constraint-Based Approach and Organization of Human Work

Due to their lack of flexibility, procedural workflow management systems are unable to support Democratic Work Regimes (DWRs), as shown by the evaluation with respect to the structural requirements of De Sitter [231] in Table 2.3 on page 43. Enhancing the flexibility of workflow technology allows users to make decisions about how to work, i.e., flexibility enables local decision making in business processes (cf. Chapter 1). Due to a higher degree of flexibility, our constraint-based approach and the declare prototype enable the implementation of organizational styles that advocate local decision making in business processes, e.g., Socio-Technical Systems (STS) (cf. Section 2.3). STS advocate the organization of work into Self-Managed Work Teams (SMWTs), where a meaningful piece of a business process (i.e., a subprocess instead of a single activity) is allocated to a SMWT as its assignment. Within one assignment, decisions are made locally by the SMWT, i.e., the SMWT controls the execution of the assignment. The declare prototype supports the style of work advocated by STS in two ways. First, work can be structured into meaningful pieces via the possibility to create arbitrary decompositions of declare constraint-based models, as described in Section . Second, a high degree of flexibility allows declare users to choose between multiple execution alternatives within the instances they are working on. These two factors enable our approach to fulfil the requirements for a socio-technical style of work specified by De Sitter [231], as shown in Table 8.1.

Table 8.1: Evaluation of declare with respect to the STS structural requirements of De Sitter [231]

1 functional deconcentration | YES: multiple execution alternatives (multiple parallel processes) allow users to choose the most appropriate alternative for each work order.
2 integration of performance and control | YES: flexibility allows people who execute an instance to control how the instance is executed.
3 performance integration A (whole tasks) | YES: model decomposition allows structuring processes into meaningful pieces of work.
4 performance integration B (prepare + produce + support) | NOT APPLICABLE.
5 control integration A (sensing + judging + selecting + acting) | YES: flexibility allows people to select the appropriate corrective alternative and execute it.
6 control integration B (quality + maintenance + logistics + personnel, etc.) | NOT APPLICABLE.
7 control integration C (operational + tactical + strategic) | YES: because people control the operational aspect of their work, operational, tactical and strategic control can be integrated at the workplace level.
Functional deconcentration. In functional deconcentration, different groups of work orders have different executions [231]. Constraint-based models allow for many execution alternatives as long as the main rules (i.e., constraints) are followed. This allows declare users to choose the most appropriate execution alternative for each (group of) work order(s), i.e., functional deconcentration can easily be achieved with declare.

Integration of performance and control. The same people who perform the work should also be authorized and responsible for the control of work [231]. Flexibility and decomposition of business processes in declare allow for the integration of performance and control, i.e., users can control the piece of a business process that they are executing.

Integration into whole tasks. Instead of specialized, short-cycled tasks, tasks should form a meaningful unit of work allocated to a group of people for execution [231]. Decomposition of declare models allows for structuring a large business process into subprocesses, which are allocated to a group for execution. Therefore, integration into whole tasks is possible in declare.

Integration of preparation, production and support. Preparation, production and support functions must be integrated at the workplace level [231]. As discussed in Section 2.3, workflow management systems support the production function only, and therefore this parameter is not applicable. This also applies to declare.

Integration of control functions: sensing, judging, selecting, and acting. The functions of a control cycle are: (1) sensing the process state, (2) judging the need for a corrective action, (3) selecting the appropriate corrective action, and (4) acting with the selected control action [231]. All control functions should be integrated at the workplace level [231].
Because declare allows people to control their work, they can sense, judge, select, and finally act based on the selected control action. Hence, this requirement of De Sitter [231] is also supported.

Integration of the control of quality, maintenance, logistics, personnel, etc. Control of quality, maintenance, logistics, personnel, etc. should be conducted at the workplace level by the people who are performing the work [231]. As discussed in Section 2.3, workflow management systems support the production function only, and therefore this parameter is not applicable. This also applies to declare.
Integration of operational, tactical and strategic controls. Operational, tactical and strategic controls should be integrated at the workplace level [231]. Thanks to the flexibility provided by declare, people executing the work control the operational aspect of their work. This makes it possible to integrate operational, tactical and strategic control at the workplace level.

If we compare Table 8.1 to the earlier analysis of contemporary workflow management systems (cf. Table 2.3 on page 43), it becomes apparent that declare provides much more support for the style of human work advocated by STS.

Combining the Constraint-Based Approach with Other Approaches

Different types of business processes must often be combined in practice. On the one hand, a company can run various types of business processes. On the other hand, a business process itself can consist of subprocesses with different characteristics. Therefore, workflow technology should be able to support various types of processes. The declare prototype can support mixtures of various processes via the decomposition of constraint-based declare and procedural YAWL processes (cf. Section 6.11). Moreover, the same principle can be applied to other systems in order to enable combining even more approaches (e.g., worklets [41,44,45]).

Business Process Management with the Constraint-Based Approach

The Business Process Management (BPM) life cycle consists of several phases: design, implementation, enactment and diagnosis of business processes. Workflow management systems and process mining tools are two types of software products that, combined, can support the whole BPM cycle. As a workflow management system, the declare prototype supports the first three phases: design, implementation and enactment. Process mining tools, e.g., the ProM tool, aim at supporting the diagnosis phase by allowing for automated analysis of executed business processes.
Although not tailored for our constraint-based approach, existing process mining techniques can be used to diagnose executions of constraint-based processes. Moreover, some of the existing techniques (e.g., the LTL Checker, the SCIFF Checker, and the DecMiner) are fully applicable to the constraint-based approach. This shows that the constraint-based approach can be applied to the whole BPM cycle, as shown in Figure 8.1.
Figure 8.1: Constraint-based approach in the BPM life cycle (the ProM process mining tool covers the diagnosis phase; the declare workflow management system covers process design, implementation and enactment)

8.3 Limitations

Besides the problem of a missing activity life cycle in the proposed ConDec language, which is described in detail in Section 5.9, the research presented in this thesis has two further limitations. First, in Section 8.3.1, we describe problems related to the complexity of models with many constraints. Second, in Section we describe problems related to the absence of a proper empirical evaluation of our approach.

8.3.1 Complexity of Constraint-Based Models

There are two problems related to the complexity of models with a large number of constraints. The first problem is of a technical nature: due to the use of LTL for constraint specification, efficiency decreases dramatically as the number and complexity of mandatory constraints rises. The second problem is related to the human capacity to deal with information.

Efficiency-Related Problems

As described in Section 5.4, an automaton is generated for the conjunction of the formulas of all mandatory constraints in an LTL-based constraint model, i.e., for the so-called mandatory formula (cf. Definition on page 142). The mandatory formula is used for (1) execution, (2) ad-hoc change and (3) verification of constraint models based on LTL. Since the automaton generated for an LTL formula is exponential in the size of the formula [74,84,85,108,111,112,158], the time needed for generating these automata becomes very long for big mandatory formulas. This can cause problems. For example, generating the automaton for an instance with a big mandatory formula may be extremely slow. There are two possible causes of this problem. First, the more mandatory constraints there are in a model, the larger the mandatory formula for the model
will be. Second, as shown in Section 5.2, various constraint templates have different LTL formulas. For example, the LTL formula for the alternate precedence template presented in Table 5.2 on page 129 is significantly more demanding from a computational point of view than the formula for the existence template presented in Table 5.1 on page 127. Consider the Fractures Treatment model presented in Figure 6.5 on page 167 as an illustrative example. Loading a new instance of this model in declare takes approximately 12 seconds on a computer with a 2.80 GHz Pentium 4 processor [4] and 0.99 GB of RAM running Microsoft Windows XP Professional version 2002 [5]. If the alternate precedence constraint between the activities check X ray risk and perform X ray is removed from the original Fractures Treatment model, then starting an instance in declare on the same computer takes approximately 1 second. Obviously, the size of the LTL formula for the alternate precedence constraint dramatically increases the time to construct the automaton for the mandatory formula of the instance, i.e., it may take too long to create and start a new instance of this model in declare.

The efficiency problem described above can occur at several points in declare. First, when creating a new instance, an automaton is generated for the mandatory formula, which can cause the instance creation to take a long time. Second, when applying an ad-hoc change, an automaton is generated for the mandatory formula of the changed instance, which can cause the application of the ad-hoc change to take too long. Third, whenever a condition on a mandatory constraint changes (i.e., after completion of an activity in the instance), a new automaton is generated for the mandatory formula, which can cause the processing of a completed activity to take too long (cf. Section 6.9).
Fourth, during verification, i.e., when analyzing dead activities, conflicts and history violations during ad-hoc change (cf. sections 5.8 and 5.7), an automaton is generated for combinations of mandatory constraints in order to identify the cause of the error, which can cause the verification to be too time-consuming. Note that, in the case of models that contain dead activities and conflicts, the plain detection of errors might take less time than finding the set of constraints that causes the error. This is because the original model has a smaller set of satisfying traces than its submodels, i.e., models where some of the mandatory constraints are removed (cf. Property 4.2.5). Therefore, the automaton generated for the whole model contains fewer traces than the automata created for the subsets of mandatory constraints. As a consequence, the generation of the automaton for the whole model might be considerably faster than the generation of the automata for subsets of mandatory constraints.

Capacity of Humans to Deal With Information

Constraints represent rules that should be followed while executing instances of constraint-based models. Therefore, it is important that people who are executing an instance are aware of the states of the constraints in the instance throughout the whole execution. In this manner, people can understand what can or cannot be done in an instance and why, and what should be done in order to satisfy constraints and execute the instance correctly. Because of this, the declare prototype presents whole instances and the states of all constraints (by means of different colors) to its users (cf. Section 6.4). Because people must keep track of the states of all constraints while executing instances, instances with many constraints can easily become too complex for people to cope with. The fact that different types of constraints have different semantics, and the fact that constraints can interact in various ways, makes handling instances with many constraints even more complex. On the one hand, an instance with many constraints involves a high degree of variance and a large amount of information that people must handle. On the other hand, there are severe limitations on the amount of information people are able to receive, process and remember [172]. For example, research shows that it is difficult for people to deal with more than (approximately) seven chunks of varying information at a time [122,172]. Therefore, models with many constraints can easily become too complex for humans to cope with.

Process Decomposition as a Solution

At the moment, the only solution that seems able to alleviate the two problems described above is using the constraint-based approach to model business processes with a moderate number of constraints. To achieve this, we recommend decomposing large processes into small units of work, preferably modeled using our constraint-based approach.
This will increase efficiency on the one hand, and make it easier for people to execute their work on the other hand.

Evaluation of the Approach

Another limitation of the work presented in this thesis is the lack of a proper empirical evaluation of the proposed constraint-based approach and the declare prototype. An empirical test is missing due to time limitations. However, we did consider the available options for an evaluation. We see three possibilities for an evaluation: (1) evaluation by experts, (2) conducting laboratory experiments, and (3) testing in practice.

Evaluation by Experts

Evaluation by experts is not an uncommon practice in the area of workflow technology [247, 248]. This type of test can be performed, e.g., by means of conducting a survey or interviews with experts in the field. These experts can come from academia and industry, and the evaluation is based on their knowledge and professional opinion about the evaluated approach. The
advantage of this kind of evaluation is that experts already have knowledge about the state of the art in the field, and can easily judge the advantages and disadvantages of our approach. A drawback of this type of evaluation is that it is conducted by experts, who may have different opinions than the end-users.

Laboratory Experiments

Our approach can also be evaluated with experiments conducted under laboratory conditions. For example, a group of participants (e.g., students) can act as end-users and work with the declare prototype on some imaginary scenarios. The advantage of experiments is that the approach is evaluated by people who acted as end-users and used the declare tool for a while. The main drawback of experiments as an evaluation method is related to the fact that flexibility comes into play when unexpected (exceptional) scenarios occur. On the one hand, waiting for spontaneous exceptions usually takes a lot of time, which can make experiments too costly. On the other hand, prescribing exceptional situations and causing them on purpose during experiments is not a good solution, because prescribed situations cannot be considered unpredicted and exceptional.

Testing in Practice

The most desirable manner of evaluation seems to be testing the approach in practice. This would solve the problems imposed by the previous two methods: (1) end-users are the ones who evaluate the approach, and (2) practice can offer realistic unpredictable situations where flexibility and support are both needed. However, practice tests are also the most difficult type of evaluation to conduct. There are several reasons for this. First, declare is only a prototype of a workflow management system and would need a lot of further product development in order to be suitable for use in practice.
Second, it is not likely that an organization would be willing to abandon a commercial workflow system and commit the execution of its business processes to the testing of a prototype. Third, in order to evaluate our approach, it is necessary that end-users also have knowledge about (many) other approaches, which is not likely in practice.

8.4 Directions for Future Work

The work presented in this thesis represents an initiative to increase the flexibility of workflow management systems without sacrificing support. We hope to have shown that a constraint-based approach is applicable to workflow management systems, and that workflow technology can benefit from the proposed approach. However, the current approach and the declare tool can be improved in various ways, as indicated below.
In Chapter 5 we propose ConDec as a constraint-based language. ConDec focuses only on relations between activities in constraint models (cf. Section 5.2). However, business processes can often also depend on rules that involve other perspectives of processes, such as the data, resource and time perspectives. In the future, it would be interesting to extend the ConDec language with other perspectives. For example, the time perspective would enable the use of deadlines (e.g., the response template can specify that activity A must be followed by activity B within 5 days). This can be achieved by, e.g., using the so-called Extended Timed Temporal Logic and timed automata [63] or LogLogics [123].

In Chapter 5 we propose LTL for specifying constraint templates and constraints. As described in Section 5.9, using LTL for this purpose introduces the problem of a rather artificial activity life cycle. Moreover, as discussed in Section 8.3, specifying constraints with LTL seriously decreases the efficiency of the approach. Therefore, it would be interesting to also use languages other than LTL as a basis, in order to identify the most suitable language. Moreover, using other languages for constraint specification may reveal other possibilities with respect to possible constraint templates (i.e., maybe different templates can be specified using some other language).

The proposed approach should also be evaluated in a proper way. An appropriate evaluation can reveal both the strong and weak points of the approach and indicate directions for further improvement.

As described in Chapter 1, workflow management systems are not the only type of system used for business process support in practice. Groupware systems can also perform this function.
However, these two types of systems focus on different aspects of business processes: while workflow technology aims at automating the operational aspect of processes, groupware is mostly used to support human collaboration. As a workflow management system, the declare prototype focuses on introducing flexibility with respect to the operational aspect of work. However, flexibility implies a deeper involvement of people in their work and, therefore, more intensive collaboration. We therefore also propose extending our approach with groupware-like functionality, in order to be able to support both the operational and the collaborative aspects of the flexible work style.

8.5 Summary

In this thesis we presented a new approach to workflow management systems that is able to achieve a better balance between flexibility and support. We hope that, by optimizing flexibility and support, workflow technology will provide sufficient support for handling complex situations and, at the same time, enable people to control their work. As a result, people who work with workflow management systems will be more satisfied and achieve better results while performing their daily work.
Appendix A

Work Distribution in Staffware, FileNet and FLOWer

In order to gain insight into how workflow management systems distribute work to people, we have modeled the work distribution mechanisms of three commercial workflow management systems: Staffware [238], FileNet [107] and FLOWer [180]. Staffware and FileNet are examples of two widely used traditional workflow management systems. FLOWer is based on the case-handling paradigm, which can be characterized as a more flexible approach [26,39]. Each of the models is built as a CPN model [138,139,152] and is an extension of the basic model presented in Section . In the remainder of this chapter we present each of the developed CPN models: Staffware in Section A.1, FileNet in Section A.2, and FLOWer in Section A.3. Finally, Section A.4 concludes the chapter.

A.1 Staffware

We extended the basic model to represent the work distribution of Staffware. The way of modeling the organizational structure and the resource allocation algorithm are changed, while the concept of work queues and the possibility for the user to forward and suspend a work item are added to the model. In this section we first describe the organizational structure of Staffware. Second, we describe the work queues and the two-level distribution that accompanies them. Third, we explain the resource allocation of Staffware and its allocation function. Finally, we show which features have to be added to the basic model to implement the suspension and forwarding of work.

A simple organizational structure can be created in Staffware using the notions of groups and roles. The notion of a group is defined as in the basic model, i.e., one group can contain several users, and one user can be a member of several
groups. However, specific to Staffware is that a role can be defined for only one user. This feature does not require any changes in the model structure or color sets. However, it changes the way the initial value for the user maps should be defined: one role should be assigned to only one user.

A.1.1 Work Queues

Groups are used in Staffware to model a set of users that share common rights. If a whole group is authorized to execute a work item, then each member of the group is authorized to execute the work item. Staffware introduces a work queue for every group. The work queue contains work items that group members can execute, and it is accessible to all members of the group. Single users can be considered to be groups that contain only one member. Thus, one work queue is also created for every user, and this personal queue is only accessible by a single user. Each user has access to the personal work queue and to the work queues of all the groups the user is a member of.

While the basic model offers a work item directly to users, Staffware offers items in two levels. First, a work item is offered to work queues (colset WQ = string) in the work distribution module (cf. Figure A.1). We refer to this kind of work item as a queue work item (colset QWI = product WI * WQ). Second, every queue work item is offered to all members of a group (i.e., work queue) in the offering sub-module (cf. Figure A.1). Only one member will execute the queue work item once. We refer to a queue work item that is offered to a member (of a work queue) as a user work item (colset UWI = product User * QWI).

Figure A.1 shows the first level of distribution in the work distribution module of Staffware. The transition offers to work queues removes a work item token from the place new work items and offers it to work queues by producing queue work item tokens in the place to offer to work queues. To do this, it retrieves activity maps, user maps and field maps as input elements. It also produces a work item token in the place offered work items. The queue work item tokens in the place to offer to work queues are produced by the allocation function offer_qwi in the arc inscription between the transition offers to work queues and the place to offer to work queues. This function takes a work item, activity maps, user maps and field maps as parameters. (A fifth parameter is an empty list, used only as an aid to perform calculations in the function; it should always be left empty and does not influence the function results.) The effects of this function are explained in Section A.1.2. The transition offers to work queues produces a queue work item token in the place offered work items to store the information about which work items are expected to be completed by work queues. A token in the place to offer to work queues sends a message to the offering sub-module that the queue work item should be further distributed to the work queue members. After the completion of a queue work item, the offering sub-module creates a queue work item in the
To do this, it retrieves activity maps, user maps and field maps as input elements. It also produces a work item token in the place offered work items. The queue work item tokens in the place to offer to work queues are produced by the allocation function offer qwi in the arc inscription between the transition offers to work queues and the place to offer to work queues. This function takes a work item, activity maps, user maps and field maps 1 as parameters. The effects of this function are explained in Section A.1.2. The transition offers to work queues produces a queue work item token in the place offered work items to store the information about which work items are expected to be completed by work queues. A token in the place to offer to work queues sends a message to the offering sub-module that the queue work item should be further distributed to the work queue members. After the completion of a queue work item, the offering sub-module creates a queue work item in the 1 The fifth parameter is an empty list and is used as aid to perform calculations in the function. This parameter should always be left empty and does not influence the function results.
245 Section A.1 Staffware 235 place completed queue work items. The transition completes work item considers a work item to be completed when all queue work items that originate from that work item are completed. This transition retrieves a work item from the place offered work items and waits until all queue work items that originate from (that were offered to work queues based on) the referring work item. For this reason, the allocation function offer qwi is called on the arc inscription between the place completes work item and the transition completes work item with the same parameters like in the arc inscription between the transition offers to work queues and the place to offer to work queues. iamaps amaps activity map fmaps AMaps amaps ifmaps fields fmaps FMaps new work items iwi(* work item is first offerd to work Textqueues on the basis of tmaps, umaps and fmaps *) WI wi offers to work queues wi wi offered work items WI wi completes work item offer_qwi(wi,amaps,umaps,fmaps) umaps completed work items user map umaps WI (* only umaps are necessary as input for offering queue work items to users *) iumaps UMaps to offer to work queues QWI offering offering (* a work item is complete when every queue, to which it was offered, has executed the work item *) to be offered Out UWI suspended In UWI forwarded In UWIxWQ rejected Out UWI selected In UWI approved Out UWI withdrawn offer offer_qwi(wi,amaps,umaps,fmaps) Out UWI completed completed queue work items In QWI UWI Figure A.1: Staffware - work distribution Distribution to work queues in Staffware follows a similar logic like the distribution in the basic model, but also introduces some changes. A difference between these two distribution models is that, instead of distributing work directly to the work lists module (users) like in the basic model, the Staffware work distribution module hands-off the distribution to users to is sub-module offering. 
While a work item is the object of distribution in the basic model, the Staffware work distribution module distributes queue work items. Figure A.2 shows the second level of distribution in Staffware within the offering sub-module. The first transition to fire here is the transition offers to work queues, when the message about the new queue work item is received from the work distribution module as a new queue work item token at the place to offer to work queues. This transition (1) removes the queue work item from the place to offer to work queues and produces it in the place offered work queues, and (2) retrieves user maps and creates new user work items in the place to be offered. The offers for users are created by the allocation function offer uwi, which takes
246 236 Chapter A Work Distribution in Staffware, FileNet and FLOWer a queue work item that is to be offered and the user maps as parameters. This function searches in user maps for all members of the work queue and creates a user work item for each member that was found. user map I/O (* use umap to offer qwi to queue members *) umaps [elt(qwi,qwis)] to offer (* every queue work item to work queues is offered to members I/O qwi QWI of the queue *) offer_uwi(qwi,umaps) offers qwi::qwis qwis iumaps [] offered queue work items qwis QWIs UMaps umaps qwis selects qwi assigned work items QWI qwi completes qwi del(qwi,qwis) forward and suspend forwardandsuspend to be offered Out UWI suspended In UWI forwarded In UWIxWQ rejected [not(elt(qwi,qwis))] Out UWI (u,qwi) Reject (u,qwi) (u,qwi) selected (u,qwi) In UWI approved Out UWI offer_uwi(qwi,umaps) withdrawn offer (* withdraw all offers Out for this queue work item *) UWI (u,qwi) completed In (* a queue work item will be executed UWI only once, by one user/queue member *) completed queue work items Out QWI Figure A.2: Staffware - offering The offering sub-module follows the logic of the basic model work distribution module. For a detailed description of this kind of distribution we refer the reader to the Section However, instead of starting with work items like the basic model, the offering sub-module starts with available queue work items. An addition to the Staffware model was the possibility to suspend and forward work. These mechanisms were added in the suspend and forward sub-module, which will be explained Section A.1.3. A.1.2 Resource Allocation The resource allocation of Staffware is captured in the two level distribution mechanism with two allocation functions: (1) function offer qwi (cf. Figure A.1) takes a new work item, activity maps, user maps and field maps as parameters and allocates work queues that are authorized to execute the work item; (2) function offer uwi (cf. 
Figure A.2) takes a queue work item and user maps as parameters and allocates all users that are members of the referring queue. Just like the basic model, Staffware searches for possible users based on roles and groups. In addition to this, in Staffware users can be allocated by their user names and data fields in the process. Thus, activity maps in the Staffware model
247 Section A.1 Staffware 237 assign a list of users, roles, groups and fields to each activity (TMap = product Task * Users * Roles * Groups * Fields). Figure A.3 shows how an activity map is specified in Staffware. Based on activity maps, function offer qwi (cf. Figure A.1) allocates work queues that are authorized to execute the work item: (1) when a user name is provided in a activity map, the work item is offered to personal work queue of the referring user; (2) for every role in the activity map, this function offers the work item to the personal work queue of the user with that role (note that one role can be assigned to only one user); (3) a work item is offered to the work queue of every group that is stated in the activity maps; and (4) for authorizations via fields, allocation is executed at the run-time following the three above described allocation strategies. The allocation at run-time is referred to as a dynamic work allocation. Every field has a unique name (colset Field = string), e.g., next user. During the execution of the process, every field is assigned a value, and this value changes (e.g., users can assign values to fields). Staffware assumes that the value of the assigned data field can be a group name, a role name or a user name. If the field next user (which for example has the value of Joe Smith assigned) is specified in the activity map of an activity, then the actual value of the field is assigned to the activity map entry at the moment when the activity becomes enabled. Thus, Joe Smith will be used in the allocation. Figure A.3: Staffware - an activity map Figure A.4 shows Staffware Process Client tool, where users can access their work queues and process the work items. In this case, there are two work queues: (1) the work queue for the group Information Systems, and (2) the personal work queue of the user Joe. When all the properties of the Staffware work distribution are merged together, unexpected scenarios might happen. 
In the example shown in Table 3.3 on page 59 activity read article should be allocated to users which are from the group Information Systems and have the role professor. The basic model allocates this activity to users that are from the group Information Systems and have the role professor, i.e., to the user Joe. Unlike the basic model, Staffware allocates this activity to: (1) the work queue of the group Information Systems (which members are Mary and Joe), and (2) the personal queue of the user who
248 238 Chapter A Work Distribution in Staffware, FileNet and FLOWer Figure A.4: Staffware - a work queue with a work item has the role professor (with one member Joe). A work item is completed in Staffware when all its queue work items are completed (cf. Figure A.1). Thus, the activity read article will be execute two times: (1) once by a member of the of the group Information Systems Mary or Joe, and (2) once by the user who has the role professor Joe. As the result of Staffware work distribution, the work item read article has two possible scenarios of the execution. This activity will be executed either once by Mary and once by Joe, or two times by Joe. Which one of these two scenarios will take place, depends only on which user is faster, i.e., on which users select the activity before the others do. A.1.3 Forward and Suspend When the user selects a work item in the basic model, the work item is assigned to him/her, and (s)he can start the work item and execute it. Figure A.5 shows that Staffware offers a more realistic and somewhat more complex model of the life cycle of a work item than the basic model (cf. Figure 3.9 on page 57). After a user selects the work item, it is assigned to him/her, and then the user can either start the work item or forward it to another user. Forwarding transfers the work item to the state offered, because it is automatically offered to the new user. If the user chooses to start the work item, (s)he can execute it or suspend it. When a work item is suspended, it is transferred back to the state initiated. After this, the system offers the work item again to all authorized users. Forwarding and suspending of work items adds two messages that are exchanged between work distribution and work lists modules in Staffware model. Figures A.1 and A.2 show two new places forward and suspend. Users trigger these two new actions in the start work and stop work sub-modules of the work list module (cf. Figure 3.12(b) on page 61). 
Figure A.6(a) shows that in the Staffware sub-module start work the user can choose to select or forward (to another work queue) the work item. To enable forwarding, we add the transition forward to the start work sub-module in Staffware model. The request to select a work item is represented with a user work item in the place requested. After this request, the start work sub-module
Workflow Support for the Healthcare Domain
Workflow Support for the Healthcare Domain register patient check patient data physical examination make documents consultation give information Ronny Mans Workflow Support for the Healthcare Domain CIP-DATAMore,More information RectorMoreMoreMore informationMore information
A Dynamic Perspective on Second Language Development. Tal Caspi
A Dynamic Perspective on Second Language Development Tal Caspi i The work in this thesis has been carried out under the auspices of the School of Behavioral and Cognitive Neuroscience (BCN) and the CenterMore information
Cover Page. The handle holds various files of this Leiden University dissertation.
Cover Page The handle holds various files of this Leiden University dissertation. Author: Tabak, F.C. Title: Towards high-speed scanning tunneling microscopy Issue Date:More information,More information
Title: Basic Concepts and Technologies for Business Process Management
Title: Basic Concepts and Technologies for Business Process Management Presenter: prof.dr. Manfred Reichert The economic success of an enterprise more and more depends on its ability to flexibly and quicklyMore informationMore informationarMore information.MoreMore information
Authenticity and Architecture
Authenticity and Architecture Representation and Reconstruction in Context Proefschrift ter verkrijging van de graad van doctor aan Tilburg University, op gezag van de rector magnificus, prof. dr. Ph.More information
CHAPTER 1 INTRODUCTION
CHAPTER 1 INTRODUCTION 1.1 Research Motivation In today s modern digital environment with or without our notice we are leaving our digital footprints in various data repositories through our daily activities,More CalorieMore information InstituteMore information
Structural performance and failure analysis of aluminium foams Amsterdam, Emiel
Structural performance and failure analysis of aluminium foams Amsterdam, Emiel IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to cite from it. PleaseMore information
Compliance Analysis in IT Service Management Systems. Master Thesis
Compliance Analysis in IT Service Management Systems Master Thesis IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE (M.Sc.) IN INFORMATION SYSTEMS AT THE SCHOOL OF BUSINESSMore informationMore
Item Analysis of Single-Peaked Response Data. The Psychometric Evaluation of Bipolar Measurement Scales
Item Analysis of Single-Peaked Response Data The Psychometric Evaluation of Bipolar Measurement Scales Acknowledgement Financial support for the printing costs of this thesis was provided by the DevelopmentalMore informationMore informationMore information
Understanding Crowd Behaviour Simulating Situated Individuals
Understanding Crowd Behaviour Simulating Situated Individuals Nanda Wijermans The research reported in this thesis was funded by the Netherlands Ministry of Defence ("DO-AIO" fund contract no TM-12). TheMoreMore,More,More information
From Workflow Design Patterns to Logical Specifications
AUTOMATYKA/ AUTOMATICS 2013 Vol. 17 No. 1 Rados³aw Klimek* From Workflow Design Patterns to Logical Specifications 1. Introduction Formal methods in softwareMore information OutlineMore informationiseMore information
Planning of outsourced operations in pharmaceutical supply chains Boulaksil, Y.
Planning of outsourced operations in pharmaceutical supply chains Boulaksil, Y. DOI: 10.6100/IR656997 Published: 01/01/2010 Document Version Publisher s PDF, also known as Version of Record (includes finalMore information.More informationMoreMore informationMore information
Software Engineering. So#ware Processes
Software Engineering So#ware Processes 1 The software process A structured set of activities required to develop a software system. Many different software processes but all involve: Specification defining
Towards Configurable Data Collection for Sustainable Supply Chain Communication
Towards Configurable Data Collection for Sustainable Supply Chain Communication Gregor Grambow, Nicolas Mundbrod, Vivian Steller and Manfred Reichert Institute of Databases and Information Systems UlmMore informationMore informationusMore information
Declarative Process Discovery with MINERful in ProM
Declarative Process Discovery with MINERful in ProM Claudio Di Ciccio 1, Mitchel H. M. Schouten 2, Massimiliano de Leoni 2, and Jan Mendling 1 1 Vienna University of Economics and Business, Austria claudio.di.ciccio@wu.ac.at,More information
Validation of a video game made for training laparoscopic skills Jalink, Maarten
Validation of a video game made for training laparoscopic skills Jalink, Maarten IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to cite from it. PleaseMore information
ENHANCING FLEXIBILITY IN CLINICAL TRIALS USING WORKFLOW MANAGEMENT SYSTEMS
ENHANCING FLEXIBILITY IN CLINICAL TRIALS USING WORKFLOW MANAGEMENT SYSTEMS Albert Murungi, Josephine Nabukenya Information Systems Department, School of Computing and Informatics Technology Makerere UniversityMore information
Comparing Business Processes to Determine the Feasibility of Configurable Models: A Case Study
Comparing Business Processes to Determine the Feasibility of Configurable Models: A Case Study J.J.C.L. Vogelaar, H.M.W. Verbeek, B. Luka, and W.M.P van der Aalst Technische Universiteit Eindhoven DepartmentMore information
Dynamic Business Process Management based on Process Change Patterns
2007 International Conference on Convergence Information Technology Dynamic Business Process Management based on Process Change Patterns Dongsoo Kim 1, Minsoo Kim 2, Hoontae Kim 3 1 Department of Industrial
Muslah Systems Agile Development Process
Muslah Systems, Inc. Agile Development Process 1 Muslah Systems Agile Development Process Iterative Development Cycles Doug Lahti December 7, 2009 October 5, 2010 In consideration of controllable systemsMore informationMore information
Life-Cycle Support for Staff Assignment Rules in Process-Aware Information Systems
Life-Cycle Support for Staff Assignment Rules in Process-Aware Information Systems Stefanie Rinderle-Ma 1,2 and Wil M.P. van der Aalst 2 1 Department Databases and Information Systems, Faculty of EngineeringMore information
Cover Page. The handle holds various files of this Leiden University dissertation.
Cover Page The handle holds various files of this Leiden University dissertation. Author: Font Vivanco, David Title: The rebound effect through industrial ecology s eyesMore information
All That Glitters Is Not Gold: Selecting the Right Tool for Your BPM Needs
PROVE IT WITH PATTERNS All That Glitters Is Not Gold: Selecting the Right Tool for Your BPM Needs by Nick Russell, Wil M.P. van der Aalst, and Arthur H.M. ter Hofstede As the BPM marketplace continuesMore information
Introduction to Workflow
Introduction to Workflow SISTEMI INFORMATICI SUPPORTO ALLE DECISIONI AA 2006-2007 Libro di testo: Wil van der Aalst and Kees van Hee. Workflow Management: Models, Methods, and Systems. The MIT Press, paperbackMoreekaMore information,More information
Data Science. Research Theme: Process Mining
Data Science Research Theme: Process Mining Process mining is a relatively young research discipline that sits between computational intelligence and data mining on the one hand and process modeling andMore information
Chapter 2 Introduction to Business Processes, BPM, and BPM Systems
Chapter 2 Introduction to Business Processes, BPM, and BPM Systems This chapter provides a basic overview on business processes. In particular it concentrates on the actual definition and characterizationMore information,More information
Chapter 1 Introduction
Chapter 1 Introduction prof.dr.ir. Wil van der Aalst Overview Chapter 1 Introduction Part I: Preliminaries Chapter 2 Process Modeling and Analysis Chapter 3 Data Mining Part II: FromMore information
An Automated Workflow System Geared Towards Consumer Goods and Services Companies
Proceedings of the 2014 International Conference on Industrial Engineering and Operations Management Bali, Indonesia, January 7 9, 2014 An Automated Workflow System Geared Towards Consumer Goods and Services:More information
Why participation works
Why participation works Full title Why participation works: the role of employee involvement in the implementation of the customer relationship management type of organizational change. Key words Participation,More information
MODERN APPROACHES TO LEADING AN ORGANIZATION WITH FOCUS ON HUMAN CAPITAL
Journal of Information, Control and Management Systems, Vol. 6, (2008), No. 2 133 MODERN APPROACHES TO LEADING AN ORGANIZATION WITH FOCUS ON HUMAN CAPITAL Ivana TESAROVIČOVÁ external PhD. student of SilesianMore informationMore information
Constraint-Based Workflow Models: Change Made Easy
Constraint-Based Workflow Models: Change Made Easy M. Pesic 1, M.H. Schonenberg 2, N. Sidorova 2, and W.M.P. van der Aalst 2 1 Department of Technology Management 2 Department of Mathematics and Computer
Event-driven control in theory and practice
Event-driven control in theory and practice Trade-offs in software and control performance PROEFSCHRIFT ter verkrijging van de graad van doctor aan de Technische Universiteit Eindhoven, op gezag van deMore information
Business-Driven Software Engineering Lecture 3 Foundations of Processes
Business-Driven Software Engineering Lecture 3 Foundations of Processes Jochen Küster jku@zurich.ibm.com Agenda Introduction and Background Process Modeling Foundations Activities and Process Models SummaryMore information
Demand Response Management System ABB Smart Grid solution for demand response programs, distributed energy management and commercial operations
Demand Response Management System ABB Smart Grid solution for demand response programs, distributed energy management and commercial operations Utility Smart Grid programs seek to increase operationalMore informationMore information
: ASYNCHRONOUS FINITE STATE MACHINE DESIGN: A LOST ART?
2006-672: ASYNCHRONOUS FINITE STATE MACHINE DESIGN: A LOST ART? Christopher Carroll, University of Minnesota-Duluth Christopher R. Carroll earned his academic degrees from Georgia Tech and from Caltech.More information
About OPITZ CONSULTING
ADF Spotlight ACM Implementation Best Practices Andrejus Baranovskis CEO & Architect Oracle ACE Director Red Samurai Consulting Danilo Schmiedel Solution Architect Oracle ACE Director OPITZ CONSULTINGMore information
Business process model reasoning: from workflow to case management
Available online at ScienceDirect Procedia Technology 9 (2013 ) 806 811 CENTERIS 2013 - Conference on ENTERprise Information Systems / PRojMAN 2013 - International Conference on Project
Measurement Information Model
mcgarry02.qxd 9/7/01 1:27 PM Page 13 2 Information Model This chapter describes one of the fundamental measurement concepts of Practical Software, the Information Model. The Information Model providesMore informationMore information
Agile Manufacturing for ALUMINIUM SMELTERS
Agile Manufacturing for ALUMINIUM SMELTERS White Paper This White Paper describes how Advanced Information Management and Planning & Scheduling solutions for Aluminium Smelters can transform productionMore information
CDC UNIFIED PROCESS PRACTICES GUIDE
Purpose The purpose of this document is to provide guidance on the practice of Modeling and to describe the practice overview, requirements, best practices, activities, and key terms related to these requirements.More information
Cost-Aware Business Process Management
Cost-Aware Business Process Management Dr. Moe Thandar Wynn Business Process Management Discipline Queensland University of Technology, Brisbane, Australia m.wynn@qut.edu.au Web: information.More informationMore information
An Introduction to. Metrics. used during. Software Development
An Introduction to Metrics used during Software Development Life Cycle Page 1 of 10 Define the Metric Objectives You can t control what you can t measure. This is a quoteMore information
Modeling Workflow Patterns
Modeling Workflow Patterns Bizagi Suite Workflow Patterns 1 Table of Contents Modeling workflow patterns... 4 Implementing the patterns... 4 Basic control flow patterns... 4 WCP 1- Sequence... 4 WCP 2-More informationMore | http://docplayer.net/25357284-Constraint-based-workflow-management-systems-shifting-control-to-users.html | CC-MAIN-2018-17 | refinedweb | 96,422 | 52.7 |
GETHOSTNAME(3) BSD Programmer's Manual GETHOSTNAME(3)
gethostname, sethostname - get/set name of current host
#include <unistd.h> int gethostname(char *name, size_t namelen); int sethostname(const char *name, size_t namelen);
The gethostname() function returns the standard host name for the current processor, as previously set by sethostname(). The parameter namelen specifies the size of the name array. If insufficient space is provided, the returned name is truncated. The returned name is always NUL terminat- ed. sethostname() sets the name of the host machine to be name, which has length namelen. This call is restricted to the superuser and is normally used only when the system is bootstrapped.
If the call succeeds a value of 0 is returned. If the call fails, a value of -1 is returned and an error code is placed in the global variable errno.
The following errors may be returned by these calls: [EFAULT] The name parameter gave an invalid address. [EPERM] The caller tried to set the hostname and was not the su- peruser.
hostname(1), getdomainname(3), gethostid(3), sysctl(3), sysctl(8), yp(8)
The gethostname() function call conforms to X/Open Portability Guide Issue 4.2 ("XPG4.2").
The gethostname() function call appeared in 4.2BSD.. | http://www.mirbsd.org/htman/sparc/man3/gethostname.htm | CC-MAIN-2015-06 | refinedweb | 206 | 56.76 |
Legal
Ask a Lawyer and Get Answers to Your Legal Questions
Thanks for your question and good evening.You would need to have a subpoena duces tecum issued to the Florida Department of Highway Custodian of Records.
I would suggest contacting them for a name of the custodian here so that you can add that to your subpoena and serve it.You would get the subpoena forms from the court clerk and the judge would sign it and you pay constable to serve it.
Here is contact information.
So this is a subpoena that requires the Judge's signature, and would not any authorized process server (not "constable") be authorized? (I already have a company in Tallahassee I've used before.)
Also, is the request for the Judge to sign the subpoena something that can be done ex parte in chambers, or is notice of this formality to Defendants required?
You may also want to consider a private process server,t hey can help you complete the subpoena forms and serve them.This is really helpful if you are pro se representing yourself and it is critical you get the record.
Here is reference
I appreciate the chance to help is more reference to the subpoena duces tecum forms.Your court clerk will have court specific ones here.
Subpoenas - Pinellas County, FL - Clerk of the Circuit Court
Your follow up..
Answer:
Yes you can use private server not a problem they can also get it signed.
No hearing is required here or notice to the other side, the court clerk will have judge sign it and let you know when it is ready.
Thanks again.
See Rule 1410 of the Florida Rules of Procedure it goes over the process for s Subpoena Duces Tecum--note you can use a private process server as you asked.
Here are downloadable forms or you can get them from the clerk.
Thanks for your patience.I had to switch this out of chat as it was having site problems.Let me know if you have more follow up.
Sorry for delay. I was away from the office for the last few days. Thanks.
Along the same lines, but maybe should be a separate question... regarding a FL civil theft action, are there any limits or parameters to the discovery process regarding interrogatories (other than "30" questions); specifically the type of questions? I'm looking for the line not to cross between what is appropriate for "Production of Documents" vs. "Interrogatories" vs. "Subpoena Duces Tecum" -- as some of these things seem to overlap in how they might be extracted from the Defense.
Anything is fair game with interrogatories if it is calculated to lead to relevant information.The other side either answers you or files objection and then you would seek to have the court rule on it.The difference with request for production of documents--if you want existing documents as opposed to questions.For instance if you want the bank statement its a document request, if you want the balance or account number you can ask an interrogatory question.A subpoena duces tecum will get you the person and the records to the hearing.This would work with a third party say a bank.You would seek to have the custodian of records bring the account records to the hearing.But you do not get the documents ahead of a hearing.This is a brief summary of the differences.Here are sample interrogatories that might help you. is a request for production below for form, obviously your subjact matter is different, best sample I can give you. might find this helpful as an overview of the civil trial process including discovery..
Great. Thanks as always.
Will digest your provision this evening; the video was good too. Will follow up tomorrow. Have a great evening.
In follow up, I been checking the FL RCP and don't quickly see if the proper procedure for presenting the Interrogatories requires filing with the Clerk of the Court and the usual case and court header, or if it is simply a more informal letter format sent between parties without a copy being filed in the Clerk's office. Can you please clarify this, and also where I can find the info on this as well? Thanks.
(a) shall be in the form approved by the court..
Tried to send you a reply and got bumped (lost) with a simultaneous entry by us both; but thanks as usual. I have the full RCP printed out and just need to cram deeper, but appreciate your pointing to me in the above. Would be good to have access to State court records like I do the Federal stuff with my PACER account to see local attorney examples on some of these forms so that I don't look too "pro se." A web site that had such examples might be a good idea.
(a) use against that party in any other proceeding.
You're getting ahead of me! I was just reading through this in the RCP; much appreciate your supplement thought.I'm digressing somewhat, but with regards XXXXX XXXXX appeal, is there any motions or other "preservation of rights to appeal" that are needed DURING the motions in limine phase in a civil cause where several Counts have been dismissed for Limitation reasons, but one or two with longer SOL time survive and the case continues? The surviving count is Civil Theft (5 yrs); but four Torts were shot down (4 yr SOL), but I strongly disagree with how the Court counted when they became known. If the Civil Theft fails, I would like to return (through appeal) to the other Counts; but fear there may be a procedure for preserving them I need (or should have) to make clear NOW. Am I correct, or can these be Appealed after the disposition of the CT count is resolved?
I didn't file or state I objected, per se, but I filed a Motion To Reconsider, and specified the reasons why I thought the ruling was in error. Would the comport to what the Appellate prerequesites require?
You too. Thanks. | http://www.justanswer.com/law/7qihs-subpoena-discover-procedure-does-plaintiff-use.html | CC-MAIN-2017-04 | refinedweb | 1,031 | 70.84 |
  package require tcom
  set voice [::tcom::ref createobject Sapi.SpVoice]
  $voice Speak "Hello World" 1
  after 3000 exit
The speech SDK is available from
  $voice Speak "<> test <" 1
returns "0x80045042 {Unknown error}". I did a search on the Microsoft website for the code which, naturally, came up blank. Any ideas would be appreciated :)

NEM On Mac OS X there is a command-line "say" program for accessing the built-in speech synthesis capabilities of the OS:
  exec /usr/bin/say "Hello World"

WHD Ubuntu Linux (and I suspect other distributions) ships with a similar command called "espeak":
$ espeak "Hello World"
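The platform-specific commands above can be folded into a single wrapper. This is only a sketch; the proc name `say` and the assumption that tcom/SAPI, /usr/bin/say, or espeak is actually installed on the respective platform are mine, not from the original posts:

  # Hypothetical cross-platform "say" proc, dispatching on the OS.
  proc say {text} {
      global tcl_platform
      switch -glob -- $tcl_platform(os) {
          "Windows*" {
              package require tcom
              set voice [::tcom::ref createobject Sapi.SpVoice]
              $voice Speak $text 1      ;# 1 = SVSFlagsAsync
          }
          "Darwin" {
              exec /usr/bin/say $text   ;# Mac OS X built-in
          }
          default {
              exec espeak $text         ;# Linux, if espeak is present
          }
      }
  }

  say "Hello World"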
GPS Rsynth is a nice public domain package that I've used for speech synthesis. I wrote a say_this.tcl script that built a GUI for tweaking the voice. I've been thinking about improving Rsynth, because it seems to be dead at the moment. Another tool that I've heard good things about is Festival [1]. CMU's Sphinx [2] is another tool that may be good, but I haven't heard from any users of it.See Festtcl for a Tcl interface to Festival.
Tcl'ers may be interested in the CSLU Toolkit [3] developed at the Oregon Graduate Institute's Center for Spoken Language Understanding. It includes a RAD environment which supports Tcl and provides tools to do Speech Recognition as well as speech synthesis. -- aricb
JKM would like to know if MG found any solution to his "0x80045042 {Unknown error}". I'm getting the same error, but it may be for a different reason. By default, the SAPI tries to interpret the string with XML tags if the first character is '<'. Change your 1 to a 17 and you should be alright. I'm getting the same error with
$voice Speak "c:/code/tcl/SAPI2.txt" 5any help would be appreciated.MG never did, I'm afraidMG Having looked on the Microsoft website - for once, it's actually returned something sensible, searching for "sapi.spvoice" on - it seems that '5' means "speak a file", the 17 you mentioned before "speak without parsing XML", and 1 is the default. I can replicated the "Unknown Error", using '5', when the file in question doesn't exist. To quote one page of the MS website, the argument must be "a null-terminated, fully qualified path to a file". The page in question is [4], and seems (as of July 10 2005, before they change the address) to be about the first page to start looking at, for this method of speech.
ET: Hey, this is pretty cool. I also downloaded the documentation and found these flags, so the 5 would be 4+1 or async and filename:
Enum SpeechVoiceSpeakFlags 'SpVoice flags SVSFDefault = 0 SVSFlagsAsync = 1 SVSFPurgeBeforeSpeak = 2 SVSFIsFilename = 4 SVSFIsXML = 8 SVSFIsNotXML = 16 SVSFPersistXML = 32 'Normalizer flags SVSFNLPSpeakPunc = 64 End Enum").
ETI found the following additional commands:
$voice Rate ;# return the rate (can be changed while reading a file async) $voice Rate value ;# set the rate of speech. Seems to take values like, -5,-4,...0...1...5,6,... $voice Skip Sentence N ;# N is plus or minus, this is while it's reading a file $voice Volume ;# return volume $voice Volume N ;# set volume to N $voice Pause ;# these 2 work while reading a file, using option 5 (asyn, file) $voice ResumeThere are others that seem to return a handle, but I can't figure out how to then use them. The one I wanted to play with was Voice, you can do this
set v [$voice Voice] $vwhich then gives you
usage: handle ?options? method ?arg ...?But I couldn't figure out what to do next. Too bad it doesn't return a list of permitted options like many tcl commands do.MG has made a little headway.
set v [$voice Voice] puts "You are speaking with [$v GetDescription]"Which, for me, shows
You are speaking with Microsoft Samwhich is the default voice on my computer. You can also use
set current [$voice Voice] set list [$voice GetVoices] set howmany [$list Count] for {set i 0} {$i < $howmany} {incr i} { set person [$list Item $i] puts "Voice $i is called [$person GetDescription], and is [set gender [$person GetAttribute Gender]]." puts "Let's make [expr {$gender == "Female" ? "her" : "him"}] speak." $voice Voice $person $voice Speak "Hello, my name is [$person GetAttribute Name]" } $voice Voice $current ;# return to originalto get a list of all the voices you have, and show their name/gender. ($person GetDescription is the same as $person GetAttribute Name).As you can see, you can change the voice by using
$voice Voice <newVoice>where <newVoice> is a voice of the form [[$voice GetVoices] Item $number]It's also possible to control which device the sound goes to, in much the same way as setting the voice:
set current [$voice AudioOutput] set devices [$voice GetAudioOutputs] set howmany [$devices Count] for {set i 0} {$i < $howmany} {incr i} { set thisdevice [$devices Item $i] puts "Device $i is called '[$thisdevice GetDescription]'. Let's play something through it..." $voice AudioOutput $thisdevice $voice Speak "I am speaking through [$thisdevice GetDescription]" } $voice AudioOutput $current ;# return to default
Each async speak (of a file or a string) gets queued and will be spoken in turn. To wait until speaking is done, one can use this:
$voice WaitUntilDone <timeout-milliseconds> ;# 0 returned if a timeout occurred and still speaking, otherwise a 1 if finishedIf timeout milliseconds is -1 this will just wait until the voice is done speaking, but this would then hang the event loop until the speech is done. Using a small value (e.g. 10), one can do a polling operation.
TLT Here is a simple script that implements talking Caller ID using a modem:
# This script opens a Caller ID-enabled modem and speaks the received name after every ring. package require tcom set Modem com8: ;# modem port set Name "" ;# current name to speak # This procedure responds to messages from the modem. proc modemCallback {chan voice} { # Read the message from the modem. set line [gets $chan] puts $line # If a ring, speak the current name and start a timer to reset the name. if {$line == "RING" && $::Name != ""} { $voice Speak $::Name set cmd {set ::Name ""} after cancel $cmd after 6000 $cmd return } # Get the name from the "NAME=" line. if {![regexp {NAME=(.*)} $line {} name]} { return } # Speak the name. set ::Name $name $voice Speak $::Name } # Create a button to show the console. pack [button .button -text "Show Console" -command "console show"] wm iconify . # Create the voice object. set voice [::tcom::ref createobject Sapi.SpVoice] # Open the modem. The procedure "modemCallback" will be called when a message is received. set chan [open $::Modem w+] fconfigure $chan -buffering line fileevent $chan readable [list modemCallback $chan $voice] # Reset the modem and enable Caller ID. puts $chan "ATZ" puts $chan "AT+VCID=1"
peterc 2008-11-26: The sound sample quality on the default voice is pretty horrible; today's mobile phones are more clear. Does Microsoft (or any third party) offer better samples anywhere?
TLT This script saves the spoken text as a wave file:
package require tcom # Create the objects. set voice [::tcom::ref createobject Sapi.SpVoice] set fileStream [::tcom::ref createobject Sapi.SpFileStream] set audioFormat [::tcom::ref createobject Sapi.SpAudioFormat] # Set the audio format of the file stream. set SAFT11kHz8BitMono 8 $audioFormat Type $SAFT11kHz8BitMono $fileStream Format $audioFormat # Open the file and attach the stream to the voice. set SSFMCreateForWrite 3 $fileStream Open text.wav $SSFMCreateForWrite False $voice AudioOutputStream $fileStream # Speak the text. $voice Speak "This is a wave file" $fileStream CloseWJG (06/04/12) Just hacked together a simple proc to read text strings aloud using the espeak Linux command. Its good fun. It also shows how useful the gnocl::setOpts command can be.
#--------------- # espeak.tcl #--------------- ## \file # File documentation. #\verbatim #!/bin/sh #\ exec tclsh "$0" "$@" #\endverbatim package require Gnocl ## # -f Text file to speak # -a Amplitude, 0 to 200, default is 100 # -g Word gap. Pause between words, units of 10mS at the default speed # -l Line length. If not zero (which is the default), consider # lines less than this length as end-of-clause # -p Pitch adjustment, 0 to 99, default is 50 # -s Speed in words per minute, 80 to 390, default is 170 # -v Use voice file of this name from espeak-data/voices # -w Write output to this WAV file, rather than speaking it directly # -b Input text encoding, 1=UTF8, 2=8 bit, 4=16 bit # -m Interpret SSML markup, and ignore other < > tags # -q Quiet, don't produce any speech (may be useful with -x) # -x Write phoneme mnemonics to stdout # -X Write phonemes mnemonics and translation trace to stdout # -z No final sentence pause at the end of the text # ref: espeak man pages proc readAloud {args} { # set defaults and parse args gnocl::setOpts "-a 5 -s 110 $args" eval exec espeak -a $a -s $s [list $t] } readAloud -t "How now brown cow. She sells sea shells by the sea shore. Peter Piper picked a peck of pickled peppers."MSA 06/05/12
#neospeech web service api example. package r http package r dom package r tcom # Подключаемся по https package require tls http::register https 443 ::tls::socket # voices #TTS_PAUL_DB #TTS_KATE_DB #TTS_JULIE_DB namespace eval neospeech { proc getConvertSimpleVoice {str} { set voice TTS_KATE_DB set email [email protected] set accountId 1234567890 set loginPassword somepassword set str [::http::formatQuery text $str] set url "" set token [::http::geturl $url] set resultCode [getXMLResponseAttr [::http::data $token] resultCode] if {$resultCode != 0} { puts Error:[getXMLResponseAttr [::http::data $token] resultString] exit } set conversionNumber [getXMLResponseAttr [::http::data $token] conversionNumber] set statusCode 5 while {$statusCode != 4} { set url "" set token [::http::geturl $url] set statusCode [getXMLResponseAttr [::http::data $token] statusCode] if {$statusCode == 5} { puts statusCode=5 exit } } set downloadUrl [getXMLResponseAttr [::http::data $token] downloadUrl] return $downloadUrl } proc getXMLResponseAttr {xml_response attr} { set d [::dom::DOMImplementation parse $xml_response] set response [set [::dom::document getElementsByTagName $d response]] set attr [::dom::element getAttribute $response $attr] return $attr } };# end ns neospeech proc getFile { url } { set token [::http::geturl $url] set data [::http::data $token] ::http::cleanup $token return $data } set str {Let him now speak, or else hereafter for ever hold his peace.} set voice_url [neospeech::getConvertSimpleVoice $str] set file [getFile $voice_url] set fname [lindex [split $voice_url /] end] set f [open $fname w] fconfigure $f -translation binary puts $f $file close $f set App [::tcom::ref createobject wmplayer.ocx] #$App URL $voice_url $App URL $fname [$App controls] play | http://wiki.tcl.tk/6252?redir=15705 | CC-MAIN-2017-34 | refinedweb | 1,735 | 59.94 |
In this Programme, you’ll learn how to print all prime numbers between Intervals or two numbers (entered by the user) by making a user-defined function.
To nicely understand this example to find prime numbers between intervals, you should have the knowledge of following C++ programming topics:
- for Loop
- break and continue Statement
- Functions
- Types of User-defined Functions
Example #1: Prime Numbers Between two Intervals
#include <iostream> using namespace std; int checkPrimeNumber(int); int main() { int n1, n2; bool flag; cout << "Enter two positive integers: "; cin >> n1 >> n2; cout << "Prime numbers between " << n1 << " and " << n2 << " are: "; for(int i = n1+1; i < n2; ++i) { // If i is a prime number, flag will be equal to 1 flag = checkPrimeNumber(i); if(flag == false) cout << i << " "; } return 0; } // user-defined function to check prime number int checkPrimeNumber(int n) { bool flag = true; for(int j = 2; j <= n/2; ++j) { if (n%j == 0) { flag = false; break; } } return flag; }
After Compiling the above code we get Output
Enter two positive integers: 12 55 Prime numbers between 12 and 55 are: 13 17 19 23 29 31 37 41 43 47 53
So to print all prime numbers between two integers,
checkPrimeNumber() function is created.And then This function checks whether a number is prime or not.
All integers between n1 and n2 are passed to this function.
If a number passed to
checkPrimeNumber() is a prime number, this function returns true, if not the function returns false.
If the user enters larger number first, this program will not work as intended. To solve this issue, you need to swap numbers first.
Ask your questions and clarify your/others doubts on Program to find prime numbers between intervals by commenting or posting your doubt on forum. Documentation | https://coderforevers.com/cpp/cpp-program/prime-interval-function/ | CC-MAIN-2019-39 | refinedweb | 296 | 52.77 |
The Java Specialists' Newsletter
Issue 155
2008-01-16
Category:
Concurrency
Java version: 5+
Subscribe
RSS Feed
Welcome to the 155th issue of The Java(tm) Specialists' Newsletter. Last Tuesday, we had a bit of a break from the rain here on Crete, so I packed my laptop in my Suzuki Jimny and headed for a more interesting location.
As you are a Java Specialist reader this will almost certainly be of interest. We have recently launched a course specifically designed for top Java professionals like yourself. The course is the result of all the knowledge and experience gained from publishing 150 advanced Java Newsletters, teaching hundreds of seminars and writing thousands of lines of Java code and offers Java programmers the chance to truly master the Java Programming Language. For more information check out the Master's the Micromanager.
mi·cro·man·age: to manage or control with excessive attention to minor details.
When tuning the performance of a system, we typically start by measuring the utilisation of hardware (CPU, disk, memory and IO). If we do not find the bottleneck, we go one step up and look at the number of threads plus the garbage collector activity. If we still cannot find the bottleneck, in all likelihood we have a thread contention problem, unless our measurements or test harness were incorrect.
In the past few laws, I have emphasised the importance of protecting data against corruption. However, if we add too much protection and at the wrong places, we may end up serializing the entire system. The CPUs can then not be utilized fully, since they are waiting for one core to exit a critical section.
A friend of mine sent me some code that he found during a code review. The
programmers had declared
String WRITE_LOCK_OBJECT, pointing to
a constant String, like so:
String WRITE_LOCK_OBJECT = "WRITE_LOCK_OBJECT"; // Later on in the class synchronized(WRITE_LOCK_OBJECT) { ... }
But wait a minute? String constants are stored in the intern
table, thus if the String
"WRITE_LOCK_OBJECT" occurs in two
classes, they will point to the same object in the constant
pool. We can demonstrate this with classes A and B, which
each contain fields with a constant String.
public class A { private String WRITE_LOCK_OBJECT = "WRITE_LOCK_OBJECT"; public void compareLocks(Object other) { if (other == WRITE_LOCK_OBJECT) { System.out.println("they are identical!"); System.out.println( System.identityHashCode(WRITE_LOCK_OBJECT)); System.out.println( System.identityHashCode(other)); } else { System.out.println("they do not match"); } } public static void main(String[] args) { A a1 = new A(); A a2 = new A(); a1.compareLocks(a2.WRITE_LOCK_OBJECT); } }
When you run A.main(), you see that with two instances of A, the field WRITE_LOCK_OBJECT is pointing to the same String instance.
they are identical! 11394033 11394033
Similarly, in B we compare the internal String to the String inside A, and again they are identical:
public class B { private String WRITE_LOCK_OBJECT = "WRITE_LOCK_OBJECT"; public static void main(String[] args) { B b = new B(); A a = new A(); a.compareLocks(b.WRITE_LOCK_OBJECT); } } they are identical! 11394033 11394033
If the String had been created with
new, it would
have been a different object, but I still think this is a bad idiom to use
for locking:
public class C { private String WRITE_LOCK_OBJECT = new String("WRITE_LOCK_OBJECT"); public static void main(String[] args) { C c = new C(); A a = new A(); a.compareLocks(c.WRITE_LOCK_OBJECT); a.compareLocks(c.WRITE_LOCK_OBJECT.intern()); } }
As we would expect, the Strings are now different objects, but if we
intern() it, it would point to the same object again:
they do not match they are identical! 11394033 11394033
Since he had repeated this pattern throughout his code, his entire system was synchronizing on one lock object. This would not only cause terrible contention, but also raise the possibilities of deadlocks and livelocks. You could also lose signals by having several threads waiting on a lock by mistake. Basically, this is a really bad idea, so please do not use it in your code.
Before we continue, we should consider Amdahl's Law in relation to
parallelization. According to Wikipedia,
Amdahl's law states that if
F is the fraction of
a calculation that is sequential (i.e. cannot benefit from parallelization),
and (1 -
F) is the fraction that can be
parallelized, then the maximum speedup that can be achieved by using
N processors is
1 ------------- F + (1 - F)/N
For example, if
F is even just 10%, the problem
can be sped up by a maximum of a factor of 10, no matter what the
value of
N is. For all practical
reasons, the benefit of adding more cores decreases as we get
closer to the theoretical maximum speed-up of 10. Here
is an example of how we can calculate it:
public class Amdahl { private static double amdahl(double f, int n) { return 1.0 / (f + (1 - f) / n); } public static void main(String[] args) { for (int i = 1; i < 10000; i *= 3) { System.out.println("amdahl(0.1, " + i + ") = " + amdahl(0.1, i)); } } }
We can see from the output that the benefit of adding more cores decreases as we get closer to the theoretical maximum of 10:
amdahl(0.1, 1) = 1.0 amdahl(0.1, 3) = 2.5 amdahl(0.1, 9) = 5.0 amdahl(0.1, 27) = 7.5 amdahl(0.1, 81) = 9.0 amdahl(0.1, 243) = 9.642857142857142 amdahl(0.1, 729) = 9.878048780487804 amdahl(0.1, 2187) = 9.959016393442623 amdahl(0.1, 6561) = 9.986301369863012
(Thanks to Jason Oikonomidis and Scott Walker for pointing this out):
We can thus have thread safe code without explicit locking. There is still
a memory barrier though, since the field inside the
AtomicInteger is marked
as
volatile to prevent the visibility problem seen
(or perhaps not seen would be more appropriate ;-) in The Law of the Blind Spot.
If we look back at the example in our previous law, The Law of the Corrupt Politician, we can simplify
it greatly by using an
AtomicInteger, instead of explicitly
locking. In addition, the throughput should be better as well:
import java.util.concurrent.atomic.AtomicInteger; public class BankAccount { private final AtomicInteger balance = new AtomicInteger(); public BankAccount(int balance) { this.balance.set(balance); } public void deposit(int amount) { balance.addAndGet(amount); } public void withdraw(int amount) { deposit(-amount); } public int getBalance() { return balance.intValue(); } }
There are other approaches for reducing contention. For
example, instead of using
HashMap or
Hashtable for shared
data, we could use the
ConcurrentHashMap. This
map partitions the buckets into several sections, which can
then be modified independently. When we construct the
ConcurrentHashMap, we can specify how many
partitions we would like to have, called the
concurrencyLevel (default is 16). The
ConcurrentHashMap is an excellent class for
reducing contention.
Another useful class in the JDK is the
ConcurrentLinkedQueue, which uses an efficient
wait-free algorithm based on one described in Simple,
Fast, and Practical Non-Blocking and Blocking Concurrent
Queue Algorithms. It also uses the compare-and-swap
approach that we saw with the atomic classes.
Since Java 6, we also have the
ConcurrentSkipListMap
and the
ConcurrentSkipListSet as part of the
JDK.
There is a lot more that I could write on the subject of contention, but I encourage you to do your own research on this very important topic, which will become even more essential as the number of cores increases.
Kind regards from Crete
Heinz
Concurrency Articles
Related Java Course
Would you like to receive our monthly Java newsletter, with lots of interesting tips and tricks that will make you a better Java programmer? | http://www.javaspecialists.eu/archive/Issue155.html | CC-MAIN-2017-09 | refinedweb | 1,254 | 54.02 |
As a test engineer I want to be able to run async test fixtures and test cases in different async tasks with the same Context. Not a copy; the same specific instance of contextvars.Context().
I do NOT want the task to run in a COPY of the context because I want mutations to the context to be preserved so that I can pass the mutated context into another async task.
I do NOT want the task to inherit the potentially polluted global context.
class Task currently unconditionally copies the current global context and has no facility for the user to override the context.
Therefor I propose adding a context argument to the Task constructor and to create_task()
It should be noted that this argument should not be used for "normal" development and only for "weird" stuff like test frameworks.
I should also note that Context().run() is not useful here because it is not async and there is no obvious existing async equivalent. This proposal would be roughly equivalent.
I should also note that a hack like copying the items from one context to another will not work because it breaks ContextVar set/reset. I tried this. It was a heroic failure. It must be possible to run a task with an exist instance of context and not a copy.
Here is a real-world use case:
Here is the hacked Task constructor I cooked up:
class Task(asyncio.tasks._PyTask):
def __init__(self, coro, *, loop=None, name=None, context=None):
...
self._context = context if context is not None else copy_context()
self._loop.call_soon(self.__step, context=self._context)
asyncio._register_task(self)
if folks are on board I can do a PR | https://bugs.python.org/msg366722 | CC-MAIN-2021-43 | refinedweb | 283 | 64.2 |
For the past two months, I have been helping my son’s grade 8 class to learn to program. All students wrote Python programs and got a feel for what programming is. This post has details on how we organized the course, code examples and lessons learned.
Background
This year, all schools in Sweden are required to start teaching programming. Many schools already teach programming, but doing so depends on having teachers who know enough to teach it. Writing code will be included in the subjects math and technology. Earlier this year, another grade 8 dad, Olle, volunteered to help teach programming this fall. He also recruited me (as a fellow enthusiastic programmer) to help.
We spoke to Caroline, the technology and math teacher, and she was grateful for any help. The class had done a bit of Scratch programming before, but had not done any text-based programming.
Philosophy
Python. Olle and I got together a couple of times to plan it out. We discussed what language and environment to use. We picked Python for a couple of reasons. First of all, both Olle and I already know Python (I use it every day at work). Second, all students have a MacBook Air, so there is already a development environment for Python installed. On their computers they have Python 2.7.10 and the IDLE editor. Python is also a good choice because its syntax is relatively easy and compact, and it is now among the most popular languages.
No magic. Caroline wanted the students to learn text-based programming. Block-based programming is good for showing concepts like if-statements and loops. However, professional programming is almost exclusively text-based. But, professional programming also involves using numerous frameworks and libraries, and we wanted to avoid that. We wanted to get to the essence of programming while avoiding “magic” behavior. So the programs should be as self-contained as possible. The behavior of the code should be evident from the instructions, without relying on functionality from external libraries.
Minimal subset. We also decided that we wanted to minimize the number of language constructs used, while still being able to write programs with a good chunk of logic in them. All programming comes down to using variables, statements in order, conditionals and loops. In Python terms, we decided that it would be enough to introduce strings and integers, if-elif-else and for-loops. We also need print-statements for output and raw_input() to make the programs interactive. And finally with decided to show how to use functions as well, to be able to show a way to structure programs. This is quite a small subset of Python, but it still allows for creating fairly advanced programs. In particular, there is no need to use classes.
For everybody. Finally, we wanted to show that anybody can learn how to program. It is like learning any other skill – there is no magic to it. And ideally we wanted to let the students feel the joy of programming – the sense of accomplishment that comes from having created a working program from scratch.
Progression
Hello world. The first program would be a simple “Hello, world!”-print-out, which only requires a print statement. Of course it also requires being able to start the IDLE editor, writing the code, and running a python program on the command line. Next, variables are introduced. We also talked about types (strings and integers) and that values of variables can be changed. Here we also show how to print the value of a variable (using %s notation) and how to read input.
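The whole first step fits in a handful of lines. This sketch uses our own variable names and hard-codes the input so it runs on its own; in class the name came from raw_input(). The parenthesized print works in both Python 2.7 and Python 3:

```python
print("Hello, world!")

name = "Ada"    # a string variable (in class: name = raw_input("Name? "))
age = 13        # an integer variable
age = age + 1   # the value of a variable can be changed

# %s inserts the value of a variable into the printed text
print("Hello, %s! Next year you will be %s." % (name, age))
```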
Calculator. The next step is to make a calculator. This is a good way to introduce the concept of a function. We start by defining a function called add, that adds two numbers together. The program then prompts the user for two numbers, calls the add function and prints the result. Next we write a subtract function, and introduce if-statements to let the user pick between adding and subtracting.
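The calculator step can be sketched like this (the names are our own, and the numbers are hard-coded here; the class versions read them with int(raw_input(...))):

```python
def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

# Hard-coded stand-ins for the user's input
a = 12
b = 5
choice = "-"    # the user picks "+" or "-"

if choice == "+":
    print("The answer is %s" % add(a, b))
elif choice == "-":
    print("The answer is %s" % subtract(a, b))
else:
    print("Unknown choice: %s" % choice)
```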
Game – guess-the-number. The final example is to write the game of guess-the-number. A random number (between 1 and 100) is generated, and the user has to guess the number. Here we had to introduce one piece of magic, in the form of importing the function randint. We started by just generating the random number and letting the user guess and then comparing the guess to the number. This requires using an if-statement and converting the text input to an integer. The next step is to print whether the guess is correct, too high or too low. From there, it is natural to introduce loops to let the user guess more than once. So we introduced the for-loop here, and got a complete game of guess-the-number. We decided to use a for-loop instead of a while-loop, to reduce the number of python constructs needed.
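A compact sketch of the finished game. We factor the comparison into a helper here so the logic is easy to see (the classroom version inlined it), and random guesses stand in for the user's input so the sketch runs unattended:

```python
from random import randint

def check_guess(guess, number):
    # Returns "correct", "too high" or "too low".
    if guess == number:
        return "correct"
    elif guess > number:
        return "too high"
    else:
        return "too low"

number = randint(1, 100)
for tries in range(10):          # at most 10 guesses
    # In class: guess = int(raw_input("Your guess? "))
    guess = randint(1, 100)      # stand-in for the user's input
    result = check_guess(guess, number)
    print("%s is %s" % (guess, result))
    if result == "correct":
        break
```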
The last two examples are both small, but they can be varied and expanded quite a bit. For the calculator, you can add multiplication and division. If division is introduced, it is good to also introduce floats, in order to avoid the unintuitive effects of integer division. You can also compose the functions, for example by adding and then multiplying. Or you can hard-code one of the inputs, or make a square function.
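For the division variation, converting one operand with float() sidesteps the integer-division surprise mentioned above (in Python 2, 7 / 2 gives 3):

```python
def divide(a, b):
    # float() makes the division behave the same in Python 2 and 3
    return float(a) / b

print(divide(7, 2))    # 3.5, not 3
```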
The guess-the-number game can be expanded even more. You can count the number of tries, write a scoring function, make the range selectable, check if a number has been guessed before (needs a list), allow for multiple plays etc.
In the Classroom
The first lesson was on Monday October 23rd. Olle and I both took some time off work and came to the school and gave the first presentation together. We started by talking a bit about what you do when you work as a programmer. For the programming part, we used our own laptops connected to the projector. We showed them how you start a terminal, how you start IDLE, how to type in print “Hello, world!” and how to run that program from the terminal.
It was good to be two, so one of us could walk around in the classroom and help whoever had a problem getting started. At the end of the one hour lesson, everybody had written a program in Python and got it to run on their own computer. We also emphasized that it is good to cooperate and help each other. Professional programming is a team effort, and pair programming is often used.
For the following two lessons, Olle and I split up and took one lesson each. We continued going through the concepts at a pretty brisk pace. Again we used our own computers hooked up to the projector and live-coded as we explained new concepts. The students followed along on their own computers. When I had shown how to write the add function, I was happy to hear how quickly the students came up with their own variations of what the functions could do.
One student asked if we could make the add function take a little while to come up with the answer (as if the computer was thinking). This was a good opportunity to show how to search for answers to programming questions. I googled “python delay one second” and the first hit was a Stack Overflow question showing how to use time.sleep(1).
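The "thinking" delay the student asked for ends up as a one-line change to the add function (sketch):

```python
import time

def slow_add(a, b):
    print("Thinking...")
    time.sleep(1)      # the one-second pause found via Stack Overflow
    return a + b

print(slow_add(2, 3))
```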
After this initial push, Caroline took over and let the students work on their programs. Caroline mailed me a couple of questions, and I mailed her some more example programs. I came back to the school a few weeks later and helped with various questions. There were many excellent ideas from the students on how the guess-the-number program could be improved. So I showed how to play a sound, how to measure the time taken to get to the correct answer, how to keep track of previous guesses, and how to keep track of high scores between runs of the program. To keep track of previous guesses, we used a list, and checked if the current guess was in there before adding it to the list. To keep track of high scores, we used a dict, and used pickle to store it to file and read it back. So these were natural points to introduce lists and dicts.
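Those two extensions can be sketched like this (the function names, score format and file handling are our own choices, not the exact classroom code):

```python
import pickle

guesses = []                      # guesses made so far

def new_guess(guess):
    # True if this guess has not been tried before.
    if guess in guesses:
        return False
    guesses.append(guess)
    return True

# High scores survive between runs by pickling a dict to a file.
def save_scores(scores, filename):
    with open(filename, "wb") as f:
        pickle.dump(scores, f)

def load_scores(filename):
    with open(filename, "rb") as f:
        return pickle.load(f)
```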
Some students chose to write a quiz game instead of guess-the-number. The program prints a question and four alternatives, and the user picks an answer. There were also some who made a battleship game, a hangman game and a dice rolling game. Caroline reported that many students were quite enthusiastic and worked on their programs at home as well. At the end of the course, the students were told to show their programs to their parents. That’s a nice opportunity to show off what you have built, and to explain how it works.
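A quiz question boils down to the same constructs. In this sketch the question, alternatives and answer-handling are made up; in class the answer came from int(raw_input(...)):

```python
def ask(question, alternatives, correct, answer):
    # Prints the question and numbered alternatives;
    # returns True if the chosen alternative is the correct one.
    print(question)
    for i in range(len(alternatives)):
        print("%s. %s" % (i + 1, alternatives[i]))
    return answer == correct

print(ask("What is the capital of Sweden?",
          ["Oslo", "Stockholm", "Copenhagen", "Helsinki"], 2, 2))
```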
Tips
Here are some of the things we learned during the course.
- It worked well to start coding from lesson one. The pace was high, so it is important to make sure all students got help to get their programs running.
- As a teacher without a lot of programming experience, you have to accept that some students will soon know more than you do.
- It will be difficult to know how to grade the students’ work.
- Many students helped others with problem solving. This was good, but sometimes the “helpers” didn’t have much time for their own programs.
- Start IDLE as a background process (end the command with &) so you can run the python code from the same shell.
- Use arrow up to repeat the last command – usually you want to run the same python program over and over (after modifying it in IDLE).
- Use ctrl-c to stop a running program.
- Show how to comment out code using #
- Show how to indent and outdent blocks of code in IDLE, so they don’t have to do it manually line by line.
- If there are unicode characters (for example Swedish å, ä and ö) in the source code, the file encoding must be specified. IDLE suggests this when you save, but you need to select it.
Common errors and tips for debugging.
- Make sure any changes in IDLE are saved before the program is run (the asterisk next to the file name indicates unsaved changes).
- Look out for missing colons, missing brackets, and wrong indentation.
- A common error when printing variables is missing %-signs, or using & instead of %.
- Look out for cases where a string (from raw_input() for example) is compared to an integer. This will not raise an error in Python 2, but will cause some comparisons to give the wrong result.
- When trouble-shooting the guess-the-number game, set the number to be guessed to a fixed value temporarily, for example 57. Then it is easier to see if the response to each guess is correct. When the program is debugged, set it back to a random number again.
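The string-versus-integer pitfall in the list above is fixed by converting the input with int() before comparing (sketch; Python 2 silently compares a string to an integer, while Python 3 raises a TypeError instead):

```python
number = 57
raw = "70"               # raw_input() always returns a string
guess = int(raw)         # convert it before comparing
if guess > number:
    print("Too high")
```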
Personally I would not write code for more than an hour before using some kind of source control system (like git). However, I think it is better to concentrate on the pure programming aspect when they are just learning to program, instead of adding the complexity of dealing with version control as well. It is good enough to just tell them to make back-ups of their program files once in a while.
Code
All the code examples used are available at GitHub. Note that we were using Python 2.7, not Python 3. There are instructions on how to start IDLE and how to run a Python program in the file getting_started.txt. Then there are a number of files called e.g. example1_print_and_variables.py that contain the examples we started with.
There is one basic version of a guess-the-number program, and one advanced version with a few bells and whistles. In the end, there was some “magic” added, for example using sleep(), timer and playing sound clips and background music. This was all added on student requests, and did not confuse anybody as far as I can tell.
Conclusion
Neither Caroline, nor Olle and I, knew what to expect when we started this. However, it worked out better than any of us had hoped. The cooperation between us meant that all the students wrote complete Python programs. Many of the students were quite enthusiastic and want to continue programming.
For me, it was really nice to see how the students “got it” and quickly came up with all sorts of ways to improve the basic programs we presented. Hopefully the examples from this course can be useful for others as well. If so, please let me know in the comments.
.”
This is the best part. I too have a desire to teach everyone how to code. I tried BASIC (which is what I learned on around age 5) and I tried Python, the latter of which really fills the niche today that Basic did in the 80s.
However, I also feel that Python 2 was a lot easier for beginner, and the Python Foundation will drop support for it in 2 years. I wanted a language that was easier to teach, so I set out to build one. I am new to teaching, but the student I am currently mentoring happens to love learning with my language. It has variables and loops and conditionals and functions, and math and input and output– all in < 100 commands. And (importantly) it is text-based.
Rather than have schools adopt my own language (which is implemented in Python) I want to encourage all teachers to consider a long-term goal: 1. To keep track of the pitfalls in teaching first-time coders, as I have done. 2. To collectively weigh out which of those “first lesson” pitfalls are truly essential, and which are “unnecessary.” As I have done. And 3. To work together with language developers to develop text-based languages that are comparably easy to learn as text-based languages.
I am happy to lend a hand or voice to such ventures, but this isnt about me. My language is one example of a direction that teachers could go in. If teachers learned enough skills to be proactive about this (co-designing educational languages) then I believe it would make it easier for more students to learn coding, including those who struggle now. The counter-argument is that first-time students should learn an industry language. There are plenty of counter-arguments to that counter-argument.
If teachers would only blog** about there wishes regarding a more ideal language for teaching, I believe it would be possible for developers to take it from there– once the blogs were known. ** Certainly, what started with blogging would ultimately have to turn into some kind of advocacy, but I am suggesting that educators devote a little time to this, not to make it another full-time job.
Python is just about the best thing for this, for now. It has several advantages for education, which is probably why it is used both in junior high schools and at MIT. And yet, you may not learn as much about types– you may not learn as much about braces. *Fortunately*, the lessons that students really need to obtain from their first coding language are not enough to say, write programs in C.
Hi codeinfig,
Thanks a lot for your comments! I think teaching text-based programming was a choice that turned out really well. However, using Python 2.7 was only because it was convenient – it was pre-installed on the students’ computers. I noticed three quirks with Python 2.7 while teaching. The first is that comparisons between different types (e.g. strings and ints) should raise an exception instead of giving a (sometimes) wrong answer. The second is unicode support – you need to explicitly handle it (e.g. for Swedish å. ä and ö). Third, print not being a function (with brackets) looks like an exception to the rule that when you give an argument, you surround it in brackets (always bad to have to teach exceptions to a rule).
But I am very happy overall that we picked Python! It is a very nice language well suited to an introductory course in programming.
> comparisons between different types (e.g. strings and ints) should raise an exception instead of giving a (sometimes) wrong answer.
Agreed; in nearly a decade of python coding Ive never noticed. But I looked this up and you are absolutely correct. Note that despite it being based on Python 2 I can fix fig so that it will raise an error if a string/int or int/string pair is compared.
> The second is unicode support – you need to explicitly handle it (e.g. for Swedish å. ä and ö).
yeah, thats an important one.
> print not being a function (with brackets) looks like an exception to the rule that when you give an argument, you surround it in brackets (always bad to have to teach exceptions to a rule).
There are always exceptions in languages, except lisp… but this one is easy to fix:
from __future__ import print_function
You can even setup python to load that line when python loads.
I am glad youre happy with Python and Im glad educators are still trying to teach text-based coding to students.
I also just noticed that you chose Python 2.7. I want everyone who prefers 2.x to know, that when the Python Foundation drops 2.x support in 2020, that they primarily maintain the “CPython” implementation.
In other words, there are Many implementations of Python (including PyPy) that may not drop 2.x support in 2 years. If you feel that 2.x is better for your purposes, dont hesitate to keep using it (but do look into other implementations. the differences between CPython and PyPy are minimal, for my own needs– far more minimal than the differences between 2.x and 3.x.) Not everyone has abandoned 2.x, it is still maintained.
Caroline just reminded me that some of the students even expressed interest in high schools with a special programming profile 🙂
Thanks for sharing, this was a really heartwarming read 🙂 Just a quick note concerning programming language choice: I understand the appeal of picking a pre-installed one, but except for that, I can’t recommend Python 3 over Python 2 enough. Especially in the context of teaching beginners whose native language is not English. Python 3 source encoding is UTF-8, strings are Unicode, not bytes, accented characters are shown when evaluating at the REPL instead of cryptic escape sequences, variable names can contain any Unicode word characters. (Plus many things under the hood as well, but these are the most palpable usability improvements.)
Hi David,
Thanks for commenting. I agree with you. If I did it again, I would use Python 3.
Hacker News discussion:
Pingback: Java Weekly, Issue 208 | Baeldung
When I was in Grade 8, I think the ZX Spectrum had just come out! How time flies. Obviously I wasn’t learning to programme one though! Our school was very proud of the 1 computer it owned which I believe was still a ZX80 at that time!
Article in Computer Sweden (in Swedish): | https://henrikwarne.com/2017/12/17/programming-for-grade-8/?share=google-plus-1 | CC-MAIN-2019-13 | refinedweb | 3,244 | 73.07 |
Eric on all things VB!
View Stats
Click.
If you recall from an earlier post, I said that the main reason for signing an assembly and giving it a strong name was to prevent it being tampered with. For example, many of the assemblies that make up the .NET framework itself are strong-named assemblies that sit in the GAC. If I disassembled one of these, messed around with the IL code, and then reassembled it, I'd have to sign it with my signature. Running any existing applications that used the assembly I'd changed would now fail, because the signature had changed. That's all great in that it stops anyone tampering with existing, deployed assemblies.
What may still be a concern is that the IL code is still legible, so that anyone can see how your code works and what it does (provided they spend a bit of time getting familiar with IL code). This raises the issues of intellectual copyright and opens the potential for security breaches as unscrupulous developers can easily look for weaknesses in your application.
Enter code obfuscation (Dotfuscator on Visual Studio's Tools menu). This utility renders the contents of your assemblies (the IL code), incomprehensible to human readers, but leaving it so that it can still be run by the CLR. This is done by renaming everything in sight, variables, classes, methods etc with meaningless or duplicate names. Obfuscation is an excellent demonstration of how useful good names are to understanding code!
The following is an example of how code access security might affect your code when calling unmanaged code. Unmanaged code is a posh way of saying “It isn’t .NET code, it is the stuff you used to write before .NET” :-)
In VB6 making a call to a Windows API function wasn't affected by any security settings. Potentially in .NET, an administrator might configure a machine's code access security policy to disallow calls to unmanaged code (such as the Windows API), because this is seen as a security risk. That means that code like this, could fail with a SecurityException, when the API function is invoked:
1: Module Module1
2: Sub Main()
3: Win32.MessageBox(0, "Here's a MessageBox", "Platform Invoke Sample", 0)
4: End Sub
5: End Module
6:
7: Public Class Win32
8: Declare Auto Function MessageBox Lib "user32.dll" ( _
9: ByVal hWnd As Integer, ByVal txt As String, _
10: ByVal caption As String, ByVal Type As Integer) _
11: As Integer
12: End Class
There's nothing you can do to stop the code failing (other than having a word with the administrator), but you can handle things a bit more gracefully. You can check when your app starts up whether you've got all the permissions you want, and then handle the situation before you hit the problem. This can be done with an attribute (declaratively), so the permissions are checked when an assembly is loaded:
<Assembly:SecurityPermissionAttribute(SecurityAction.Demand, Flags := SecurityPermissionFlag.UnmanagedCode)>
You could also perform the check programmatically:
1: Try
2: Dim sp As New SecurityPermission(SecurityPermissionFlag.UnmanagedCode)
3: Console.WriteLine("Demanding SecurityPermissionFlag.UnmanagedCode")
4: sp.Demand()
5: Console.WriteLine("Demand for SecurityPermissionFlag.UnmanagedCode succeeded.")
6: Catch e As Exception
7: Console.WriteLine(("Demand for SecurityPermissionFlag.UnmanagedCode failed: " & e.Message))
8: End Try
Either way you can test for the permission before you need to use it.
The above checks whether you had the necessary permissions to perform an operation (in this case calling unmanaged code) by using Demand. You can also Assert or Deny a permission.
For.
The second big security area in the .NET framework after Authentication and Authorisation is cryptography. Again, the framework contains a lot of powerful functionality, and again there is an application block that makes it all a bit easier to use.
The classes and namespaces that deal with cryptography cover three key areas:
I don't want to get bogged down in the detail of these three topics (there's plenty of resources elsewhere on the web for that), but just highlight where, in a VB application, these technologies may be useful.
Symmetric Key Cryptography
This kind of encryption algorithm uses a single shared key that can be used to both encrypt and decrypt data. These algorithms are also relatively fast. The fundamental weakness of this approach is that both the encrypting and decrypting parties need to know the same key, and to make sure that no one else has access to the key. The framework includes implementations of a number of well known algorithms of this type such as DES, TripleDES and AES.
Where this is useful in a VB application is where the application wants to store it's own data securely - perhaps as a file, or a section of a configuration file, or a registry key. A couple of useful classes are ProtectedMemory and ProtectedData which look after all the key management issues for you.
Asymmetric Key Cryptography
While these algorithms are slower than the symmetric ones, they don't rely on a single shared key. Instead, they use a pair of keys, on public and one private, where data encrypted with one key can only be decrypted using the other. These algorithms, as well as providing encryption, also form the basis of digital signatures.
This approach is appropriate when exchanging data securely between users or systems. The HTTPS protocol is an example of the use of this approach.
Hashing
You can think of hashing as a one way system, you can encrypt (hash) a piece of data to generate a unique hash code, but you can't work out from a hash code what the original piece of data was. What's the use of this then? An authentication system that verifies passwords is a good example:
You need to check that a user password is correct when they log on - so you need to keep a record of passwords somewhere and the password needs to be transmitted over the network from the client application for checking. This gives you two security loopholes - passwords stored in a database and passwords flying around on the network. If you store a hash of the password in the database, and the client application sends a hashed version of the password for checking we've avoided both problems! We just need to compare hash values to check if the password is correct.
The framework includes implementations of a number of hashing algorithms including SHA and MD5.
As promised, the first in a series of posts about security relevant to a developer new to .NET (such as a VB6 developer).
These are what most people will immediately think of when security is discussed.
ASP.NET has a huge amount of built-in support for performing authentication and authorisation (a far cry from the situation with ASP) including a full-blown membership system that simplifies creating a web site that requires registration before you can access certain areas of the site.
Authentication and authorisation have tended to be less significant when developing desktop based systems because a) it is a more "controlled" environment and b) the desktop OS can be left to handle these issues for you.
A common scenario where you do need to customise things is where a desktop application needs its own authorisation rules (we can still rely on the OS performing the authentication for us, typically against Active Directory). What we want to do in our application is authorise users using Active Directory (or ADAM) - to minimise the development effort here you should look at the Security Application Block.
What (or who) is ADAM, some of you are asking? Simply put Active Directory Application Mode is a standalone version of Active Directory that you can distribute with your application. Therefore you could have you own authentication database which could also be customised without impacting Active Directory itself.
While I'm on the subject of application blocks, there are a couple of others that relate to security. There's a Cryptography Application Block which I'll talk about in a later post, but for now I wanted to point out the Logging Application Block. This is great if you need to record what's going on in your application for troubleshooting, error reporting, or from the security standpoint, auditing.
Creating log entries from your code is straightforward:
1: Dim logEntry As LogEntry = New LogEntry()
2: logEntry.EventId = 100
3: logEntry.Priority = 2
4: logEntry.Message = "User Logon: " + userName
5: logEntry.Categories.Add("Audit")
6: logEntry.Categories.Add("UI Events")
7: Logger.Write(logEntry)
The configuration options of the application block then enable you to control things like where the log messages get written to, and to define filters based on LogEntry fields.
Possible destinations for log messages include: a database, the Windows Event Log, a rolling flat file or an email.
One:
It's the last one, CAS, which probably needs most explanation for newcomers to .NET.
I will take a look at security over the next week or two on this blog… which should be useful recap for me as well :-)
There may be some circumstances where upgrading a VB6 application is not feasible, for example it may rely on a control that simply does not work in the target operating system. I came across a couple of articles on MSDN that discussed the ways that virtualisation technologies can be applied to this problem. There are three basic approaches:.
Just.
A.
How.
Aberdeen Group have created a new report “Migrating from VB6 to .NET: The challenge of software agility in a volatile economy”
Some bits that stood out for me:
Choice in general is a good thing. When faced with moving a Visual Basic 6.0 application to .NET there is plenty of choices to be made. One of those choices is “Which tool should I go with to migrate the code?”.
My advice in general is “Try as many as you can on a representative chunk of your application”.
I spotted over on the VB Migration Partner website a case study which contained an example of just that:
.”
This was a 650K Lines of Code conversion in just 6 months with a total effort of 18 man-months. Pretty impressive.
P.S. 13/2.5 = 5 times faster, in case you were wondering :-) | http://blogs.msdn.com/goto100/default.aspx | crawl-002 | refinedweb | 1,725 | 53.31 |
Lambda Expressions appeared in C++11, and since then they become one of the most distinguishing features of Modern C++. What’s more, with each revision of the Standard the ISO Committee improved the syntax and capabilities of lambdas, so they are even more comfortable to use.
Read on to see how you can learn all the details of this powerful modern C++ feature.
A Few Questions to you?
- Do you know all of the details of this feature?
- Why we needed lambdas in the first place?
- How the compiler “expands” the code for lambdas?
- How to capture variables and even declare new captured elements?
- And many more questions!
On my website, you can read at least five large articles that describe all the essential parts of lambdas:
Have a look if you haven’t read it before:
- Lambdas: From C++11 to C++20, Part 1
- Lambdas: From C++11 to C++20, Part 2
- 5 Advantages of C++ Lambda Expressions and How They Make Your Code Better
- C++ Lambdas, Threads, std::async and Parallel Algorithms
- C++ Tricks: IIFE for Complex Variable Initialization
Would you like to see more?
Lambda Ebook
I packed my knowledge about lambdas into a beautiful ebook, with lots of examples and additional descriptions.
And just as with my book about C++17, I made it available at Leanpub.
The platform makes it easy for me to publish new updates, and everyone who has the book can immediately access the latest changes (at no charge).
Also, the platform allows you not only to read it as PDF but also Epub, Mobi or Online.
Here’s the list of the things you can learn:
- C++03 - What was the motivation for the new C++ feature?
- C++11 - Early days:pointer and allowing
constexpr. You’ll also learn how to leverage the overloaded pattern.
- C++20 - In this chapter, you’ll see all of the new features adopted for C++20 like template lambdas and how to use them with concepts and
constexpralgorithms.
Recent Changes
Here are the recent changes from April, May and June:
- I rewrote the C++03 chapter and include sections about
bind1stand
bind2ndand other functional stuff - they are deprecated and then removed in C++17. There’s also another section in the C++14 chapter on how to replace that old functionality with modern alternatives.
- I rewrote the C++20 chapter, as the C++20 Standard is mostly done and we won’t see any feature changes. This covers
constexpralgorithms, template lambdas and a few other improvements.
- New sections about deriving from lambda expressions. That includes the underlying theory, and then you’ll decipher the overloaded pattern that is handy for
std::variantvisitation. You also see a shorter syntax for C++20.
- IIFE sections improved with one more example
- Return type sections in C++11 chapter got an extra example
- An updated book cover!
- New appendices: the first one with a list of lambda “techniques” and then a second with top 5 lambda features (converted and adapted from my recent article)
- Plus many updates across the whole book
You can download the sample (it’s almost half of the book!) from this page: C++ Lambda Story @Leanpub
Let’s now focus on two examples that I recently added to the book:
Example 1 - Lambdas And Concepts, C++20:
As you might already know, in C++20, we’ll get concepts… and they work nicely with lambdas, have a look:
Let’s define a concept for our
IRanderable static interface:
template <typename T> concept IRenderable = requires(T v) { {v.render()} -> std::same_as<void>; {v.getVertCount()} -> std::convertible_to<size_t>; };
In the above example we define a concept that matches all types with
render() and
getVertCount() member functions. We can then use it to write a generic lambda:
#include <concepts> #include <iostream> struct Circle { void render() { std::cout << "drawing circle\n"; } size_t getVertCount() const { return 10; }; }; struct Square { void render() { std::cout << "drawing square\n"; } size_t getVertCount() const { return 4; }; }; int main() { auto RenderCaller = [](IRenderable auto &obj) { obj.render(); }; Circle c; RenderCaller(c); Square s; RenderCaller(s); }
Play with the code @Wandbox
Thanks to the constrained
auto we can define a generic lambda that is constrained with the given concept:
auto RenderCaller = [](IRenderable auto &obj) { }
Additionally, if you need more control over the template parameter, you can also use template lambdas!
auto foo = []<typename T>(std::vector<T> const& vec) { std::cout<< std::size(vec) << '\n'; std::cout<< vec.capacity() << '\n'; };
This time you can add “template head” to the lambda.
Example 2 - Almost Always Auto and IIFE?
When the initialisation of a variable is complicated, and you’d like to make the variable
const you can use the IIFE trick (Immediately Invoked Functional Expression) :
For example:
const auto EnableErrorReporting = [&]() { if (HighLevelWarningEnabled()) return true; if (HighLevelWarningEnabled()) return UsersWantReporting(); return false; }(); if (EnableErrorReporting) { // ... };
But… isn’t it hard to read?
Not only you need to decipher the type of
EnableErrorReporting, but also you have to be careful and notice that we call lambda in the last line
}();.
In the book, I’m showing at least two ways on how to improve the syntax and make it more readable. (hint: maybe we shouldn’t use
auto here, and another option is to leverage
std::invoke()).
The Plans
There will be one more update, planned near the end of summer. To make it visible and clear for the readers, I set the “progress” to 95% for the book.
The target is to reach 100+ pages and also release it at Kindle Direct Publishing store.
Your Feedback
I appreciate your initial feedback and support! The book has now more
than 800 readers (and only one refund)! That’s not too bad I think :)
Add your feedback/review here:
You can use this comment site:
Or just write a direct email to me:
bartlomiej DOT filipek AT bfilipek DOT com
How to Get the Book?
There are three ways:
Buy directly at Leanpub:
C++ Lambda Story @Leanpub
Buy together with my C++17 Book
Buy C++17 in Detail AND C++ Lambda Story Together
Support me on Patreon (Every patron gets the book for free! + extra content) | https://www.bfilipek.com/2020/06/lambdastory.html?m=1 | CC-MAIN-2021-04 | refinedweb | 1,026 | 59.23 |
gettext – Message Catalogs¶
The gettext module provides a pure-Python implementation compatible with the GNU gettext library for message translation and catalog management.
Note
Although the standard library documentation says everything you need is included with Python, I found that pygettext.py refused to extract messages wrapped in the ungettext call, even when I used what seemed to be the appropriate command line options. I ended up installing the GNU gettext tools from source and using xgettext instead.
Translation Workflow Overview¶
The process for setting up and using translations includes five steps:
Mark up literal strings in your code that contain messages to translate.
Start by identifying the messages within your program source that need to be translated, and marking the literal strings so the extraction program can find them.
Extract the messages.
After you have identified the translatable strings in your program source, use xgettext to pull the strings out and create a .pot file, or translation template. The template is a text file with copies of all of the strings you identified and placeholders for their translations.
Translate the messages.
Give a copy of the .pot file to the translator, changing the extension to .po. The .po file is an editable source file used as input for the compilation step. The translator should update the header text in the file and provide translations for all of the strings.
“Compile” the message catalog from the translation.
When the translator gives you back the completed .po file, compile the text file to the binary catalog format using msgfmt. The binary format is used by the runtime catalog lookup code.
Load and activate the appropriate message catalog at runtime.
The final step is to add a few lines to your application to configure and load the message catalog and install the translation function. There are a couple of ways to do that, with associated trade-offs, and each is covered below.
Let’s go through those steps in a little more detail, starting with the modifications you need to make to your code.
Creating Message Catalogs from Source Code¶
gettext works by looking up literal strings embedded in your program in a database of translations, and pulling out the appropriate translated string. There are several variations of the functions for accessing the catalog, depending on whether you are working with Unicode strings or not. The usual pattern is to bind the lookup function you want to use to the name _ so that your code is not cluttered with lots of calls to functions with longer names.
The message extraction program, xgettext, looks for messages embedded in calls to the catalog lookup functions. It understands different source languages, and uses an appropriate parser for each. If you use aliases for the lookup functions or need to add extra functions, you can give xgettext the names of additional symbols to consider when extracting messages.
Here’s a simple script with a single message ready to be translated:
import gettext

# Set up message catalog access
t = gettext.translation('gettext_example', 'locale', fallback=True)
_ = t.ugettext

print _('This message is in the script.')
In this case I am using the Unicode version of the lookup function, ugettext(). The text "This message is in the script." is the message to be substituted from the catalog. I’ve enabled fallback mode, so if we run the script without a message catalog, the in-lined message is printed:
$ python gettext_example.py
This message is in the script.
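An alternative to binding the lookup function by hand is install(), which adds it to the builtins namespace as _(). Here is a minimal sketch, with fallback enabled so it also runs when no compiled catalog is present:

```python
import gettext

# translation() returns a NullTranslations object when fallback=True
# and no catalog is found; install() then binds its lookup as _()
t = gettext.translation('gettext_example', 'locale', fallback=True)
t.install()

# _() is now available everywhere without an explicit binding
print(_('This message is in the script.'))
```

In Python 2, pass unicode=True to install() if you want ugettext() semantics matching the earlier example.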
The next step is to extract the message(s) and create the .pot file, using xgettext.
$ xgettext -d gettext_example -o gettext_example.pot gettext_example.py
The output file produced looks like:
#" #: gettext_example.py:16 msgid "This message is in the script." msgstr ""
Message catalogs are installed into directories organized by domain and language. The domain is usually a unique value like your application name. In this case, I used gettext_example. The language value is provided by the user’s environment at runtime, through one of the environment variables LANGUAGE, LC_ALL, LC_MESSAGES, or LANG, depending on their configuration and platform. My language is set to en_US so that’s what I’ll be using in all of the examples below.
Now that we have the template, the next step is to create the required directory structure and copy the template in to the right spot. I’m going to use the locale directory inside the PyMOTW source tree as the root of my message catalog directory, but you would typically want to use a directory accessible system-wide. The full path to the catalog input source is $localedir/$language/LC_MESSAGES/$domain.po, and the actual catalog has the filename extension .mo.
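The pieces of that path can be assembled mechanically; here is a small sketch (the directory and domain names are just the values used in this article):

```python
import os

localedir = 'locale'            # root of the message catalog tree
language = 'en_US'              # from the user's environment at runtime
domain = 'gettext_example'      # usually the application name

# Editable input for translators, and the compiled runtime catalog
po_path = os.path.join(localedir, language, 'LC_MESSAGES', domain + '.po')
mo_path = os.path.join(localedir, language, 'LC_MESSAGES', domain + '.mo')

print(po_path)
print(mo_path)
```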
For my configuration, I need to copy gettext_example.pot to locale/en_US/LC_MESSAGES/gettext_example.po and edit it to change the values in the header and add my alternate messages. The result looks like:
# Messages from gettext_example.py.
# Copyright (C) 2009 Doug Hellmann
# Doug Hellmann <doug.hellmann@gmail.com>, 2009.
#
msgid ""
msgstr ""
"Project-Id-Version: PyMOTW 1.92\n"
"Report-Msgid-Bugs-To: Doug Hellmann <doug.hellmann@gmail.com>\n"
"POT-Creation-Date: 2009-06-07 10:31+EDT\n"
"PO-Revision-Date: 2009-06-07 10:31+EDT\n"
"Last-Translator: Doug Hellmann <doug.hellmann@gmail.com>\n"
"Language-Team: US English <doug.hellmann@gmail.com>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"

#: gettext_example.py:16
msgid "This message is in the script."
msgstr "This message is in the en_US catalog."
The catalog is built from the .po file using msgfmt:
$ cd locale/en_US/LC_MESSAGES/; msgfmt -o gettext_example.mo gettext_example.po
And now when we run the script, the message from the catalog is printed instead of the in-line string:
$ python gettext_example.py
This message is in the en_US catalog.
Finding Message Catalogs at Runtime¶
As described above, the locale directory containing the message catalogs is organized based on the language, with catalogs named for the domain of the program. Different operating systems define their own default value, but gettext does not know all of these defaults. It uses a default locale directory of sys.prefix + '/share/locale', but most of the time it is safer to always explicitly give a localedir value than to depend on this default being valid. We can illustrate how the catalog search works by creating a second message catalog and running a few experiments.
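The gettext_find.py script used in these experiments is not listed above; a minimal version consistent with its output might look like this (the exact printing format is an assumption):

```python
import gettext

# find() honors LANGUAGE, LC_ALL, LC_MESSAGES, and LANG, and with
# all=True returns every matching catalog instead of just the first
catalogs = gettext.find('gettext_example', 'locale', all=True)
print('Catalogs: %s' % catalogs)
```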
$ cd locale/en_CA/LC_MESSAGES/; msgfmt -o gettext_example.mo gettext_example.po

$ python gettext_find.py
Catalogs: ['locale/en_US/LC_MESSAGES/gettext_example.mo']

$ LANGUAGE=en_CA python gettext_find.py
Catalogs: ['locale/en_CA/LC_MESSAGES/gettext_example.mo']

$ LANGUAGE=en_CA:en_US python gettext_find.py
Catalogs: ['locale/en_CA/LC_MESSAGES/gettext_example.mo',
 'locale/en_US/LC_MESSAGES/gettext_example.mo']

$ LANGUAGE=en_US:en_CA python gettext_find.py
Catalogs: ['locale/en_US/LC_MESSAGES/gettext_example.mo',
 'locale/en_CA/LC_MESSAGES/gettext_example.mo']
Although find() shows the complete list of catalogs, only the first one in the sequence is actually loaded for message lookups.
$ python gettext_example.py
This message is in the en_US catalog.

$ LANGUAGE=en_CA python gettext_example.py
This message is in the en_CA catalog.

$ LANGUAGE=en_CA:en_US python gettext_example.py
This message is in the en_CA catalog.

$ LANGUAGE=en_US:en_CA python gettext_example.py
This message is in the en_US catalog.
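The environment variables are not the only way to choose a language: translation() also accepts an explicit languages preference list. Here is a sketch, with fallback enabled so it degrades to the original string when no catalog is found:

```python
import gettext

# The preference order here plays the same role as LANGUAGE=en_CA:en_US;
# only the first catalog located is used for lookups
t = gettext.translation('gettext_example', 'locale',
                        languages=['en_CA', 'en_US'],
                        fallback=True)
print(t.gettext('This message is in the script.'))
```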
Plural Values¶
While simple message substitution will handle most of your translation needs, gettext treats pluralization as a special case. Depending on the language, the difference between the singular and plural forms of a message may vary only by the ending of a single word, or the entire sentence structure may be different. There may also be different forms depending on the level of plurality. To make managing plurals easier (and possible), there is a separate set of functions for asking for the plural form of a message.
from gettext import translation
import sys

t = translation('gettext_plural', 'locale', fallback=True)

num = int(sys.argv[1])
msg = t.ungettext('%(num)d means singular.',
                  '%(num)d means plural.',
                  num)
# Still need to add the values to the message ourself.
print msg % {'num':num}
$ xgettext -L Python -d gettext_plural -o gettext_plural.pot gettext_plural.py
Since there are alternate forms to be translated, the replacements are listed in an array. Using an array allows translations for languages with multiple plural forms (Polish, for example, has different forms indicating the relative quantity).
#" #: gettext_plural.py:15 #, python-format msgid "%(num)d means singular." msgid_plural "%(num)d means plural." msgstr[0] "" msgstr[1] ""
In addition to filling in the translation strings, you will also need to describe the way plurals are formed so the library knows how to index into the array for any given count value. The line "Plural-Forms: nplurals=INTEGER; plural=EXPRESSION;\n" includes two values to replace manually. nplurals is an integer indicating the size of the array (the number of translations used) and plural is a C language expression for converting the incoming quantity to an index in the array when looking up the translation. The literal string n is replaced with the quantity passed to ungettext().
For example, English includes two plural forms. A quantity of 0 is treated as plural (“0 bananas”). The Plural-Forms entry should look like:
Plural-Forms: nplurals=2; plural=n != 1;
The singular translation would then go in position 0, and the plural translation in position 1.
# Messages from gettext_plural.py
# Copyright (C) 2009 Doug Hellmann
# This file is distributed under the same license as the PyMOTW package.
# Doug Hellmann <doug.hellmann@gmail.com>, 2009.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PyMOTW 1.92\n"
"Report-Msgid-Bugs-To: Doug Hellmann <doug.hellmann@gmail.com>\n"
"POT-Creation-Date: 2009-06-14 09:29-0400\n"
"PO-Revision-Date: 2009-06-14 09:29-0400\n"
"Last-Translator: Doug Hellmann <doug.hellmann@gmail.com>\n"
"Language-Team: en_US <doug.hellmann@gmail.com>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=2; plural=n != 1;"

#: gettext_plural.py:15
#, python-format
msgid "%(num)d means singular."
msgid_plural "%(num)d means plural."
msgstr[0] "In en_US, %(num)d is singular."
msgstr[1] "In en_US, %(num)d is plural."
If you run the test script a few times after the catalog is compiled, you can see how different values of N are converted to indexes for the translation strings.
$ cd locale/en_US/LC_MESSAGES/; msgfmt -o gettext_plural.mo gettext_plural.po
$ python gettext_plural.py 0
In en_US, 0 is plural.
$ python gettext_plural.py 1
In en_US, 1 is singular.
$ python gettext_plural.py 2
In en_US, 2 is plural.
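The index computation itself is easy to mimic in plain Python (a sketch of the C expression plural=n != 1, written in Python 3 syntax; this is not part of the gettext API):

```python
# Mimic gettext's English plural rule "plural=n != 1":
# the expression maps a count to an index into the msgstr[] array.
def plural_index(n):
    return int(n != 1)

for n in (0, 1, 2, 5):
    print(n, '->', plural_index(n))
```

Index 0 selects the singular translation and index 1 the plural one, matching the catalog above.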
Application vs. Module Localization
The scope of your translation effort defines how you install and use the gettext functions in your code.
Application Localization
For application-wide translations, it would be acceptable to install a function like ungettext() globally using the __builtins__ namespace because you have control over the top-level of the application’s code.
import gettext

gettext.install('gettext_example', 'locale', unicode=True, names=['ngettext'])

print _('This message is in the script.')
The install() function binds gettext() to the name _() in the __builtins__ namespace. It also adds ngettext() and other functions listed in names. If unicode is true, the Unicode versions of the functions are used instead of the default ASCII versions.
Module Localization
For a library, or individual module, modifying __builtins__ is not a good idea because you don’t know what conflicts you might introduce with an application global value. You can import or re-bind the names of translation functions by hand at the top of your module.
import gettext

t = gettext.translation('gettext_example', 'locale', fallback=True)
_ = t.ugettext
ngettext = t.ungettext

print _('This message is in the script.')
See also
- gettext
- The standard library documentation for this module.
- locale
- Other localization tools.
- GNU gettext
- The message catalog formats, API, etc. for this module are all based on the original gettext package from GNU. The catalog file formats are compatible, and the command line scripts have similar options (if not identical). The GNU gettext manual has a detailed description of the file formats and describes GNU versions of the tools for working with them.
- Internationalizing Python
- A paper by Martin von Löwis about techniques for internationalization of Python applications.
- Django Internationalization
- Another good source of information on using gettext, including real-life examples. | https://pymotw.com/2/gettext/index.html | CC-MAIN-2017-22 | refinedweb | 2,037 | 51.44 |
We uploaded a second (and likely final) release candidate for 3.3.8.
Here are the main links:
C++ Wt: download wt-3.3.8-rc2.tar.gz and read its release notes
Java JWt: download jwt-3.3.8-rc2.zip and read its release notes
This release fixes several bugs, and introduces a few small features, like OpenID Connect support.
We also released a second release candidate for Wt 4.0.0: download wt-4.0.0-rc2.tar.gz and read its release notes.
Windows builds are available on the releases page.
| https://www.emweb.be/news/2017/08/01/wt___jwt_3_3_8_rc2_and_wt_4_0_0_rc2?wtd=QFOWLwrBGedERhF2 | CC-MAIN-2018-30 | refinedweb | 102 | 61.73
In today’s clusters made of commodity servers, failures are the norm. Having your code fail because one data node happened to be down when you were querying it is frustrating and no fun. Based on my experience hardening our product, I wanted to share a few quick and easy pointers that can help make your MongoDB queries more robust to node failures. Let’s get started.
Background for Beginners
Note: If you are completely new to MongoDB, I would recommend reading the following links. They should give you a high-level understanding of the components that make up a MongoDB cluster.
- Introduction to MongoDB Features
- How MongoDB Maintains Redundancy and Data Availability (a.k.a Replica Sets)
- How MongoDB Scales Horizontally (a.k.a Sharding)
Let's first start by understanding PyMongo and its advantages. As you already know, Python is a high-level programming language that provides code readability along with object-oriented programming. Its syntax is easy, and code written in it tends to be more readable and more compact than code written in many other languages. MongoDB, in turn, is a document-oriented, scalable database designed for handling big data applications (such as content management systems). Since MongoDB supports dynamic schemas, it also provides the flexibility of storing any kind of data in your database without any rigidity.
In case you have never used PyMongo before: the PyMongo distribution contains tools for interacting with a MongoDB database from Python, and it is the recommended way to work with MongoDB from Python code. PyMongo was created to combine the advantages of Python as a programming language with those of MongoDB as a database. Since Python provides an easy way to write programs and MongoDB can handle large document repositories, PyMongo gives you the best of these two technologies.
At the very core of PyMongo is the MongoClient object, which is used to make connections and queries to a MongoDB database cluster. It can be used to connect to a standalone mongod instance, a replica set, or mongos instances. To specify which mongo endpoints to contact, you simply pass the endpoints into the host argument of the object. For example, if we had a standalone mongod node on 10.0.0.3 listening on port 27017, we could contact that node as follows:
import pymongo

client = pymongo.MongoClient(host="10.0.0.3:27017")
The node passed into the host is called a seed node. Once you initialize the MongoClient, it will connect to the specified seed node in the background. With some basics, let’s talk about how to make your queries more robust!
Tip #1: Increase Your Host Seed List
The first tip is very simple, but quite useful. Instead of passing in a single seed node, pass in a list of seed nodes. Passing in a list of seed nodes gives the MongoClient object more endpoints to contact in case the connection to one of the input nodes is down. This not only makes initializing a mongo connection more robust, but also makes the client robust to later connection failures occurring after initialization. Code wise, it would look like the following:
# Fragile; single point of failure
client = pymongo.MongoClient(host="10.0.0.3:27017")

# More robust; can contact two other nodes in case one goes down
client = pymongo.MongoClient(host=["10.0.0.3:27017", "10.0.0.3:27018", "10.0.0.3:27019"])
Note that all nodes in your seed list must be part of the same logical group. For example, the seed list must contain mongod nodes belonging to the same replica set, or mongos query routers connected to the same config servers, or config servers in the same sharded cluster. Mixing these up will result in unpredictable behaviour.
Tip #2: If at First You Don’t Succeed, Try, Try Again
After initializing a connection to a mongo node, the connection may fail at any time. This can be caused by a large variety of factors, ranging from network connectivity issues to actual mongo node failures. If one of these failures occurs, the next time a query is issued, the MongoClient will try to contact the unavailable node, determine it is unreachable, and then throw an exception. Afterward, the MongoClient object will automatically try to connect in the background to another live node in the seed list you provided, meaning your subsequent queries are likely to succeed. However, your code is still left with an exception to handle.
A clean and easy way to handle these situations is to wrap all your queries in a retry decorator that will catch the exception and then retry your query. Here is some sample code:
def retry(num_tries, exceptions):
    def decorator(func):
        def f_retry(*args, **kwargs):
            for i in xrange(num_tries):
                try:
                    return func(*args, **kwargs)
                except exceptions as e:
                    # Re-raise once we are out of attempts, so failures
                    # are not silently swallowed into a None return.
                    if i == num_tries - 1:
                        raise
                    continue
        return f_retry
    return decorator

# Retry on AutoReconnect exception, maximum 3 times
retry_auto_reconnect = retry(3, (pymongo.errors.AutoReconnect,))

@retry_auto_reconnect
def get_count(client, db, collection):
    return client[db][collection].count()
By wrapping your queries in a retry decorator, you can make them more robust to node failures and other temporary failures such as primary re-elections. Granted, this method isn’t a guarantee your queries will succeed in all cases as it assumes that there are other healthy nodes in the cluster to failover to. However, assuming a relatively stable cluster, this method handles a large number of failures.
Tip #3: Catch the Right Exceptions
In the above example, we retried on AutoReconnect exceptions, which is raised when a connection to a database is lost. While this is one of the exceptions to catch, it is by no means complete. Other exceptions can be found in the pymongo documentation (see link below). A couple ones you may want to also catch include:
- pymongo.errors.ConnectionFailure: thrown whenever a connection to a database cannot be made (actually the super-class of AutoReconnect above)
- pymongo.errors.ServerSelectionTimeoutError: thrown whenever the query you issue cannot find a node to serve the request (e.g. issuing a read on primary, when no primary exists)
These two exceptions are useful in both unsharded and sharded MongoDB clusters. However, there is some additional complexity in handling node failures for sharded clusters. In sharded clusters, most of your queries should be sent through the mongos because it can route your queries to the appropriate shard. If the mongos instance fails, then a ConnectionFailure will be raised. However, what happens if your connection to the mongos is healthy, but the mongos fails to make a connection to a replica set? In these cases, the mongos will aggregate all the errors and send back an error, which is then raised as an OperationFailure.
While it may seem simple to just also retry OperationFailures, it’s not quite that clean because OperationFailures can represent a large variety of errors, ranging from retryable errors such as primary re-elections to non-retryable errors such as authentication errors. You can distinguish between different types of errors by examining the code field of the error. Although at some point in time there was a list of error codes and their meanings, they have since become outdated and removed (see SERVER-24047), which means you’ll have to do some trial and error to discover which codes you should retry on.
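The idea can be sketched as a small helper that retries only when the error carries a code on an allowlist. The OperationFailure class below is a stand-in so the sketch runs without pymongo (real code would catch pymongo.errors.OperationFailure instead), and the codes in the allowlist are purely illustrative — as noted above, you have to discover the right ones for your deployment:

```python
class OperationFailure(Exception):
    """Stand-in for pymongo.errors.OperationFailure so the sketch runs anywhere."""
    def __init__(self, message, code=None):
        super(OperationFailure, self).__init__(message)
        self.code = code

# Hypothetical allowlist -- the actually retryable codes must be found by
# trial and error against your MongoDB version.
RETRYABLE_CODES = {10107, 13435}

def call_with_code_retry(func, num_tries=3):
    """Retry func() only when an OperationFailure carries a retryable code."""
    for attempt in range(num_tries):
        try:
            return func()
        except OperationFailure as e:
            retryable = e.code in RETRYABLE_CODES
            last_try = attempt == num_tries - 1
            if not retryable or last_try:
                raise  # non-retryable errors (e.g. auth) propagate immediately
```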
Conclusion
Hopefully, these tips were helpful for all you MongoDB lovers! If you have any questions, comments, or have your own tips to share, feel free to comment below and get the discussion flowing. Also, feel free to check out our product page or contact us to learn how we can help protect your enterprise applications deployed on MongoDB database, either on-premises or “in” the cloud.
Cheers!
| https://dzone.com/articles/pymongo-pointers-how-to-make-robust-and-highly-ava-1 | CC-MAIN-2017-30 | refinedweb | 1,303 | 51.78
Lead Image © mikiel, 123RF.com
Policy-based DNS in Windows Server 2016
High Resolution
GeoDNS providers help manage external access to an application as a function of the requesting client's location. Using these services, you can define which IP address is returned in response to a DNS request or whether the request is answered at all, making it easy to roll out and manage access to your application: You only need to publish a name, and the client automatically connects to the best available data center site. However, the internal publication of applications in distributed Windows infrastructures forces administrators to resort to workarounds when assigning IP addresses to a DNS request.
This lack of flexibility is compensated for either by the application itself controlling client access as a function of the Active Directory (AD) location (e.g., Exchange) or by administrators publishing the application on various locations under different names and passing these names to the right clients. For clients that permanently remain in one location, this workaround usually works quite well, but with frequent relocations, things often go wrong. Additionally, web services make it difficult to manage multiple names, for example, because SSL certificates must be issued in different names and if wildcard certificates are not available.
Policy-Based DNS
Windows Server 2016 gives you a tool – policy-based DNS – that lets you provide DNS resolution with the utmost flexibility. The possible applications of policy-based name resolution go far beyond geographically based load balancing and also help you increase the security of your entire IT landscape.
Consider a practical example: A company (call it "Contoso," in the typical Microsoft way) with offices in Germany, Japan, and Canada has decided to introduce an internal web-based collaboration platform. The client farm is made up of corporate and private devices, including smartphones and tablets, because employees love to make use of the bring-your-own-device policy. Centralized management of devices using Group Policy is therefore impossible, and you cannot issue trusted internal certificates. To tackle these challenges, you use the name of the public company's website for the internal collaboration platform. A high-quality SSL certificate by a large provider, which all devices trust, already exists. However, this is also the namespace of the company's AD domain. DNS is provided exclusively on domain controllers (DCs), and you want to keep it this way, if possible.
The original plan to operate the entire platform at the Canadian data center had to be revised in the pilot phase because the latency of the WAN links to the other locations affected usability. This results in the following requirements:
- The German web server farm (10.0.101.99) must be reachable under the published name from all production network segments of the German site.
- The same applies for the Canadian (10.0.102.99) and Japanese (10.0.103.99) locations.
- From the guest networks, the name must resolve to the public IP address 40.84.199.233.
This challenge can be solved with policy-based DNS in just a few steps, but to do so, you need PowerShell with DNS and AD modules (e.g., on a DC), which needs to be launched with increased rights. First, define the default entry for :
> Add-DnsServerResourceRecordA -Name "www" -ZoneName "contoso.com" -IPv4Address "40.84.199.233"
Now place the client subnets on all domain controllers (Listing 1).
Listing 1
Placing Client Subnets on DCs
> $dcs = (Get-ADDomainController -Filter *).Name
> foreach ($dc in $dcs) {
    Add-DnsServerClientSubnet -Name "DE-Prod" -IPv4Subnet "10.0.101.0/24" -ComputerName $dc
    Add-DnsServerClientSubnet -Name "CA-Prod" -IPv4Subnet "10.0.102.0/24" -ComputerName $dc
    Add-DnsServerClientSubnet -Name "JP-Prod" -IPv4Subnet "10.0.103.0/24" -ComputerName $dc
}
Depending on which client subnet the request comes from, the DNS server must give different answers. For this purpose, a so-called "zone scope" is necessary in the DNS zone contoso.com for each site:
> Add-DnsServerZoneScope -ZoneName "contoso.com" -Name "DE"
> Add-DnsServerZoneScope -ZoneName "contoso.com" -Name "CA"
> Add-DnsServerZoneScope -ZoneName "contoso.com" -Name "JP"
Now, add an A record with the respective IP address to each zone scope (Listing 2). To establish the location dependency, you will then define three DNS policies that must be rolled out individually on each server (Listing 3).
Listing 2
Adding Records to Zone Scopes
> Add-DnsServerResourceRecordA -Name "www" -ZoneName "contoso.com" -ZoneScope "DE" -IPv4Address "10.0.101.99"
> Add-DnsServerResourceRecordA -Name "www" -ZoneName "contoso.com" -ZoneScope "CA" -IPv4Address "10.0.102.99"
> Add-DnsServerResourceRecordA -Name "www" -ZoneName "contoso.com" -ZoneScope "JP" -IPv4Address "10.0.103.99"
Listing 3
Defining DNS Policies
> $dcs = (Get-ADDomainController -Filter *).Name
> foreach ($dc in $dcs) {
    Add-DnsServerQueryResolutionPolicy -Name "Client Subnet DE" -ClientSubnet "EQ,DE-Prod" -ZoneName "contoso.com" -ZoneScope "DE,1" -Action ALLOW -ComputerName $dc
    Add-DnsServerQueryResolutionPolicy -Name "Client Subnet CA" -ClientSubnet "EQ,CA-Prod" -ZoneName "contoso.com" -ZoneScope "CA,1" -Action ALLOW -ComputerName $dc
    Add-DnsServerQueryResolutionPolicy -Name "Client Subnet JP" -ClientSubnet "EQ,JP-Prod" -ZoneName "contoso.com" -ZoneScope "JP,1" -Action ALLOW -ComputerName $dc
}
From now on, the response from the respective zone scope is returned for requests from the individual production client subnets, even if a DNS server from the "wrong" location is requested. Clients from all other subnets (thus, also those on guest networks) receive a response from the default scope.
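To double-check what was created, the DnsServer module also ships matching read cmdlets; for example, the following quick verification step (not part of the original listings) lists the zone-level policies in their processing order:

> Get-DnsServerQueryResolutionPolicy -ZoneName "contoso.com" | Sort-Object ProcessingOrder | Format-Table Name, ProcessingOrder, Action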
System Internal Processes
If you were now to take a look at the DNS management console, you would probably not find any trace of the recently executed actions. Indeed, client subnets, zone scopes, and DNS policies can be managed exclusively in PowerShell. To see where the zone scopes with their respective entries are stored, use AdsiEdit.msc to connect to the naming context DC=DomainDNSZones,DC=contoso,DC=com, as shown in Figure 1. However, even a file-based DNS zone can include zone scopes. In that case, the scope data resides alongside the zone, in a subdirectory with the same name as the zone below C:\Windows\System32\DNS\ (Figure 2).
The files have the default DNS format, and both the zone scope container in AD and the subdirectory in the DNS folder are only created once a zone scope is created for the zone. However, the zone scopes of a file-based DNS zone are not transferred, even between two Windows Server 2016 machines. If you really want to assign zone scopes to file-based zones, you need to manage the zone scopes and their respective records separately on each server, or transfer the scope files from the master server by some other means. In either case, you need to create the zone scope on each server; otherwise, the files are not found (Figure 3).
The client subnets and DNS policies are only ever stored per server, in the registry branch HKLM\Software\Microsoft\Windows NT\CurrentVersion\DNS Server\. The names of the keys and values and the content of the registry entries are readable there, which allows inventory, documentation, and diagnostics with tried and trusted tools as long as no tools geared specifically to policy-based DNS are available.
How DNS Policies Work
The operating principles of DNS policies are more like firewall rules than Group Policy. For one thing, only the first policy for which all conditions apply actually takes effect. For another, DNS policies do not configure any settings; they only decide whether the DNS server responds to the request (ALLOW), ignores it (IGNORE), or refuses a response (DENY). The response returned in the above example for the IPv4 address is part of the regular DNS configuration. The DNS policies are therefore completely transparent to every standards-compliant DNS client.
To plan and implement your DNS policies optimally, you need to know these simple rules, which the policies follow:
- When a DNS request is received, the policies at server level are checked first, in the order in which they were defined (-ProcessingOrder). If all the conditions of a policy apply, it is used, and processing is complete. Because at this moment it is not yet ensured that the server actually hosts the requested zone, server-level policies can only have DENY or IGNORE as a result.
- If none of the server-level policies apply, the requested DNS zone comes into play. If the server is authoritative for this zone, the zone-level policies are checked and executed if the conditions are met. If the server is not authoritative, it will answer the request.
- Conditions within a policy are ANDed by default. In other words, -ClientSubnet 'EQ,JP-Guest' -QueryType 'NE,SRV' will apply to all requests that come from the JP-Guest subnet and are not of the SRV type. If you want to change this behavior and OR the criteria of different types, use the -Condition parameter, which accepts the values AND (default) and OR.
- Each condition has to include one or both of the EQ,value1,value2,… and NE,value1,value2,… expressions. The values in the EQ clause are ORed (the condition applies if any of the EQ values fit), whereas the values within the NE clause are ANDed (the condition applies if none of the NE values fit).
- Multiple policies on the same level (i.e., server level or in the same zone) need to have different rank numbers (processing order). If you add a new policy and specify a rank number that is already assigned, your new policy receives the requested processing order, and the existing policies from that rank on move down a slot.
You can use the following types of criteria to control the behavior of your DNS server: client subnet, transport protocol (TCP or UDP), Internet protocol (IPv4 or IPv6), IP address of the DNS server (if it listens on multiple interfaces), fully qualified domain name (FQDN) of the request (wildcards are allowed at the beginning, e.g., *.contoso.com), type of request (A, TXT, SRV, MX, etc.), and the time in the local time zone of the DNS server. The individual criteria are described in detail on TechNet [1].
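Combining several of these criteria, a hypothetical server-level policy could, for example, silently drop TXT lookups coming from a guest subnet during the night. The subnet name below is made up, and note that a server-level policy may only IGNORE or DENY:

> Add-DnsServerQueryResolutionPolicy -Name "Guest TXT Block" -ClientSubnet "EQ,DE-Guest" -QueryType "EQ,TXT" -TimeOfDay "EQ,22:00-06:00" -Condition AND -Action IGNORE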
| https://www.admin-magazine.com/Archive/2017/42/Policy-based-DNS-in-Windows-Server-2016 | CC-MAIN-2020-34 | refinedweb | 1,661 | 53.71
I have written this program for an assignment and I have told the program to count the number of deposits and withdrawals then print them when the user is finished. My problem is that it also counts when a deposit = 0 or a withdrawal = 0. I have an idea but I dont know how to write it, maybe something like this?
if (deposit == 0)
then disregard count
Can someone give me some advice please
/* This program prompts the user to enter a starting balance,
   then prompts the user to enter a deposit amount,
   then prompts for a withdrawal amount,
   then gives the user an option to continue or exit the program;
   if input is continue it loops again, if input is exit it prints total
   deposits, withdrawals and end balance including 1% applied to all
   transactions */
#include <stdio.h>
#include <conio.h>
void main(void)
{
float tax, balance[2], deposit, withdraw, total[2] = {0, 0};
char answer;
int count[2] = {0, 0};
clrscr();
printf("This program will calculate the closing account balance based\n");
printf("on your inputs. On exit it will give you a total of the\n");
printf("deposits, withdrawals, tax and closing balance.\n\n");
printf("Please enter your opening account balance $");
scanf("%f", &balance[0]);
fflush(stdin);
clrscr();
do
{
printf("Enter a deposit amount $");
scanf("%f", &deposit);
fflush(stdin);
total[0] = total[0] + deposit; /* accept deposit and add to deposit total */
count[0] = count[0] + 1; /* count number of deposits */
printf("Enter a withdrawal amount $");
scanf("%f", &withdraw);
fflush(stdin);
total[1] = total[1] + withdraw; /* accept withdrawal and add to withdrwal total */
count[1] = count[1] + 1; /* count number of withdrawals */
clrscr();
printf("Your deposit is equal to $%0.2f\n", deposit);
printf("Your withdrawal is equal to $%0.2f\n", withdraw);
printf("\nHave you finished entering deposits or withdrawals?(y/n)\n");
scanf("%c", &answer);
fflush(stdin);
}
while (answer == 'n' || answer == 'N');
if (answer == 'y' || answer == 'Y')
clrscr();
{
printf("\nYour opening balance was $%0.2f\n", balance[0]);
printf("\nThe number of deposits are %d and total amount is $%0.2f\n", count[0], total[0]);
printf("\nThe number of withdrawals are %d and total amount is $%0.2f\n", count[1], total[1]);
tax = (total[0] * 0.01) + (total[1] * 0.01); /* calculate tax */
printf("\nThe total goverment tax at 1 percent per transaction is $%2.2f\n", tax);
balance[1] = balance[0] + total[0] - total[1] - tax; /* calculate closing balance */
printf("\nYour closing balance is $%0.2f\n", balance[1]);
if (balance[1] < 0)
printf("Easy on the withdrawals, your account is not looking so good :-(\n");
else
printf("Keep those deposits coming! :-)\n");
printf("\n\n\nThanks for using my program, See ya later!\n");
printf("Brought to you by ZeD\n");
}
scanf("%c", &answer); /* pause so the user can read the output */
fflush(stdin);
} | http://cboard.cprogramming.com/c-programming/2546-not-sure-how-fix.html | CC-MAIN-2014-42 | refinedweb | 460 | 60.75 |
>>
Remove Leading Zeros From String in Java
There are many approaches to removing leading zeroes from a string in Java. Here we will use some basic methods of the String and Arrays classes to do it.
This approach first converts the string to a character array so that each character can be evaluated easily. We then compare each character in order to find the first non-zero one. Since a char cannot be compared to a String with equals() directly, we use the String.valueOf() method to turn each character into a String before comparing it.
Once we have the position of the first non-zero digit in our String, the only thing remaining is to trim the array from that position onward. For this we use the Arrays.copyOfRange() method, which takes three arguments: the original array, the position at which copying starts, and the position up to which copying is done (exclusive).
Example
import java.util.Arrays;

public class RemoveLeadingZeroes {
   public static void main(String[] args) {
      String str = "00099898979";
      char[] array = str.toCharArray();
      int arrayLength = array.length;
      int firstNonZeroAt = 0;
      for (int i = 0; i < array.length; i++) {
         if (!String.valueOf(array[i]).equalsIgnoreCase("0")) {
            firstNonZeroAt = i;
            break;
         }
      }
      System.out.println("first non zero digit at : " + firstNonZeroAt);
      char[] newArray = Arrays.copyOfRange(array, firstNonZeroAt, arrayLength);
      String resultString = new String(newArray);
      System.out.println(resultString);
   }
}
Output
first non zero digit at : 3
99898979
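For comparison, the same result can be had without arrays at all, using a regular expression (a sketch — the class name is made up; the lookahead keeps at least one digit, so "000" becomes "0" rather than an empty string):

```java
public class LeadingZerosRegex {
   public static String strip(String s) {
      // "^0+(?!$)" matches leading zeros unless they make up the whole string
      return s.replaceFirst("^0+(?!$)", "");
   }

   public static void main(String[] args) {
      System.out.println(strip("00099898979"));
   }
}
```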
| https://www.tutorialspoint.com/remove-leading-zeros-from-string-in-java | CC-MAIN-2022-27 | refinedweb | 388 | 50.97
Getting Started with Spark in Python
1. Unzip Spark
~$ tar -xzf spark-1.2.0-bin-hadoop2.4.tgz
2. Move the unzipped directory to a working application directory (such as /srv/ on Unix or C:\Program Files for Windows).
3. Symlink the version of Spark to a
spark directory. This will allow you to simply download new/older versions of Spark and modify the link to manage Spark versions without having to change your path or environment variables.
~$ ln -s /srv/spark-1.2.0 /srv/spark
4. Edit your BASH profile to add Spark to your
PATH and to set the
SPARK_HOME environment variable. These helpers will assist you on the command line. On Ubuntu, simply edit the
~/.bash_profile or
~/.profile files and add the following:
export SPARK_HOME=/srv/spark
export PATH=$SPARK_HOME/bin:$PATH
5. Configure the Spark environment via the files in $SPARK_HOME/conf. First, create a copy of the provided environment template so you can edit it.
1. Create an IPython profile for Spark.
2. Add a startup file to that profile which sets SPARK_HOME and runs PySpark's shell.py.
3. Start up an IPython notebook with the profile we just created.
~$ ipython notebook --profile spark
4. In your notebook, you should see the variables we just created.
print SPARK_HOME
5. At the top of your IPython notebook, make sure you add the Spark context.
from pyspark import SparkContext
sc = SparkContext('local', 'pyspark')
To launch a Spark cluster on Amazon EC2:
1. Obtain a set of AWS EC2 key pairs (access key and secret key) via the AWS Console.
2. Export those keys to your environment so the spark-ec2 script can pick them up.
3. Launch a cluster as follows:
~$ cd $SPARK_HOME/ec2
ec2$ ./spark-ec2 -k <keypair> -i <key-file> -s <num-slaves> launch <cluster-name>
4. SSH into a cluster to run Spark jobs.
ec2$ ./spark-ec2 -k <keypair> -i <key-file> login <cluster-name>
5. When you are finished, shut the cluster down via the same spark-ec2 script.
Books on Spark
Helpful Blog Posts
- Setting up IPython with PySpark
- Databricks Spark Reference Applications
- Running Spark on EC2
- Run Spark and SparkSQL on Amazon Elastic MapReduce
Once again, thanks to @genomegeek for! | https://www.districtdatalabs.com/getting-started-with-spark-in-python | CC-MAIN-2018-17 | refinedweb | 289 | 76.93 |
Class describing an archive file containing multiple sub-files, like a ZIP or TAR archive.
Definition at line 24 of file TArchiveFile.h.
#include <TArchiveFile.h>
Definition at line 41 of file TArchiveFile.h.
Specify the archive name and member name.
The member can be a decimal number, which allows access to the n-th sub-file. This method is normally only called via TFile.
Definition at line 44 of file TArchiveFile.cxx.
Destructor.
Definition at line 63 of file TArchiveFile.cxx.
Definition at line 55 of file TArchiveFile.h.
Definition at line 51 of file TArchiveFile.h.
Return position in archive of current member.
Definition at line 71 of file TArchiveFile.cxx.
Definition at line 57 of file TArchiveFile.h.
Definition at line 56 of file TArchiveFile.h.
Definition at line 52 of file TArchiveFile.h.
Returns number of members in archive.
Definition at line 79 of file TArchiveFile.cxx.().
Definition at line 121 of file TArchiveFile.cxx.
Try to determine if url contains an anchor specifying an archive member.
Returns kFALSE in case of an error.
Definition at line 149 of file TArchiveFile.cxx.
Explicitly make the specified member the current member.
Returns -1 in case of error, 0 otherwise.
Definition at line 88 of file TArchiveFile.cxx.
Explicitly make the member with the specified index the current member.
Returns -1 in case of error, 0 otherwise.
Definition at line 100 of file TArchiveFile.cxx.
Archive file name.
Definition at line 31 of file TArchiveFile.h.
Current archive member.
Definition at line 36 of file TArchiveFile.h.
File stream used to access the archive.
Definition at line 34 of file TArchiveFile.h.
Index of sub-file in archive.
Definition at line 33 of file TArchiveFile.h.
Sub-file name.
Definition at line 32 of file TArchiveFile.h.
Members in this archive.
Definition at line 35 of file TArchiveFile.h. | https://root.cern.ch/doc/master/classTArchiveFile.html | CC-MAIN-2018-47 | refinedweb | 311 | 63.86 |
Top Dentists 2016 Taking a bite out of oral health
DIY Network’s BLOG CABIN
A TV-perfect home overlooking Lake Coeur d'Alene
FEBRUARY 2016 #123 • $3.95 (DISPLAY UNTIL MAR 15, 2016)
www.spokanecda.com
TWO TIME EPICUREAN DELIGHT AWARD WINNER
BEST OF SPOKANE AWARD 2008 THROUGH 2015
02/16 FEATURES FEBRUARY 2016 | V18: ISSUE 02 (123)

52 SPOKANE DOES IT BEST! Our city knows how to enthusiastically embrace events and people who come visit, and we're rolling out the red carpet yet again, for the 2016 Team Challenge Cup ice skating event. We're proving yet again that Spokane does it best!

59 TOP DENTISTS Look in your mouth if you want to gauge your overall health. Well, actually, you should have a dentist look in your mouth, care for your teeth and guide your overall oral health journey. What? You're looking for a good dentist? Well, we've got our 2016 list of 106 Top Dentists for you!

80 BLOG CABIN It takes a village to raise a child, and it took millions of viewers, idea contributors and drawing hopefuls to make this blog cabin overlooking Lake Coeur d'Alene a reality. After 23 million entered to win it, there was one lucky winner who said no to keeping it. Who will call it home now?
ON THE COVER:

spokanecda.com • FEBRUARY • 2016
CONTENTS WHAT'S INSIDE

16 Editor's Letter: Sticking Power
18 Readers Respond: What you had to say about recent issues of the magazine
21 First Look and Buzz: Craftsman Cellars; Lilacs & Lemons; Spokane by the Numbers
34 Metro Talk: Climate Change?
37 The Scene: A new harp is needed
38 Book Reviews: Page turners with local twists
40 Artist Profile: Artist Nathan O'Neill's lessons that transcend
42 What I Know: Hutton Settlement's executive director, Michael Butler tells us what he knows
46 Datebook: What to put on your calendar
73 Health Beat: Heart health; Suicide prevention; Home fitness
102 Homestyles: Tile 101
108 Real Estate: Homeowners and icy sidewalks
110 People Pages: People of Spokane, out and about
116 Automotive: Recalls! Get your auto recalls
123 Local Cuisine: Unique wedding feasts
128 Restaurant Reviews: Luna; Fleur de Sel Artisan Creperie
136 Signature Dish: Scratch's Seafood Trio
138 Dining Guide: Where to chow down in town
144 Liquid Libations: Nitro Coffee
EDITORIAL
Editor in Chief Blythe Thimsen blythe@spokanecda.com
509.623.9727
Marketing Editor
Robin Bishop
robin@spokanecda.com
Copy Editor Rachel Sandall Datebook Editor Ann Foreyt ann@spokanecda.com
Food Editor
Katie Collings Nichol
katie@spokanecda.com
ART Creative Director/Lead Graphics Kristi Somday kristi@spokanecda.com
Graphic Designer/Traffic Manager Monica Hoblin ads@bozzimedia.com
PHOTOGRAPHERS Makenna Haeder
James & Kathy Mangis
Rick Singer Photography
CONTRIBUTORS Robin Bishop Michael Butler
Kate Derrick Paul Haeder
Sarah Hauge Marny Lombard Chris Lozier Laurie L. Ross Francesca Minas
Sydney McNeal
Justin Rundle Chris Street
Cara Strickland David Vahala Julia Zurcher
SALES & BUSINESS DEVELOPMENT President Emily Guevarra Bozzi
emily@bozzimedia.com
SALES | MARKETING Vice President - Sales Cindy Guthrie
cindy@bozzimedia.com
Senior Account Manager Jeff Richardson jrichardson@bozzimedia.com
Account Managers Erin Meenach erin@bozzimedia.com Christine King christine@bozzimedia.com Craig Hudkins craig@bozzimedia.com
OPERATIONS Accounts Receivable & Distribution denise@bozzimedia.com
Publisher & CEO Vincent Bozzi vince@bozzimedia.com
Co-Publisher/Co-Founder
Emily Guevarra Bozzi
emily@bozzimedia.com
Find us on
BEST OF THE INLAND NW SINCE 1999
Spokane Coeur d'Alene Living is published ten times per year by Northwest Best Direct, Inc., dba Bozzi Media, 104 S. Freya St. Ste. 209, Spokane, WA 99202-4866, (509) 533-5350, fax (509) 535-3542. Contents Copyrighted© 2012-2015 Northwest Best Direct, Inc., all rights reserved. Subscription $20 for one year. For article reprints of 50 or more, call ahead to order. See our “Contact Us!” page for more details.
CONTACT US Spokane Coeur d'Alene Living
4 WINE TAPS, 34 BEER TAPS 133 BOTTLED BEERS & FULL BAR.
SERVING BRUNCH SATURDAY & SUNDAY 8AM - 1PM.
Story submissions: We’re always looking
HOURS: MON-THUR 11AM-10PM | FRI 11AM-11PM SAT 8AM-1PM & 3PM-11PM | SUN 8AM-1PM AND 3PM-10PM
905 N. WASHINGTON ST. | 509-392-4000
THE OLD BROADVIEW DAIRY
TheBlackbirdSpokane.com /
@TheBlackbirdGEG blythe@spokanecda.com. BUZZ: If you have tips on what's abuzz in the region, contact the editor at blythe@spokanecda.com. $19 subscription sold. Contact the circulation director at (509) 533-5350.
50 TAPS @MANITOTAPHOUSE MANITOTAPHOUSE.COM
FULL BAR THANK YOU SPOKANE FOR VOTING US THE BEST NEIGHBORHOOD RESTAURANT SOUTH, BEST BEER LIST AND SILVER FOR BEST PUB FARE!
3011 S. GRAND BLVD. (509) 279-2671 11AM-11PM SUN-THURS. 11AM-MIDNIGHT FRI. & SAT.
EDITOR’S LETTER
STICKING POWER
READERS RESPOND WHAT YOU HAD TO SAY
NAMELESS VOYEUR Not to sound like a voyeur, but I love when you provide a sneak peek into some of the area’s finest homes. I especially enjoy it when an iconic “old money” home is featured. It gives us a glimpse into a more elegant era and it’s nice to know our architectural history is being preserved. Looking in other people’s houses is kind of my guilty pleasure. Shhh! Name withheld via Facebook
RE-JOYCE That was such a great article about Ben Joyce in your last issue (The Power of Place, January 2016). I have seen some of his work before and know of him by name, but this was the first chance I had to “get to know him.” I love that he is famous in the art world, and racking up the accolades, fans and demand, yet he still chooses to make Spokane his home. That either says a lot about him, a lot about Spokane, or a lot about both. Proud to count him as one of our Spokane brethren. Adam Reinauer Spokane, WA JOYCE-FUL I felt giddy and joyful when I got my last copy in the mail and saw Ben Joyce on the cover. I have loved his artwork since before anyone was talking about it. When I first encountered his work several years ago I thought about picking up a piece, but it didn’t seem like a “practical” purchase given that it wasn’t a necessity and I was living off of a strictly necessity-based budget at the time. Fast forward to today, and I am kicking myself. His work goes for way more now than it did back then, and I still would have been able to survive if I had thrown “practical” out the window and just bought it when I wanted to. Lesson learned! I may not have his work in my home, but I am a huge fan. Thanks for the story. Lara Davis Spokane, WA 18
JUICE NO MORE The juice article (Juice it Up!, January 2016) was of interest to me, and I appreciate that you covered this not just trendy, but also healthy, topic. I bought a juicer about two years ago, after reading about the benefits of it. I've also seen tons of posts on Facebook by people who "juice," and it always looks so good. Here is the part that I struggle with: it is expensive. My juicer has sat on the back counter of my kitchen for the past several months because I can't justify spending the money that it costs to keep up with fresh produce. With young kids and a meat-and-potatoes husband, none of whom are interested in drinking green juice, I feel guilty using three quarters to half of our grocery budget on the produce it takes to keep me in the juice. One of the juice places in the story mentioned the expense of buying the produce yourself, and said their product costs less. I decided to go buy a juice and was horrified to find it cost $7 for one drink!!! Not on my budget! There is no way I can afford to do that on a regular basis, and a regular basis/long-term consumption seems to be where the benefit of juicing comes in. One $7 juice a week isn't going to make a difference in my health. I guess the real gripe needs to be with the cost of produce, and the cost being put on farmers, and so on up the chain. Until we are able to make healthy foods more affordable and more cost desirable than the drive-thru burger joint, I think we will continue to see obesity, diabetes, heart disease and other illnesses plague our society. It is sad but true: health and wealth go hand in hand. Name withheld Spokane, WA
Coming this spring
A Bozzi Media presentation Catering by Red Rock
(509) 795-2019
421 W. Riverside Ave | Spokane, WA 99201
Spokane’s premier meeting & event space
FIRST LOOK
22 LILACS & LEMONS 24 WHAT'S HOT WHAT'S NOT 25 SPOKO-GNOME
IT’S POURING
It's pouring in Kendall Yards!

Winemakers usually recall the moment they caught the wine bug. For Greg Shelman, who was raised on a farm in Ritzville, it was in 1966 at the impressionable age of 14. A young Shelman was touring Napa Wine Country with his parents and although he was too young to taste the wine, a
pharmacy degree, but wine has a way of pursuing its people. So much so that in 2003, when there were Spokane Valley. After several years as an assistant winemaker, he started Craftsman Cellars in 2013. At the close of 2015, with aged wine ready to pour, he opened his first tasting room in the "it" neighborhood of Kendall Yards, which is within walking distance of downtown. It is now officially "pouring" season in Kendall Yards. >>
FIRST LOOK BUZZ
It's been long said that winemaking is both a science and an art. Shelman believes, due to the details winemaking requires, it's actually more of a craft. He has leveraged his strong science background and craftsman-like attention to detail into remarkable wine. There are no shortcuts taken as Craftsman Cellars produces handcrafted wine in the old-world style. All of the wine-making operations are carried out by hand with the power of gravity rather than electrical pumps. Although more time consuming, Shelman believes this method is a gentle approach, as it introduces less oxygen to the wine. The thought behind the name Craftsman Cellars is that Shelman views the winemaker as a craftsman, who with precise attention to detail shepherds the wine through the complicated journey from the vineyard to your glass. It's the precise reaction to the slightest nuances throughout the process that results in consistently great wine. Current production is just 570 cases and Shelman looks to increase that to 1,000 cases. As for now, the lineup of exclusively red wines is only available at the tasting room, which has quickly become a popular gathering place. As for Shelman, he still thinks wine smells fantastic. — Laurie L. Ross Visit for directions and current tasting room hours.
lilacs and lemons
by Vince Bozzi
[good] [not so good] [good out of bad]
FIRST LOOK BUZZ
spokanebythenumbers: American Dental Habits edition**

- 7 out of 10 Americans brush at least twice a day
- 30% aren't brushing enough
- 1 min 52 seconds is the average total amount of time spent brushing per day (56 seconds per daily brushing)
- 17% brush after eating lunch
- 21% brush after eating dinner
- 38% brush after eating breakfast
- 23% of Americans have gone 2 or more days without brushing their teeth, within the last year
- 37% of adults ages 18-24 have gone 2 or more days without brushing their teeth, within the last year
- 18 seconds longer is how much more time African-Americans spend brushing per day
- Ages 18-24 tend to spend 16 seconds longer than the average time spent brushing each day
- 6 out of 10 Americans brush at bedtime and as soon as they wake up
- 4 out of 10 Americans floss at least once a day
- 20% of Americans never floss

**courtesy of the American Dental Association (ADA)
HOT
Carhartt has signed a lease as the anchor tenant in the newly renovated historic Bennett Block, with plans to open in the fall of 2016. Bee's Wrap, carried at the Rocket Market. It is a reusable alternative to plastic wrap made with beeswax, organic cotton, jojoba oil and tree resin. Washable and reusable. Whoa! The Citizen Hall of Fame, a partnership between the Spokane Public Library Foundation, the City of Spokane, and the Spokane Public. 2016 is the Second Annual Spokane Citizen's Hall of Fame, and the public nomination process stretched through January until February 4, 2016. Can't wait to see who is nominated! Spokane City Council requiring employers to provide employees with sick leave – "Hot" for employees who get a new benefit
NOT
Macy's closing their downtown store. It is an institution and is the only true department store in downtown. What will go in there and what will become of the downtown retail scene? Spokane City Council requiring employers to provide employees sick leave – "Not Hot" for owners of small businesses who feel the government should stay out of how they run their businesses. When stores raise their prices for their final sales. We were in there last week and know how much the sofa cost, so why is it more expensive now that it is on "sale?" Don't mess with professional shoppers, we're onto you!
Dear Spoko-Gnome, This past Saturday, I was driving on the South Hill and noticed, not for the first time, a stone pillar that says “Rockwood Boulevard” at the top. It looks old, and even though I have seen it before, I realized I don’t really know what it is for, other than decoration on Rockwood. Do you know anything about it? I’m curious. ~ Kim J.
Dear Kim, I threw my little gnome self into the history books, digging for information, and discovered an article written by Mike Prager, in 1997 for the Spokesman-Review, that said: "The round pillars are made from chunks of basalt rock mortared together in a circular pattern that resembles the shape of a lighthouse. Originally the pillars had streetlight fixtures attached to them, as well as bird fountains, according to historical newspaper advertisements. The pillars were erected as an amenity when the neighborhood was first developed by the Spokane-Washington Improvement Co. between 1908 and 1911. The subdivision was marketed as the most exclusive neighborhood in the city at the time. Its curving, tree-lined boulevards were designed by the Olmsted brothers, who in their day were renowned for designing parks and residential areas in major cities around the country. The developers spent $100,000 on the land and another $100,000 on streets, sewers and other improvements - a celebrated amount of money for the time." Other articles mentioned that many people believed they were designed by famed Spokane architect Kirtland K. Cutter; however, there is no reference to the pillars in books about him, so that remains a bit of a mystery. So now, you only have a little something to be curious about!
SPOKO-GNOME
SPOT
FIRST LOOK BUZZ
the
DIFFERENCE
can you spot the FIVE differences?
Answers: 1) Heart by “Welcome” on laptop screen. 2) Lipgloss color 3) Letters missing on laptop keys 4) Signature on baseball 5) Spoon in coffee cup missing
Tod Marshall named 2016-2018 Washington State Poet Laureate
TOD MARSHALL, an award-winning poet and a professor at Gonzaga University,
has been appointed the fourth Washington State Poet Laureate by Governor Jay Inslee. Marshall is the first Eastern Washington resident to hold the position, and
opportunities for outreach to this program." Marshall was the first in his family to attend college and has dedicated himself to bringing humanities experiences to underserved populations. "Poetry matters—not just to poets, professors and students: poetry matters to everyone," Marshall says. "I hope to reinforce a message that as children they probably took for granted: their voices, their words, their songs of the self, are important and need to be heard." — David Haldeman, Marketing and Communications Manager, Humanities Washington
WISHING STAR PRESENTS
February 26, 7pm-10pm Northern Quest Resort and Casino PURCHASE TICKETS AT: or
Mercedes in Liberty Lake 21802 E. George Gee Ave. Liberty Lake, WA 99019.
“Like” Wishing Star Foundation and Taste Spokane on Facebook.
509.991.1977
5620 S Regal St., Suite #6 Spokane, WA 99223
GYM PERSONAL TRAINING GROUP FITNESS
CITY TREK BUZZ
by Julia Zurcher
DOWNTOWN ENTERTAINMENT DISTRICT
Spokane's Entertainment District is an epicenter of theatre and concerts. There are three notable venues that provide Spokanites with eclectic performances ranging from classical music to standup comedy: the Martin Woldson Theater, the Knitting Factory and the Bing Crosby Theater. This district is perfect for a special night out; in easy walking distance are bars, restaurants and shops you can visit before or after your night's entertainment.
LISTEN.
The Baby Bar is an indelible presence in downtown Spokane. To many locals, this micro-drinkery, with its surreal decorations and hard to find entrance, is the ultimate bar experience. What keeps this location popular year after year (besides their house bloody mary and close proximity to Neato Burrito) is its regular lineup of shows and entertainment. Most of the shows are free or very cheap, so check their Facebook for upcoming events and a new music experience.
EXPERIENCE. The Bing Crosby Theater was built in 1915 as a movie theatre, full of the glitz and glamor synonymous with early 20th century cinema. Today, the style and festivity continues with the Bing at the center of community events and entertainment. Visit bingcrosbytheater.com for a calendar of events and tickets – you'll find a show the whole family will love.
DRINK. Spokane is a great place for craft beer lovers, especially with the recent opening of Orlison Brewing Co.'s new taproom on 2nd Ave. Founded in 2009, Orlison Brewing Co. distinguishes itself from other breweries by only brewing lagers. (Unsure what makes a lager different from an ale? It's the type of yeast used during fermentation – this results in the clean and crisp taste lagers are known for.) Don't be put off by the lager-only selection, Orlison offers a beer to please any palate. Both the Brünette brown lager and Lizzy's Red are award winners, while the Ünderground is brewed with roast barley and black malt to produce a stout lager that will please any dark beer aficionado.
EAT. The latest creation from restaurateur Adam Hegsted (of the Yards Bruncheon and Wandering Table), the Gilded Unicorn is a modern speakeasy that never takes itself too seriously. The menu boasts modern takes on retro classics, like the Ambrosia Salad (with avocado, lime and fruit with vanilla-poppy seed dressing) or the Tatertot Casserole (kobe beef and wild mushrooms with tatertots and brown cheese). The drink menu is just as whimsical and deciding which drink to order can be tricky, so order a Drink Flight and the bartender will craft three, 2 oz. drinks just for you.
Apartments include: Large 1 & 2 Bed/2Bath, Full Kitchen w/Appliances, Washer and Dryer in each unit.
(509) 921-0249 13505 E Broadway, Spokane Valley
• Gourmet Dinner Menu • Continental Breakfast • 24 Hr Emergency Call System • All Utilities
• Indoor Pool • Transportation Service • Free Wi-Fi Internet • Housekeeping
• DIRECTV Included • Onsite Exercise Facilities • Life Enrichment Programs • Greenhouse/Raised Bed Gardens
RETAIL THERAPY BUZZ
Feeling in a rut? Does it seem like life is the same thing over and over? You get up, work, eat, clean, sleep, repeat? Need to add some options that spice up your life a bit? We get it! In honor of Groundhog Day, which is celebrated at the beginning of every February, and which is also a movie depicting life as one big rerun, we're picking some items that will help you break the routine and break out of the Groundhog Day rut.
2016 PORSCHE PANAMERA 4 EDITION $96,970.00
If it is your sedan or mini-van that has you in a rut, this exquisite automobile is sure to drive away the blahs. Imagine slipping behind the wheel of this, one of the most coveted automobiles, for your daily commute. The Porsche Doppelkupplung (PDK), 7-speed transmission under the hood, and leather interior with dark walnut trim on the inside will truly have you sitting in the lap of luxury. Available locally through Porsche of Spokane,
HARISSA DIP MOROCCAN SPICY RED PEPPER SAUCE $12.95 / 180g
Hot chili peppers, spices and herbs combine in this sauce to ignite your palate and bring some zest to your everyday dishes. Sometimes dubbed the national condiment of Tunisia, this zingy addition to meats, stews and couscous will turn up the heat on your cooking this month, and get you out of a culinary rut. Available locally through Oil & Vinegar,
NARS AUDACIOUS LIPSTICK IN RITA-SCARLET $32.00
Sometimes all you need to transform your life is a big, bold red lipstick swiped across your smacker. It makes taking on the world with fearless gusto a possibility. François Nars, Creative Director for NARS cosmetics said it best, “Embrace the audacious in everything, especially your lipstick color. It’s liberating, exhilarating, empowering.”
photo courtesy of U.S. Navy
Deer Park’s
Pride
FIRE CONTROLMAN
1st Class Elizabeth Clark from the Arleigh Burke-class guided-missile destroyer USS Ross (DDG 71) was recognized as "Missile Defender of the Year" during a ceremony at the Missile Defense Advocacy Alliance (MDAA) annual ceremony in Alexandria, Virginia, in January. The event also included Defender of the Year winners from the Army, Air Force and Army National Guard who best exhibited leadership, personal effort and demonstrated a commitment to excellence in missile defense and their critical role in defending our country. The award recognizes and honors the contributions of members of the military who man fully operational missile defense systems. The defenders are active-duty officers, enlisted personnel, or reservists from each service branch who work within the missile
defense system and are nominated by their peers and commanding officers. Clark, from Deer Park, Washington, enlisted in the Navy in March 2010. She served four years aboard Ross and over the last year, Clark's efforts enabled the ship to support a wide range of missions in various environments. Clark was the lead AN/SPY-1D 3D Radar, or SPY, technician aboard the ship and ensured scheduling and completion of all preventative and corrective maintenance. Her efforts have corrected casualty reports, avoided multiple others and helped to save the Navy approximately $78,950. "I am very honored to have been selected for this achievement; it took a lot of long hours, on and off duty, and unwavering support from my fellow technicians and leadership," said Clark. "We did this together and I am fortunate to have a great chain of command that recognizes our hard work and dedication to the mission and our Sailors." Her experience and system knowledge resulted in 100 percent mission readiness allowing Ross to track and execute its first Ballistic Missile Defense (BMD) firing. It was due in part to Clark's ability to operate and maintain the Aegis Combat Suite and SPY radar. Clark was recently promoted to petty officer first class and has been recognized for excellent work as one of 6th Fleet's 2M technicians for the 3rd quarter fiscal year 2015. The Missile Defender of the Year award is an annual honor given by the Missile Defense Advocacy Alliance, a non-profit organization devoted to building a wide range of successful missile defense systems for the U.S. — Mass Communication Specialist 2nd Class Veronica Mammina, Defense Media Activity Public Affairs
SPA PARADISO
509.747.3529 | spaparadiso.com
TOM SAWYER COFFEE
THE WANDERING TABLE
509.443.4410 | thewanderingtable.com
KENDALL YARDS
Wandered Yet? URBAN STOP Best New Restaurant
North side of Kendall Yards
(just off the beaten path!) 608 N. Maple, Spokane WA 99201 Now introducing the Wounded Warrior Special Blend. Tom Sawyer Country Coffee will donate $3.00 to the Wounded Warrior Project® for each pound of Wounded Warrior Special Blend sold! Enjoy wonderful, high estate coffees and give back to injured service members. Great for home or at the office!
Cell: 360-770-3112 | tomsawyercountrycoffee.com
509 443 4410 1242 W. Summit Parkway thewanderingtable.com
Best Appetizers
BRAIN FREEZE CREAMERY 509.321.7569 | brainfreeze.biz
VERACI PIZZA
509.389.0029 | veracipizza.com
THE YARDS BRUNCHEON
509.290.5952 | theyardsbruncheon.com.
Valentines Day limited edition
White Chocolate Strawberry
Dark Chocolate Cherry Chunk
Visit our website for hours, flavors, & more! brainfreezecreamery.com
For more information KENDALLYARDS.COM
Kendall Yards | 509-321-7569 1238 W. Summit Parkway
509.290.5952 1248 W. SUMMIT PARKWAY SPOKANE, WA 99201
IN KENDALL YARDS
MODERN AMERICAN DINER SERVING BRUNCH ALL DAY!
509.389.0029
1333 W. SUMMIT PKWY
Wood-Fired Authentic Neapolitan made from the freshest ingredients
OPEN DAILY 11AM-9PM
Online ordering now available! OPEN 7 DAYS A WEEK FOR BREAKFAST AND LUNCH. FOLLOW US ON:
DECEMBER RELEASE PARTY - HOSTED BY CENTERPLACE
December 10th 2015, at CenterPlace Regional Event Center
photos by Mangis Photography - James & Kathy Mangis
PUTTIN' ON THE RITZ 2015
December 31st 2015, at the Davenport Hotel
photos by Mangis Photography - James & Kathy Mangis
The luxury you deserve.
HARDWOOD. Recognized as one of the world's most desirable flooring choices. With Shaw Hardwood, luxury is yours.
LOCALLY OWNED & OPERATED SINCE 1994
11315 EAST MONTGOMERY | SPOKANE VALLEY, WA 99206 509.921.9677 | OPEN MON-FRI 8 TO 5 | SAT 10 TO 4
THE SCENE 38 BOOK REVIEWS 40 ARTIST PROFILE 46 DATEBOOK
HARK!

A New Harp is Needed

The Spokane Symphony is in need of a new concert harp. The Symphony's current harp is badly cracked, has unreliable intonation and is incapable of producing volume sufficient for use in a professional orchestra. Harps have a limited professional lifespan due to the massive pressure on the wood sounding board and frame (over 3,000 lbs. of force). Over time this pressure warps the harp, causing loss of stability, inaccurate intonation and poor sound quality. The Salvi harp currently owned by the Symphony was built in the early 1970s, and has been moved frequently and used rigorously. There are now nine major cracks in the sounding board, the instrument is breaking expensive strings at an alarming rate and the instrument has poor intonation and sound. Like the tympani or piano, professional orchestras generally own and care for their own harps. Unfortunately, the Spokane Symphony's harp is no longer of professional quality and is increasingly unstable. At this point in its life, it can be compared to an old, high mileage car, which is expensive to maintain yet offers rapidly diminishing performance and reliability. Over the course of the past several years Maestro Preu has consistently sought more harp sound in the orchestra; however, the current instrument cannot provide the sound quantity and quality that is needed. A new harp could elevate the orchestral sound to a new level with the unique beauty that only the harp can provide. "The current instrument is failing fast," says Earecka Tregenza, Spokane Symphony's principal harpist, who was hired by the orchestra in 2008. "It is frustrating to be unable to provide the sound that I want to contribute to the Symphony due to an insufficient instrument. The gift of a new harp would make a lasting and indelible mark on the Symphony. The harp is a beautiful and glamorous instrument whose sparkling sound is essential to the orchestra's timbre." Purchasing a harp requires a hearty financial investment, but would you expect anything less for an instrument often depicted being played by angels? That melodic heavenly sound comes at a high price, making it an expensive endeavor. The Symphony has narrowed their list to two viable options: The Style 30 Lyon and Healy Harp, which runs $25,000, and the Salzedo Lyon and Healy Harp, which is $37,000. Decisions, decisions! Luckily, the weight of the decision is not up to those of us who slip into the upholstered seats to listen to the Symphony, but there is an opportunity to donate to support whichever harp they choose. For any people interested in donating, contact Jennifer Hicks at the Spokane Symphony (jenniferhicks@spokanesymphony.org) to find out more. There will be a special invitation-only concert for all donors, featuring the new harp when it arrives. How heavenly!
BOOK REVIEWS LOCAL AUTHORS by Kate Derrick
The Sacred Lies of Minnow Bly
Red Thunder
by Stephanie Oakes
by David Matheson
Spokane author Stephanie Oakes has come forth with a debut young adult novel that is impressing both young and adult readers alike. The Sacred Lies of Minnow Bly is a story about a young woman raised in a cult and the ramifications that follow her eventual escape. The story follows the protagonist, Minnow Bly, as she recounts her life growing up in a strange cult called the Kevinian, led by a disturbing figure called The Prophet. When the book begins, the reader is introduced to Minnow as she is detained in a juvenile detention facility and being investigated for the death of The Prophet and the fire that destroyed the Kevinian community. The story is a dark one, full of heartbreak and despair. Minnow has been tortured by The Prophet for acting out as a teenager, leaving her with a gruesome disability. Though Minnow is now free from the Kevinian control, she now must face the outside world and come to terms with herself and her secrets. While she is learning to interact in her new environment, she must divulge to the police the painful information that she desperately wants to keep to herself. Oakes writes Minnow's story with beauty and honesty, keeping the reader engaged from the very first page. Though the book is marketed as a young adult piece, it is a story that will please readers of adult ages as well. The story is extremely well written, filled with dark passages and scenes of beauty. This may be her first novel, but Oakes has already proven herself a talented author.
Idaho resident David Matheson is an expert on the many tribal traditions of our area. As a prominent member of the Coeur d'Alene tribe, he has worked within the community as both a council leader and a tribal chairman. Having lived most of his life on the reservation, Matheson is now sharing his extensive knowledge on tribal customs in his book, Red Thunder. In Red Thunder, Matheson shares a memoir revolving around the oral history of the Coeur d'Alene tribe in North Idaho. By putting to paper some of the stories passed down by his ancestors, Matheson seeks to share his knowledge and continue the tradition of sharing stories of pre-European Native Americans. The story follows both the old and young as they navigate the trials and tribulations of life, including love and loss. As the reader, you follow characters as they work, fall in love, go into battle and cope with the loss of loved ones. Matheson keeps you drawn in to these stories while simultaneously teaching the reader the different traditions and culture of his tribe. Red Thunder is a book full of beauty, tragedy and hope. Readers that are interested in local tribal culture will delight in his re-telling of oral history, though Matheson's writing will also please readers less familiar with Native American tradition. Indulge in Matheson's book and you will find yourself seeking out more by this talented author.
Published by Dial Books, hardcover, $17.99 Stephanie Oakes lives in Spokane, Washington, and works as a library media teacher in a combined middle and elementary school. She has an MFA in poetry from Eastern Washington University. The Sacred Lies of Minnow Bly is her debut novel.
spokanecda.com • FEBRUARY • 2016
Published by Epicenter Press, paperback, $17.95 David Matheson was born on the Coeur d'Alene Indian Reservation. Since his birth, Matheson has been a member of the Schi'tsu'umsh people, now called the Coeur d'Alene Tribe. He has served as a council leader, the tribal chairman and manager of various tribal operations over his career. Matheson holds an M.B.A. from the University of Washington. He resides in Northern Idaho.
Theo Chocolate: Recipes & Sweet Secrets from Seattle’s Favorite Chocolate Maker Featuring 75 Recipes Both Sweet & Savory
by Debra Music & Joe Whinney
From the founders of Seattle's famous Theo Chocolate Company comes a rare book with a variety of recipes for any chocolate lover. Authors Debra Music and Joe Whinney share some of their most tried-and-true recipes, both sweet and savory, for those who enjoy chocolate and the Northwest culinary scene. Theo Chocolate: Recipes & Sweet Secrets from Seattle's Favorite Chocolate Maker is a beautiful book organized into 75 different recipes with colorful pictures alongside. The recipes range from traditional to unique and include offerings such as cookies, cakes, breakfast and even dinner items. While the recipes work best with Theo's own brand of chocolate, other brands can be substituted depending on what you have available. Another clear theme within Music and Whinney's cookbook is the importance of local ingredients and supporting local business. Beginning with an introduction about Theo's business model and ethics, the reader gains a solid understanding of the authors' company as well as their love for chocolate. Some sections of the book also give interesting foundational instruction on how to temper and treat chocolate. Though some of the recipes are complicated, requiring specific varieties of chocolate, dark and milk, they are generally easy to follow and execute. This cookbook will especially please those who are looking for a bit of a challenge in the kitchen. Try one of these sweet recipes and impress your friends and family at your next gathering! Published by Sasquatch Books, hardcover, $24.95 Debra Music took a 3,000-mile leap of faith and moved to Seattle from the Northeast to assist in the launch of Theo Chocolate. Since founding the company, Joe Whinney has dedicated his passion for chocolate, sustainability, and economic justice to the mission of Theo Chocolate.
ARTIST PROFILE NATHAN O’NEILL
Lessons that Transcend: Multi-media Artist Nathan O'Neill
by Robin Bishop
We all learn valuable lessons as we age, impacting who we become as we mature. Nathan O'Neill, a native Spokane multi-media artist, has had his share of valuable life lessons, from rocky relationships to personal choices that left him doing more than eight years in a Texas prison for bank robbery. Periods of darkness in our lives can leave us painting shadowed pictures of our future, or we can learn from the darkness, letting it teach us to appreciate light and possibility. O'Neill has done just that. O'Neill's wide body of modern artwork includes acrylics on canvas, sculpture, pottery, blended media and even furniture. What he has become most known for since his return to Spokane in 2009 has been his large-format murals and canvases. Anyone who frequented downtown via Division Street last year would have seen one of his large paintings on the chain-link fence that enclosed the deconstructed lot on the corner of 3rd and Division. It was the colorful one with the large-eyed koi fish, cityscape and geometric shapes. The color and image-over-image styling in this piece is indicative of O'Neill's style in all his work. When speaking about his process, O'Neill states that any given piece will have separate sections that may or may not relate to each other. That's just how his mind works during the creative process. Approaching a blank canvas, board or wall has never been intimidating for O'Neill. "I'll run out of material before I'll ever run out of ideas," he quips. His non-stop internal imagery and idea mill provide O'Neill with fresh content every time he steps in front of a piece. This allows a single piece to embody a number of different subjects. Imagine yourself standing in front of the INB Performing Arts Center on several different occasions. You may notice different emotions and focus on different things with each visit.
O'Neill manages to capture these different focal elements and emotions in a single piece, leaving you no shortage of imagery to absorb as you view his work. Most of O'Neill's work consists of a menagerie of bright and colorful images layered translucently over an anchor subject. These are surrounded by surreal textures, geometry, color and light elements that bring the separate objects together in the final story. Any given piece may have elements of spiritual awareness, the human form, small floral or landscape details, geometric shapes and random objects that capture his imagination. "I find myself working through my own issues and life-lessons as the paint goes on the canvas," O'Neill confesses. "If I start a painting in a dark place, that's reflected in my color choices and technique, but if I'm conscious of that and choose to add light to a piece, I
feel it opening me up and lightening my energy as I paint. I've learned to accept this vulnerability in my work and it's helped heal me in many ways." Thankfully, O'Neill embraces this instead of filtering it out, leaving us a highly emotional, transcendent body of artwork to view through our own experiences. O'Neill wants the hard trials he has evolved through, and the vulnerability he expresses in a piece, to be experienced by observers of his work and appreciated through their own journeys. His work is a visual journal of renewal and healing that he hopes will transcend social and economic differences and connect with viewers who may never have gone through troubles like his, but for whom it still rings familiar in emotional content and message. Raised in a creative family that supported the artistic process, O'Neill was encouraged to embrace creativity early in life, and it has remained with him through transitions and difficulties. He feels blessed that he was allowed access to art during his time in prison. The institution saw the benefit and encouraged creative outlets. He approached this time with a monk-like awareness and used the years to hone his craft and quiet the chaos that brought him to that place. While his personal identity solidified, he found a freedom and confidence in his work that remains. O'Neill's pieces are imbued with energy and renewal. Since his return to the area, O'Neill is excited to see Spokane going through a renewal of its own. Eager to give something back to the local community, he recently completed a series of pieces highlighting landmarks in his hometown. These pieces can be viewed in the Liberty Building, on the landing between the first and second floors of Auntie's Bookstore. You can also view pieces at Salon Sapphire on West 2nd Avenue and Satellite Diner on West Sprague. O'Neill has a few large commercial mural installations around town and looks forward to more of these opportunities. You can keep up with O'Neill's upcoming projects and learn more about his work at facebook.com/artbynmoneill. Robin Bishop is a freelance writer and the editor of Catalyst Magazine. She can be contacted at dragonflywriter2014@gmail.com or via Facebook at Dragonfly Writer/Robin Bishop.
WHAT I KNOW
MICHAEL BUTLER
Michael Butler, Executive Director, Hutton Settlement
by Michael Butler
photo by James & Kathy Mangis
mother, wisely would not allow her to "date" until she was much older. Early in my high school senior year and her sophomore year, Kathy and her family moved to Vicksburg, Mississippi, where they lived for close to three years. We wrote letters almost daily, called frequently and I even visited her for a week in Mississippi and a week in Spokane when she and her mother came to visit her grandmother. Kathy's father was transferred back to Longview in the spring of 1971, which enabled us to be together once again! I have often said that our marriage had to be in God's plan since the story is so remarkable.

The day of the wedding ceremony, along with the births of our children, Ben, in 1977; Sam, in 1978; and Jessica, in 1982, were the four most exciting days of my life. Soon after we were married I began taking classes at Portland State University while Kathy worked in a Longview dental office. I had attended Lower Columbia Community College and a partial year at Washington State University while Kathy was away.

In 1974, shortly after graduating from Portland State, I was hired as a counselor at Toutle River Boy's Ranch, which was located about 30 miles from beautiful Mount St. Helens. The Boy's Ranch was a residential treatment center for delinquent boys who were placed by the courts and probation counselors who believed they would benefit from intensive counseling and hard work planting and thinning trees in the nearby woods. I rose quickly up the ranks and was named the executive director in 1977. I reported to a mixed board of men and women and would call upon that experience in the future. Once again, my God had to have a plan for me. How else could a young man of 26 be trusted with the lives of over 30 boys, 20 staff, and a nonprofit with more than a one million dollar annual budget?

"I have daily sought to put the children first who have been under our care and protection."

There were many areas of the Boy's Ranch that required constant monitoring. Seeking the necessary financial support of those in the surrounding communities, following government-controlled guidelines, managing the boys' required work program, and monitoring the evening school program along with the counseling the boys received were all part of my busy daily routine. I brought to that position the lessons about forestry, logging and tree planting I had learned from my father and grandfather, and I developed an intuition about what a boy might be thinking, good or bad, partially from time observing the children that my mother babysat. Applying those experiences, coupled with my aggressive personality, enabled me to keep the program going during some very tough times in the economy, with social changes and with environmental issues. During that time, I pursued and received a Master of Education degree from the University of Portland, graduating in 1978.

On May 18, 1980, the eruption of Mount St. Helens caused a mudflow that destroyed the Toutle River Boy's Ranch's entire compound. (Interestingly, it was later noted that the Ranch had been built on mudflow from a prior eruption.) Had the eruption occurred any other day than that Sunday morning, staff and the boys would likely have been seriously injured or worse, lost their lives, as they would have been planting trees inside the blast zone, much of which was destroyed by the eruption.

This was certainly an event that shaped my life. Our immediate focus was the safety of our residents. Just prior to this, because of concerns with Mount St. Helens' recent activity, I had prearranged to evacuate the boys if necessary to another unoccupied facility that was deemed a safe distance from the mountain. That unforgettable Sunday morning, as we were leaving with our children to go to church, I looked up and saw the mountain's summit engulfed in a gigantic, churning plume of volcanic ash with huge bolts of lightning flashing throughout it. As was reported later, as the mountain erupted there was no sound heard by those of us living on the opposite side of the blast because of the vacuum created by the explosion. I quickly told Kathy to pack some things and take our children to my parents' house in Longview, then I rushed off to implement the evacuation plan for the Ranch. I did not return to meet up with them at my parents' until early the next day. The mudflow that swept down the Toutle River totally destroyed the Boy's Ranch, the temporary home for 32 boys and place of employment for 20 adults.

On a daily basis, the Ranch board and I made "executive" decisions in the rebuilding process. We had to purchase land, answer questions of concerned neighbors, drill a well, design and construct a new facility, write grants and raise the more than 1.2 million dollars to fund the project debt-free. The stories of the twists and turns with the state of Washington, FEMA, the Corps of Engineers, as well as staff and boys are just too numerous to tell.

In 1982, the Toutle River Boy's Ranch was reopened at its new location, well away from the dangers of any possible future Toutle River overflow. After the reopening, life there settled down somewhat into a recognized routine of helping young men who had been in some sort of trouble find a way to re-enter society. With guidance and counseling given by the dedicated employees of the Ranch, many of the residents went on to become useful and civic-minded citizens.

In 1987, I received a call from a company hired to recruit qualified candidates to be considered for the executive administrator position at Hutton Settlement in Spokane. The presiding administrator, Robert Revel, was anticipating retirement in the next few years and it was considered prudent to bring someone in to train under him so that the transition of leadership might be as smooth as possible. Qualifications being considered were: experience in the day-to-day maintenance of a children's facility, a knowledge of building and real estate operations, an ability to report to and respect
I took the one less traveled by and that made all the difference. To learn more about The Hutton Settlement, visit
DATE BOOK FEBRUARY

through February 18: Domestic Legibility
EWU Gallery of Art is pleased to present Domestic Legibility, a collaborative exhibition by Aaron Trampush and Bradly Gunn. The Eastern Washington University Gallery of Art is open Monday-Friday, 9am-5pm, and is free to the public. Eastern Washington University Gallery of Art. EWU Fine Arts Building. Cheney, WA 99004. For more information, log on to ewu.edu/cale/programs/art/gallery.
ART

through February 7: Nature Connects: LEGO® Brick Sculptures
Nature Connects uses the magical fun of LEGOs® to connect visitors to the wonders of the natural world. Twenty-seven sculptures created from nearly 500,000 LEGO® bricks by artist Sean Kenney of New York include an 8-foot-tall hummingbird, a 7-foot-tall rose and a 5-foot-tall butterfly. The exhibit aims to spark creativity and foster a greater sense of play in viewers of all ages. Museum of Arts and Culture. 2316 W. First Avenue, Spokane, WA 99201. Call (509) 456-3931 or e-mail themac@northwestmuseum.org for more information.

through April 2: Masterworks from the Print Collection of the Jundt Art Museum
Fifty Masterworks from the Print Collection of the Jundt Art Museum will feature works on paper by Max Beckmann, Marc Chagall, Salvador Dalí, Francisco Goya, Wassily Kandinsky, Corita Kent, Käthe Kollwitz, Pablo Picasso, Rembrandt van Rijn, Andy Warhol, and forty other artists. These works, including lithographs, screenprints, engravings, etchings, and other prints, are all drawn from Gonzaga University's 4,500-piece permanent collection. Selected by Dr. Paul Manoguerra, director and curator of the museum, these prints demonstrate the distinctive strength of the collection at its current stage of development and complement the museum's facilities for academic research. Jundt Art Museum at Gonzaga University. 200 E Desmet Ave. Spokane, WA 99202. For more information, log on to gonzaga.edu.

through May 22: Treasure!
Treasure is a word that stirs the imagination of everyone of every age. An educational and entertaining exhibit for museums, Treasure! explores the history of treasures and treasure hunting, the technology employed in hunting treasure, as well as the people and personalities that hunt for treasure. Treasure! has several thematic areas and hands-on activities that allow you to try tools of treasure hunting and investigate treasures. This special exhibit features actual artifacts from shipwrecks and other treasure sites and includes over 4,000 sq. ft. of exhibits on underwater treasure, buried treasure, gold rushes, treasures in the attic, treasure in popular culture, protecting treasure and modern treasure hunts. A special treasure laboratory and artifacts from the museum's collections will be on display as well in the setting of an "antique store". Museum of Arts and Culture. 2316 W. First Avenue, Spokane, WA 99201. Call (509) 456-3931 or e-mail themac@northwestmuseum.org for more information.

February 5, March! Downtown Spokane. For more information or a complete map of participating venues, please log on to.

February 7, February 15, March 6, March 21: Spokane Poetry Slam and BootSlam
Spokane Poetry Slam is competitive performance poetry at its Northwest finest! Every first and third Sunday, Avenue, Spokane, WA 99201. The Bartlett, 228 W Sprague Avenue, Spokane, WA 99201. For more information, please log on to:

February 12:. Spokane Arena. 720 West Mallon Ave., Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
MUSIC
February 4: Morris Day and the Time
Morris Day has always had a flair for fashion and a love for rhythm and blues. In junior high, he played drums in a band with Prince,
eventually appearing in Prince's 1984 film debut, Purple Rain. The Time was originally created as Prince's alter-ego band, to be seen as the cool, street-wise funk band contrasting with Prince's more soulful R&B sound. Together, Morris Day and The Time meld classic old-school sounds with energetic vocals, witty lyrics and smooth-as-silk dance moves. Northern Quest Casino, 100 N Hayford Rd, Airway Heights, WA 99001. For tickets, please log on to.
February 6: Spokane Symphony Films at the Fox: City Lights
The 1931 romantic comedy City Lights follows the misadventures of Charlie Chaplin's iconic Little Tramp as he falls in love with a beautiful blind flower seller. It was ranked as the 11th greatest American film of all time by the American Film Institute, and features what critics have called the "greatest single piece of acting ever committed to celluloid." In addition to directing and playing the starring role, Chaplin composed the film's score, which enhances the on-screen action, accelerating the poignant story and ratcheting up the slapstick.

February 13-14: Spokane Symphony Classics: Tchaikovsky on Dante
Pyotr Ilyich Tchaikovsky's whirling fantasia was based on a passage in Dante's Inferno. The vividly depicted harsh, unceasing winds of hell fade to a whisper as a solo clarinet launches Francesca's pathetic tale. Jean Sibelius also mused on Dante's work when composing his Second Symphony, which he described as "a struggle between death and salvation." The resulting symphony, a magnificent fusion of lush sounds and powerful passions, is one of the most beloved works of the 20th century. Vivian Fung's sound portrait of Chicago's majestic Aqua Tower features undulating strings and whooping brass.

February 15: The Tenors
The Tenors, formerly known as the Canadian Tenors, perform operatic pop music that includes a mixture of classical and pop. The quartet features Victor Micallef, Clifton Murray, Remigio Pereira and Fraser Walters. Northern Quest Casino, 100 N Hayford Rd, Airway Heights, WA 99001. For tickets, please log on to.
February 16-17: Winter Chamber Soiree
Back by popular demand! Now you can once again enjoy the old world elegance of the Davenport Hotel as you hear the talented musicians of the Spokane Symphony perform an outstanding assortment of baroque, classical and contemporary chamber music. Relax at a table with wine and hors d'oeuvres, or take a seat in the gallery of the distinguished Marie Antoinette Ballroom. The Davenport Hotel. 10 South Post Street. Spokane, WA 99201. Single tickets are $48 for table seating, including wine and hors d'oeuvres, and $20 for General Admission seating in the Gallery. For tickets, call 1-800-325-SEAT or visit. Tickets may also be purchased with personalized service at the Box Office of Martin Woldson Theater at The Fox, 1001 West Sprague Avenue, or by calling 509-624-1200.
February 19: Rock and Worship Road Show
Christian music favorites Newsboys, Jeremy Camp and Mandisa will be performing, along with Danny Gokey, Family Force 5 and Audio Adrenaline. A pre-show party will take place before each show with artists Citizen Way and another special guest, and Shaun Groves returns to the Rock & Worship Roadshow as the event's guest speaker. Make plans now to come out to Christian music's most entertaining tour for the whole family! Spokane Arena. 720 West Mallon Ave., Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
February 20: Vocal Point, with Gonzaga University’s Big Bing Theory
Percussion without drums! Rhythm without a bass! Vocal Point wows audiences from Disneyland to New York City with their vocal firepower, innovative arrangements, and captivating choreography. Known for their performance on NBC's The Sing-Off, and for first place in the International Championship of Collegiate A Cappella, Vocal Point delivers a stunning, high-energy performance using only their mouths to recreate the complex instrumentation. Here in Spokane, they'll be joined by Gonzaga's Big Bing Theory as the opening act! INB Performing Arts Center. 334 W. Spokane Falls Blvd. Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
February 21: Spokane Youth Symphony: Passione
Dvořák's "Symphony from the New World" is a work of enduring genius written by one of the greatest composers of the Romantic era. The outer movements are energetic and exuberant, while the second movement features one of the most memorable melodies
of all time. This concert will leave you passionate about the symphonic experience and awestruck at the magical skills of our youthful musicians.

February 27-28: Spokane Symphony Classics: Zen Fantastique
A study of great music that transcends tragedy and sorrow through Zen-like meditation. Karen Tanaka's beautifully flowing Water of Life is a reflection on the 2011 tsunami in Japan. Edward Elgar's beloved Cello Concerto, written in the wake of World War I, is a poignant requiem to a lost way of life. Hector Berlioz's lavishly orchestrated Symphonie Fantastique is a dream-like account of an artist's self-destructive response to unrequited love.
March 12: Spokane Symphony Superpops: For Ella Fitzgerald
When Patti Austin sings, people listen. When she sings Ella Fitzgerald, they swoon! Hear live in glorious color the beautiful renditions of Ella's favorites that earned Patti Austin a Grammy nomination for Best Jazz Vocal Album for her album For Ella, with big band arrangements by the legendary Patrick Williams. A Grammy Award winner with nine total nominations, Patti delivers classics such as "Too Close for Comfort," "Honeysuckle Rose," "Satin Doll," "Miss Otis Regrets," "The Man I Love," and "How High the Moon." Join us and bask in the glow of Patti's sensual voice, swinging groove and vivacious audience interplay.
EVENTS

January 29-February 6: Spokane International Film Festival
The Spokane International Film Festival is a small, selective offering of world-class films. These are the very best features, documentaries and shorts that have been made around the world during the past two years but have not yet been commercially released for wide distribution. In fact, they are some of the same films that played the Cannes, Toronto, or Vancouver film festivals. Films are run at the AMC, the Magic Lantern, and other venues around downtown Spokane. For more information and a complete schedule of events, please log on to spokanefilmfestival.org.

February 2-4: Spokane Ag Expo and Pacific Northwest Farm Forum
Spokane Ag Expo is the largest machinery show in the Inland Northwest, providing the best possible showcase for agricultural equipment and related products to the farmer, rancher and producer. There will be over 300 vendors and exhibits for both the full-time and part-time landowner and producer, showcasing all the latest in farming innovations. As well, the Pacific Northwest Farm Forum spotlights major speaker events during the show and seminars ranging across a wide spectrum of current topics of interest to full-time and part-time farmers, ranchers and agricultural producers. Spokane Convention Center. 334 W Spokane Falls Blvd, Spokane, WA 99201. For more information or details, contact Myrna O'Leary, Show Director, at 509.321.3633 or moleary@greaterspokane.org.

February 5-7: Monster Jam 2016
Unlike any other Monster Jam show seen before, this exclusive showcase of endurance, versatility and extreme driving skills will feature the best Monster Jam truck lineup ever, highlighted by more racing, more freestyle, more donuts, more wheelies and more action! Fans will also be treated to all-new competition vehicles such as thrilling Monster Jam Speedsters and Monster Jam ATVs as they rip through the arena during aggressive head-to-head racing action. Spokane Arena. 720 West Mallon Ave., Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
February 7: Shanghai Circus
A one-night only engagement of the all new Shanghai Circus returns to Spokane! Featuring breathtaking, gravity-defying, and spectacular feats that will dazzle the minds and hearts of the whole family! This year’s show brings audiences the very best of
China’s revered circus tradition, celebrating two thousand years of acrobatics, juggling, and contortion in a presentation that will mesmerize the whole family. If it’s humanly possible – and even if it’s not – the Shanghai Circus will do it with spectacular flair, integrating seemingly impossible dexterity with humor, tradition, and grace. INB Performing Arts Center. 334 W. Spokane Falls Blvd. Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
February 9: National Geographic Live! On the Trail of Big Cats. INB Performing Arts Center. 334 W. Spokane Falls Blvd. Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
February 13-14: The Illusionists
Having shattered box office records around the world, The Illusionists - Live From Broadway™ is now coming to captivate Spokane. This mind-blowing spectacular showcases the jaw-dropping talents of five of the most incredible illusionists on earth. Full of hilarious magic tricks, death-defying stunts and acts of breathtaking wonder, The Illusionists has dazzled audiences of all ages. INB Performing Arts Center. 334 W. Spokane Falls Blvd. Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
February 13: Brian Regan
Brian Regan has distinguished himself as one of the premier comedians in the country. His performances are relatively clean, as he refrains from using profanity and off-color humor, and his material usually covers everyday events, finding humor in day-to-day life. Inspired by Steve Martin, The Smothers Brothers and Johnny Carson, Brian is a "comedian's comedian" who can turn the most mundane situation into side-splitting stand-up material. Northern Quest Casino, 100 N Hayford Rd, Airway Heights, WA 99001. For tickets, please log on to.
February 16: The Harlem Globetrotters 90th Anniversary World Tour
The Harlem Globetrotters are preparing. Spokane Arena. 720 West Mallon Ave., Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
March 1: Dave Ramsey's Smart Money Tour
Join Dave Ramsey and Rachel Cruze to focus your money on what matters. At this one-night event, you'll get to experience the plan that has helped millions of people around the world get and stay out of debt. INB Performing Arts Center. 334 W. Spokane Falls Blvd. Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
March 8: National Geographic Live! Where the Wild Things Live. ...while working to save them. INB Performing Arts Center. 334 W. Spokane Falls Blvd. Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.

March 11-13: Inland Northwest Motorcycle Show and Sale
Come find all your open-road needs and desires at Washington's largest motorcycle show and sale. Motorcycles from all over the world, including many custom designs, will be featured, along with hundreds of accessories. As well, there will be free entertainment for buyers and spectators. This event will be held indoors at the Spokane Fair and Expo Center and free parking is provided. Spokane Fair and Expo Center. 404 N Havana St. Spokane Valley, WA 99202. For more information, please email info@spokanemotorcycleshow.com or log on to http://spokanemotorcycleshow.com/
THEATRE
through February 7: All My Sons
Based on a true tragedy, this American classic is perhaps Miller’s greatest masterpiece. The shadow of catastrophe is hidden deep in the unbearable power that is known as The American Dream. Acts of atonement and confessions leave us investigating whether forgiveness will ever be found. Can one ever escape from guilt? The Modern Theatre - CdA. 1320 E Garden Ave, Coeur d’Alene, ID 83814. For more information and tickets, please log on to:
through February 21: Sordid Lives
When Peggy, a good Christian woman, hits her head on the sink and bleeds to death after tripping over her lover’s wooden legs in a motel room, chaos erupts in Winters, Texas. Spokane Civic Theatre. 1020 N Howard St., Spokane, WA 99201. For showtimes and more information, call (509) 325-2507. For tickets, call 1-800-325-SEAT or visit http://.
February 19-March 6: Last of the Boys
Conjuring the 1960s and the war in Vietnam, this is a fierce, funny, and haunted play about a friendship that ends, and a battle that doesn’t. For thirty years, Ben and Jeter have remained united by a time that divided the nation. The ghosts that appear are in many ways permanent residents in the bodies and psyches of those who fought in the war, as well as those who became its indirect casualties. The Modern Theatre - Spokane. 174 S Howard St, Spokane, WA 99201. For more information and tickets, please log on to: http://
February 26-March 20: Little Women
This timeless, captivating story is brought to life in a glorious musical filled with personal discovery, heartache, hope, and everlasting love. Based on Louisa May Alcott’s life, Little Women follows the adventures of sisters Jo, Meg, Beth and Amy March. Upon the advice of a friend, Jo weaves the story of herself and her sisters and their experience growing up in Civil War America. Spokane Civic Theatre. 1020 N Howard St., Spokane, WA 99201. For showtimes and more information, call (509) 325-2507. For tickets, call 1-800-325-SEAT or visit http://.
February 27-28: York
Produced by the team at The Modern Theater in commemoration of Black History Month, Spokane Civic Theatre is honored to once again host York for this special two-date engagement in the Firth J. Chew Studio Theatre. York was William Clark’s personal slave, accompanying the Corps of Discovery as the only black man on the Lewis and Clark Expedition. In a stirring performance, David Casteal weaves the story of York’s challenges and accomplishments, blending gripping first-person narration with energetic, live African drumming and traditional Native American drum recordings. Spokane Civic Theatre. 1020 N Howard St., Spokane, WA 99201. For showtimes and more information, call (509) 325-2507. For tickets, call 1-800-325-SEAT or visit http://.
March 2: Peppa Pig Live!
More fun than a muddy puddle! Peppa Pig, star of the top-rated TV series airing daily on Nick Jr., is hitting the road for her first-ever U.S. theatrical tour, Peppa Pig’s Big Splash! Peppa Pig is an animated children’s television program starring Peppa, a young female pig, her pig family, and her many animal friends. These colorfully crafted stories feature common children’s activities such as riding bikes, going swimming, visiting family or playing with friends.
March 4-20: Maybe Baby
Our 2015 resident playwright, Matt Harget, brings his romantic comedy about a couple’s difficulties trying to conceive a child to the stage. This endearing production will warm your heart and perhaps give you baby fever. The Modern Theatre - CdA. 1320 E Garden Ave, Coeur d’Alene, ID 83814. For more information and tickets, please log on to:
SPORTS
February 10: Spokane Chiefs vs Portland Winterhawks
7:05 pm. Spokane Arena. 720 West Mallon Ave., Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
February 13: Spokane Chiefs vs Kootenay Ice
7:05 pm. Spokane Arena. 720 West Mallon Ave., Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
February 14: Spokane Chiefs vs Everett Silvertips
5:05 pm. Spokane Arena. 720 West Mallon Ave., Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
February 20: Spokane Chiefs vs Tri-City Americans
7:05 pm. Spokane Arena. 720 West Mallon Ave., Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
February 24: Spokane Chiefs vs Prince George Cougars
7:05 pm. Spokane Arena. 720 West Mallon Ave., Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
February 26: Spokane Chiefs vs Prince George Cougars
7:05 pm. Spokane Arena. 720 West Mallon Ave., Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
February 27: Spokane Empire vs Wichita Falls Night Hawks
7:00 pm. Spokane Arena. 720 West Mallon Ave., Spokane, WA 99201. Single game tickets will go on sale Wednesday, February 3, 2016. For tickets, call 1-800-325-SEAT or visit.
March 9: Spokane Chiefs vs Kamloops Blazers
7:05 pm. Spokane Arena. 720 West Mallon Ave., Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
March 11: Spokane Chiefs vs Tri-City Americans
7:05 pm. Spokane Arena. 720 West Mallon Ave., Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
March 12: Spokane Chiefs vs Kelowna Rockets
7:05 pm. Spokane Arena. 720 West Mallon Ave., Spokane, WA 99201. For tickets, call 1-800-325-SEAT or visit.
Spokane Does it Best Rolling out the Red Carpet for the 2016 Team Challenge Cup by Robin Bishop photos courtesy of U.S. Figure Skating
...for the inaugural Team Challenge Cup. “This event is going to have legs. It’s going to stick around, and Spokane gets to be the first,” says Barb Beddor, Vice President of StarUSA, the event management agency that placed the Spokane bid for this event. StarUSA, owned by Barb Beddor and Toby Steward, has organized basketball, boxing, wrestling, volleyball, hockey and U.S. Figure Skating events across the western U.S. Steward, president of StarUSA, adds, “Spokane’s overwhelming response to the previous elite skating events definitely played a part in being awarded this inaugural event for USFSA.”

Dubbed “Skate city USA,” Spokane has a zest for community that is rare and appreciated.

The addition of the World Team Trophy and Olympic team events in the last few years has proven to have dynamic results among elite skaters, encouraging Van Wagner Sports & Entertainment and U.S. Figure Skating, partners in the event, to launch the 2016 Team Challenge Cup. This biennial, Ryder Cup-style competition between continents is scheduled for April 22-24 at the Spokane Arena.

In a write-up in Sports Business Daily last fall, Van Wagner executive V.P. Chris Pearlman admitted to the “legs” their organization anticipates this event to have in the future when he stated, “My goal is to have this be the biggest global event in figure skating outside of the Olympics.” The general consensus among the establishment is that team competitions push elite skaters to meet each other’s expectations in ways that individual skating competitions cannot. With the team structure there comes higher energy and more camaraderie. This makes for great television and energizes the competitive atmosphere at the events. The teams will be watching the competition and cheering on their teammates from ice-level team boxes built by Garco Construction on the west side of the Arena.

The international format is designed for the event to be rotated among three continents in even-numbered years, gaining momentum as a post-Olympic tour stop when it falls in Olympic years and adding a competition to the schedule in non-Olympic years. The team structure is broken into three continental teams: Team North America, Team Asia and Team Europe. An international figure skating icon will captain each team. This is why Kristi Yamaguchi was on hand for the announcement: she will be Team North America’s captain. Team Europe will be led by 1984 Olympic champion Christopher Dean, of the famous duo Torvill and Dean. Team Asia has yet to announce who their captain will be. Team captains will not be participating in the competition, but will decide team strategy and...
Contact the author at dragonflywriter2014@gmail.com or via Facebook at Dragonfly Writer/Robin Bishop.
The program consists of two days of competition and a third day featuring an exhibition. Here’s the breakout:
Day 1 (April 22): Men’s and Ladies’ Singles Short Programs
Day 2 (April 23, afternoon): Pairs Skating and Ice Dancing
Day 2 (April 23, evening): Ladies’ and Men’s Programs
Day 3 (April 24): 24 performances in the Exhibition Gala
A Healthy Mouth is a Healthy Life by Blythe Thimsen
We’ve gotten the message about many of the health habits that are hot topics in the news these days: Eat more greens, move more, drink less, don’t smoke, sleep at least eight hours. That about sums it up, right? Not entirely. If you’re looking for overall health, don’t forget to look in your mouth. Actually, a better recommendation is for a dentist to look in your mouth. Great health starts with great oral health, and dentists are your first line of defense when it comes to spotting health concerns and helping to create an overall picture of health. It’s never too early or too late to take control of your oral health. The Mighty Mouth, a statewide campaign to “Unleash the Power of Oral Health,” is a trend we should all get behind. Read on to learn more about their suggestions for oral health, and visit their site at for more information. Not sure who to see for an initial oral health exam, or who to go to when you need a specialist? Here is the 2016 annual list of Spokane and Coeur d’Alene area Top Dentists. This list is excerpted from the 2016 topDentists™ database, which includes listings for 106 dentists and specialists in the Spokane area. The Spokane area list is based on thousands of detailed evaluations of dentists and professionals by their peers. The complete database is available at
The 2016 topDentists™ list includes listings for more than 100 dentists and specialists in the Spokane area. For more information call 706-364-0853; write P.O. Box 970, Augusta, GA 30903; by email (info@usatopdentists.com); or at. topDentists™, LLC, Augusta, Georgia. All rights reserved. This list, or parts thereof, must not be reproduced in any form without permission. No commercial use of the information in this list may be made without permission of topDentists™, LLC. No fees may be charged, directly or indirectly, for the use of the information in this list without permission.
February is Children’s Dental Health Month

Protect your child’s baby teeth for a lifetime of better oral health

Oral health is an important part of overall health. But because baby teeth eventually fall out, they often don’t get the attention they deserve. Baby teeth matter. Painful cavities can make it difficult to eat, sleep, play and learn. The germs that cause cavities in baby teeth can lead to cavities in adult teeth, affecting your child’s smile and health even when they are adults.

Cavities are caused by infectious germs

Most people don’t know that cavities are caused by germs, and these cavity-causing germs are infectious. The germs can actually be spread via saliva (typically from moms to babies) by sharing food, utensils, or even kisses. These germs can also lead to significant, ongoing problems with permanent teeth. Here’s a surprise about cavities: it’s not just sugary foods that cause tooth decay. Even snacks often thought of as “healthy,” such as bagels, juice, crackers and raisins, contribute to decay if consumed too often. Teeth need time to rest and rebuild in between drinking and eating. Sweet or high-carbohydrate foods and sweet drinks feed the germs in your mouth that cause cavities. The germs make acids that eat into teeth, leading to decay. These acid attacks last for 20 minutes after you eat or drink. Drinking anything other than water and snacking or “grazing” frequently during the day means food and drink are on your teeth for a long time. This leads to costly, and sometimes painful, cavities. When snacking, choose tooth-friendly foods such as fruit, vegetables and cheese. Your waistline will benefit, and so will your oral and overall health.

Cavities are easily preventable

Here are some simple tips to protect children’s oral health:
• By age one, take your child to a dentist or physician for an oral health checkup.
• If you put your baby to sleep with a bottle, use only water.
• Brush your child’s teeth twice a day. Help them brush until they can tie their own shoes.
• Use a small toothbrush and fluoride toothpaste. Fluoride strengthens tooth enamel. Use a rice-sized amount of toothpaste until your child is 3, and then a pea-sized amount.
• Start flossing as soon as teeth touch. Set an example and floss your own teeth daily. Flossing removes 40% of the gunk that toothbrushes can’t reach.
• Ask your child’s dentist or physician about fluoride varnish, which is painted on teeth to prevent or heal early decay.
• Ask your dentist about sealants to protect molars, which are hard to keep clean.
• Make sure your child has regular oral health checkups to spot problems early.

Visit TheMightyMouth.org for more information on protecting your health. You’re healthier with a healthy mouth.
Endodontics

Lisa A. Ellingsen, DDS Ellingsen Endodontics 1005 North Evergreen Road, Suite 201 Spokane Valley, WA 99216 509-921-5666
Michelle A. Ellingsen, DDS Ellingsen Endodontics 1005 North Evergreen Road, Suite 201 Spokane Valley, WA 99216 509-921-5666
Tim L. Gatten, DDS Access Endodontic Specialists 602 North Calgary Court, Suite 301 Post Falls, ID 83854 208-262-2620
I. Blake McKinley, DDS Spokane Endodontics 620 North Argonne Road, Suite A Spokane, WA 99212 509-928-8762
Scott J. Starley, DDS Inland Endodontics 3151 East 29th Avenue, Suite 201 Spokane, WA 99223 509-535-1720
Roderick W. Tataryn, DDS Tataryn Endodontics 2700 South Southeast Boulevard, Suite 201 Spokane, WA 99223 509-747-7665
General Dentistry

Bryan D. Anderson, DDS 2807 South Stone Street, Suite 102 Spokane, WA 99223 509-624-7151
Michael A. Bloom, DDS Bloom Dentistry 9928 North Government Way Hayden, ID 83835 208-772-3583
George J. Bourekis, DDS 12409 East Mission Avenue Spokane, WA 99216 509-924-4411
Rodney D. Braun, DDS Braun & Jarvis Family Dentistry 775 East Holland Avenue, Suite 201 Spokane, WA 99218 509-464-2391
Laura J. Bruya Wilson, DDS Sodorff and Wilson Family Dentistry 12706 East Mission Avenue Spokane, WA 99216 509-928-3131
Timothy J. Casey, DDS Casey Family Dental 22910 East Appleway Avenue, Suite 5 Liberty Lake, WA 99019 509-927-9279
Travis V. Coulter, DDS Coulter Family Dentistry 1601 S. Dishman Mica Rd. Spokane, WA 99206 509-209-8747
Robert R. DesRoches, DDS Englund & DesRoches Dentistry 6817 North Cedar Road, Suite 201 Spokane, WA 99208 509-326-8170
James P. Dorosh, DDS Dorosh Dental 10121 North Nevada Street, Suite 301 Spokane, WA 99218 509-467-1001
Brent L. Child, DDS 10121 North Nevada Street, Suite 101 Spokane, WA 99218 509-468-1685
Debra L. Craig, DDS Harmony Family Dental 10103 North Division, Suite 201 Spokane, WA 99218 509-467-1562
Chad S. DeVore, DDS Lakeland Family Dental 14596 North Highway 41, Suite A Rathdrum, ID 83858 208-687-4455
Eric C. Ellingsen, DDS Ellingsen-Henneberg Dentistry 1215 North McDonald Road, Suite 203 Spokane Valley, WA 99216 509-924-2866
Kimberly Richards Craven, DDS South Hill Family Dental 1424 South Bernard Street Spokane, WA 99203 509-747-7166
Terry T. DeVore, DDS Spirit Lake Family Dental 6070 West Jackson Street Spirit Lake, ID 83869 208-623-6400
Erin E. Elliott, DDS Post Falls Family Dental Center 313 North Spokane Street Post Falls, ID 83854 208-773-4579
Blaine D. Dodson, DDS Evergreen Cosmetic & Family Dentistry 1005 North Evergreen Road, Suite 202 Spokane Valley, WA 99216 509-928-4191
Ola J. Englund, DDS Englund & DesRoches Dentistry 6817 North Cedar Road, Suite 201 Spokane, WA 99208 509-326-8170
Brooke M. Cloninger, DDS 2001 East 29th Avenue Spokane, WA 99203 509-534-4600
Becky Van Coombs, DDS South Hill Pediatric Dentistry 2020 East 29th Avenue, Suite 130 Spokane, WA 99203 509-315-8500
Constance Copetas, DDS 104 West 5th Avenue, Suite 290-E Spokane, WA 99204 509-747-5586
Cliff R. Cullings, DDS Cullings Family Dentistry 22106 East Country Vista Drive, Suite C Liberty Lake, WA 99019 509-926-0066
NOT YOUR STANDARD DENTIST OFFICE
Pictured: Dr. Paxton, Dr. Johnson, Dr. Weber, Dr. Freuen, Dr. Hauck, Dr. Brooks.
NORTHWEST IMPLANT AND SLEEP DENTISTRY is committed to providing each patient with individualized, professional care that restores optimal oral health and brings a confident smile to everyone they treat. They understand that every patient has unique clinical needs, and the expertise of Dr. Mark Paxton and their experienced team allows them to craft custom treatment plans that are right for each client. Their state-of-the-art facilities are equipped with technologies for the most accurate diagnostics and procedures. Each member of the Northwest Implant and Sleep Dentistry team is highly trained in providing exceptional care and pursues continuing education programs throughout the year to maintain their leading-edge skills and knowledge. Diligence in maintaining the most up-to-date training allows the Northwest Implant team to offer complex remedies for more than just cosmetic correction. They are known for successfully correcting, and providing training for, craniofacial deformities and other complicated procedures. The team at Northwest Implant and Sleep Dentistry is highly qualified. Their patients are treated by board-certified surgeons, prosthodontists and anesthesia providers in a state-of-the-art facility with a focus on patient safety, comfort, and satisfaction. Northwest Implant realizes that some patients still suffer from dental phobia, so they have made it their goal to provide treatment options that cater to these individuals. Their Teeth in a Day procedure can make dramatic positive changes in a person’s life with a simple and quick approach. Northwest Implant and Sleep Dentistry is committed to delivering personalized oral surgery solutions that improve patients’ lives through cutting-edge technology, professional and compassionate care, and a desire to meet patients’ personal oral care needs.
Northwest Implant and Sleep Dentistry, 9911 N. Nevada St., Suite 120, Spokane, WA 99218, (877) 833-8469,
Michele L. Foglia, DDS Spokane Valley Dental 200 North Mullan Road, Suite 103 Spokane Valley, WA 99206 509-928-8431
Amir A. Ganji, DDS Cannon Hill Dental 1424 South Bernard Street Spokane, WA 99203 509-624-5590
Katherine Hakes, DDS Integrated Dental Arts 5011 West Lowell Avenue, Suite 130 Spokane, WA 99208 509-464-3100
Andrew F. Heidergott, DDS 10121 North Nevada Street, Suite 202 Spokane, WA 99208 509-466-6979
Robb B. Heinrich, DDS 10121 North Nevada Street, Suite 302 Spokane, WA 99218 509-467-1117
Jeffrey R. Hood, DDS Evergreen Cosmetic & Family Dentistry 1005 North Evergreen Road, Suite 202 Spokane, WA 99216 509-928-4191
James C. Hoppe, DDS 3010 South Southeast Boulevard, Suite E Spokane, WA 99223 509-534-0569
Bradley D. Jarvis, DDS Braun & Jarvis Family Dentistry 775 East Holland Avenue, Suite 201 Spokane, WA 99218 509-464-2391
Mark A. Jensen, DDS Millwood Family Dental 3018 North Argonne Road Spokane, WA 99212 509-928-5444
Gary D. Keller, DDS 1005 North Pines Road, Suite 300 Spokane Valley, WA 99206 509-926-1161
Ronald E. Mendenhall, DDS 101 West Mullan Avenue Post Falls, ID 83854 208-773-4581
Susan Mahan Kohls, DDS 2020 E. 29th Avenue, Suite 100 Spokane, WA 99203 509-534-0428
Daniel James Mergen, DDS Mergen Dental 902 West 14th Avenue Spokane, WA 99204 509-747-5186
Ryan R. Love, DDS 420 North Evergreen Road, Suite 600 Spokane, WA 99216 509-928-2525
Stephen H. Mills, DDS 3201 South Grand Boulevard Spokane, WA 99203 509- 747-5184
Joseph L. Luchini, DDS Luchini Family Dentistry 2107 West Pacific Avenue Spokane, WA 99201 509-838-3544
Bill H. Molsberry, DDS 4407 North Division Street, Suite 416 Spokane, WA 99207 509-487-2116
Rudyard McKennon, DDS 407 W. Riverside Avenue, Suite 864 Spokane, WA 99201 509-624-5303
Kent E. Mosby, DDS Laser Dentistry of Coeur d’Alene 910 West Ironwood Drive Coeur d’Alene, ID 83814 208-667-1154
IT ALL BEGINS WITH A SMILE

DR. SHANNON MAGNUSON has been creating amazing smiles in Spokane for over 18 years. She has a beautiful office in north Spokane designed to appeal to patients of all ages. Dr. Magnuson combines her love of art and science to design a treatment plan tailored to the individual. This may involve getting together with the patient and their dentist or other specialists to create the best plan. “Most people say they want to improve their smile but they don’t really see the full potential of what can be done,” says Dr. Magnuson. Today’s technology has transformed the specialty of orthodontics. Efficient, comfortable treatment is available to patients of all ages. Dr. Magnuson offers cosmetic options like Invisalign and Clear Braces as well as exceptionally efficient self-ligating braces. Most people are blown away when they see their before and after photographs at the end of treatment. “I work hard to exceed the patient’s expectations. The details make a big difference,” says Dr. Magnuson. She and her team feel grateful every day for the opportunity to make such an impact on their patients. Orthodontics is an investment that pays back many times over. “My team and I will go the extra mile for you – it’s our commitment and our pleasure,” says Dr. Magnuson.
Dr. Shannon Magnuson Magnuson Orthodontics, 10121 N. Nevada St., Suite 201, Spokane, WA 99218, (509) 443-5597, Magnusonortho.com
Kathrine A. Olson, DDS 210 South Sullivan Road Spokane Valley, WA 99037 509-924-9596
Brent H. Osborn, DDS North Pines Dental Care 1107 North Pines Road Spokane Valley, WA 99206 509-924-6262
Kurt A. Petellin, DDS 1717 Lincoln Way, Suite 105 Coeur d’Alene, ID 83814 208-765-0397
David G. Petersen, DDS Petersen Family Dentistry 123 West Francis Avenue, Suite 104 Spokane, WA 99205 509-483-3332
Kurt Peterson, DDS Peterson Dental 1604 West Riverside Avenue Spokane, WA 99201 509-747-2183
Simon P. Prosser, DDS 251 East 5th Avenue, Suite B Spokane, WA 99201 509-747-2183
James J. Psomas, DDS 12409 East Mission Avenue Spokane, WA 99216 509-924-4411
Paul F. Reamer, DDS Reamer Family Dentistry 12805 East Sprague Avenue Spokane Valley, WA 99216 509-924-5661
Charles L. Regalado, DDS 6817 North Cedar Road, Suite 201 Spokane, WA 99208 509-326-6862
James Allen Robson, DDS Avondale Dental Center 1683 East Miles Avenue Hayden, ID 83835 208-215-3328
Stanley A. Sargent, DDS Grand Corner Dental 3707 South Grand Boulevard, Suite B Spokane, WA 99203 509-838-2434
Spokane Oral Surgery Surgical Extraction & Implants • Wisdom Teeth Extractions • Facial Cosmetic Surgery • Reconstructive Surgery Botox Injections • Oral Pathology • Bone Grafting • Orthognathic Surgery
• Diplomate of the American Board of Oral and Maxillofacial Surgery • Diplomate of the American Board of Dental Anesthesiology
Our surgical and nursing staff is one of the most highly trained in the country! They are trained in Advanced Cardiac Life Support (ACLS) and Pediatric Advanced Life Support (PALS), and have received certification for the Dental Anesthesia Assistant National Certification Examination (DAANCE). In addition, all of our surgical assistants are not only trained in Basic Life Support (CPR), but are also BLS Instructors. We are happy to offer BLS classes for Healthcare Providers to our dental and medical community. While our surgical and nursing staff is highly trained in all these areas, they are also compassionate to your needs as a patient and always strive to provide you with the best experience!
For over 20 years Dr. Paxton has been traveling to Guatemala with the Spokane Chapter of Hearts in Motion. Hearts in Motion is a Chicago-based group that coordinates mission teams to travel to countries where there are insufficient medical facilities and services.
NEW Northside Spokane: 9911 N. Nevada St., Suite 120
Next month he will return, and will treat a wide range of patients with cleft lip and palate, craniofacial deformities, and other maxillofacial reconstructive conditions.
Spokane Valley: 12109 E. Broadway Ave, Bldg C
South Hill: 2807 S.Stone, Suite 202
Post Falls: 602 Calgary Ct., Suite 202
Contact us for an appointment at any of our locations: 1-509-242-3336 ● info@spkoms.com
Todd Schini, DDS Schini Family Dentistry 2000 Northwest Boulevard, Suite 100 Coeur d’Alene, ID 83814 208-664-3321
Jay H. Sciuchetti, DDS 2103 South Grand Boulevard Spokane, WA 99203 509-624-0542
Robert R. Shaw, DDS 2700 South Southeast Boulevard, Suite 101 Spokane, WA 99223 509-747-8779
Mary Smith, DDS North Cedar Dental 6817 North Cedar Road, Suite 101 Spokane, WA 99208 509-325-0233
Mark M. Sodorff, DDS Sodorff and Wilson Family Dentistry 12706 East Mission Avenue Spokane, WA 99216 509-928-3131
Michael A. Trantow, DDS 12121 East Broadway Avenue, Building 3 Spokane, WA 99206 509-928-3363
John Van Gemert, DDS Liberty Park Family Dentistry 1118 South Perry Street Spokane, WA 99202 509-534-2232
George J. Velis, DDS Velis Family Dentistry 820 South Pines Road Spokane, WA 99206 509-924-8200
Nicholas G. Velis, DDS Velis Family Dental 820 South Pines Road Spokane, WA 99206 509-924-8200
Scott Warnica, DDS 12409 East Mission Avenue Spokane, WA 99216 509-924-4411
Marc D. Weiand, DDS Weiand and Weiand DDS 1414 North Vercler Road, Suite 6 Spokane, WA 99216 509-926-1589
Ronald W. Weiand, DDS Weiand and Weiand DDS 1414 North Vercler Road, Suite 6 Spokane, WA 99216 509-926-1589
Kory Wilson, DDS Avondale Dental Center 1683 East Miles Avenue Hayden, ID 83835 208-215-3328
Stephen O. Woodard, DDS 1020 South Pines Road Spokane, WA 99206 509-924-8585
Oral and Maxillofacial Surgery

Chad Patrick Collins, DDS The Center for Oral & Maxillofacial Surgery 322 West 7th Avenue Spokane, WA 99204 509-624-2202
Daniel R. Cullum, DDS Implants Northwest 1859 North Lakewood Drive, Suite 101 Coeur d’Alene, ID 83814 208-667-5565
Kenji Willard Higuchi, DDS Drs. Higuchi and Skinner, P.S. 12509 East Mission Avenue, Suite 101 Spokane Valley, WA 99216 509-928-3600
Bryan W. McLelland, DDS Spokane Oral & Maxillofacial Surgery 12109 East Broadway Avenue, Building C Spokane Valley, WA 99206 509-926-7106
Mark C. Paxton, DDS Spokane Oral & Maxillofacial Surgery 12109 East Broadway Avenue, Building C Spokane Valley, WA 99206 509-926-7106
Daniel W. Skinner, DDS Drs. Higuchi and Skinner, P.S. 12509 East Mission Avenue, Suite 101 Spokane Valley, WA 99216 509-928-3600
Orthodontics

Michael Paul Chaffee, DDS Riverstone Orthodontics 2140 West Riverstone Drive, Suite 301 Coeur d’Alene, ID 83814 208-667-9212
Erik R. Curtis, DDS Curtis Orthodontics 215 West Canfield Avenue Coeur d’Alene, ID 83815 208-772-7272
Jacob DaBell, DDS DaBell Orthodontics 720 North Evergreen Road, Suite 101 Spokane Valley, WA 99216 509-921-1700
Paul L. Damon, DDS Damon Orthodontics 12406 East Mission Avenue Spokane, WA 99216 509-924-9860
Richard C. Ellingsen, DDS Ellingsen Paxton Orthodontics 12109 East Broadway Avenue, Suite B Spokane Valley, WA 99206 509-926-0570
Bret Johnson, DDS 755 East Holland Avenue Spokane, WA 99218 509-466-2666
Shannon L. Magnuson, DDS Magnuson Orthodontics 10121 North Nevada Street, Suite 201 Spokane, WA 99218 509-443-5597
Diane Stevens Paxton, DDS Ellingsen Paxton Orthodontics 12109 East Broadway Avenue, Suite B Spokane Valley, WA 99206 509-926-0570
Gerald S. Phipps, DDS Phipps Orthodontics 520 South Cowley Street, Suite 102A Spokane, WA 99202 509-838-3703
Scott William Ralph, DDS Liberty Lake Orthodontics 23505 E. Appleway Avenue, Suite 204 Liberty Lake, WA 99019 509-892-9284
Gerald E. Smith, DDS Smith Orthodontics 101 West Cascade Way, Suite 100 Spokane, WA 99208 509-467-6535
Pediatric Dentistry

Tom Dance, DDS Dentistry for Kids 1027 West Prairie Avenue Hayden, ID 83835 208-772-2202
Andrew Hrair Garabedian, DDS The Children’s Choice 418 East 30th Avenue Spokane, WA 99203 509-624-1182
Molly Gunsaulis, DDS Dentistry for Children 15404 East Springfield Avenue, Suite 102 Spokane Valley, WA 99037 509-922-1333
Christopher W. Herzog, DDS The Children’s Choice 418 East 30th Avenue Spokane, WA 99203 509-624-1182
Erin L. Johnson, DDS South Hill Pediatric Dentistry 2020 East 29th Avenue, Suite 130 Spokane, WA 99203 509-315-2090
Jason R. Moffitt, DDS Moffitt Children’s Dentistry 520 South Cowley Street, Suite 101 Spokane, WA 99202 509-838-1445
Charles Emerick Toillion, DDS The Children’s Choice 418 East 30th Avenue Spokane, WA 99203 509-624-1182
David Bruce Toillion, DDS The Children’s Choice 418 East 30th Avenue Spokane, WA 99203 509-624-1182
John M. Ukich, Sr., DDS Pediatric Dental Center of North Idaho 1717 Lincoln Way, Suite 205 Coeur d’Alene, ID 83814 208-667-3556
John R. Ukich, Jr., DDS Pediatric Dental Center of North Idaho 1717 Lincoln Way, Suite 205 Coeur d’Alene, ID 83814 208-667-3556

Periodontics

David W. Engen, DDS 9911 North Nevada, Suite 110 Spokane, WA 99218 509-326-4445
Anthony G. Giardino, DDS South Hill Periodontics 2700 South Southeast Boulevard, Suite 210 Spokane, WA 99223 509-536-7032
Lauralee Nygaard, DDS Lauralee Nygaard Periodontics 1005 North Evergreen Road, Suite 102 Spokane Valley, WA 99216 509-927-3272
Gary M. Shellerud, DDS 508 West 6th Avenue, Suite 208 Spokane, WA 99204 509-838-4321
Shaun M. Whitney, DDS Lake City Dental Specialties 1322 West Kathleen Avenue, Suite 2 Coeur d’Alene, ID 83815 208-664-7300

Prosthodontics

Earl M. Ness, DDS 823 West 7th Avenue Spokane, WA 99204 509-744-0916
Valuing Professional Practices in Divorce Actions
Professional practices include ownership interests of doctors, dentists, veterinarians, lawyers, pharmacists, physical therapists, accountants and chiropractors. Goodwill may exist for a professional person even though the goodwill is personal to the professional and is not readily marketable. “I can’t sell my practice” is not a defense to a goodwill valuation. While it may seem that qualified appraisers should be able to reasonably concur on values, as would be the case with a home appraisal, this is not typically the case. Much of the valuation process is entirely subjective. Five different methods of valuation have been approved by the Washington Supreme Court. The choice of valuation method by the respective appraiser can substantially impact the result. Similarly, the appraiser will
apply a “capitalization rate” in their valuation process. Also entirely subjective, a difference of just a percent or two can result in a substantial difference in the ultimate value derived. Frankly, it is not a difficult task to “skew” the true value of the practice simply through the subjective selection of valuation method and capitalization rates. Success in setting an appropriate value depends in large part on the skill of the attorney representing either the practitioner or their spouse. There are many traps for the unwary in professional practice and business valuations. Similarly, it is critical to use skilled appraisers. These appraisers are traditionally CPAs trained and certified in business valuations. These CPAs should also have extensive experience in prior court testimony. The goal is to present the Court with a fair, supportable, and appropriate value.
Addicus Publishing is pleased to announce the release of Divorce in Washington, a comprehensive guide to the divorce legal process. Available at the Amazon, Barnes & Noble, and Apple store websites. Also available on Kindle, Nook and iBooks, and at the Addicus Publishing website, addicusbooks.com. This 249-page guide is written in a user-friendly question-and-answer format by noted Spokane divorce attorney David J. Crouse.
David J. Crouse | (509) 624-1380 | crouselawgroup.com
Health Symposium
Feb 24, 2016 | 5 p.m. - 8 p.m. | Chateau Rive
WILL 2016 BE THE YEAR YOU GET SERIOUS ABOUT YOUR HEALTH AND WELLNESS? Join Bozzi Media as we partner with Rockwood Health Systems, and numerous health and wellness agencies, for a night of full-on access to some of the top health and wellness professionals in the area. In similar fashion to last year’s successful Women’s Health Symposium (a panel discussion with top docs, plus health and wellness vendors throughout the room), we are going bigger and better with an extended event honoring both men’s and women’s health. Grab your friends and join us as we discuss the top health concerns facing us today and how to avoid them to ensure long, healthy, vibrant lives. Tickets include healthy food and drink options.
eventbrite.com | Tickets are $20/person or $120/table of 8 Wednesday, Feb 24 | 5:00 p.m. until 8:00 p.m. Chateau Rive at the Flour Mill | 621 W Mallon | Spokane
Ticket or Vendor Inquiries: Contact Stephanie Regalado, stephanie@spokanecda.com or (509) 995-6016
HEALTH BEAT | 73 HEART HEALTH | 76 SUICIDE PREVENTION | 78 HOME FITNESS
Love Your Heart: Go Red For Women
by Francesca Minas, American Heart Association
HEALTH BEAT HEART HEALTH
should get to help identify serious health concerns before they become life threatening – such as heart disease. WellWoman:
• Manage your blood sugar. The American Heart Association’s recommendation for healthy blood glucose is <100 mg/dL. Blood sugar levels that are too high can lead to diabetes and over time damage the eyes, kidneys, nerves or heart.
• Get your blood pressure under control. A normal blood pressure is below 120/80. One in three adults has high blood pressure, and yet many people don’t even know they have it.
• Lower your cholesterol. The desired level for total cholesterol is less than 200 mg/dL.
Come visit –our– IV Lounge
• Stay active. The benefits of exercising for just 30 minutes a day are plentiful, from stress reduction to improved cholesterol numbers. Walking is the easiest way to begin exercising.
• Lose weight if you need to.
• Eat healthy. A diet rich in a variety of vegetables and fruits, lean proteins, healthy fats and whole grains is your first defense against the onset of high cholesterol, high blood pressure and heart disease. Limit sodium to less than 1,500 mg a day.
CALL NOW FOR WINTER ENERGY BOOST SPECIALS!
IV Cocktails to boost –energy– –immunity– –winter wellness–
801 W. 5th Ave, Suite 104 | Spokane, WA 99204 | 509.747.7066 | info@themetabolic-institute.com | For more information visit TheMetabolic-Institute.com.
…org/wearredday
Spokane Go Red For Women Luncheon, March 9. Join local women at the Spokane Convention Center in supporting the American Heart Association. Sponsored nationally by Macy’s and locally by Providence Health Care. Pre-event ticket sales only. http://SpokaneGoRedLuncheon.heart.org
NO JOINING FEE ($50 SAVINGS) CHENEY
615 4TH ST CHENEY, WA 99004 (509) 389-1146
SOUTH HILL
2903 EAST 25TH AVENUE SPOKANE, WA 99223 (509)325-0335
VALLEY
321 S. DISHMAN MICA RD. SPOKANE VALLEY, WA 99206 (509) 389-1146
NORTH
9107 NORTH COUNTRY HOMES BLVD. SPOKANE, WA 99218 (509)325-0335
JAZZERCISE.COM
HEALTH BEAT SUICIDE PREVENTION
WORK WE CAN ALL DO by Marny Lombard
“I CAN’T IMAGINE.” “HOW ARE YA’ TODAY?”
These phrases hang in my closet of painful memories. They come from the early days and months after my son, Sam, died by suicide. The first phrase hurts because, while true, it is profoundly isolating. The second phrase hurts because anyone who has lost a loved one to suicide has suffered vast internal injury, yet their colleagues, their grocery checkers and sometimes even their good friends can see no sign of damage. When Sam was a small boy, he went through what I now call the building blocks of depression: emotional sensitivity, perfectionism, lack of community connectedness, chronic sleep disturbance and bad dreams, mystery tummy pains, prickly personality. After being picked on by boys at school, he eventually retreated and decided he just didn’t need friends. He self-isolated. The cards dealt to my son included a Joker: the genetic card. My father had bipolar disorder, and other men in his family had bipolar or depression. I knew that Sam found many parts of life difficult as a child, but, I had no idea that this was early depression. No one screened for or asked about childhood depression either at school or in the doctor’s office. Today, 25 years later, this is slowly beginning to change - and must change further. The numbers of adolescents and young adults who consider and plan for suicide are shocking. In 2014, seven percent of Spokane County 12th graders reported attempting suicide; among 10th graders, 12 percent attempted suicide. Levels of depressive feelings were reported by 33 percent of 12th graders and 38 percent of 10th graders. This information is collected through the Washington State Healthy Youth Survey. Statewide, they surveyed 233,000 youth. In 2014, 42,773 Americans died by suicide, as reported by the U.S. Centers for Disease Control. This is more than twice the toll of homicide, and more deaths than from car accidents or breast cancer. 
Some estimates are that 90 percent of suicides are by people who have a mental illness - most often depression. As a young man, Sam met the world on his own terms. He was idiosyncratic in his thinking and his rituals: bright pink hair dyed each spring to celebrate the end of the school year, an ancient Chevy Luv which flew a pirate flag from a 14-foot flagpole in the truck’s bed. Immensely hard working at his studies in the architecture program at Montana State University (MSU), Sam lived for backcountry snowmobiling with a far-flung community of sled-heads. He hid his depression from all but his closest friend. I was 60 when Sam died. He was my only child, and had been candid with me all the way through his roller coaster ride and final descent. I remember the night he told me that he
had become convinced that if he took his life, I would survive. Some weeks later, I sat in on a counseling appointment with Sam on campus when he described his plan for suicide. He said then that he didn’t feel the need to carry out his plan. As parents, we constantly hope for and believe that the best will win out for our children. So, he stayed at MSU. He died in Bozeman the week before finals, his senior year. With Sam’s death, I lost everything. In those first weeks, I felt shock, distress, exhaustion, nausea and a great sense of vulnerability, as if I had aged overnight. Friends and coworkers showered me with support. At times, I also felt a walled-off sense of pity. I set about my grief work as a matter of life or death. The Survivors of Suicide Loss Support Group, offered weekly by Hospice of Spokane, became my lifeline. My wonderful therapist introduced me to EMDR, which sheltered me from the worst pain. With my background as a journalist and magazine editor, the task of educating myself about suicide and depression offered something close to a sense of normalcy. My learning goes on today, nearly three years later. One of the first pieces that clicked into place is that suicide is about profound distress and a wish to end that emotional pain. It is not a wish to die. A lot of research and education about mental health and suicide is under way today nationally, statewide and locally.
Feel like you’re cracking up? Let’s talk. Licensed and Experienced Mental Health Counseling Anxiety • Depression • Trauma
Cami Huysman, MA, LMHC (509) 202-2732
Detroit’s Henry Ford Health System in 2002 undertook to create “Perfect Depression Care.” Within four years they dropped their suicide rate of 88 per 100,000 by three-quarters, and then even further. This became known as the Zero Suicide Initiative, which is spreading state by state and health system by health system, across the nation. Group Health has adopted Zero Suicide. A few months after Sam’s death, I began to meet with Dr. Paul Quinnett, the founder of QPR (Question, Persuade, Refer), one of the leading training programs internationally, and Dr. John Osborn, a longtime VA physician in Spokane and a suicide survivor, whose 17-year-old nephew died by suicide. Out of those meetings came what is now Zero Suicide Inland Northwest. We are a grassroots group working to educate the greater region about how communities and health care providers can better identify, treat and manage individuals who are suicidal. While Sam’s death meant that I lost
everything, I also lost my natural sense of fear. This was liberating. One day, I was deep in reading the national strategies for suicide prevention, when I came to a section about media coverage of suicide. Here, the authors voiced the need for stories of recovery and a return of hope for those who once were headed toward suicide. The proverbial lightning bolt hit: I needed to leave my job at Gonzaga University, which I loved, and get to Seattle, where activity in mental health and suicide prevention would be flourishing. I had the skills to write those stories and to work in this field. Like that, I braced myself for a leap of faith. Eighteen months later, I am employed part-time for Forefront: Innovations in Suicide Prevention, based at the University of Washington’s School of Social Work. As a volunteer, I remain deeply involved with Zero Suicide Inland Northwest. I work so that we might save another mother’s child. Join me. This is work that we can all do.
“Tools for Preventing Suicide: The 2nd Annual Zero Suicide Inland Northwest Conference” will be held at Gonzaga University on Friday, March 11, with a training day to follow on Saturday, March 12. Here is the link: zero-suicide-inland-northwest-conference-and-training-day-registration-20270091389 The conference is free; a nominal fee will be charged for continuing education credits. Both the conference and training day have robust offerings for professionals and for families who have lost a loved one to suicide, those who worry about a friend or loved one, and anyone who wants to learn how to help save a life from suicide.
University Chiropractic Serving Spokane Valley Since 1977
Our Services:
Chiropractic Care, Massage Therapy, Physical Therapy, Nutritional Guidance
509-922-4458 | 303 S. University Rd, Spokane 99206
HEALTH BEAT HOME FITNESS PROGRAMS
TOP HOME FITNESS PROGRAMS by Justin Rundle
OVER THE PAST DECADE,
there’s been a massive shift in the growth of new home fitness programs. Many fitness enthusiasts are forgoing the expense and added commute of the gym, and refocusing their fitness efforts at home or anywhere a mobile device can play a workout video or open a PDF guide. If you’re an avid and dedicated gym-goer, then this review is not for you; however, if you enjoy home workouts or have any interest in trying a home fitness program, read on to discover the hottest home fitness programs. Before diving in, here’s a little background about my expertise within the home and travel fitness industry. I’ve been a coach and trainer for ten years and started developing my training style with my wife, Jessica Rundle, when P90X first appeared in infomercials. CrossFit was also becoming popular, and there were plenty of components we liked about both of these programs. There were also things we felt we could improve on. When we had the opportunity to train individuals and groups who had previously used these programs, our beliefs about them - both the pros and the cons - were confirmed. At this point in our careers, our program is based on nearly a decade of personal-training innovation and real-world feedback about what works for most busy people and their fitness goals.
THE TOP HOME FITNESS PROGRAMS REVIEWED: KAYLA ITSINES BIKINI BODY GUIDE (BBG) Developed by 24-year-old Australian fitness trainer Kayla Itsines, this guide consists of two online 12-week workout plans for the moderate price of $69.97 (per guide). Various pieces of exercise equipment are needed to complete the workouts. Kayla’s workout videos are just under 30 minutes in length and can be accessed through PDFs or through a new app. Once a plan is finished, there’s an option to upgrade to receive additional training plans.
PROS Good short-term solution for weight loss (not a sustainable program for long-term results), accommodating all dietary preferences. Geared towards a female audience.
CONS Workouts should be conducted in a gym or large room with a variety of equipment. Nutrition and meal plans are not a sustainable approach to long-term weight loss. Geared for people in their mid-twenties. No online support or real training guidance, meaning it is more of a gym-training plan than a home workout. Not beginner friendly.
The workout goes through short bursts of intense exercise, followed by longer intervals of rest and recovery. Geared for intermediate to advanced levels.
PROS No equipment needed, and it is great for temporary weight loss and muscle growth. It comes with a nutrition plan, fitness guide and workout calendar.
CONS Not beginner friendly, and due to the intensity, it is not joint friendly. Please note: Beachbody allows anyone who pays into the Beachbody Coach system to become a Beachbody coach or “trainer,” so buyer beware.
P90X BY BEACHBODY P90X is the series of popular training DVDs that launched the Beachbody brand and helped it become a multi-billion-dollar company. This bestselling 90-day home workout program by Tony Horton uses 12 routines based on “muscle confusion” to create transformative mix-and-match workouts. The 20-60 minute videos cover training techniques from yoga and Kenpo karate to plyometrics and core synergistics. This is all available for $140.
PROS Good choice for those looking for an intermediate to advanced fitness level workout. Nutrition plan, fitness guide and workout calendar included
CONS Requires more equipment than most people have in their homes (chin-up bar, resistance bands, dumbbells, yoga mat, etc.). Because the primary 12 workouts are very routine, this is a program designed for short-term results.
PROS Free downloadable ebook programs and meal plans available with hundreds of different workout routines. There are realistic trainers and goals.
CONS There is no offline access, and no personal guidance or feedback to help keep you accountable and motivated.
WORKOUT ANYWHERE Disclaimer: Workout Anywhere is the program that my wife, Jessica, and I
created together, based on our education, years in the field, experience and constant-improvement formula. Based on our personal as well as professional experiences, I believe we’ve created an incredible home workout and traveling fitness program, and I would be remiss not to include it in this list. I’m biased, though; it is difficult for me to see any cons about a program we have worked to create in order to provide the ultimate home workout experience, other than that it is still a growing community and not everyone knows about it. I truly believe this is the best, and I am open to hearing your feedback. Workout Anywhere is an affordable, comprehensive fitness program that allows one to master the benefits of bodyweight and minimal-equipment exercises and train, literally, anywhere. These are workouts you can do in your home, outside, while traveling and even in your office or cubicle. The workouts are efficient and effective, ranging from 8-30 minutes, with videos and guides. Family-friendly workouts offer flexible duration and intensity levels. This, along with online accountability/coaching, works well for busy people seeking sustainable, long-term results. There’s a free option as well as affordably priced memberships starting at $9.95 per month.
FINAL THOUGHTS
When looking into home fitness and travel-friendly training programs, it’s easy to let the media’s noise or hype for a popular program distract you from what you really need out of a home fitness program. Making sure that a program is constructed for all levels, and is capable of keeping you accountable, is part of the formula for developing a healthy lifestyle rather than a fitness phase. Just be sure to do your homework and ask questions about any program under consideration. If you need my guidance, you can ask me at WorkoutAnywhere.com. Cheers to your health! Justin Rundle is a Certified Personal Trainer with nine years of training experience. He holds a Bachelor’s degree from Whitworth University, and is the Mount Spokane High School Strength and Conditioning Coach, the Mt. Spokane Varsity Defensive Line Coach and the owner of WorkoutAnywhere.com (online personal training and dieting assistance).
LOSE WEIGHT
FEEL GREAT! Lose up to 1-2lbs/day With Our Medical Weight Loss Programs
Get started for only $99* (*A $202 value; restrictions apply)
LIFESTYLE CHANGE Lose up to 3-5 pounds per week • Complete Initial Medical Evaluation & Blood Work • FDA Approved Appetite Control Medication • Lipotropic Injections • Individual Weight Loss Counseling
FAST TRACK
Lose up to 1-2 pounds per day • Complete Initial Medical Evaluation & Blood Work • Pharmaceutical Grade Fat Burning Injections, which provide rapid, long lasting weight loss.
Call Today for your FREE Consultation! 30 YEARS EXPERIENCE
SPOKANE VALLEY 13318 E Sprague Avenue Spokane Valley, WA 99216 (509) 928-2406
COEUR D'ALENE 2201 N Government Way #D Coeur d'Alene, ID 83814 (208) 665-9951
CHECK OUT OUR WEBSITE FOR VALUABLE COUPONS!
Welcome to the
Blog Cabin by Sarah Hauge | photos by Jill Klinke at Tour Factory
Though most people think of summer homes overlooking the lake, the Blog Cabin is built for year-round living.
A climate-controlled, airlocked foyer opens into the spacious open-concept floor plan of the home, with lots of light.
‘client’ every week and ensure the room(s) being filmed a particular week were ready to go.” “It was also unusual to be assisting, albeit
A walnut dining room table topped with glass encases vintage marquee lettering that is a nod to the home’s location on Lake Coeur d’Alene.
6295 W. Harbor Drive | $849,500 YOU ONLY HAVE 18 SUMMERS WITH YOUR FAMILY ON THE WATER!
Mark Hensley | 509-998-7200
1102 S Windsong Lane | $800,000 View
918 S Liberty Dr | $669,000
Build your Exceptional Home here on 149’ of western facing sandy beach in Liberty Lake, WA.
Stunning Craftsman 3708 sf with beach access & beautiful view of Liberty Lake. Back parking with entrance to main level.
Mark Hensley | 509-998-7200
Jodi Hoffman | 509-220-3496
PENDING
5416 S Quail Ridge Circle | $650,000
5809 S Windstar St | $575,000
Views of city lights and surrounding area! Quality-built home in gated community with huge cook’s kitchen.
The finest views in Spokane in the highly desirable Quail Ridge Gated Community on Spokane’s South Hill.
Stunning Paras Custom Home in Eagle Ridge. Private half acre lot with sweeping views. This elegant home backs up to a private greenbelt.
Peggy McCartney | 509-993-0186
Kelli Johnson | 509-990-5219
Joanne Pettit | 509-868-4383
219 N Legacy Ridge Dr | $550,000
322 E High Drive | $1,195,000 Estate home with sweeping southern views on nearly 1 acre. Casual elegance at its finest, plus a 5 car garage!
1960 N Forest Ridge St | $549,000
528 E 14th Ave | $1,290,000
Magnificent Custom Home with Fully Finished Basement, 3 Car Tandem Garage on nearly 1/2 Acre. In Rocky Hill backing to Greenbelt.
1910 Tudor Revival on the National Historic Register. Just minutes from downtown on 1 acre of park-like grounds.
Joanne Pettit |509-868-4383
Diane Kooy | 509-435-8376
Joe Dinnison | 509-869-4509
Where building relationships is just as important as the projects we build
REMODEL • NEW CONSTRUCTION • DESIGN & BUILD
KITCHENS • BATHROOMS • BASEMENTS • DECKS • ADDITIONS • NEW HOMES
Contact Dave Covillo for your FREE In-Home Consultation
(509) 869-7409 WA License # RENOVDC9600B ID License # RCE-14413 Licensed • Bonded • Insured
Build with Character | Site Responsive Design | High-Performance Resource Efficiency | Build What You Need
The kitchen is full of welcome contrasts between light and dark, mirroring the light-dark interplay of the home’s exterior.
Build with Character Craft a design that reflects who you are.
NEW CONSTRUCTION | REMODELS STRAW BALE | PASSIVE SOLAR
621 South 'F' Street Spokane, WA 99224 tel.: (509) 747-7647 fax: (509) 747-5979 tom@tomangell.com
Creating innovative and healthy solutions for your home, business, and community projects.
as patio pavers, a coffee table top and in the indoor waterfall. “It’s so hard to choose a coolest DIY,” says Eastman. “If I had to pick just one it would be the basalt rock art in the master bathroom. The basalt was gathered from a natural flow uphill from the house.” Using vintage and handmade items
Upper left: the master bedroom loft. Upper right: The length of the lower level. Bottom: The downstairs entertainment room.
railing around the living and dining rooms opens to the lower level entertainment room. The two-story windows provide light and views to both levels of the home.
HANSON • CARLEN Architecture & Construction
O LD WO R LD C R A F T S M EN
509.838.0424
dark, mirroring the light-dark interplay of the home’s exterior. Light-colored, gray-veined countertops find their counterpoint in the darker engineered quartz backsplash that extends to the ceiling behind the sink, and the cabinetry includes both white upper cabinets and dark-stained lower cabinetry. The antique Hoosier cabinet
Everyone has an artist hidden inside. Our goal is to design the perfect party for you! Birthday Parties, Bridal Shower, Ladies Night, Baby Shower, or any other type of party!
509-747-6171 714 E. Sprague Spokane, WA 99202 clayconnection.net
Mention this ad for a two-for-one workshop! All Skill Levels & Ages | Supplies Included
Above: An outdoor firepit is a perfect gathering spot, especially on a snowy day. Below: The tucked away library nook with built in shelving and flip-down table is an indoor option for sitting by a fire.
14209 N Wandermere Estates Ln MLS #201528072 Price: $539,900 2007 Wandermere Estates daylight walkout rancher with fabulous views over the golf course on a private cul-de-sac lot. 4 bedrooms, 4 baths + 3-car attached garage. Knotty maple doors, trim & cabinets throughout. Custom kitchen with granite, SS appliances, double ovens, disposal, compactor & charming butler’s pantry! Large main floor master with private door to deck, custom tiled walk-in shower, his-and-her walk-in closets, 2 vanities & garden tub! ALL the extras plus BRAND NEW EXTERIOR PAINT! 55+ PUD community. Move-in ready.
Marie Pence
509.230.8457 mariepencerealtor@gmail.com
CREATING YOUR LIFESTYLE
The master bedroom and loft, and master bath.
with Monarch Custom Builders
The Ridgecrest model at The Craftsman at Meadow Ridge in Post Falls Large lots w/room for a shop, custom built homes starting at $275K
208.772.9333
monarchcustomhomes.com
open onto the wrap-around deck. The spacious master bathroom combines multiple types of tile, including copper penny
TROVATO INTERIORS
Home Furnishings Boutique
NORWALK FURNITURE SALE: 30% OFF
“Custom designed, hand built in the USA”
BRAMBLE FURNITURE Original Paintings | Aidan Gray | Import Collection European Linens | Antique Reproductions
18 S Union Rd, Spokane Valley 99206
509-217-6646 | find us on facebook
partner with bozzi media events
For more info call 509-533-5350 or email sales@bozzimedia.com
If You’ve Got a Real Mess... Call the Best!
Services include:
Professional | Commercial | Residential
509.216.1218
Photo courtesy of Clydesdale Frames
Frame going up (below) Finished frame (right) Photo courtesy of Wind River Timber Frames
We use Douglas-fir, Western Larch, Engelmann Spruce, Hemlock, Grand Fir, Western Red Cedar, White Pine, and Lodgepole Pine.
We are a custom manufacturer of high grade rough and surfaced timbers. Our mill specializes in supplying home and commercial builders and we take pride in our ability to meet highly specified orders.
Whiteman Lumber | 877-682-4602 | Cataldo, Idaho |
Melissa S. Williams LUTCF, CLTC, President
509-789-1818
Melissa@starfinc.com
Should I Stay or Should I Go?
This is a common refrain that I am hearing from many clients lately. When market volatility strikes, it is very easy to become concerned, and turning on the TV or radio news can cause even more confusion and angst. It seems as though each so-called expert has different advice on how we should invest, with different reasons why the market is falling. No wonder we are afraid; after all, we are still smarting from the financial crisis of 2008 and the subsequent decline of the stock market. So is that happening again? Well, I don’t know, but I do know that during periods of market declines, it may be prudent to understand the historic consequences of selling our stocks at a low. Many people want to liquidate their holdings and run to a “safer” investment until it is “safe” to return to the market. Unfortunately, the devastating result of this behavior is participating in the downside while being out of the market during the rise. This is a classic example of investors selling low and buying high, which is of course the opposite of their intentions. While no two markets are the same, historical evidence suggests that using your emotions as a guide could cost you dearly. Selling out of fear in a falling market and reentering when the markets rebound can leave investors with a much lower account value than if they had waited it out. Some investors may find it difficult to remain optimistic as the market declines. I certainly understand this. The key is not to panic, and to understand that there is hope on the horizon. If we stay calm, remain steady and use common sense, we may be able to reap the rewards of an increasing account value when the market rebounds again, as it has in the past. Securities and Advisory Services offered through Centaurus Financial, Inc., a registered Broker Dealer, member FINRA and SIPC. Star Financial and Insurance Services, Inc. and Centaurus Financial, Inc. are not affiliated companies.
Subscribe Online or give us a call bozzimedia.com ● 509-533-5350
lined fire pit on the lower level patio is surrounded with wood and metal benches;
TeresaJaynes listing by
5 BEDROOM & 4 BATH Spacious 5 Bedroom, 4 bathroom home situated at the end of a quiet cul-de-sac in Mead School District. Oversized beautiful park-like fenced yard w/sprinkler system & a fantastic sports court that begs to be played on. Features include formal living & dining. Granite counters, kitchen island & plenty of room to entertain. Large Master suite with walk-in closet, additional living space for guests in the lower level & freshly painted exterior with new waterfall feature off of the covered front porch.
sculptural heaters contribute more warmth on chilly evenings. With beautiful spaces inside and out for living, relaxing and retreating, it’s no wonder this one-in-a-million Northwest home has drawn the attention of millions across the country. If you’d like to learn more about this property, contact Mark Hensley of John L Scott Real Estate (509-998-7200 or markhensley@johnlscott.com), or learn more at
Teresa Jaynes, Broker 509 714-5284
tjaynes@cbspokane.net
CREDITS: DIY Network Design and Build Manager: Dylan Eastman Construction Management: Edwards Smith Construction Structural Engineer: Brian Waddell
HOME STYLES TILE 101 TILE GLOSSARY While there are a variety of materials available for countertop installations, such as quartz, stainless steel, solid surface, wood, concrete, laminate, glass, etc., this glossary will only include natural stone finishes and other tile format materials.
Tile 101
What’s the Difference? by Robin Bishop
ONE OF MY favorite DIY projects is installing tile. Everything from small tile mosaics to large-format flooring; I enjoy it all, although grouting—meh. After determining the vision for your tile project, the next step is selecting the best tile for the job. Here are a few questions that will help you determine which tile will get it done.
1. Will the tile be installed on flooring, backsplash or wall, or another specific-use area?
2. If tile is used in a flooring application, will it be used in a high-traffic area?
3. Is the project indoor or outdoor?
4. Will the tile be used in wet areas? If so, how wet?
5. What is the “look” you want to achieve with your project?
6. What is your budget?
The beauty of tile is its flexibility. There are a lot of options, so you can usually find a product that will achieve the look you want without breaking the bank; however, if your tastes run to the unique or exotic, you may not want to compromise on the product you select. The best place to start is educating yourself on the different types of tile and what makes some better than others in certain applications.
Ceramic

Ceramic tile is the most common tile used in the U.S. It’s made from clay and then heated (fired). The glaze, which is added after firing, is what determines the colors; the possibilities are endless. It does come unglazed, as well. With its main ingredients being clay, glass and sand, it is easily recycled, making it an environmentally friendly option. Ceramic tile is easy to clean, low maintenance, cost-effective, and endures its share of moisture gracefully, making it a versatile tile in almost any application. You will find the hottest trends, wood-look, large format (12”x24”), and plank tiles, are being made from ceramic and porcelain due to the versatility of glazing. Nicole Johnson, showroom manager at United Tile, shared, “Many people have the misconception that a small space requires small tile. Nothing can be further from the truth! In actuality, the larger format eliminates busy grout joints, which visually expands the space, making it feel clean and open. The varying widths and lengths of the plank options give an organic quality that wouldn’t be as prevalent in a square format.”

Porcelain

Porcelain tile is a type of ceramic tile. The main difference is that the tile is fired at a higher temperature, making it denser and more water and stain resistant. This makes porcelain tile a better solution for outdoor projects or indoor high traffic and wet areas. Due to its increased density, porcelain is harder to cut, which can result in higher labor and installation costs. It is available in matte, unglazed, or high gloss, and the price has recently come more in line with that of ceramic. Many think that porcelain and ceramic tile are interchangeable, and in many applications they are, but Carolyn Gallion, co-owner of R. W. Gallion, offers a knowledgeable warning. “Many signs in flooring stores
will emphasize the word porcelain on their signage, but there is a distinction you should make before purchasing. There are many tiles sold as porcelain glazed. This is a ceramic tile with a porcelain glaze (the color) over the surface. This may lead to purchasing product that is not a true porcelain tile.” So if you are looking for a true porcelain, make sure to double check.
Quarry

Quarry tile is another type of ceramic tile. It is a thicker, unglazed version that is durable, freeze-resistant, chip and scratch resistant, and impervious to most issues, making it a natural option for commercial and industrial applications. Because it comes unglazed, the color selections are more natural reds, oranges, grays, browns and a few more. This, however, also means you will need to seal quarry tile after installation, especially when used in high moisture areas. Quarry tile has a natural texture, so it is less slippery when wet. If you choose to leave it unsealed, be aware of the risk of mold and damage from overexposure.

Marble

When it comes to timeless elegance, you can’t beat marble tile. Cool, rich and elegant, it distinguishes itself among tile options with a variety of colors from grey to cream-colored, with contrasting veins that deliver a unique appearance. Marble tile can have multiple finishes, from polished to honed, and brushed to tumbled. Whether you combine similar or contrasting tones, this natural material will instantly elevate the image of any room.

Granite

One of the most popular countertop options in recent years is granite, an extremely versatile stone created when magma cools and solidifies. This high-pressure creation results in a scratch, stain and acid resistant stone that is also naturally antibacterial. While the signature look is black with natural flecks at the surface, it does come in a wide variety of colors and patterns, and can even have occasional veins. Because of its durability it is a great candidate for indoor and outdoor installations of just about any kind.
Travertine

A naturally beige stone, travertine is a type of limestone, a byproduct of natural mineral springs. Its random patterns and color variations are created by the different mineral combinations, giving it a rustic and natural appearance. It is a porous stone, so ensuring that it is well sealed even prior to grouting will keep its natural colors and save it from staining. Once sealed, though, it does very well in wet applications such as kitchen backsplashes and shower walls. Travertine comes in a variety of finishes, and the polished finish can make a more affordable substitute for marble.

Quartzite

Don’t let the elegant appearance of quartzite fool you. It is amazingly durable. It is created from sandstone and other minerals under pressure and heat, resulting in a unique and luxurious looking stone suitable for indoor and outdoor installations. Quartzite has large striations and variations, making it a nice focal piece. It’s available in a number of colors and finishes.
Onyx

Well known for its milky white to black pearl-like appearance, onyx is best used as the jewel of your project. It has natural striations in transparent stone, but it is softer than granite or quartzite. This makes it more suitable for indoor, accent and light traffic applications. Because of its jewel-like appearance, you will see it most commonly used in mosaics.

Sandstone/Limestone

If you’ve ever touched sandstone, you know it feels like it looks: it has a natural sand-like grain with the texture you would expect, and usually an even color consistency in red/pink and yellow shades. Sandstone is great for high traffic areas and wall applications. Limestone is fairly simple in color and texture. It is fine grained with fewer veins and comes in simple earth tones.
REAL ESTATE SNOW REMOVAL
Homeowners liable for snow, ice control

THE SNOW may have given us a slight reprieve with the mid-January warm up, but as anyone living in the Northwest knows, warm, mild temperatures one day can easily be replaced with a snowy blast the next. So while we are still in the firm grasp of winter, it is important to be aware of how snow and ice on our sidewalks can put a responsibility on our shoulders as homeowners. During the winter months, it’s common to see shopping centers and business owners out and about clearing snow and ice from pathways, parking spaces and entrances. But this isn’t just good business to help customers get in the door - it’s also a liability issue should someone slip and fall. … icing, to try and prevent such injuries and the resulting emergency room visits. In the end, the person who is most likely to slip and fall is the homeowner themselves. — BPT
Nancy Wynia Associate Broker ABR, CNE, CRS, GRI 800-403-1970 509-990-2742 nwynia@windermere.com
OLD WORLD CHARM
831 E. ROCKWOOD BLVD.
ARROWHEAD TRADITIONAL
340 W. WILSON AVENUE
SUNSETS & STARGAZING
9423 S. LABRADOR LANE
Magnificent 1913 2-story Tudor Rockwood Mansion. New custom cabinetry complements the original woodwork. Grand formal library boasts inglenook FP. Epicurean island kitchen features rainforest slab marble. Luxurious master suite retreat with private deck and a stunning 2nd master suite both on upper level. Olmsted Bros. inspired gardens w/in-ground pool & tennis court. 5 Bedrooms, 6 Baths $1,492,000
Exceptional Two-Story features custom detailing & upgrades throughout. Open floor plan. Spacious formal living room with wall of windows. Cook's island kitchen with eating area adjoins family room. Luxurious master suite includes garden tub & private deck. Upper level boasts 4 total bedrooms. Finished walkout lower level. Oversized 3 car garage. Friendly deer neighbors & river views! 6 Bedrooms, 4 Baths $460,000
Over 10 panoramic view acres. Elegant formal living room with library alcove. Formal dining room with built-in cherry buffet. European kitchen features gas range, hardwood plank floors, adjoining sun room & family room with gas fireplace. Walkout lower level boasts family room w/gas fireplace, kitchenette with gas range, theater room. Outdoor shop with indoor & RV parking. Special solar panel with grid feedback. 4 Bedrooms, 4 Baths $450,000
CLIFF PARK TUDOR
BROWNE'S MOUNTAIN RANCHER
BETTER THAN NEW
SOLD: 523 W. SUMNER AVE.
SOLD: 5002 E. GLENNAIRE DR.
SOLD: 902 W. WESTERA CT.
Magnificent estate sited on enchanting garden filled double lot in historic Cliff Park. Stunning old world charm features beamed ceilings & gleaming hardwoods. Renovations include kitchen island w/ cherry cabinets & granite counters. Elegant living & formal dining room perfect for entertaining. Master bedroom boasts imported chandelier. Carson not included. 3 Bedrooms, 4 Baths $450,000
Spectacular Views from this gorgeous one-story home. Formal living and dining rooms. Cook's kitchen boasts gas range, eating bar, walk-in pantry & skylight. Family room with gas fireplace opens to covered deck. Master suite with double sink vanity, jetted tub and double closets. Parklike yard. Newer roof. New exterior paint. 4 Bedrooms, 3 Baths $325,000
Gorgeous George Paras Craftsman! Elegant Shabby Chic interior with designer tones throughout. Open floor plan features great room w/soaring ceilings & gas FP. Cook's kitchen boasts granite countertops & upgraded stainless steel appliances. Lux master suite with double sinks & walk-in closet. Laundry room & extra storage. Fabulous patio & landscaping. Fenced backyard. 3 Bedrooms, 2 Baths $314,500
WEST PLAINS PARCEL
WEST PLAINS COMMERCIAL
GREAT LOCATION
RICHLAND/FRUITVALE RD
Rare find! Close-in acreage zoned LDR - low density residential - with The Fairways golf course views. Easy access to freeway. Adjoins West Terrace Heights. Bring your builders! 7.20 Acres $274,500
13008 W. 21ST AVENUE
Airway Heights office set-up located on nearly an acre. Office break room with mini-kitchen, map room and large storage area. Chain link fenced. Public water. Convenient location. Contract terms available. $179,000
406 E. 7TH AVE.
Mint Condition Duplex close to hospitals & downtown. Each updated unit features 2 bedrooms, full bath, nostalgic kitchen, living room, dining area and stacking washer & dryer. Recent updating includes new vinyl siding, new windows, new carpeting, updated bathrooms and new side fence. 4 parking spaces. Fenced backyard. Great tenants. 4 Bedrooms, 2 Baths $135,000
View complete virtual tours at
METRO TALK CLIMATE
Climate Action or Climate Change Fatigue?
photo by Makenna Haeder
by Paul K. Haeder
Facing uncertainty, the Inland Empire needs more than a global warming bucket list
Ice, …
“Who are we in our arrogance to think we can stop that from happening here?”
AUTOMOTIVE RECALL
What’s going on with automobile manufacturers?
by David Vahala

Generally speaking, automobiles are the most expensive investment a person will make in their life, other than purchasing a home. Like a home, people expect the builder, or the manufacturer of their car, to guarantee their product is what was represented at the time of sale: that there is no deception in how it was built or performs, that the home or vehicle will stand the test of time with regular use, and that it is safe for its occupants. When you purchase a home, there are laws protecting the buyer, as there also are with automobiles. So what has been going on in the automotive industry over the last several years - numerous safety recalls - has been top of mind with the media, government transportation agencies, auto safety organizations, Wall Street and car owners - as it should be.

Last fall, one of the best known (and most popular) automotive brands in the world, Volkswagen, suffered a public relations and financial nightmare when it admitted to, “willingly cheating the government, customers and the environment,” in the words of Consumer Reports magazine. Without getting too technical, Volkswagen engineers programmed their diesel
engine’s Engine Control Unit (ECU), a computer that manages engine performance and fuel usage, to operate far more efficiently when cars were tested at vehicle emissions stations than they operate in day-to-day driving. In the U.S., 482,000 2.0-liter diesel Volkswagens and Audis from model years 2009 through 2015 had ECUs programmed to pass emissions tests and then revert to an operational mode that emitted significantly more pollutants. Worldwide, nearly 11 million vehicles were affected.

In September, the EPA issued a notice of violation to Volkswagen for failure to comply with Clean Air Act regulations. Volkswagen terminated “a small group of engineers” and a few executives while the value of the company, based on its stock price, was literally cut in half. Executive management disavowed any knowledge of the deception; however, key changes were made at the top, with the CEO of Porsche stepping in to lead Volkswagen while the previous CEO “retired.” New Volkswagen sales plummeted as the EPA placed a “stop-sale” on 2015 diesel models while continuing an investigation into the illegal strategy and who approved it. Volkswagen withdrew the certification application for vehicles equipped with the 2.0-liter diesel engine for the 2016 model year, effectively cancelling this year’s new models. Worldwide, all the vehicles must be recalled and repaired to bring each car’s emissions systems into compliance with pollution regulations.

Why would Volkswagen do this? For profit. The company rushed products to market (diesel engines) that weren’t fully developed, selling millions of diesel cars during years when gas prices were spiking, starting in 2008 with the world recession. The likely end result for Volkswagen is one of the most complex and costly fixes in automotive history. Adding in fines by the EPA of $37,500 for each of the 482,000 cars that are affected, or $18 billion, and settlements with the U.S.
Justice Department, state authorities and dozens of countries in Europe, the cost could exceed $34 billion.

While Volkswagen may go down in history with their epic recall, some say at least their malfeasance didn’t cause any deaths. Several automobile manufacturers and parts suppliers have had problems that caused fatalities and injuries. In February 2014, General Motors recalled nearly 800,000 small cars due to faulty ignition switches,
which could shut off the engine and prevent airbags from inflating. The company continued to recall more cars over the next several months, leading to nearly 30 million cars recalled worldwide. General Motors also paid compensation for 124 deaths. General Motors had known about the defect in ignition switches for at least a decade before the recall was issued. They agreed to forfeit $900 million to the United States government as part of a deferred prosecution, and are still litigating personal lawsuits.

Perhaps the worst automotive recall case, based on total recalls, is from the supplier of airbag inflators for 19.2 million vehicles in the U.S., and about 40 million worldwide: Takata Corporation. After first trying to minimize the significance of its malfunctioning parts, then dragging its feet, it now is headed for the most recalls in automotive history. Nine people have died and more than 100 have been injured in crashes worldwide from airbags exploding, blowing apart a canister and discharging metal fragments into vehicle occupants. Many of the deaths and injuries occurred in low speed crashes. Last November, the National Highway Traffic Safety Administration (NHTSA) imposed the largest civil penalty in automotive history - $200 million - on Takata for its violations of the Motor Vehicle Safety Act. Vehicles produced by the following manufacturers are impacted:

• Acura
• BMW
• Chevrolet
• Chrysler
• Daimler (Mercedes) Trucks and Vans
• Dodge
• Ford
• GMC
• Honda
• Infiniti
• Mazda
• Mitsubishi
• Nissan
• SAAB
• Subaru
• Toyota
As you can see, the list represents a significant number of automakers. There are 44 Takata recalls to date. Is your vehicle included? It is a valid question that many
drivers are asking. The NHTSA provides online resources to help owners identify if they have an automobile with a pending recall, as well as providing other important information:
The Takata recall hits close to home for me. My 2008 Chrysler 300 has two pending recalls on file. One is for a faulty ignition switch and the other is for the Takata airbags. If you’ve never seen what a National Highway Traffic Safety Administration recall looks like, here’s an example:

Year: 2008 Make: CHRYSLER Model: 300
Number of Open Recalls: 2

NHTSA Recall Number: 14V-567
Recall Date: November 10, 2014
Manufacturer Recall Number: P57
SUMMARY: SOME VEHICLES MAY EXPERIENCE A FREQUENCY OPERATED BUTTON IGNITION KEY (FOBIK) THAT DOES NOT FULLY RETURN TO THE ON POSITION AFTER ROTATION TO THE START POSITION, BUT REMAINS BETWEEN THE START AND ON POSITIONS.
SAFETY RISK: VEHICLE MAY LOSE ELECTRICAL FUNCTIONS. THIS CONDITION MAY LEAD TO AN UNINTENDED CHANGE IN IGNITION SWITCH POSITION, RESULTING IN THE LOSS OF ENGINE POWER, POWER STEERING, AND BRAKING ASSIST, WHICH MAY INCREASE THE RISK OF CRASH, AND DISABLE THE AIRBAGS.
REMEDY: CHRYSLER WILL REPLACE THE WIRELESS IGNITION NODE AND PROVIDE A NEW SET OF KEY FOBIKS.
RECALL STATUS: Recall INCOMPLETE
MANUFACTURER NOTES: THIS RECALL DATA LAST REFRESHED: Dec 18, 2015

NHTSA Recall Number: 15V-313
Recall Date: May 29, 2015
Manufacturer Recall Number: R25
SUMMARY: THE DRIVER AIRBAG INFLATOR HOUSING MAY RUPTURE, DUE TO EXCESSIVE INTERNAL PRESSURE, DURING NORMAL AIRBAG DEPLOYMENT EVENTS. THIS CONDITION IS MORE LIKELY TO OCCUR IF THE VEHICLE HAS BEEN EXPOSED TO HIGH LEVELS OF ABSOLUTE HUMIDITY FOR EXTENDED PERIODS OF TIME.
SAFETY RISK: AN INFLATOR RUPTURE, DURING AIRBAG DEPLOYMENT EVENTS, COULD RESULT IN METAL FRAGMENTS STRIKING AND POTENTIALLY SERIOUSLY INJURING THE VEHICLE OCCUPANT(S).
REMEDY: THE DRIVER AIRBAG INFLATOR MUST BE REPLACED.
RECALL STATUS: Recall INCOMPLETE
MANUFACTURER NOTES: THIS RECALL DATA LAST REFRESHED: Dec 18, 2015

Besides the VIN search tool, NHTSA also provides additional safety information based on a vehicle’s make, model and model year. My own experience with these recalls, working with a Chrysler dealer at which I
had past warranty work done, has led me to abandon them and move to another dealer. At least the service writers at Barton Auto Group were familiar with the recalls and the actions Chrysler was taking. Their customer service has been good, including providing me with a temporary fix for the ignition recall, sharing contact information for Chrysler Corporation and explaining the general process for getting the vehicle repaired. Manufacturers may hamstring dealers, but handling recalls is a basic process that dealerships understand, and one may be better at it than another, as I found.

Meanwhile, NHTSA and the U.S. Department of Transportation continue pressing manufacturers about their recalls. Following a public hearing last summer on Chrysler’s handling of 23 vehicle safety recalls involving more than 11 million defective vehicles, Fiat Chrysler (FCA) acknowledged in a consent order violations of the Motor Vehicle Safety Act in three areas: effective and timely recall remedies, notification to vehicle owners and dealers, and notifications to NHTSA. To remedy its failures, FCA agreed to repair vehicles with safety defects, purchase some defective vehicles back from owners, and pay a $105 million civil penalty.

In the latest chapter of the recall for my car, I learned that on December 8, 2015, NHTSA issued an amendment to its July 24 consent order with FCA acknowledging significant failures in early warning reporting dating to the beginning of the requirements in 2003. The amendment requires FCA to pay $70 million in additional civil penalties.

All the millions in fines Takata and FCA are paying aren’t helping owners yet. Neither of the recalls on my car has been addressed; however, I don’t drive it during winter, so it is not as pressing a safety concern for me. There’s time to continue pressing Chrysler and work with Barton Auto Group before we need to drive the car again in spring. Persistence will be - and is - key! Happy Motoring!
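The penalty figures quoted in this article can be sanity-checked with a quick back-of-the-envelope script. This is only a sketch using the article's own numbers (482,000 affected cars, the $37,500 per-car Clean Air Act penalty, and FCA's $105 million and $70 million NHTSA penalties); it makes no claims beyond that arithmetic.

```python
# Back-of-the-envelope check of the penalty figures quoted in the article.
# All inputs are the article's own numbers; totals are approximate.

vw_cars_us = 482_000        # U.S. diesel cars named in the EPA's notice of violation
epa_fine_per_car = 37_500   # maximum Clean Air Act civil penalty per vehicle

max_epa_exposure = vw_cars_us * epa_fine_per_car
print(f"Maximum EPA exposure: ${max_epa_exposure:,}")   # roughly $18 billion, as stated

# FCA's NHTSA penalties: the July consent order plus the December amendment
fca_penalties = 105_000_000 + 70_000_000
print(f"FCA civil penalties: ${fca_penalties:,}")       # $175 million combined
```

The product comes out just over $18 billion, which matches the article's rounded figure; the $34 billion estimate layers settlements on top of that maximum statutory exposure.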
David Vahala is a Certified Car Guy, having owned 28 vehicles so far (but who’s counting!). He owns 944 Automotive, an auto detail and resale business, and works part time as an auctioneer assistant at Dealers Auto Auction Northwest. He enjoys driving his two Porsches, a 1988 944 and a 2000 Boxster.
LOCAL CUISINE

128 RESTAURANT REVIEWS | 136 SIGNATURE DISH | 138 DINING GUIDE | 144 LIQUID LIBATIONS

EAT WITH LOVE
A Guide to Memorable and Meaningful Wedding Food
by Cara Strickland
photo by Sylvia Fountaine
Last fall, I was at a very tasteful wedding, the date of one of the groom’s best friends. We had gone through the buffet line, filling our plates with a little of everything. There were two kinds of meat being carved, a vat of mashed potatoes, creamy linguini and assorted salads and rolls. After a few bites, back at our table, my date, who loves take and bake pizza more than anyone I’ve ever met, looked at me and said: “It’s kind of like a glorified Old Country Buffet, isn’t it?”
photo by Gary Peterson Photography
In the whirlwind of flowers, venue, centerpieces and the dress, the food might not be the sexiest of wedding elements. Still, it’s likely to be the part of the wedding that guests engage with the most (perhaps a tie with the bar). As I spoke with a variety of wedding guests, it became clear that they remembered the food, for better or worse. Although any part of a wedding can be spendy, a memorable meal need not be any more expensive than the boring meal I ate last fall (in fact, I’m willing to bet that many of the alternatives I’m suggesting will save you money).

Give them a pizza your mind

So many people talked about pizza that I decided it needed its own section. One couple ordered pizzas from all their favorite places to be delivered piping hot to the reception. They got a call from their credit card company, wondering why they needed quite so much food. Answer: because you can never have too much pizza! Another guest told me about a wedding she attended at a pizzeria in Brazil, which served an entire meal themed around pizza, from tiny pizza appetizers to caramelized banana dessert pizzas. This is definitely a unique way to eat what you love. Laura Carey of Spokane’s Veraci Pizza told me that they’ve catered weddings all over with their pizza food truck. Now that they also have their brick and mortar location in Kendall Yards, this spot works well for couple taste tests ahead of the wedding. Scott Cook of Two Cooks with Love told me that one of the most memorable weddings he catered included cooking pizza on a barbecue, since the venue didn’t have a kitchen. Not only was the pizza delicious, but he was also able to cook in front of a receptive crowd.

Just Desserts

When I asked people about the most memorable foods they’ve eaten at weddings, a huge percentage immediately began to talk about the dessert. Here’s a big hint to the trends though: no one mentioned cake! One person remembered a wedding that took the desserts to another level, making them the entirety of what was served. The wedding was after lunch, leaving plenty of time for the guests to celebrate and be home for dinner. This type of feast might work well at an after-dinner wedding, too. If you’re planning to make dessert a highlight, here are some ideas the guests I spoke to were still talking about:

• Doughnuts, either mini, made on the spot, or a variety collected from favorite places.
• A s’mores bar, complete with fire pit.
• Cookies and milk.
• Cheese wheels instead of cake. The bride and groom froze the leftovers to enjoy throughout the year.
• A collection of pies, purchased or made by friends.
• A candy bar. No, not just an individual candy bar; rather, an entire bar of candy which the bride bought in bulk and set off with fun containers.
• A chocolate fountain with fresh fruit.
• Pumpkin cheesecake. This is ideal for a fall wedding, but it is your day, so if you love pumpkin cheesecake and your wedding is in May, well, go for it!
Keep It Simple
Sometimes it is the little joys and simplest of foods that universally please our palates. I can’t tell you how many people smiled as they told me about the macaroni and cheese bar they enjoyed at a wedding they’d attended. One woman told me that she’d loved the chicken salad sandwiches served on freshly baked croissants. A bride spoke fondly of her own wedding reception which featured lovingly crafted soup. Like love itself, sometimes it’s best to stick to the basics.
Think Outside the Box
You have so many options for your wedding meal, and traditional caterers are just the beginning. Most restaurants are willing to cater an event. This is a fun option because you can opt to serve meaningful dishes, such as the very dish you shared on your first date. Potluck weddings have become popular in the last
few years. This can be a fun idea; however, if you’re planning to attempt one of these, be sure that you have options for keeping hot and cold food at safe holding temperatures. Nobody wants to remember your wedding as the source of his or her food poisoning! Maybe you have a chef friend, or a favorite deli you’d like to support. If so, be sure to ask about volume discounts. Before you get too involved in the planning, however, make sure you’re in communication with your venue. Some places require you to use an approved caterer. You should also ask if you need to meet an alcohol minimum, so you can plan that portion of the feast.
The Most Important Meal
Who doesn’t love breakfast? It’s easy (and delicious) to accommodate vegetarians, you’ll keep your youngest guests happy and you’ll save money. What could be better? Plan a morning wedding and have a breakfast or brunch at a normal hour, or serve breakfast for dinner. Sylvia Fountaine of Feast Catering told me that one of her favorite weddings to cater was brunch themed, because the couple loved going to brunch together. Fountaine made their favorite brunch foods and included mimosa and Bloody Mary bars for the full brunch effect. You could make breakfast as elaborate as you like (crepes and omelets to order, anyone?) or keep it simple with pancakes, quiche or pastries. Pro tip: If you’re having your wedding at a hotel, consider serving breakfast. This is the meal their kitchen serves the most often, so they excel at it, and it’s usually the most affordable option.
Cook Your Culture
Among the caterers I spoke to, many told me that some of their favorite weddings to cater were those where the bride and groom asked them to get a little cultured. The Davenport Hotel and the Coeur d’Alene Resort both fondly remember Indian dishes created specially by their chefs. Several people spoke of whole roasted pigs for the Bulgarian and German heritage of the bride and groom, and I attended a wedding
with a Hawaiian groom, catered by his family, with much of the food brought by plane just before the ceremony. One bride told me that she served Ono Kauswe, a Burmese soup that is served at every large gathering in her husband’s family. Each of these meals provided a chance to showcase a culture valued by the marrying couple. If you’re nervous about serving only ethnic food, you might want to take a page from one bride marrying her Persian groom; they served American food for dinner and followed it up with Iranian desserts. It was the best of both worlds.
Mine Your Memories
Across the board, the most meaningful food at weddings wasn't always fancy or trendy, but much of it had history. History doesn't have to reach into the distant past, either. Maybe you want to recreate a picnic you had when you were first dating or the meal he made the night he proposed. If you're getting married on or near a holiday, you might want to borrow from family gatherings, like the clam chowder served on New Year's Eve, or, if you want to tap into your Irish roots, the corned beef and cabbage so often served up in March.
Eat What You Love
Every caterer I spoke with expressed a willingness to work with the engaged couple to help them realize their wedding food vision. Most were willing to create menus from scratch, to try things they’d never cooked before and to experiment. Some even told me that was part of the fun. In the end, a wedding is about love. The love between the couple, of course, but also the love between the couple and their guests. When you think about what to serve, pick something you love to eat, not only because you want to feed your guests well, but also because you’ll get to keep the leftovers.
RESTAURANT REVIEW LUNA
Sea Bass
Luna Love
Whether it's the Eggs Benedict, the locally renowned Coconut Cake or Patrick's friendly banter behind the bar, it seems like everyone has a Luna favorite. So when the restaurant changed ownership for the first time in two decades, a wave of panic spread across the fine dining lovers of Spokane. Would the
by Sydney McNeal photos by Rick Singer Photography
service be the same? Would the menu change? Would they (heaven forbid) stop serving fresh baked Bouzies bread? William and Marcia Bond opened Luna in 1992, providing the (then) unique combination of fine dining in a neighborhood setting. Over the next 22 years, the Bonds established the restaurant as a South Hill landmark through nostalgic décor, exceptional staff, an impressive wine list and a menu with something for everyone. When they decided to sell and enjoy retirement, the list of potential buyers spanned from California to New York. The Bonds decided to stay local. Aaron DeLis, his
Ahi Tuna Tartare
Beet Salad
Spiced Lamb Chickpea Pasta
wife Hannah Heber, and parents Frank and Julie DeLis are experienced Spokane restaurateurs whose credits include current ownership of the Rusty Moose and former ownership of Taste Café. I would be lying if I said I wasn’t nervous about this passing of the torch. Luna is walking distance from our house and a favorite meeting spot for most of our friends – what if the new owners didn’t measure up? After a recent dining experience however, I’m happy to say that my mind has been put at ease. Walking through the doors to meet some girlfriends for dinner, I felt instantly welcome. The staff and French-inspired atmosphere at Luna just have a way of making you feel at home. Our server, Ashley, was over in minutes to recommend her favorites from the wine list. With a comprehensive but carefully selected assortment, Luna has been recognized by Wine Spectator as having one of the most outstanding wine lists in the world ($7 - $13 by the glass). If you’re craving something a little more unique, the Herbin’ Martini is a must try and just one of Luna’s Signature Cocktail options ($10). With Small Plates including everything from Crispy Polenta Balls ($7) to Scallops featuring a parsnip puree and chocolate dust ($20), we wanted to order one of everything. At Ashley’s recommendation, we started with a new addition to the menu – the Quinoa ($12). Presented on a sleek rectangular plate, pesto infused quinoa was topped with grilled broccolini and toasted slivered almonds. A red pepper sauce drizzled over the top provided a kick of spice that was unexpected but not overwhelming. Light and hearty at the same time (as well as gluten and dairy free), this is sure to be a welcome addition to the menu. Of course we also had to sample Luna’s signature Ahi Tuna Tartare ($15). A longtime menu favorite, the Tartare comes out as a colorful drum of ahi tuna layered on avocado, red onion and tomato. We requested the wasabi vinaigrette on the side to accommodate a dairy
Beignets
allergy. The modification allowed us to ration the dressing with each mouthful and enhanced the dish since the wasabi can be overpowering. The tuna tartare flavors might be better suited for a hot summer day, but the presentation is unique and it was enjoyable nonetheless. The Lacinato Kale Salad ($12) and the Beet Salad ($8) were the standouts from the salad section. The Kale Salad has been on the menu for as long as I can remember, and for good reason. The contrast of creamy vinaigrette and crunchy pistachios topped with fresh parmesan will inspire you to recreate this at home (or at least try!) and has remained a foolproof ordering option through the ownership change. The beet salad seems to fall more in line with fine dining trends of small portion size and unique presentation. Mixed greens, roasted beets, pistachios and julienned radishes were peppered across an elegant rectangular plate and finished off with a creamy dill vinaigrette. About a third the size of the kale, this starter salad is light but intensely flavorful. Large Plates featured a short but diverse list of distinctive entrées including chicken, duck, beef, fish, pasta and vegetarian dishes. We were reminded of Luna’s exceptional service after asking about a scallop dish that had been on the menu a few months prior. The current version features coco and vanilla flavors, but we were in the mood for something more savory. Ashley knew exactly the dish we were thinking of, and brought out three scallops sautéed in a chili lime beurre blanc served over a bed of garlic mashed potatoes and grilled broccolini. The Sea Bass, featuring squash, zucchini ribbons and Parmesan broth ($30) was enjoyable, but fell short of expectations. A small and delicately presented portion, this dish was fitting for a fall entrée but lacked
flavor and substance. For those in search of a heartier meal, the Spaghetti and Meatballs ($20) and Luna Burger ($16) have weathered the ownership change and remain customer favorites. These are good comfort food options highlighted by tender, high quality meats and polished off with a little Luna flair. The menu also features an assortment of Wood Oven Pizzas ($15-$18) crisped to perfection in the restaurant's Italian-imported pizza oven. The final highlight of the evening was dessert. Once again, Ashley recognized our struggle to choose just one and suggested a half order of Pumpkin Beignets ($8) and the Apple Crisp ($7). The beignets rival Green Bluff's pumpkin doughnuts and were served in a quaint mini French oven accompanied by a trio of dipping sauces. The Apple Crisp featured a poached and pan seared apple, collapsed onto a plate of drizzled caramel. Surrounded by cinnamon whipped cream and melt-in-your-mouth apple butter, this dish was light and distinctly flavorful, and was a great complement to the heavier beignets. These fall-inspired treats were the perfect way to end the meal. In short, everything that made Luna "Luna" is still there. The staff is as knowledgeable as ever and, aside from a few subtleties (and the stunning birdcage chandelier in the bar!), everything looks the same. I've heard talk of the food not being quite the same quality and (truthfully) had some off-experiences right after the ownership change, but these issues seem to have disappeared as the DeLis-Heber group has found their footing. A slight shift in the menu also seems to be surfacing, although I would imagine careful thought has been put into not changing too much too quickly. You'll still find your favorites in generous portions (think: Roasted Chicken and Coconut Cake), but dishes like the Beet Salad and Apple Crumble suggest we might be seeing smaller portions with bigger flavor in the future.
Luna is located at 5620 South Perry Street, in Spokane, and is open Monday through Friday, 11:00 a.m. to close; Saturday and Sunday, 9:00 a.m. to close. (509) 448-2383.
RESTAURANT REVIEW FLEUR DE SEL ARTISAN CREPERIE
Crepe
by Cara Strickland photos by James & Kathy Mangis
When I arrived in Europe for a birthday trip two years ago, the very first thing I ordered was a savory crepe in a restaurant in Germany. Since then, I've been searching for somewhere to replicate that wonderful feeling: satisfaction without being over-full, warm and comforting but not heavy. There is something about a crepe that is perfect filled with breakfast, lunch, dinner or dessert. When I made my first visit to Fleur De Sel Artisan Creperie, my hopes were high. The creperie is the latest venture from Laurent and Patricia Zirotti, of Fleur de Sel in Post Falls (and one of my favorite places to celebrate in the area). I knew that this restaurant would be different. For one thing, they only serve crepes, the batter spread onto large, round, specially made griddles and filled with whatever captures Laurent's fancy. The dining room is bright and cheerful, with colorful metal chairs and tables with a light wood finish, but unlike Fleur de Sel, this experience is meant to be speedy. Crepes come up quickly with barely a wait. Employees from the hospitals that are located just a stone's throw away should have no trouble sneaking away for a quick and filling bite. On one visit, my guests and I went at lunchtime. We all chose savory crepes with one exception for breakfast: a Monte Cristo crepe ($6.25). It came drizzled with syrup and filled with turkey, smoked ham and Parmesan, enveloped by a very thin layer of
organic egg. An excellent way to start the day. I selected the Salmon and Capers Crepe ($8), which was lightly tart and wonderfully smooth. It came topped with a caper berry. Inside I found salmon rillette (a cooked salmon spread) made from salmon from the Columbia River, capers, onion and feta cheese with a hint of dill. A blanket of organic arugula finished the crepe, which mellowed the creamy, pungent flavors. Another guest ordered Figs, Gorgonzola and Ham ($7), with house made fig jam: a simple, elegant choice with the wonderfully savory flavor of Gorgonzola present throughout, mingling in a friendly way with the fig jam. Although we enjoyed each offering, the Bison Meatloaf and Horseradish ($7.95) was the great favorite of our group. Like all of our crepes, it was filled generously, but this one felt even more satisfying, possibly because of the rich (and delicious) bison meat. A broccoli slaw added a wonderful texture and an unexpected touch inside
a crepe. There are also two gluten-free options available for preparing the crepes. One is made with buckwheat and the other is made with garbanzo bean flour (it's also vegan). If you're quite hungry, you can add a soup or salad to any savory crepe for a total of $10. Salads and soups are also available a la carte for $3 apiece. The soup of the day when I was there was a rich and smooth carrot with Fleur De Sel spices. It was the perfect antidote to the winter blues, just spicy enough to wake up my taste buds. I also sampled the rice salad with roasted zucchini, capers, olives and sun-dried tomatoes, topped with a house made balsamic vinaigrette. The salad was zesty but not overpowering and the dressing was the perfect touch. Depending on the day, you'll find different crepes (both savory and sweet) and a soup du jour. For dessert we sampled a Nutella crepe ($4.50 + $1 for bananas).
It came full to the brim with bananas, a French classic expertly done. The next day, I woke up thinking about it. We also ordered the popular Lemon Curd ($5 + $1 for organic blueberries). The lemon curd is house made, delightful on its own, but when mixed with the generous serving of blueberries, magic occurs. This crepe wasn't too sweet, or too heavy. It was just right. For days I continued to wonder about a breakfast crepe that I hadn't ordered but that had intrigued me. It's filled with blueberries and granola and lined with an egg ($5). Finally, I went back and ordered it, sitting alone with the crepe and a cup of Doma coffee. It was just as unexpected as I imagined, but somehow the combination worked. The lightly sweet granola married with the juicy blueberries and complemented the neutral flavors of the egg and crepe. It was a complete breakfast without the excess that often accompanies that concept. Fleur de Sel's creperie is certainly a different restaurant than its sister in Post Falls, but the same hospitality pervades the space. Everyone is kind and eager to help as you make a decision. When I entered each time, I felt a rush of welcome, a genuine gladness to see me. Some things translate all the way across the borders, from France to Post Falls and now Spokane. Fleur De Sel Artisan Creperie is located at 909 S. Grand Blvd., in Spokane, and is open Monday through Saturday, 8 a.m. to 5 p.m., and Sunday, 9 a.m. to 2 p.m. (509) 242-3725.
SIGNATURE DISH SEAFOOD TRIO
Scratch’s Seafood Trio by Chris Street
THE INTERSECTION of First and Monroe is where you'll find one of Spokane's coolest addresses for downtown gourmet fare. Scratch Restaurant has been open for eight years and yet doesn't have the same cachet as some other downtown spots, because the restaurant's location has been, perhaps for too long, "in transition." This is real estate code for a neighborhood in a slump but one that holds promise. Despite this, Scratch has stayed consistently excellent since opening in 2007, and owner Connie Naccarato may have made a winning bet on the dark horse after all. "I always knew this area would come back," says Naccarato, whose bet was on the vision of commercial real estate developer Jerry Dicker, who is almost single-handedly restoring the neighborhood now officially known as the Davenport Arts District. Scratch's menu ranges from a buffalo hanger steak ($26), to a BLT ($11), to fresh Dover sole ($25), to prawn gnocchi ($25)—all prepared by Chef Chris Harnett with his youthful eye on fresh, wholesome food and artful presentation. The seasonally changing
menu is tasty and geared toward healthy eating, which is driven in part by the lifestyles of the kitchen staff and owner Connie Naccarato. Naccarato is a cancer survivor and her kitchen crew is comprised of avid exercisers and athletes (Harnett is a runner and his right hand man is an Ironman competitor). This month's Signature Dish is Chef Harnett's Seafood Trio ($39): a plump Maine lobster tail with scallops, shrimp, lemon basil risotto and seasonal vegetables. Harnett recommends starting with his Asian spiced calamari ($10) and Scratch's signature salad ($7). The calamari comes with three unique dipping sauces: sweet chili, roasted garlic aioli and a ginger cilantro aioli. The salad comes with a house-made dressing of fresh pomegranate juice and delicate baby spinach greens. The Maine lobster is oven-roasted and basted with butter. Harnett's scallops and shrimp are fresh and pan seared. Seasonal veggies like his asparagus taste garden fresh, making Scratch yet another Spokane restaurant to add to your list of healthy places to eat. The dilapidated neighborhood is no more; the Davenport Arts District is on a glorious rise and Scratch remains at the very top of its game. My suggestion is to go try their Signature Dish, and get there before getting a table becomes the impossible dream. Scratch is located at 1007 W. 1st Avenue, in Spokane, and is open Monday through Friday, 11 a.m. to 10 p.m.; Saturday, 3 p.m. to 10 p.m.; closed Sundays. (509) 456-5656.
DINING GUIDE FEBRUARY Luna photo by Rick Singer Photography
FEBRUARY Dining Guide
The Dining Guide includes summaries of local restaurants that are featured on a rotating basis each month and/or issue. Suggestions for additions or corrections can be sent to katie@spokanecda.com.
ASIAN AND INDIAN
…Order it the way "Huff" (Patrick's nickname) gets his. Open daily. 1724 N Monroe (509) 443-1632 and 1220 W Francis (509) 413-2029. $-$$
Bangkok Thai. Thai. Bangkok Thai took over the former Linnie's Thai location on Grand Avenue and the former Riverview Thai near Gonzaga. The South Hill restaurant offers combination lunch plates that allow smaller portions of several popular Thai dishes for one price, and the Gonzaga location has the best Thai lunch buffet in town for $12/person. Mon-Thu 11am-9pm, Fri 11am-10pm, Sat 12-10pm, Sun 12-9pm. 1325 S Grand Blvd. (509) 838-8424 and 1003 E Trent Avenue (509) 325-8370. www.spokanebangkokthai.com. $$
Ding How. Asian. Specializing in Chinese, Japanese, Thai, and Korean dishes, Ding How has plenty of variety. This restaurant has already become the place for sushi and other Asian cuisine, with regular customers coming from Spokane, Coeur d'Alene and other areas. Ding How offers over 100 sushi
items including their special Lobster Roll and Yellowstone Roll. Lunch Mon-Fri 11am-2:30pm, Dinner Mon-Thu 4-9pm, Fri 4-10pm, Sat 12-9pm, Sun 12-9pm. 1332 N Liberty Lake Rd, Liberty Lake. (509) 921-1901. $-$$
…Tues-Sun 11am-10pm, closed Mon. 1228 S Grand Blvd in Spokane. (509) 315-5201. $-$$$
…two homemade rounds of "ramen bun" is a fun entrée. A well-selected drink menu, late hours, and modern lounge feel make it well set for lingering dates and après-
spokanecda.com • FEBRUARY • 2016
event noshing. Vegetarian options also offered. Mon-Sat 11am-close. 818 West Sprague. (509) 290-5763. $$
…Mon-Fri 11am-9:30pm, Sat 12 noon-9pm, Sun 12 noon-8pm. 430 West Main, Spokane. (509) 838-0630. $-$$$
Shogun. Japanese. Shogun is really… sleight-of-hand and grill-assisted pyrotechnics. The other is the sushi bar, serving up California and Vegas Roll favorites. Across the bamboo bridge, over a tranquil koi pond (minus the fish… "too many coins") and past the waterfall and lounge, this is a quiet refuge and counterpoint to the frenetic atmosphere of the main dining room. Shogun is a perfect spot for either a special celebration or a quiet night out. Open seven days 5-10pm. 821 E 3rd. (509) 534-7777. $$-$$$
Taste of India. Indian. A family-owned restaurant on the Division hill offering authentic cuisine emphasizing northern Indian flavors. Taste of India boasts a casual atmosphere with a soundtrack of traditional music and a popular lunch buffet during the week. Try the Tandoori Chicken, Chicken Curry, or Vegetarian Samosa. Mon-Thur 11am-9:30pm, Fri-Sat 11am-10pm, Sun 11am-9pm. 3110 N Division in Spokane. (509) 327-7313. $-$$
…striking sky ceilings in the main dining rooms. Think Vegas with pad thai. All locations Mon-Thu 11:30-9pm, Fri 11:30-9:30pm, Sat 12-9:30pm, Sun 12-9pm. Delivery available. info@thaibamboorestaurant.com. …11114 E Sprague Ave in Spokane Valley. (509) 927-0500. $-$$
BARBECUE
Red Lion BBQ & Pub. For about 20 years, whether it was in the old rhythm and blues, peanut-shells-on-the-floor days, or more recently as a sports bar, there's always been butt-kickin' BBQ at this downtown corner spot. The undisputed star here is wine broiled chicken, spicy and robust, yet falling-off-the-bones moist and tender. Put it together with their signature fried bread and honey, and you have a BBQ experience that can't help but please. 126 N Division. Sun-Thu 11am-10pm, Fri-Sat 11am-1am. (Sunday breakfast buffet 9am-noon during football season.) (509) 835-LION (5466). $-$$
Uncle Leroy's BBQ. Don't be surprised if you're greeted by a line of people at Leroy's—they're simply waiting their turn to sample Mr. Payne's world class fare. A red shack with limited but comfortable seating inside, a multi-level barbecue smoker (AKA the pit) out back, a patio deck with picnic tables out front and plenty of parking make up an ideal, hole-in-the-wall setting for pulled pork sandwiches, ribs, smoked sausage and beef brisket. Dinner platters include house made beans, coleslaw, and a beverage. For textbook Kansas City-style smokiness finished off with some cornbread and maybe some peach cobbler, look no further than this charming BBQ joint located in Spokane Valley just off the Pines exit. 205 S Pines, Spokane Valley. Tues-Sat 11am-8pm. Closed Sun and Mon. $-$$
BISTROS
Downriver Grill. Innovative, local and seasonal cuisine in a sleek, modern space with dishes at various price-points to suit every diner. Try the Chipotle BBQ burger for a flavor-packed lunch or the Lemon Thyme Grilled Salmon for a leisurely dinner. Either way, you'll want to sample the Chocolate Pot de Creme for dessert. Open Tues-Sun 11am-9pm. 3315 W Northwest Blvd in Spokane. $$-$$$
Herbal Essence Café. Northwest cuisine. This relaxed… winning house salad, brilliant with sliced pears, crumbled Gorgonzola and a white truffle vinaigrette. 115 N Washington. Lunch Mon-Fri 11-2, Dinner Mon-Sat 5-close. (509) 838-4600. Lunch $-$$, dinner $$-$$$
Laguna Café. This South Hill restaurant calls itself… Sat 8 am-9 p.m., and Sun 8 am-9 p.m. (509) 448-0887. $-$$
Scratch. This energetic, hip restaurant in downtown Spokane (with another location in Coeur d'Alene)… -midnight, Fri 11am-2am, Sat 4pm-2am. (509) 456-5656. $$-$$$
Table 13 Restaurant + Whiskey Bar. Hoteliers
Walt and Karen Worthy tucked this "inviting urban restaurant" into their newest Davenport Grand high rise to encourage sharing and socializing over a menu of small plates. An impressive wine cellar and private whiskey bar make it a prime gathering place for locals and out-of-towners alike. Tapas-style dishes like Spicy Crunchy Tuna Roll, Shrimp and Heirloom Grits, Halibut Sliders, Smoked Beef Brisket Street Tacos and Szechuan Japanese Eggplant and Stir Fried Black Quinoa are in keeping with its Asian-Pacific Northwest flair. Open Tues-Sat 5pm-close. 333 W Spokane Falls Blvd (inside the Davenport Grand Hotel in downtown Spokane). (509) 598-4300. www.davenporthotelcollection.com. $$-$$$
The Wandering Table. A much-anticipated American tapas-style restaurant located in Kendall Yards. Chef Adam Hegsted delights with a variety of small plates (try the Garden for a creative salad take, the Deviled Eggs, or the Popcorn), craft cocktails, a whiskey bar, and more. Open Tues-Thurs, 11:30 a.m.-11:30 p.m., Fri & Sat 11:30 a.m.-1 a.m., Sun & Mon, 4 p.m.-11:30 p.m. 1242 W Summit Pkwy in Kendall Yards. (509) 443-4410. $$
Wild Sage. Tucked into a classic 1911 brick building in Spokane. … (509) 456-7575. $$-$$$
The Wine Cellar. The door to this intimate basement grotto is easy to miss on Coeur d'Alene's main street, Sherman Avenue. This bistro, wine bar, and live music venue embodies generosity with hearty Italian and Mediterranean fare at incredibly reasonable prices, warm and welcoming staff, and a killer space that feels like a retreat from the pressures of life outside. Don't miss the amazing Mac and Cheese on the appetizer menu and take note that each entrée is accompanied by a salad and bread. 313 E Sherman Ave in Coeur d'Alene. Mon to Thur 4:30-10 p.m., Fri and Sat 4:30 p.m. to midnight. Closed Sun. (208) 664-9463. $-$$
BREAKFAST AND LUNCH SPECIALTIES
Big Red's Chicago Style Cuisine. This food trailer serves up possibly the best cheesesteak in town along with a formidable Chicago Dog (with all the fixings), and an Italian Beef with a fiery relish made by owner and operator Curtis Bytnar. Feel like fries? Big Red's offers you the choice of sweet potato or regular, and the regular can come topped with garlic, cheese, or both chili and cheese. Located in the parking lot of the St. Matthew's Institutional Baptist Church at the corner of Sunset Boulevard and Government Way west of downtown Spokane. Open Mon, 11 am-3 p.m.; Tues-Sat, 11 am-5 p.m. Closed Sunday. (509) 991-2359. $
Fleur de Sel Artisan Creperie. Francophiles, attention! Artisan crepes bring distinct gourmet flair to the concept of fast-casual at this bright and cheerful stop on the way up the South Hill. Laurent and Patricia Zirotti of Fleur de Sel in Post Falls offer their signature attention to detail, but in a quicker and more focused package: a creative selection of sweet and savory crepes (from classics like the Monte Cristo or Nutella-filled, to the more unconventional Bison Meatloaf and Horseradish, to dessert-like house made lemon curd). Pair your breakfast and lunch selections with soup or a salad for a complete and satisfying meal. Steaming mugs of Doma coffee are served all day—a cozy spot for a caffeine fix. 909 S Grand Blvd. Mon-Sat 8am-5pm, Sun 9am-2pm. (509) 242-3725. $-$$
Frank's Diner. Frank's has become a Spokane landmark in just over a decade. Both early 1900s-vintage rail cars were originally obtained by the Knight brothers Frank and Jack during the Depression, who each converted them to diners in Seattle and Spokane, respectively. Larry Brown, of Onion Bar and Grill fame,
…at-breakfast hash browns and silver pancakes. 1516 W. 2nd. Seven days 6am-8p.m. (509) 747-8798. 10929 N. Newport Highway, Sun-Thurs 6am-8p.m., Fri-Sat 6am-9p.m. (509) 465-2464. $
Little Euro. Valley fans of the Old European can rejoice. One look at the menu and you'll see that Little Euro offers many of the same breakfast delights as its North Division sibling: Danish Aebelskivers, Swedish Crepes, and that mountain of breakfast on a plate they call Hungarian Goulash. Lunch also served. Open daily 6 am-2 p.m. 517 N Pines Rd in the Spokane Valley. (509) 891-7662. $-$$. …Mon-Sat 6am-2p.m., Sun 7am-3p.m. 1710 E Schneidmiller Ave, Post Falls. (208) 777-2017. Mon-Sat 6:30-2, Sun 7-2:30p.m. $
CASUAL DINING
…Mon-Thurs and 2am Fri-Sat. (509) 747-3946. $$-$$$
Palm Court Grill. The Palm Court Grill offers upscale… and a fine wild salmon filet with a huckleberry champagne sauce. Serving breakfast, lunch and dinner. Open daily from 6 am to 9 p.m. Reservations recommended. Private dining room available, seating up to 30 people. 10 S Post. (509) 455-8888. $$-$$$
Safari Room Fresh Grill and Bar. The… Mon-Thurs 4:30-9, Fri 4:30-9:30, Sat 4-9:30, Sun lounge 2-9 and dinner 3-8. (509) 328-5965. Lunch $$, Dinner $$$
…p.m., last seating at 9 p.m., Tues-Sat. 4365 Inverness Drive in Post Falls. (208) 777-7600. $-$$$
Luna. Luna sets culinary trends as one of the top restaurants in the region. Offering inspired, garden-to-table cuisine, Luna has provided a formative space for some of the Inland Northwest's premier chefs for over 23 years. The space is warm—even whimsical—and boasts one of the best wine cellars in the region. Everything offered is made in-house: the bread comes from their own bakery fifty feet from the back door and most vegetables and herbs are picked from their backyard garden or sourced from local growers. We love Luna's pizzas fired in their wood-burning oven, their Ahi Tuna Tartare starter and their salads—the Lacinato Kale, Beet and Luna Salads are each filling, yet elegant. Large plates include a diverse list of distinctive entrées including chicken, duck, beef, fish, pasta and vegetarian dishes. Luna offers a full service bar, classic marble-top dining areas, a chic private dining room, and a large patio for comfortable, warm weather dining. 5620 S Perry. Mon-Fri 11:00am-close, Sat-Sun 9:00am-close. (509) 448-2383. $$-$$$
PIZZA
The Flying Goat. Careful thought went into the design of this pub and pizza sibling of the Downriver Grill—and it's paying off. The Goat offers both classic and artisan toppings on Neapolitan-style pies, the "char" on the crust imparting a distinctive, crunchy flavor. Try the surprising Kiernan and wash it down with a craft beer (14 taps, 1 gravity-fed cask beer, and over 50 more in bottles). The Goat has a "Mug Club" for regulars; all dishes are named after neighborhood quirks—see if you can decipher their menu cryptography. Open daily at 11 am. Closes at 10 p.m. (11 on Fri and Sat). 3318 West Northwest Boulevard in Spokane. (509) 327-8277. $$
…p.m.-10p.m. Sun-Thurs, 3p.m.-11p.m. Fri-Sat. 159 S. Lincoln, under the smokestacks downtown. (509) 777-3900. $$-$$$
ITALIAN
Europa Restaurant and Bakery. Europa offers much more than pizza (Marsala Steak Penne and Sweet Pepper Tortellini, for example), but if pizza is what you want,… All desserts are prepared entirely on-premise by pastry chef Christie Sutton, including Christie's Triple Layer Chocolate Mousse and the little shiny dome of chocolate cake and rum ganache known as the "Chocolate Birthday Bomb," Europa's traditional complimentary treat for patrons celebrating their birthday. Stop into the cozy pub for daily happy hour specials and live music every Sunday night. Open Mon-Thurs 11am-10pm, Fri-Sat 11am-11pm, Sun 11am-10pm. 125 S Wall. (509) 455-4051. $$
Ferrante's Marketplace Café. This… 11-8p.m. (509) 443-6304. $-$$
Republic Pi. From the purveyors of The Flying Goat comes the South Hill version of artisan pizza goodness. The overall pizza-gourmet salad-craft beer concept is the same, but with little menu overlap (favorites like the Dalton, Waikiki and Kiernan are served at both locations). Prior to pies, try the Rockwood Avocado—sliced, beer battered, fried and served with Pico de Gallo and lime crème—or the spicy and addictive Cliff Park Brussels Sprouts roasted with crispy bacon, balsamic, cracked pepper and chili flakes. Pizzas come in two varieties: "Traditionalists," like The District with red sauce, soppressata, fresh basil, cremini mushrooms and smoked fresh mozzarella, and "Progressives," like The Republic, a puttanesca pizza topped with tomatoes, capers, Kalamata olives, green onion, basil and fresh mozzarella. A wide selection of locally-focused beer on tap, wine, cocktails and a dessert menu round out the experience. 611 E 30th Ave. Sun-Thur 11am-11pm, Fri-Sat 11am-midnight. (509) 863-9196.
South Perry Pizza. Fresh, innovative pies (minus the gourmet pretension) in the heart of the Perry district on Spokane's South Hill. Located in a former auto body shop, the restaurant has an open kitchen centered around an open-flame pizza oven that turns out brilliant pizzas with a yeasty, bready crust that has good chew and the right amount of char. Try the popular Margherita, Veggie, Prosciutto, or one of their creative daily specials. Six microbrews on tap and several fresh salads start things off right. The garage doors roll up in good weather for patio seating. 11 am-9 p.m., Tues-Sun. 1011 South Perry Street in Spokane. (509) 290-6047. $$
PUB AND LOUNGE FARE
The Blackbird Tavern and Kitchen. Head
straight to the bar where there are 34 beers on tap. Mon-Fri 4am-11pm, Sat-Sun 8am-1pm, 3-11pm. (509) 392-4000. theblackbirdspokane.com. $$
Manito Tap House. 11 am – 11 p.m. Sun – Thu. Open until 2 am Fri – Sat. 3011 South Grand Blvd in Spokane. (509) 279-2671. $-$$
Taphouse & Grill. Established in 1978, and now featuring Area 51, with its 51 taps of brew, wine and spirits. Fri-Sat 11-1am, Sun 2-midnight. 10 S Post. (509) 455-8888. $$-$$$
Post Street Ale House. This floor-to-rafter renovation of the former Fugazzi space in the Hotel Lusso by Walt and Karen Worthy of the Davenport gives downtown Spokane a great English-style pub with a striking bar, twenty beers on tap, and a reasonably priced menu built around comfort food. We feel they do some of their fried food particularly well: the Halibut and Chips, the Fried Mozzarella "cubes," and the Ale House Fried Pickles. If you are hungry, try the Guinness Braised Short Ribs served over mashed potatoes and topped with a pan gravy chunky with vegetables. 11 am – 2 am daily. 1 N Post Street. (509) 789-6900. $-$$
A Spokane favorite for 25 years! Want to visit a historic Spokane pub full of fun, libations & local flavor? Serving traditional Irish & American pub fare. • Spokane's Best Reuben Sandwich • 16 Beers on tap • Patio overlooking Riverfront Park • Locally owned • Families welcome. Open 7 days a week @ 11:30 AM. 525 W. Spokane Falls Blvd (across from the carousel). 509.747.0322 | Odohertyspub.com
3 p.m. – 10 p.m. Sun-Thurs, 3 p.m. – 11 p.m. Fri-Sat. 159 S. Lincoln, under the smokestacks downtown. (509) 777-3900. $$
OTHER
Brain Freeze Creamery. Ice cream, espresso drinks and sandwiches are offered all day at this welcoming, family-friendly spot in Kendall Yards. The small-batch creamery opened their own storefront in 2014. They offer 24 different flavors with at least a few vegan and dairy-free options each day. Try a scoop of their famed Palouse Crunch, a blend of cinnamon ice cream, red lentils and candied almonds, or Muddy Cups-Dirty Dishes, a brownie batter ice cream studded with mini peanut butter cups. Another favorite is Cakey Doe, vanilla cake batter ice cream with chunks of chocolate chip cookie dough. Anvil coffee and espresso and a small selection of hearty sandwiches broaden the menu just enough to suit everyone’s tastes. 1238 W Summit Parkway, Spokane. Sun – Thurs 7am-9pm, Fri & Sat 7am-10pm, (509) 321-7569. $-$$.
LUNCH Mon-Fri 11am-2pm DINNER Mon-Sat 5pm-Close TWILIGHT MENU Mon-Wed 5pm-6pm 3 COURSES FOR $20
509.838.4600 • 115 N Washington St. Spokane, WA 99201
Seafood Baked Salmon Buffalo Top Sirloin Prawns & Linguine Spinach Artichoke Halibut Huckleberry Top Sirloin Oven Roasted Lamb
1 Block South of Auntie's Bookstore. On and Offsite Catering Available.
LIQUID LIBATIONS: NITRO COFFEE
Freshly Tapped Coffee? by Chris Lozier
By now you might think the coffee bean has already been fully explored, but there's an exciting new flavor in town: nitro coffee. Beautiful Grounds Espresso and Beauty Bar co-owner Joe Johnson was the first to offer nitro coffee locally, and he says demand is growing quickly for this delicious new drink. Boosting the natural flavors of the coffee roast and offering a creamy texture, nitro coffee is a wholly new coffee experience. But for all its complexity, there are only three ingredients: ground coffee, water and nitrogen gas. An extension of cold brew coffee, Johnson describes nitro as "taking cold brew to the next level." Coffee is cold brewed,
kegged, pressurized with nitrogen gas, refrigerated and finally served from a tap, much like beer. In fact, most people think Johnson is serving beer when they see the tap, so he pours them a sample of the coffee. “They taste it and say, ‘This is wild, this is not what I thought it was going to be,’” he says. Nitrogen is an enhancing gas and its tiny bubbles give the coffee a creamy, soft mouth feel. The gas also amplifies the flavors that are in the coffee roast so that even the most novice coffee drinker can taste the sweet, chocolaty, fruity notes. “It has a smooth, velvety flavor and you can definitely taste the difference,” says Arden Pete of Boots Bakery & Lounge. “Normally I put sweetener in my hot coffee, but I don’t do it with the nitrogenized cold brew. In my opinion it doesn’t need it.” Pete rotates his cold brew roasts with beans from DOMA in Post Falls, Evans Brothers in Sandpoint and Anvil in Spokane. Likewise, Bobby Enslow of Indaba Coffee says they experimented with different cold brews this year, using blends and single origin beans from Africa and Latin America. At Beautiful Grounds, Johnson uses his signature roast from local roaster Roast House Coffee, which he says has strong dark chocolate notes perfect for the nitro process. He has
this coffee, along with a cranberry-cream nitro tea, on tap at his shop inside Auntie’s Bookstore. For beer lovers, nitro coffee is a great chance to get something satisfyingly full-bodied, creamy and sweet like a stout or porter on lunch break, substituting caffeine for alcohol. It is also a good option for people who don’t like bitter coffees, but don’t want the calories and additions found in flavored espressos. Pete says nitro is a great year-round drink, but they see most of their business in the summer, selling 10-20 gallons per week. Likewise, Johnson says that he sells about a keg a week in the winter at Beautiful Grounds, while in the summer he sells
two or three times as much. Enslow said Indaba stopped making nitro for a bit because of the seasonal drop in demand, but they will have it again soon. You can enjoy a pint of nitro fresh at the coffee shop or you can take it with you in growlers. While it loses some of its carbonation, the sweet and creamy flavors don’t change, and it will keep three to four weeks in the fridge. Pete says many people buy a growler for the week, or for a weekend trip to the mountains or the lake. Since the setup is portable, most of the nitro coffee makers travel to events, and Johnson said he brought his nitro to over 20 events this year, selling out each time. “Once people see what I have, they’re very excited about it and they want to feature it at their event,” says Johnson. “I’m really glad that others have started taking it on because it’s an amazing product.” Beautiful Grounds, Boots and Indaba can make your nitro any way you want it, but be sure to try a sample first, because most people think it is perfect as is. If you think all coffee tastes the same, you won’t after you try nitro. “The nitrogen makes it creamy as if you added milk to it and sweetens it like simple syrup would,” says Enslow of Indaba. “It’s an awesome way to get people to try coffee on its own.”
AD INDEX 14TH AND GRAND ACT SERVICES ALOHA ISLAND GRILL BEAU K FLORIST BERRY BUILT DESIGN INC. BEST WESTERN CITY CENTER BRAIN FREEZE CREAMERY BROADWAY COURT ESTATES CALIFORNIA CLOSETS CAMP BMW CARLSON SHEET METAL THE CELLAR CINDERFELLA'S CLEANING COMPANY CLASSIC GARAGE THE CLAY COLLECTION CLONINGER DDS, BROOKE M. COBBLESTONE CATERING COLDWELL BANKER - JIM LUSTER COLDWELL BANKER - TERESA JAYNES COUNTRY FINANCIAL DAA NORTHWEST AUTO BODY CENTER DANIA DAVENPORT HOTEL DAVID CROUSE, PLLC DID'S HAWAIIAN SHACK & ARCADE E.L.STEWART ELLINGSEN, PAXTON EMVY CELLARS EOWEN ROSENTRATER EUROPEAN AUTO HAUS FAWSON DENTISTRY FLASH'S AUTO BODY FLAVOURS BY SODEXO FLOOR COVERINGS INTERNATIONAL FRUCI GARY D. KELLER, DDS GILDED UNICORN GLOVER MANSION
GOLD SEAL GRAPETREE GREAT FLOORS HANSON CARLEN CONSTRUCTION CO. HEALTHY HABITS HERBAL ESSENCE IMPLANTS NORTHWEST ITALIAN KITCHEN JAZZERCISE JEMA LANE BOUTIQUE JEWELRY DESIGN CENTER JOHN L. SCOTT LA-Z-BOY LAGUNA LAND EXPRESSIONS LARRY H. MILLER HONDA MAGNUSON ORTHODONTICS MANGIS PHOTOGRAPHY MANITO TAP HOUSE MECHANICS PRIDE AND AUTOMOTIVE METABOLIC INSTITUTE MONARCH CUSTOM BUILDERS, LLC NEXT DAY DRY CLEANING NORTHWEST BACH FESTIVAL NORTHWEST IMPLANTS AND SLEEP DENTISTRY NORTHWEST TRENDS NYNE BAR O'DOHERTYS OLYMPIC GAME FARM THE ONION | AREA 51 PACIFIC FLYWAY GALLERY PENTHOUSE AT THE PAULSEN PINOT'S PALETTE R. ALAN BROWN, INC RED LION BBQ RENOVATIONS BY DAVE RICK SINGER PHOTOGRAPHY ROBERT SHAW, DMD
ROCKWOOD HEALTH SYSTEMS ROCKWOOD RETIREMENT COMMUNITY RUBY SUITES RW GALLION INC SHRINERS HOSPITAL SIMPLY NORTHWEST SMILE SOUTH SPOKANE SMITH ORTHODONTICS SPA PARADISO SPICE & VINE MERCANTILE SPOKANE ORAL SURGERY SPOKANE SYMPHONY STAR FINANCIAL STEAMPLANT SUSHI.COM SWINGING DOORS THAI BAMBOO THOMAS W. ANGELL, ARCHITECT TIN ROOF TOM SAWYER COFFEE CO. TOTAL FIT TROVATO UNIVERSITY CHIROPRACTIC VERACI PIZZA WALLFLOWERS WANDERING TABLE WASHINGTON STONE & TRESKO MONUMENTS WEIAND & WEIAND WELDON BARBER WENDLE FORD WHITEMAN LUMBER WILD SAGE WINDERMERE - MARIE PENCE WINDERMERE - NANCY WYNIA WISHING STAR YARDS BRUNCHEON
COMING IN THE MARCH 2016 ISSUE: TOP DOCTORS
Health symposium
Q+A WITH A PANEL OF FIVE PRACTITIONERS
Established Business Owner/Leader: These women have been business leaders or owners for more than five years. Emerging Business Owner/Leader: These women have been in business leadership or ownership roles for less than five years. Movers & Shakers: These women business leaders are also involved in many different organizations throughout the community (volunteerism, nonprofit boards, etc). Nonprofit Leader: These women lead nonprofit organizations in our region.
(dermatology, back health, sports medicine, men and women’s health, mental health)
For tickets, schedule and event information go to eventbrite.com Brought to you by Bozzi Media and
NOW LEASING RETAIL/OFFICE SPACE
The perfect South Hill location for your retail store, bank or professional practice, Grapetree Village is a custom-designed office village nestled among the trees on the South Hill’s primary arterial. Enjoy our onsite tenants: Applebee’s, Ameriprise Financial, Atlas Personal Training, The Bar Method, Brooke Cloninger DDS, Dairy Queen, Fit Edge, Laguna Cafe, Massage Envy Spa, Physzique Fitness, Snyder CPA, US Healthworks, and Weldon Barber.
GRAPETREE VILLAGE
2001 E. 29TH | SPOKANE, WA 99203 (509) 535-3619 cloningerandassoc@qwestoffice.net cloningerandassoc.com
104 S. Freya, Suite 209 Spokane, WA 99202-4866
Ahem.
Java has four different “kinds” of types. Up until Tiger, it had these three:
- Primitive types: longs, shorts, booleans, ints, floats, whatever. They map more or less to machine types.
- Classes, which are the primary mechanism for extending the language.
- Arrays, which are this weird hybrid type that was introduced to make it easy to port C and C++ code to Java. But you can’t make arrays immutable, and you can’t subclass them, and there’s only modest syntactic support for them, and reflecting on them is painful.
The problem, of course, with having different types of types is that all of the code you write, now and forever, has to be prepared to deal with each type-type differently. So you get interfaces like this lovely one. Scroll down to the Method Summary, and looky looky, there are 76 actual methods in the class, but only 10 distinct pieces of functionality.
Just to make things worse, Java 5 added a new kind of type, called an enum. They're sort of like C enumerations, but they can also have methods. Which makes them sort of like Java classes, but you can't instantiate them, and you can't have one enum type inherit from another enum type, so you can't have polymorphic methods, nor can you declare an enumeration hierarchy. So they're not really like classes at all.
— Steve Yegge, The Next Big Thing (lightly edited for brevity).
Case closed.
And Strings have overloaded operators, but programmers cannot create their own classes with overloaded operators. Then there is autoboxing and having two types for the same kinds of values.
Pretty cheap shot, not very Java-specific (most of it applies to a vast number of non-pure OOPLs that offer a mix of objects, primitive types and arrays, including C++, C#… – with various degrees of success to hide the ugliness, usually through autoboxing – so an int walks and quacks like an object but you still can't inherit it and polymorphically overload its methods, etc.)
Oh, and you CAN have subclasses and polymorphic methods on enums. It’s a restricted facility but pretty neat. It seems Steve didn’t do his homework, and as you blindly quoted this, you don’t really master Java either.
It’s interesting too that, differently from other languages from this series, you were not able to quote a piece of specification (JLS) or a first-class authority on Java language design (Steve is a smart and well-known hacker, but certainly not in the league of Bracha, Bloch, Gafter, Joy, Gosling, etc.).
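For what it's worth, the restricted facility Osvaldo refers to looks like this. This is a hypothetical sketch, not code from either post: each enum constant supplies its own class body, overriding an abstract method, which gives per-constant polymorphic behaviour without any enum-to-enum inheritance.

```java
// Constant-specific class bodies: each enum constant overrides apply().
public class EnumPolymorphism {
    enum Op {
        PLUS  { public int apply(int a, int b) { return a + b; } },
        TIMES { public int apply(int a, int b) { return a * b; } };

        // Every constant must provide an implementation of this.
        public abstract int apply(int a, int b);
    }

    public static void main(String[] args) {
        System.out.println(Op.PLUS.apply(2, 3));   // 5
        System.out.println(Op.TIMES.apply(2, 3));  // 6
    }
}
```

So Osvaldo's correction stands: the call site dispatches polymorphically on the constant, even though enums cannot extend one another.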
Osvaldo, you’re right that for the Java whinge I didn’t quote official documentation; that’s because I’ve not read it. As you rightly surmise, I have not mastered Java; far from it. I’m only a visitor in the Kingdom Of Nouns. I’ve used, and read about, the other languages in this series much more extensively than Java, but it seemed to me that leaving Java out completely wouldn’t be right.
Mike: Fair enough. But the problem of making fun|criticism of a language that you don’t master, is that you may not find its really ugly bits (and the really authoritative texts that expose it). And as a good Java advocate, I won’t provide you the links. ;-)
But frankly, there are no major blunders in the Java language: at least nothing remotely in the level of the stuff you blogged for other langs. There are some important warts in obvious areas (generics), but these are mostly by-design tradeoffs, so they have their elegant side and their advantages and defenses. The real language design screw-ups are those that happen without explicit planning, or due to sheer design incompetence, or to extreme tradeoffs (like C++’s mountainous issues related to C compatibility). I can remember only very minor issues, like Java’s package-private visibility level. A bigger issue was the original Memory Model, but that was fixed on J2SE5. OTOH, there are tons of major blunders in Java’s standard APIs, remarkably earlier ones – and these are very important as Java is one of these languages with a philosophy of a small language kernel surrounded by vast frameworks, so whenever those fail, and a complete fix is forever impossible due to backwards compatibility, it’s just as bad as a syntax screwup in a “big language” like C++ or Perl.
No, let's not leave Java out. Thing is, the beauty of the C, Javascript, Ruby and Perl examples is that they're bug traps, places where the language lets you or encourages you to make mistakes that will cost you later. Even better, these paths are main roads, not obscure byways. Java and Pascal (if I remember correctly, it's been a while since I did any Pascal) are designed to prevent dumb errors and typos from sinking your boat without some bad design on your part. At least no example comes immediately to mind, somebody prove me wrong!
Ha, if failure to live up to a philosophical aesthetic is a crime, the best language out there is going to be “anything goes” Perl, LOL.
Osvaldo Pinali Doederlein boldly asserted that “there are no major blunders in the Java language”. Surprisingly enough, I do more or less agree with that. I think that.
My main issue with Java is actually much more mundane than any specific flaw … but since I have a fair bit to say on that subject, I’ll save it for the next post. Hope that’s OK.
Pingback: So what actually is my favourite programming language? « The Reinvigorated Programmer
What the hell? Where do I even start with this idiotic blog?
You haven’t read the Java documentation, haven’t mastered the language…but you pull one quote out of your a** and it’s “case closed”?
Oh what the hell, I guess you now know about as much as any other author of computer books, so maybe you should pitch a manuscript on Java to O’Reilly or somebody.
Guys like you make me hate the internet. Seriously.
foo: it should be quite obvious that anything titled ‘Why {X} Is Not My Favourite Programming Language’ is written at least slightly tongue-in-cheek. Have you really not read the original?
(Are you going to include Pascal in this list, Mike? ‘cos you could just quote virtually anything from the original Kernighan paper :) )
Hi, Nix, thanks for that your flood of comments — it’s great to get your perspective on so much of what I have written all at once. You talk a lot of sense.
I actually read the original Why Pascal is not … in a hardcopy that Kernighan himself sent to me, long long ago. Stupidly, I don’t know where it is now — one more reason why I really, really need to tidy my office up.
As you'll have seen now, the short Why X Is not … series is now over, and I didn't cover Pascal. It just didn't seem worth complaining about a language that has so few serious advocates these days anyway. The six that I mentioned (C++, Perl, JavaScript, C, Ruby, Java) are languages that are in wide use today for solving realistic problems, and that also (with the possible exception of JavaScript) have substantial communities of people who genuinely love them.
I dunno. JS has enough lovable components that I think you could get a nice language out of it if you attacked it with pruning shears. It does, very occasionally, get used outside the web context (though the only example that springs to mind is the AI and automation engine for oolite).
I think your choice of languages was a good one (Python could have done with being in there too, but obviously you aren’t going to dare criticise a language that you haven’t used recently after the Java fiasco).
(If I had to find one thing to flame about Python, it would be the long period when Python 2 had *two* ways to declare classes with different semantics, though one was admittedly deprecated. Even C++ never did that.)
As a matter of fact, the main reason I left Python out was because I’d already done Perl and Ruby, and I feel that those three languages have enough in common and live in sufficiently close to the same space that doing the third was redundant.
I actually learned most of my JavaScript from Douglas Crockford’s excellent book JavaScript: the Good Parts, the thesis of which is indeed that there is a nice language in there. Unfortunately, I think the environments that it runs in are far more crippling than the language’s real deficiencies, even the single global namespace.
Finally, I am not that inexperienced in Java: a while back I wrote and released an open-source parser for the standard query language CQL, which you can get at — I’m not saying it’s a huge and significant piece of code, but it is at least more than a toy (and deployed in several production systems by several different companies).
Java is a ridiculous language because of all the redundancy and twisting you have to do to get around its insane typing.
To go back to an example from the Kingdom of Nouns:
steve.takeoutgarbage().
In a superior language such as Ruby you can make a Garbage module complete with implementation and add it to any class via mixins.
In Java you either have to make an abstract class which is a bad idea, or have a Garbage interface which is slightly better.
Problem is that EVERY class that you want to give the ability to take out the garbage you have to duplicate the code. To make matters worse the same object is now likely to have several references of several types floating all over the place. BAD BAD BAD BAD
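A hedged sketch of the mixin approach being described (the names are hypothetical, echoing the Kingdom of Nouns example): the module carries a real implementation, and any class gains it with a single `include`, with no code duplication and no extra type floating around.

```ruby
# A module with a concrete implementation, mixed into unrelated classes.
module Garbage
  def take_out_garbage
    "#{self.class.name} took out the garbage"
  end
end

class Steve
  include Garbage   # no inheritance slot used, no duplicated method body
end

class Housekeeper
  include Garbage
end

puts Steve.new.take_out_garbage        # => Steve took out the garbage
puts Housekeeper.new.take_out_garbage  # => Housekeeper took out the garbage
```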
Ruby, like Java, is strongly typed, and arguably even more strongly typed. Yet it does not have all the headaches and requires no contortions to get things to work.
Why?
Because the creators of Ruby understand what OOP really means. Hint: messaging.
I doubt Java will ever have acceptable closure support because of typing, but mostly because of checked exceptions, another terrible idea that adds verbosity with little gain.
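To make the verbosity complaint concrete, here is a small hypothetical sketch (the load/loadOrDefault names are invented for illustration): a checked exception in a signature forces ceremony at every call site, even where the caller can do nothing useful with the failure.

```java
import java.io.IOException;

public class CheckedCeremony {
    // A checked exception in the signature...
    static String load(String key) throws IOException {
        if (key == null) throw new IOException("no key");
        return "value-for-" + key;
    }

    // ...forces every caller to catch or redeclare, even when it
    // cannot do anything useful with the failure.
    static String loadOrDefault(String key, String fallback) {
        try {
            return load(key);
        } catch (IOException e) {
            return fallback;
        }
    }
}
```

It is exactly this catch-or-redeclare obligation that complicates closures: a closure body that calls a checked-exception method must either handle it or leak it through its own type.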
In Java patterns exist to overcome its many limitations. In other languages they are simply a guide and not really necessary. The thought of someone using factories in ruby is quite a funny one.
“But frankly, there are no major blunders in the Java language: at least nothing remotely in the level of the stuff you blogged for other langs.”
Is this a joke?
No one can look at the mess that Java produces, yes even by its “masters” and not snicker.
Nice trolling, @qwerty, but too bad that you couldn’t present anything more concrete and articulate than your poor generic attack to static-typed languages (I don’t see much Java-specific content there).
Oh and don’t embarrass yourself with claims like “Ruby is arguably more strong-typed than Java” unless you know what you’re talking about and you can actually follow up with some real argument to back the ‘arguably’. I’m waiting… how exactly is Ruby “more strong-typed than Java”? | https://reprog.wordpress.com/2010/03/17/why-java-is-not-my-favourite-programming-langua/ | CC-MAIN-2015-22 | refinedweb | 1,903 | 67.49 |
Note: You will need to follow the steps in Setting Up Your Environment before the project will build.
I have completed a major re-write of this article using the Enterprise Library rather than the EIF and the stand-alone Logging Application Block. This is available at Get Logging. This article will introduce the EIF and the Microsoft Logging Application Block, and show how they could bring some consistency to an application's logging.
This article is for those who have never encountered the Logging Application Block, those who are looking to evaluate it, and those who have looked at it and thought it seemed like too much trouble. I will provide an overview of what features the Logging Application Block provides, followed by a description of how to get the basics working in your environment.
The article will not explore too deeply the additional features the Logging Application Block provides over the Microsoft Enterprise Instrumentation Framework (EIF).
Many applications, and especially large-scale systems, could benefit from a consistent approach to logging. To help, there are a number of logging libraries available to the .NET developer, such as Log4Net and NSpring. As of April 2003, there has been another alternative, the Microsoft Enterprise Instrumentation Framework (EIF).
The EIF provides a simple way for your code to raise events. More recently, as part of Microsoft's Patterns & Practices initiative, the Logging Application Block has been released. This application block extends the EIF with three new Event Sinks.
It also provides new versions of the Windows Event Log and Windows Management Instrumentation (WMI) Event Sinks, adding facilities such as Log Level. Another new feature is Event Transformation, which allows you to change the contents of an Event before it is stored. For instance, transformations could remove sensitive information or add additional data. By integrating with the Web Services Enhancements (WSE), the Logging Application Block allows log tracing to continue across web service boundaries.
When I first looked at the Logging Application Block, I nearly dismissed it as too much effort. Having persevered, I think the effort was worthwhile, and this article is the result.

Each Event contains a number of "Fields" that are automatically populated with details of the system at the time the Event was raised (e.g., machine name, timestamp, application domain name, etc.). Through the configuration file, it is possible to request that additional Fields be populated. For example, COM+ properties (including Fields such as the current Transaction ID) or security information can be added to the Event's contents.
Each Event type also adds its own specific Fields. For example, the ErrorEvent allows a Severity Field to be set.
The Logging Application Block includes new versions of some of the EIF Events. These provide new Fields and some new functionality (e.g., Log Level). It also adds some completely new Events, such as the MeteringEvent for web services.

Events are raised through "Event Sources". These Event Sources indicate that a particular section of the software is raising an event. They can be used to partition your application however makes sense to you. You might use one for your configuration classes, one for your data access layer, or even a distinct one for every class in your system.
There is a second type of Event Source, the “Request Event Source”. This type is used to indicate that an Event is being raised as part of a particular process (or execution path through the application). For instance, you might use it to indicate that an Event has been raised as part of a "Create New Customer" process:
RequestEventSource createCustomerSource =
    new RequestEventSource("Create New Customer");
One warning about the above examples, creating an Event Source can be slow. Therefore, it is recommended that you create each Event Source only once and hold it in a static reference to be reused.
The Logging Application Block improves the functionality of the Request Event Source. It allows the RequestTrace to work across calls to web services!
Whilst Events originate from Event Sources, they terminate at Event Sinks. An Event Sink receives Events and is responsible for persisting them. The EIF provides three Event Sinks:
TraceEventSink
LogEventSink
WMIEventSink
As mentioned in the Background section of this article, the Logging Application Block modifies the latter two and adds three more Event Sinks. You can always write your own custom Event Sink as well.
Viewing the logged events requires a reader for each particular data store. The Event Viewer can be used for the Windows Event Log. A sample C# project, TraceViewer, is provided with the EIF to open Windows Event Trace log files. The WMI events can be seen through a WMI Event Viewer.
Hopefully, the section above gives you an insight into the features provided by the EIF and the Logging Application Block. What that section doesn’t describe is how Event, Event Source and Event Sink objects are made to interact with each other. That “plumbing” is provided by the EIF based on settings in a configuration file.
There are three more concepts to understand: Event Categories, Filters and Filter Bindings. These are not defined at compile time, but at run time, when the EIF reads the configuration file. Therefore, if you need a new Event Category, you can simply change a configuration file. It is this flexibility that makes the EIF so powerful.
Events can be grouped together into Event Categories. By creating categories, you will be able to independently turn logging on or off for particular groups of Events.
Often, one category you will create is an “All Events” category. You may want another category containing only audit events, another that contains trace messages, etc. Note that an event can be a member of more than one category.
For instance:
<eventCategory name="All Events"
description="A category that contains all events.">
<event type="System.Object" />
</eventCategory>
This fragment defines an Event Category called "All Events". It indicates that all Events of type System.Object, or Events derived from System.Object, should be members of this category. Since every class is derived from System.Object, this category automatically includes all Events.
A Filter determines which Event Categories are routed to which Event Sinks. For instance, you may want to have all trace messages sent to the TraceEventSink and all audit messages sent to both the TraceEventSink and the LogEventSink.
To implement this, you would first create two Event Categories, one containing the trace events and one containing the audit events, and then define a Filter that routes them differently:
<eventCategories>
<eventCategory name="Trace Events">
<event type="Microsoft.ApplicationBlocks.Logging.Schema.TraceMessageEvent,
... " />
</eventCategory>
<eventCategory name="Audit Events">
<event type="Microsoft.ApplicationBlocks.Logging.Schema.AuditMessageEvent,
... " />
</eventCategory>
</eventCategories>
<filters>
<filter name="filterTraceAndAuditDifferently">
<eventCategoryRef name="Trace Events">
<eventSinkRef name="traceSink"/>
</eventCategoryRef>
<eventCategoryRef name="Audit Events">
<eventSinkRef name="traceSink"/>
<eventSinkRef name="logSink"/>
</eventCategoryRef>
</filter>
</filters>
A Filter Binding links an Event Source to one or more Filters. This allows you to route Events raised by different parts of your system to different Filters.
For instance, if you are having a problem with a single process in your system, you could create a single Filter Binding between the Request Event Source that wraps that process and a Filter that directs Events in the “All Events” Event Category to the TraceEventSink. All other Events would be ignored. For example:
<eventSources>
<eventSource name="Create New Customer" type="request"/>
</eventSources>
<filters>
<filter name="Trace All">
<eventCategoryRef name="All Events">
<eventSinkRef name="traceSink"/>
</eventCategoryRef>
</filter>
</filters>
<filterBindings>
<eventSourceRef name="Create New Customer">
<filterRef name="Trace All" />
</eventSourceRef>
</filterBindings>
This fragment will ensure that all events raised as part of the "Create New Customer" process are sent to the traceSink.
First off, your system must at least meet the following requirements:
Here are the steps I followed to create a development environment:
enabled
“true”
Some notes on this:
Microsoft.Web.Services
Microsoft.Web.Services2
Perhaps the best way to get to grips with the Logging Application Block is to try it. The download for this article contains the code for a sample Windows Forms application (LoggingBlockInvestigator.exe) that should allow you to experiment. You will need to edit its EnterpriseInstrumentation.config file to replace the PublicKeyToken attributes with the value identified when you were building the Logging Application Block (perform a Search and Replace of the string 25ffac55882d4eb).

The sample demonstrates two techniques for raising an Event through the default "Application" Event Source. The first technique explicitly creates an Event object and initializes it with a message. The second, more compact, technique creates the Event implicitly.
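Side by side, the two techniques look like the sketch below. This is a sketch only: it assumes the sample project's references to the EIF and Logging Application Block assemblies, and that TraceMessageEvent exposes a Message property like the other message Events, so it will not compile standalone.

```csharp
using Microsoft.ApplicationBlocks.Logging.Schema;
using Microsoft.EnterpriseInstrumentation;

// Technique 1: create and populate the Event object explicitly,
// then ask an Event Source to raise it.
TraceMessageEvent explicitEvent = new TraceMessageEvent();
explicitEvent.Message = "Logged via the explicit technique";
EventSource.Application.Raise(explicitEvent);

// Technique 2: the static Raise overload creates the Event for you.
TraceMessageEvent.Raise("Logged via the implicit technique");
```

Both calls produce equivalent Events; the explicit form is useful when you need to set additional Fields before raising.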
If you build and run the LoggingBlockInvestigator.exe application, then close it down, you will find two new entries in your Windows Event Log, in the Application section. If you look at the details, you will see an XML document containing the appropriate messages.
With the EIF, Microsoft provides a simple trace log viewer. If you run [EIF]\Samples\Trace Viewer\TraceViewer.exe and open the TraceLog.Log file, you should see the two events listed there as well.
By default, all messages are marked as raised by the “Application” Event Source. The code in the sample application for the “Log From My Source” button handler uses an explicit source instead:
private static EventSource eventSource = new EventSource("My Source");
private void LogFromMySource_Click(object sender, System.EventArgs e)
{
TraceMessageEvent.Raise(eventSource, "Traced from my source");
}
Run the sample application and press the "Log From My Source" button, and the resulting Event will record "My Source" as its Event Source. The handler behind the "Log With Request Trace" button uses two nested Request Event Sources:
private RequestEventSource requestEventSource1 =
new RequestEventSource("My Request Source For Tracing");
private RequestEventSource requestEventSource2 =
new RequestEventSource("My Nested Request Source For Tracing");
private void LogWithRequestTrace_Click(object sender, System.EventArgs e)
{
using (RequestTrace request = new RequestTrace(requestEventSource1))
{
TraceMessageEvent.Raise(eventSource, "My first request");
using (RequestTrace requestNested = new RequestTrace(requestEventSource2))
{
TraceMessageEvent.Raise("My second request");
}
}
}
If you press the “Log With Request Trace” button, the following Events are saved into the TraceLog.Log file:
TraceRequestStartEvent
TraceLogSink
TraceNestedRequestStartEvent
TraceNestedRequestEndEvent
TraceRequestEndEvent
As well as using the provided Event Sinks, you can create your own. CustomEventSink.cs in the sample application shows how simple this is. The class is derived from Microsoft.EnterpriseInstrumentation.EventSinks.EventSink and the Write method is overridden. Put a break point in the Write method, then press the ” button, shows how to raise the Log Level:
AuditMessageEvent auditErrorEvent = new AuditMessageEvent();
auditErrorEvent.Message = "Logged at error level";
auditErrorEvent.EventPublishLogLevel = (int)LogLevel.Error;
EventSource.Application.Raise(auditErrorEvent);
It should be noted that the Logging Application Block has not changed the functionality of the TraceEventSink, so it still logs all Events routed to it, regardless of Log Level.
The last few sections have introduced the code that you can add to your application to start logging through the Logging Application Block. There is one more very important task... configuration. Without configuring the EIF, your Events will never reach an Event Sink.
The EnterpriseInstrumentation.config file details how Events, Event Sources and Event Sinks are linked. A skeleton of this file can be generated for you. You must ensure your project references the System.Configuration.Install assembly and that it contains a class definition similar to this:
System.Configuration.Install
[RunInstaller(true)]
public class LoggingBlockInvestigatorInstaller : ProjectInstaller {};
Then, from the command line, run:
installutil MyApplication.exe
This will generate a functioning EnterpriseInstrumentation.config file, pre-populated with all the Events, Event Sources and Event Sinks that could be found in your application and its referenced assemblies. It will also contain some default Categories, Filters and Filter Bindings. You can then experiment with creating new Categories, Filters and Filter bindings.
If you look at the sample application find that the CustomEventSink throws an exception as it has been coded to expect the parameter. This exception is caught by the EIF and logged to the Windows Event Log. Note: whether the exception is logged or not is controlled by the internalExceptionHandler attribute on the declaration of the Event Source in the configuration file.
MyParam
CustomEventSink
internalExceptionHandler
Event Sink parameters can be used for whatever you like. The Event Sinks provided with the EIF and the Logging Application Block use them for such things as:
What about the future? Microsoft in partnership with Avanade have been developing the next evolution of the Patterns & Practices Application Blocks. Enterprise Library is due for general release sometime early in 2005. This library will bring together a number of the existing Application Blocks, including Logging.
You can find more information about the new library at GotDotNet. Looking at the documentation currently available, any experience you gain with the current Logging Application Block will make understanding the new library simpler. However, there will not be a simple upgrade path. Some of the new features include:
Update: The Enterprise Library is now available.
Hopefully, this article provides a good introduction to the benefits and features of the Logging be with Log4Net, NSpring, the Logging Application Block, System.Diagnostics.Trace, or a home grown framework).
System.Diagnostics.Trace
Finally, you may want to wait for the new Enterprise Library. In which case, the concepts introduced in this article should help you to get up to speed with it. | https://www.codeproject.com/articles/9081/getting-started-with-the-logging-application-block?msg=1079502 | CC-MAIN-2017-13 | refinedweb | 2,105 | 55.54 |
I'm desperate to try and solve this. I forgot to do this lab during mid semester in my computer science class and for some reason i'm drawing complete blanks. This is the last lab I need to complete for full credit. I put in some of the basic variables but really need help completing this.
I don't have the text files with me.
/* * Create a program that will calculate the min, max and average from a * data set(s). The purpose of the lab is to implement a loop construct * along with the ability to perform file I/O. * 1) Place C4L2_Data1.txt, C4L2_Data2.txt, and C4L2_Data3.txt in the * root directory of your project. These are your input files. * 2) Each file has a list of numbers which is a combination of * integer and floating point values. * 3) Each file has the same values, but in different order. * 4) Since it is likely you wouldn't know the count of values in an input * file I suggest you use the while or do-while loop. * 5) Your program should read the data line-by-line evaluating each value * to determine if it is the min or max while also accumulating the total * which will be used to calculate the avergage upon reading all values. * 6) After you execute your program using C4L2_Data1.txt, run it again * using the other two input files. * 7) The purpose of having the same data amongst three files is to test * the logic of your program to handle the case(s) where the min and max * could be anywhere in the file. * 8) After you execute your program with the three files you should always * obtain the same min, max, and average. 
If your results differ between * the input files then you have a logical error in your program */ import java.io.File; import java.util.Scanner; import java.text.DecimalFormat; public class C4L2_Statistics { public static void main(String[] args) throws Exception{ double num; //Temporary var representing a single value from the file double min; //The minimum value in the file double max; //The maximum value in the file double average; //The average of the values int count = 0; //Keeps track of the number of values in the file double total = 0; //The accumulator DecimalFormat formatter = new DecimalFormat("#,##0.00"); System.out.println("min="); System.out.println("max="); System.out.println("average="); } } | http://www.javaprogrammingforums.com/whats-wrong-my-code/37403-statistics-lab-loop-file-io-need-help.html | CC-MAIN-2016-30 | refinedweb | 401 | 63.7 |
Code:
Dim objContext As COMSVCSLib.ObjectContext
Then a bit later (the first actual line to debug)
Code:
Set objContext = GetObjectContext
This isn't doing much of anything. objContext is not populated with anything..
Dim objContext As COMSVCSLib.ObjectContext
Set objContext = GetObjectContext
Last edited by stin; November 11th, 2008 at 09:33 AM.
Reason: added clarification
Seeing a line of code like this suggests to me that there should be a COM Component somewhere that's probably registered and being used under MTS.
If you find my answers helpful, dont forget to rate me
Could you expand on this a bit? I'm not really very hip to the DLL thing yet.
If you go into the Management console, you should find a number of MTS Package's. If you run the code that calls this Dll, watch the Packages and see which one starts running. If you have load's it might be tricky to see which one it is, however, you should be able to work through the packages to work out which one it is.
I had to set up my console, but I didn't see MTS in the list of sanp-ins to add. I added 'Component services' Which has a 'Distributed Transaction Coordinator' in it, but I didn't see anything else. Any other ideas??
in that case it could just be a component under component services. Are there any components that look as though they may have been Application based as opposed to system Based.
I saw 1 that started up after I started the application, called w3wp.exe. (under 'Running Processes') That particular program though is used for uploading files.
Here are the components I have:
.Net Utilities
COM+ Explorer
COM+ QC Dead Letter Queue Listener
COM+ Utilities
IIS Out-Of-Process Pooled Application
Microsoft.SQLServer.MSMQTask
SA-FileUp (This is the one that started up)
System Application
Under System Application -> Components there is COMSVCS.TrackerServer, which, at least to me, appears to be the one I'm having trouble with. There isn't much I can do here in the way of configuration.
On the System Application I can right click and get a properties page and in the 'Security' tab there are options available to change for 'Authentication Level for Calls'. I have no idea what that does, but I'll look at it. The current setting is on 'Packet Privacy'
Thank you
As a follow up - I found this MS Knowledge base article -.
There are 4 resolutions:
The MTS Transaction Mode for the component is set to a value other than 0 - NotAnMTSObject. I tried all available options, nothing worked. Currently set to 1
Visual Basic 6 Service Pack 3 (SP3) or later is installed on the computer.
I'm using SP 6
The authenticating user has the appropriate permissions. For additional information, click the article number below to view the article in the Microsoft Knowledge Base:
259725 PRB: Error Occurs When You Debug a COM+ Component Under the Visual Basic IDE with an ASP Client
I went through the steps here and nothing
The component is deployed in a COM+ application.
Not really sure how to check this.
I realized just now that all these solutions are for allowing me to debug. I'm able to debug just fine. The issue is that COMSVCSLib isn't initializing or something.
Last edited by stin; November 13th, 2008 at 03:11 PM.
Hmm... If you've not got MTS showing chances are you have no apps using it, so anything to do with MTS you can ignore.
I notice that you have .Net Installed, do you have .Net available as a developer tool? if you do , you might be able to use .Net Instrumentation to get process information etc that might help.
If you could provide a quick demo on how to do that, that would be awesome.
ok, first thing you will need to do is attach your Dll to the .NET app via the Interop. you then need to create a listener and a trace object.
For both these classes you will need to reference the System.Diagnostics namespace. The program will also need to be built in Debug mode.
Code:);
My thinking here is that through the callstack option, it might gleen information as to what is getting called.
You will probably have to play with it a bit as it's been a while since I did this, the syntax above is C#);
View Tag Cloud
Forum Rules | http://forums.codeguru.com/showthread.php?464973-DLL-Debuggin&p=1781051 | CC-MAIN-2015-27 | refinedweb | 751 | 65.22 |
Hi Dan!
> -----Original Message-----
> From: news [mailto:news@sea.gmane.org] On Behalf Of Daniel Br?uen
> Sent: Thursday, August 23, 2007 10:23 PM
> To: users@jackrabbit.apache.org
> Subject: Re: Repository Lock Problem in JEE-Environment
>
> Hi!
>
> Marcel May wrote:
> > Can you use |@PreDestroy annotated method ?
>
> Yes, that's worth a try, at least for stateful session beans
> (@PrePassivate, @PostPassivate).
Be aware, that you should not rely on that these methods are really called
in any case. One example would be a server crash, but in general you should
be careful with that. Just ask yourself the question how serious it would be
if your bean dies without having executed the code in here.
>
> How do other folks use Jackrabbit in a JEE Environment, especially in a
> session bean? Do you acquire the repository in every single method call
> via JNDI, or do you store the repository handle in a member variable of
> the bean?
>
> If I get it correctly there can be only one single bean accessing the
> repository, since the repository can talk to only one single
> application. As a consequence there would be a locking-problem if the
> container decides to create another bean to handle requests, which is odd.
I haven't tested it, but the @Resource annotation should work to access
anything that's in JNDI. For example if you hooked your Repository into JNDI
under "java:jcr/local" then you should be able to get the handle like that:
public class MyBean() {
@Resource (name="jcr/local") private Repository repository;
(...)
}
The container now looks up the Repository in JNDI and injects it into the
bean when it's created. This should work with managed beans in EJB3 (i.e. SL
and SF beans), I doubt it's working e.g. for helper classes.
Hope it helps. Best regards
Hendrik
>
> cheers,
> Dan | http://mail-archives.apache.org/mod_mbox/jackrabbit-users/200708.mbox/%3C20070823160102.9F5564DA324@nike.apache.org%3E | CC-MAIN-2016-26 | refinedweb | 308 | 66.13 |
Created on 2007-01-31.05:15:26 by cgroves, last changed 2008-02-24.04:17:32 by pjenvey.
From laupke's bug #1534547
C:\>python
Python 2.4.4 (#71, Oct 18 2006, 08:34:43) [MSC v.1310 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os.path
>>> os.path.normcase('C:\\Temp')
'c:\\temp'
>>>
C:\>jython
Jython 2.2a2952 on java1.5.0_06 (JIT: null)
Type "copyright", "credits" or "license" for more information.
>>> import os.path
>>> os.path.normcase('C:\\Temp')
'C:\\Temp'
>>>
Charles's comment in the original bug report:
"""I'm not sure how to detect if a filesystem is case sensitive in java, so I opened a new bug(#1648449) for normpath."""
If that's the case I'd recommend following approach.
1) First check how CPython does this. If their solution is possible also in Jython take it into use.
2) I think it would be pretty ok if normcase would work correctly in all major platforms (Posix, Mac, Windows) where the correct behaviour is known.
3) For other platforms I see following two possibilities.
3.1) Just leave the path unchanged and document this behaviour.
3.2) When installing Jython or running it for the first time test the casesensitivity by creating a file like 'JYTHONTEST' into system temp directory and trying to read it with name 'jythontest'. If reading succeeds set some property indicating that platform is caseinsensitive to true and if reading fails (or there's any exception anywhere) set it to false.
Following only slightly hackish implementation seems to fix this issue at least based on manual tests on Windows. I'll submit this as a part of a patch for os.path.abspath bug unless separate patches are somehow better. Before that I anyway need to get some automated tests done first.
_CASE_INSENSITIVE = None
def _case_insensitive_system():
global _CASE_INSENSITIVE
if _CASE_INSENSITIVE is None:
path = abspath(os.curdir)
_CASE_INSENSITIVE = samefile(path.lower(), path.upper())
return _CASE_INSENSITIVE
def _tostr(s, method):
if isinstance(s, basestring):
return s
import org
raise TypeError, "%s() argument must be a str or unicode object, not %s" % (
method, org.python.core.Py.safeRepr(s))
def _tofile(s, method):
return File(_tostr(s, method))
def normcase(path):
path = _tofile(path, "normcase").getPath()
if _case_insensitive_system():
path = path.lower()
return path
Problem is javaos.py always unconditionally loads javapath
Line 31: import javapath as path
This despite the comment at the top
Line 6: - os.path is one of the modules posixpath, ntpath, macpath, or dospath
To 'fix' aka to make os.path act like the native underlying OS-isms, rather than 'java-isms', replace the unconditional import statement with a conditional one:
...
import java
from java.io import File
from UserDict import UserDict
# Load the right flavor of os.path
_osname = None
_jythonospath = java.lang.System.getProperty('jython.os.path')
if _jythonospath:
_jythonospath = _jythonospath.lower()
if _jythonospath in ['java', 'dos', 'mac', 'nt', 'posix']:
_osname = _jythonospath
if _osname == None:
_osname = java.lang.System.getProperty('os.name').lower()
if _osname == 'nt' or _osname.startswith('windows'):
import ntpath as path
elif _osname.startswith('mac') and not _osname.endswith('x'):
import macpath as path
elif _osname == 'dos':
import dospath as path
elif (_osname == 'posix') or ((File.pathSeparator == ':') and ('vms' not in _osname) and (not _osname.startswith('mac') or _osname.endswith('x'))):
import posixpath as path
else:
import javapath as path
class stat_result:
...
This auto-detects the OS unless explicitly overridden. The hunt sequence:
IF System.property jython.os.path = 'java', 'dos', 'mac', 'nt' or 'posix'
use it
ELSE
osname = java.lang.System.getProperty('os.name').lower()
IF osname startswith 'windows'
import ntpath as path
ELSEIF osname startswith 'mac'
import macpath as path
ELSEIF osname startswith 'dos'
import dospath as path
ELSEIF pathsep == ':' and not VMS and (osname != Mac(classic) OR osname endswith 'x')
import posixpath as path
ELSE
import javapath as path
ENDIF
ENDIF
That autodetect for Unix looks funky, but it's right.
If the path separator is not ":", it's not Unix
If it's VMS, it's not Unix
If it's not MacOS OR it ends with 'X', it's Unix
That last part's a head bender at first.
MacOS falls thru.
MacOSX gets trapped as Unix
AIX, HP-UX, Linux and Irix are trapped as Unix
Solaris and SunOS are trapped as Unix
That last one because Solaris is not Mac; If you've got a ":" path separator, you're probably Unix, MacOS and VMS being the (known) exceptions
Voila.
Run with -v and you can see what gets loaded is what you'd expect.
And for those wondering if ntpath or posixpath is the right answer instead of javapath, yes, it is. It has to be. Java doesn't hide all underlying file system details; new File("C:\\foo.bar") is what's required to work on Windows and so forth. Java only generalized the API function calls, not the 'data' like filenames or pathing and so forth, so javapath is interesting but, ultimately, not the right answer consistent with other JVM behavior (at least, not for nt, mac, posix and dos).
NOTE: My posted fix corrects the broader problem, javapath is always used instead of the appropriate os-specific os.path implementation. This solves more than just normcase(); isabs(), ismount(), split() and so forth, not to mention ntpath has functions not even defined in javapath (e.g. splitunc).
First of all I have to say I'm such a newbie with Jython internals that what I write here may not be 100% true. Anyway, I've understood that javapath acts as any other platform specific os.path implementation -- in Jython's case the platform is not the actual OS but the JVM. I believe the line in javaos that says "os.path is one of the modules posixpath, ntpath, macpath, or dospath" is just copied from CPython version and not updated accordingly.
The problem with Jython is that even though the JVM is the platform people using Jython expect it works correctly (i.e. similarly as CPython) in any particular OS. The idea to use OS specific path modules is of course one possibility to solve this problem but unfortunately using CPython modules without modifications is probably not an option. CPython modules like posixpath and ntpath are heavily dependent on os.stat which, due to JVM limitations, only sets file size and mtime correctly in Jython. Thus the current approach is probably the simplest because java.io.File provides most of the functionality needed to implement CPython compatible os.path. There are cases where Java and CPython implementations differ (e.g. os.path.abspath normalizes the returned path but java.io.File.getAbsolutePath doesn't) but in my opinion they can be handled inside javapath.
>javapath acts as any other platform specific os.path implementation
Yes, I thought about that. The problem is the JVM is both "a tool" _and_ "a platform". Depending on your particular definition and viewpoint, javapath is / not the right answer.
I can certainly see the rationale to say "Jython runs on the Java Platform" and thus os.path should always be javapath.
However that creates several problems, as it's inconsistent with the view "Jython runs on [Windows/Unix/...]".
Fundamentally, the JVM itself is skitsofrenic(sp?).
In this case, I think it's more correct to make os.path the same as CPython does - the 'native' flavor (ntpath, posixpath, etc) with an option to use a 'java-centric' flavor as an alternative.
This seems historically consistent (if you can use such a phrase regarding this), witness file system notation (values for that String parameter to the File constructor). If the JVM were truly its own platform, you'd use one notation regardless of the underlying system (see Cygwin for an example of how to ignore the native notation in favor of your own dogmatically consistent world view; right or wrong is irrelevant, at least it picks a single world view and sticks to it).
More significantly (IMO), using the 'native' *path.py is in line with recent trends in Sun's JVM (1.4 and, esp, 1.5+). Witness the XP and GTK visual themes for Swing, the getFreeDiskSpace API (finally) in 1.6 (which one could argue merely continues the java.io not-hiding-the-underlying-platform pattern since Java 1.0) and so forth.
And philosophy aside, here's a more concrete example: os.path.splitunc()
Should that exist in Jython's os.path module?
CPython says yes, it exists, on Windows (and nowhere else).
What would a Jython user expect?
Is a Jython developer building a Windows application using Java?
Or is a Jython developer building a Java application that happens to run on Windows?
The true underlying crux of the matter.
IMO there are far more people interested in writing Windows apps that happens to run in the JVM than vice versa.
And CPython fundamentally takes this approach.
One of the things beloved of Python: you *can* write portable code, but you don't *have* to.
This is one of Python's huge *strengths*, and Java's weakness.
Want to access the registry on Windows? Curses on Unix? ... Go ahead. Python not only allows it, Python will help you.
You're not required to write platform-specific code, but you're also not only not-precluded, you're outright aided and abetted. If you so choose.
If you so choose.
That's one hugely compelling benefit I see in Python vs. Java.
Python gives you the power to choose, what best fits YOUR needs.
Java dictates and limits your choices. You're not allowed to write non-portable code, because that would be...bad. So not only does Java avoid helping you, it actively hinders you.
[Witness the GetFreeDiskSpace scenario...]
It's only in recent years that Java's started to loosen up and (start to) embrace The Python Way <g>
This is a long winded way of saying yes, it's reasonable for some people to want to code to the os.path=javapath model, but it's more useful and expected for os.path=nativepath as a default while not precluding those who want the javapath model (hence the reason I added the System.getProperty() check).
And it's not only more useful and expected, but os.path=nativepath is also the right answer because it's consistent with The Python Way -- portability is nice, but not at the cost of freedom and functionality. If you wanted your functionality constrained and dictated, you can always use Java :->
I agree dividing javapath into separate modules would be The Right Thing but in the end that's pretty much an implementation detail and doesn't really matter for os.path users assuming that everything works.
Because dividing javapath would require some extra work I'm not personally planning to do it (at least at the moment) as I'm more interested in getting actual bugs in javapath fixed. I'm also planning to write comprehensive unit tests so that further optimization is possible afterwards.
This of course doesn't prevent others from making bigger refactoring for javapath even now. If you think dividing is something that must be done the best way to actually get it done is creating a patch. Setting up Jython development environment is pretty simple using guide at [1] -- I just set it up last weekend myself.
[1]
>I agree dividing javapath into separate modules would be The Right Thing
I didn't say that, or at least didn't mean to, though it did cross my mind.
Problem: Today, import ntpath does not work, so even if you wanted access to the various non-javapath goodies, you can't do it. And to be honest, I'm not sure why -- import posixpath and so forth work fine in CPython 2.4.1; need to try it in 2.2 but methinks this is a bug in Jython. Will explore.
>but in the end that's pretty much an implementation detail and doesn't
>really matter for os.path users assuming that everything works.
And that's my point. javapath provides a "Java-centric" view of os.path.
Is that what people expect?
Again, this comes back to the fundamental question:
Are people using Java to write Windows/AIX/Mac/... applications,
or are people using Windows/... to write Java applications?
os.path=javapath is right for the latter, and wrong for the former.
I am in the former camp. I write code that routinely has to run on AIX, HP-UX, Solaris, Linux and Windows, so portability is a good thing, but I also prefer Python's model over Java's -- I like having access to platform-specific details, when I so choose. I've dealt with os.name and sys.platform and the rest often enough, when needed, and find the Python portability model to be simple and effective - and equally simple and unobtrusive when I *don't* care about writing portable code.
Assuming import ntpath and import javapath not working today are simply bugs to fix and not a feature, 'twould appear best to make os.path=nativepath:
1. Default behavior is comparable to CPython
2. Default behavior is what people using-Java-to-write-Win/etc-apps expect and prefer
3. People using-Win/etc-to-write-Java-apps can get _their_ desired behavior by 'import javapath'
4. This also happens to work much like IronPython; IP provides its own libraries (.NET Framework), but if you point it to the CPython libraries *you get CPython library behavior*. IronPython actually provides *better* Python compatibility than Jython does using os.path=javapath.
#4 particularly bothers me.
There is no 'spec' for Python; the closest thing we have is CPython's documentation and, to a lesser degree, CPython's implementation (including source code comments). IronPython does it right, IMO - a module with the same name as CPython should act like CPython (as much as possible). Would you find it bothersome or troubling if IronPython provide the string or _winreg modules with different behaviors and footprints than CPython's? What if IronPython provided a module called "re" that supported regular expressions, but not the same notation as CPython's? Would that bother you?
Jython should be compatible with CPython as much as possible.
Provide *extensions*, yes, excellent idea, but not gratuitous incompatibilities.
I see no problem - quite the opposite - with Jython provide *additional* modules, so whether it's called javapath or jython.os.path or whatever is, IMO, a *good* idea.
But Jython should not provide the same symbols as CPython with different behavior.
>If you think dividing is something that must be done
I don't. I have no qualms with javapath as-is, actually.
Well, it'd be nicer if it was _richer_, actually, but no, I see no need to change or splinter javapath.
Just don't call it os.path.
>Setting up Jython development environment is pretty simple using guide at [1]
Thanks for the tip, I hadn't seen that but yes it doesn't look too painful.
The os.path fix is quite easy actually, since it's just 1 source file.
I'll submit a patch for it as per [1]
I have a patch for this but it depends on so I'll wait untill that one is applied before submitting it.
The patch is implemented so that it does
path = abspath(os.curdir)
_CASE_INSENSITIVE = samefile(path.lower(), path.upper())
and normcase returns path in lowercase if _CASE_INSENSITIVE is true. There doesn't seem to be any direct way to ask are paths case-insensitive from the JVM so this approach should be ok.
Patch is up at
fixed in r4171: we use the platform dependent path module (ntpath on
Windows) | http://bugs.jython.org/issue1648449 | CC-MAIN-2017-30 | refinedweb | 2,635 | 66.33 |
html - Internal links on a specific page have stopped working
I'm trying to link to internal 'Pages' that are created within the Shopify back end, but the button link on this specific page is broken: () It used to work and I haven't made any changes to it.
'Pages' can be linked elsewhere within the app but just not on that specific section for some reason. The button always links back the page that you are on for some reason.
Html code:
{% if block.settings.button_text != blank %} <a href="{{ block.settings.button_url }}" class="standard__cta {{ block.settings.button_style }} {{ block.settings.button_color }}" data- {{ block.settings.button_text }} </a> {% endif %}
Schema code:
{ "type":"url", "id":"button_url", "label":"Link" },
A href in Inspect mode:
>. | https://e1commerce.com/items/internal-links-on-a-specific-page-have-stopped-working | CC-MAIN-2022-40 | refinedweb | 120 | 58.28 |
Hi, On Sun, Jan 20, 2002 at 12:09:10PM +0000, Miquel van Smoorenburg wrote: > How about turning off all non-standard-vi features by default > such as "autoindent" and the especially annoying "filetype plugin on" > and surrounding all vim-features with > > if v:progname != "vi" > s00perd00per vim/gvim options > endif Bad idea, i don't like typing m after vi to get the editor. > I did this on my workstation, if I call vim as "vi" I get the > bog-standard "vi", if I call it as "vim" I get color, syntax higlighting, > autoindent, "filetype plugin on", viminfo, showcmd, autowrite, etc etc Maybe we could make 2 or more different example configurations, and put them into /usr/doc/vim. bye -- May the source be with you! | https://lists.debian.org/debian-devel/2002/01/msg01712.html | CC-MAIN-2017-13 | refinedweb | 127 | 58.96 |
3 docs contain the specific parameters for stochastic variables. (Or use
??
shape keyword in the call to a stochastic variable creates multivariate array of (independent) stochastic variables. The array behaves like a NumPy array when used like one, and references to its
tag.test_value attribute return NumPy arrays.
The
shape,) lambda_2 = pm.Exponential("lambda_2", 1)()[0]3]
A good starting thought to Bayesian modeling is to think about how your data might have been generated. Position later )
<img src="" width = 700/>
PyMC = =3=80 - tau)] plt.bar(np.arange(80), data, color="#348ABD") plt.bar(tau - 1, data[tau-1], color="r", label="user behaviour changed") plt.xlim(0, 80); figsize(12.5, 5) plt.title("More example of artificial datasets") for i in range(4): plt.subplot(4, 1, i+1) setup3 as pm # The parameters are the bounds of the Uniform. with pm.Model() as model: p = pm.Uniform('p', lower=0, upper=1)
Applied interval-transform to p and added transformed p_interval_ to model. =" % \ np.mean(delta_samples < 0)) print("Probability site A is BETTER than site B: %.3f" % \ np.mean(delta_samples > 0))
Probability site A is WORSE than site B: 0.208 Probability site A is BETTER than site B: 0.7923).tag.test_value
array(0.5600000023841858) expected to see();
N = 10 x = np.ones(N, dtype=object) with pm.Model() as model: for i in range(0, N): x[i] = pm.Exponential('x_%i' % i, (i+1)* tempature .legend(loc="lower left");
Adding a constant term $\alpha$ amounts to shifting the curve left or right (hence why it is called a bias).
Let's start modeling this in PyMC./_tau), label="$\mu = %d,\;\\tau = %.1f$" % (_mu, _tau), color=_color) plt.fill_between(x, nor.pdf(x, _mu, scale +3.
# artificial dataset which we can simulate. The rationale is that if the simulated dataset does not appear similar, statistically, to the observed dataset, then likely our model is not accurately represented the observed data.
Previously in this Chapter, we simulated artificial dataset, |$");
from IPython.core.display import HTML def css_styling(): styles = open("../styles/custom.css", "r").read() return HTML(styles) css_styling() | http://nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter2_MorePyMC/Ch2_MorePyMC_PyMC3.ipynb | CC-MAIN-2018-05 | refinedweb | 353 | 54.08 |
Red Hat Bugzilla – Bug 881741
Restarting l3_agent causes Stderr: 'RTNETLINK answers: Invalid argument\n'
Last modified: 2015-09-12 18:54:56 EDT
Description of problem:
When restarting l3_agent, it causes the following error.
----
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'link', 'set', 'qg-8dd02d72-d5', 'netns', 'qrouter-08ad50c6-4081-45f8-954d-b76595a8bcb9']
Exit code: 2
Stdout: ''
Stderr: 'RTNETLINK answers: Invalid argument\n'
----
Version-Release number of selected component (if applicable):
# rpm -qa | grep quantum
python-quantum-2012.2-1.fc18.noarch
openstack-quantum-openvswitch-2012.2-1.fc18.noarch
python-quantumclient-2.1.1-0.fc18.noarch
openstack-quantum-2012.2-1.fc18.noarch
Additional info:
I found the same error report in the upstream.
I checked that the patches in that report has been already applied, but I still see the error. There may be a different root cause.
Hi,
I think that the reason for this is that the interface already exists. Can you please do the following:
1. Can you please clear the log
2. Can you please reboot
Does the message appear?
Thanks
Gary
Yes, after rebooting the server, l3_agent worked well again.
But as seen in the result below, I think it's still a problem that I cannot restart l3_agent without rebooting the server.
Here's the detailed result.
1. Starting from a router without any networks.
2. Create an external network and set it as a gateway.
$ tenant=$(keystone tenant-list | awk '/service/ {print $2}')
$ quantum net-create --tenant-id $tenant public01 --provider:network_type flat --provider:physical_network physnet1 --router:external=True
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 2c614b55-2fd0-45ed-9190-018005f5199d |
| name | public01 |
| provider:network_type | flat |
| provider:physical_network | physnet1 |
| provider:segmentation_id | |
| router:external | True |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | 2ff618d48d054638a9384e64b04d49b5 |
+---------------------------+--------------------------------------+
$ quantum subnet-create --tenant-id $tenant --name pub_subnet01 --gateway 10.64.201.254 public01 10.64.201.0/24 --enable_dhcp False
Created a new subnet:
+------------------+--------------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------------+
| allocation_pools | {"start": "10.64.201.1", "end": "10.64.201.253"} |
| cidr | 10.64.201.0/24 |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 10.64.201.254 |
| host_routes | |
| id | 5722cb45-b3d4-4c1a-9ef3-46ce4e45acf8 |
| ip_version | 4 |
| name | pub_subnet01 |
| network_id | 2c614b55-2fd0-45ed-9190-018005f5199d |
| tenant_id | 2ff618d48d054638a9384e64b04d49b5 |
+------------------+--------------------------------------------------+
$ quantum router-gateway-set router01 public01
Set gateway for router router01
$ quantum port-list
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| 3851239a-5b22-41e1-92fa-0832121a6c13 | | fa:16:3e:85:fb:34 | {"subnet_id": "5722cb45-b3d4-4c1a-9ef3-46ce4e45acf8", "ip_address": "10.64.201.1"} |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
3. No error in l3-agent.log and the router's IP is reachable from an external server.
[From external server]$ ping 10.64.201.1
PING 10.64.201.1 (10.64.201.1) 56(84) bytes of data.
64 bytes from 10.64.201.1: icmp_seq=2 ttl=63 time=3.19 ms
64 bytes from 10.64.201.1: icmp_seq=3 ttl=63 time=0.517 ms
4. Restart l3_agent and see the errors.
$ systemctl restart quantum-l3-agent.service
l3-agent.log
----
RuntimeError:
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-08ad50c6-4081-45f8-954d-b76595a8bcb9', 'ip', '-o', 'link', 'list']
Exit code: 1
Stdout: ''
Stderr: 'seting the network namespace failed: Invalid argument\n'
...
RuntimeError:
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-08ad50c6-4081-45f8-954d-b76595a8bcb9', 'sysctl', '-w', 'net.ipv4.ip_forward=1']
Exit code: 1
Stdout: ''
Stderr: 'seting the network namespace failed: Invalid argument\n'
...
RuntimeError:
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'link', 'set', 'qg-3851239a-5b', 'netns', 'qrouter-08ad50c6-4081-45f8-954d-b76595a8bcb9']
Exit code: 2
Stdout: ''
Stderr: 'RTNETLINK answers: Invalid argument\n'
...
-----
And now, ping from an external server _fails_.
[From external server]$ ping 10.64.201.1
-- No response.
5. Reboot the server and start quantum.
Now no error is seen in l3-agent.log, and ping from an external server works again.
[From external server]$ ping 10.64.201.1
PING 10.64.201.1 (10.64.201.1) 56(84) bytes of data.
64 bytes from 10.64.201.1: icmp_seq=1 ttl=63 time=2.05 ms
64 bytes from 10.64.201.1: icmp_seq=2 ttl=63 time=0.522 ms
Created attachment 654708 [details]
top 100lines from l3-agent.log just after restarting l3_agent
Thanks,
I'll take care of this next week.
Gary
Hi Gary,
I got the clue :)
It's related to the system-d's private tmp setting. In my Fedora18 setting, it's reolved by applying the following workarounds.
1. Apply this patch ->
This is not directly related to this problem. But it's necessary to run l3_agent.
2. Set "PrivateTmp=false" in the follwing files.
/usr/lib/systemd/system/quantum-dhcp-agent.service
/usr/lib/systemd/system/quantum-l3-agent.service
/usr/lib/systemd/system/quantum-openvswitch-agent.service
/usr/lib/systemd/system/quantum-server.service
See for the background.
Now you can restart l3_agent and it works well without restarting the server.
The side effect of this setting is that when l3_agent runs "ip netns exec ...", it breaks the /sys mounting as in .
As a result, systemctl stop working as below.
----
$ systemctl
Failed to get D-Bus connection: No connection to service manager.
----
I'm not sure the reason but by "run-and-stop systemd" as below makes systemctl works again.
----
$ systemd
Failed to open private bus connection: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
(Stop it with Ctrl+C)
----
Thanks.
Hi,
I have validated the "PrivateTmp=false". I think that this should just be done in the dhcp and l3 agent files as they are the only ones that make use of namespaces.
Thanks
G. | https://bugzilla.redhat.com/show_bug.cgi?id=881741 | CC-MAIN-2016-40 | refinedweb | 936 | 61.63 |
I wrote an article about Hirschberg's Algorithm
This article was originally written in Japanese here
Firstly, look at the problem below.
Summary: You are given a H×W grid which has a number in each cell. You want to go to the lower right cell from the upper left cell, and the score is the sum of the nunbers of the cells you pass through. Minimize the score, and show an example of the paths which satisfy the score. H ≤ 10000, W ≤ 10000.
If you just need to calculate the minimum score, you can solve this problem by a simple DP with the time complexity O(HW).
dp(x, y) = grid(x, y) + min(dp(x - 1, y), dp(x, y - 1))
Let me show the grids on which the numbers and the DP values are written.
In this example, you found out that the minimum score is 29. Now, let's restore the path which gets the score. On DP grid that I showed above, you can restore DP from the last cell. When you are on a cell in the optimal path, the previous cell must be the cell which has the smaller DP value. For each cell, therefore, just write 1, if the nuber in the upper cell is smaller than that in the left cell, and 0, vice versa. In this way, you can get the grid like the left of the image, and the optimal path can be restored.
What I wrote above is simply implemented. It's a piece of cake!
vector<vector<long long>> dp(h, vector<long long> (w, INFL)); dp[0][0] = grid[0][0]; for (int i = 0; i < h; i ++) { for (int j = 0; j < w; j ++) { if (i - 1 >= 0) dp[i][j] = min(dp[i][j], (long long) grid[i][j] + dp[i - 1][j]); if (j - 1 >= 0) dp[i][j] = min(dp[i][j], (long long) grid[i][j] + dp[i][j - 1]); } } vector<bitset<10010>> restore(10010); for (int i = 0; i < h; i ++) { for (int j = 0; j < w; j ++) { if (i == 0) { restore[i][j] = false; } else if (j == 0) { restore[i][j] = true; } else { restore[i][j] = dp[i - 1][j] < dp[i][j - 1]; } } } int y = h - 1, x = w - 1; string ans = ""; while (true) { if (x == 0 && y == 0) break; if (restore[y][x]) { ans += "D"; y --; } else { ans += "R"; x --; } } reverse(ans.begin(), ans.end()); cout << ans << endl;
This problem seemed quite easy for many coders, but if you look at the problem statement again, you'll notice that the memory limit is unusual (it's emphasized though!). This memory limit implies that you can't even use memory space O(HW).
If you just want to know the minimum score, you can do the same DP by reusing the 2 DP arrays, but how do you restore the path?
One possible solution is to restore the path from right side by reusing the 2 restoring arrays. However, since you can't save the DP values, every time you restore one step of the path, you need to do the DP from the left side again, which unfortunately requires the time O(HW * min(H, W)). Not fast enough!
This problem can be solved by Hirschberg's Algorithm! This algorithm makes it possible to restore this DP within space O(min(H, W)), and time O(HW), respectively.
This algorithm is based on Divide and Conquer Algorithm. Dividing the grids, Finding the path on each grid, and merging them.
Let's see this algorithm, using the same example above. First of all, divide the grid into two (almost) same size grids. According to the rule of the movement, there must be only one position, where you pass from the left grid to the right grid.
In order to find the position, you just do the DP from the upper left to the lower right on the left grid, while you do the DP from the lower right to the upper left on the right grid.
Consequently, at each position on the line of the division, the score was calculated from the upper left and from the lower right, thus, the position where these sums are the minimum, must be used. In this example, these sums are 35, 33, 29, 36 from the top, which implies that the 3rd position from the top should be used.
Then, just repeat this process until all the movement is restored. The second time, you need to consider only the following part.
This means that you need to traverse only half part than you previously traversed. In the third time, you only need to check the
part of the entire grid.
The space complexity is obviously O(min(H, W)), while the time complexity is
I hope you enjoyed this algorithm!
#include <iostream> #include <cstdio> #include <algorithm> #include <vector> #include <functional> #include <string> using namespace std; template<class T> void amin(T &a, T b) { if (a > b) a = b; } int main() { int h, w; scanf("%d%d", &h, &w); vector<int> u(h), v(w); for (int i = 0; i < h; i ++) scanf("%d", &u[i]); for (int i = 0; i < w; i ++) scanf("%d", &v[i]); function<long long (int, int)> get_val = [&](int i, int j) { return (long long) (u[i] + j + 1) ^ (long long) (v[j] + i + 1); }; //Hirschberg vector<pair<int, int>> pos; function<void (int, int, int, int)> Hirschberg = [&](int li, int lj, int ri, int rj) { int mid = (lj + rj) / 2; int height = ri - li + 1; if (rj - lj < 1) return; if (height == 1) { pos.emplace_back(mid, li); Hirschberg(li, lj, li, mid); Hirschberg(li, mid + 1, li, rj); return; } //left int w_left = mid - lj + 1; vector<vector<long long>> dp(2, vector<long long> (height)); dp[0][0] = get_val(li, lj); for (int i = 1; i < height; i ++) { dp[0][i] = dp[0][i - 1] + get_val(li + i, lj); } bool f = 1; for (int j = 1; j < w_left; j ++) { for (int i = 0; i < height; i ++) { dp[f][i] = 1LL << 60; } for (int i = 0; i < height; i ++) { int val = get_val(li + i, lj + j); amin(dp[f][i], dp[!f][i] + val); if (i - 1 >= 0) amin(dp[f][i], dp[f][i - 1] + val); } f = !f; } f = !f; vector<long long> m1(height); for (int i = 0; i < height; i ++) { m1[i] = dp[f][i]; } //right int w_right = rj - mid; dp[0][0] = get_val(ri, rj); for (int i = 1; i < height; i ++) { dp[0][i] = dp[0][i - 1] + get_val(ri - i, rj); } f = 1; for (int j = 1; j < w_right; j ++) { for (int i = 0; i < height; i ++) { dp[f][i] = 1LL << 60; } for (int i = 0; i < height; i ++) { long long val = get_val(ri - i, rj - j); amin(dp[f][i], dp[!f][i] + val); if (i - 1 >= 0) amin(dp[f][i], dp[f][i - 1] + val); } f = !f; } f = !f; vector<long long> m2(height); for (int i = 0; i < height; i ++) { m2[height - i - 1] = dp[f][i]; } // long long mi = 1LL << 60; int res = -1; for (int i = 0; i < height; i 
++) { long long sum = m1[i] + m2[i]; if (sum < mi) { mi = sum; res = i; } } res += li; pos.emplace_back(mid, res); Hirschberg(li, lj, res, mid); Hirschberg(res, mid + 1, ri, rj); }; Hirschberg(0, 0, h - 1, w - 1); // sort(pos.begin(), pos.end()); int y = 0, x = 0; string ans = ""; while (true) { if (x == w - 1) { while (y != h - 1) { ans += "D"; y ++; } break; } if (pos[x].second == y) { x ++; ans += "R"; } else { y ++; ans += "D"; } } cout << ans << endl; return 0; } | https://codeforces.com/topic/57821/?locale=ru | CC-MAIN-2021-25 | refinedweb | 1,277 | 72.7 |
Differences between ActionScript and JavaScript: an overview
ActionScript 3.0 data types
Data types corresponding to custom classes
The void data type
The * data type
ActionScript 3.0 classes, packages, and namespaces
Runtime classes
Custom ActionScript 3.0 classes
ActionScript 3.0 packages
ActionScript 3.0 namespaces
Required parameters and default values in ActionScript 3.0 functions
ActionScript 3.0 event listeners
Adobe® ActionScript® 3.0 is a programming language like
JavaScript—both are based on ECMAScript. ActionScript 3.0 was released
with Adobe® Flash® Player 9 and you can therefore develop rich Internet
applications with it in Adobe® Flash® CS3 Professional, Adobe® Flash®
CS4 Professional, and Adobe® Flex™ 3.
The current version of ActionScript 3.0 was available only when
developing SWF content for Flash Player 9 in the browser. It is
now also available for developing SWF content running in Adobe®
AIR®.
The
Adobe AIR API Reference for HTML Developers
includes
documentation for those classes that are useful in JavaScript code
in an HTML-based application. It’s a subset of the entire set of
classes in the runtime. Other classes in the runtime are useful
in developing SWF-based applications (the DisplayObject class for example,
which defines the structure of visual content). If you need to use
these classes in JavaScript, refer to the following ActionScript
documentation:
The
Adobe ActionScript 3.0 Developer's Guide
The
Adobe ActionScript 3.0 Reference for the
Adobe Flash Platform
. (Only the top-level classes and
functions in the flash package are available to HTML content running
in AIR. The classes in the mx package are available only in Flex-based SWF
applications.)
ActionScript, like JavaScript, is based on the ECMAScript
language specification; therefore, the two languages have a common
core syntax. For example, the following code works the same in JavaScript
and in ActionScript:
var str1 = "hello";
var str2 = " world.";
var str = reverseString(str1 + str2);
function reverseString(s) {
var newString = "";
var i;
for (i = s.length - 1; i >= 0; i--) {
newString += s.charAt(i);
}
return newString;
}
However, there are differences in the syntax and workings of
the two languages. For example, the preceding code example can be
written as the following in ActionScript 3.0 (in a SWF file):
function reverseString(s:String):String {
var newString:String = "";
for (var i:int = s.length - 1; i >= 0; i--) {
newString += s.charAt(i);
}
return newString;
}
The version of JavaScript supported in HTML content in Adobe
AIR is JavaScript 1.7. The differences between JavaScript 1.7 and
ActionScript 3.0 are described throughout this topic.
The runtime includes some built-in classes that provide advanced
capabilities. At runtime, JavaScript in an HTML page can access
those classes. The same runtime classes are available both to ActionScript
(in a SWF file) and JavaScript (in an HTML file running in a browser).
However, the current API documentation for these classes (which
are not included in the
Adobe AIR API Reference for HTML Developers
)
describes them using ActionScript syntax. In other words, for some of
the advanced capabilities of the runtime, refer to The
Adobe ActionScript 3.0 Reference for the
Adobe Flash Platform
. Understanding the basics of ActionScript helps
you understand how to use these runtime classes in JavaScript.
For example, the following JavaScript code plays sound from an
MP3 file:
var file = air.File.userDirectory.resolve("My Music/test.mp3");
var sound = air.Sound(file);
sound.play();
Each of these lines of code calls runtime functionality from
JavaScript.
In a SWF file, ActionScript code can access these runtime capabilities
as in the following code:
var file:File = File.userDirectory.resolve("My Music/test.mp3");
var sound = new Sound(file);
sound.play();
ActionScript 3.0 is a
strongly typed
language. That
means that you can assign a data type to a variable. For example,
the first line of the previous example could be written as the following:
var str1:String = "hello";
Here, the
str1
variable is declared to be of
type String. All subsequent assignments to the
str1
variable
assign String values to the variable.
You can assign types to variables, parameters of functions, and
return types of functions. Therefore, the function declaration in
the previous example looks like the following in ActionScript:
function reverseString(s:String):String {
var newString:String = "";
for (var i:int = s.length - 1; i >= 0; i--) {
newString += s.charAt(i);
}
return newString;
}
Although assigning types is optional in ActionScript, there are
advantages to declaring types for objects:
Typed objects allow for type checking of data not only
at run-time, but also at compile time if you use strict mode. Type
checking at compile time helps identify errors. (Strict mode is
a compiler option.)
Using typed objects creates applications that are more efficient.
For
this reason, the examples in the ActionScript documentation use
data types. Often, you can convert sample ActionScript code to JavaScript
by simply removing the type declarations (such as "
:String
").
An ActionScript 3.0 object can have a data type that corresponds
to the top-level classes, such as String, Number, or Date.
In ActionScript 3.0, you can define custom classes. Each custom
class also defines a data type. This means that an ActionScript
variable, function parameter, or function return can have a type
annotation defined by that class. For more information, see
Custom ActionScript 3.0 classes
.
The
void
data type is used as the return
value for a function that, in fact, returns no value. (A function
that does not include a
return
statement returns
no value.)
Use of the asterisk character (*) as a data type is the
same as not assigning a data type. For example, the following function
includes a parameter, n, and a return value that are both not given
a data type:
function exampleFunction(n:*):* {
trace("hi, " + n);
}
Use of the * as a data type is not defining a data type at all.
You use the asterisk in ActionScript 3.0 code to be explicit that
no data type is defined.
ActionScript 3.0 includes capabilities related to classes
that are not found in JavaScript 1.7.
The runtime includes built-in classes, many of which are
also included in standard JavaScript, such as the Array, Date, Math,
and String classes (and others). However, the runtime also includes
classes that are not found in standard JavaScript. These additional
classes have various uses, from playing rich media (such as sounds)
to interacting with sockets.
Most runtime classes are in the flash package, or one of the
packages contained by the flash package. Packages are a means to
organize ActionScript 3.0 classes (see
ActionScript 3.0 packages
.
ActionScript 3.0 allows developers to create their own custom
classes. For example, the following code defines a custom class
named ExampleClass:
public class ExampleClass {
public var x:Number;
public function ExampleClass(input:Number):void {
x = input;
}
public function greet():void {
trace("The value of x is: ", x);
}
}
This class has the following members:
A constructor method,
ExampleClass()
,
which lets you instantiate new objects of the ExampleClass type.
A public property,
x
(of type Number), which
you can get and set for objects of type ExampleClass.
A public method,
greet()
, which you can
call on objects of type ExampleClass.
In this example, the
x
property
and the
greet()
method are in the
public
namespace.
The
public
namespace makes methods and properties
accessible from objects and classes outside the class.
Packages provide the means to arrange ActionScript 3.0
classes. For example, many classes related to working with files
and directories on the computer are included in the flash.filesystem
package. In this case, flash is one package that contains another
package, filesystem. And that package may contain other classes
or packages. In fact, the flash.filesystem package contains the
following classes: File, FileMode, and FileStream. To reference
the File class in ActionScript, you can write the following:
flash.filesystem.File
Both built-in and custom classes can be arranged in packages.
When referencing an ActionScript package from JavaScript, use
the special
runtime
object. For example, the following
code instantiates a new ActionScript File object in JavaScript:
var myFile = new air.flash.filesystem.File();
Here, the
File()
method is the constructor function
corresponding to the class of the same name (File).
In ActionScript 3.0, namespaces define the scope for which
properties and functions in classes can be accessed.
Only those properties and methods in the
public
namespace
are available in JavaScript.
For example, the File class (in the flash.filesystem package)
includes
public
properties and methods, such as
userDirectory
and
resolve()
.
Both are available as properties of a JavaScript variable that instantiates
a File object (via the
runtime.flash.filesystem.File()
constructor
method).
There are four predefined namespaces:
Namespace
Description
public
Any code that instantiates an object of
a certain type can access the public properties and methods in the
class that defines that type. Also, any code can access the public
static properties and methods of a public class.
private
Properties and methods designated as private
are only available to code within the class. They cannot be accessed
as properties or methods of an object defined by that class. Properties
and methods in the private namespace are not available in JavaScript.
protected
Properties and methods designated as protected
are only available to code in the class definition and to classes
that inherit that class. Properties and methods in the protected
namespace are not available in JavaScript.
internal
Properties and methods designated as internal
are available to any caller within the same package. Classes, properties,
and methods belong to the internal namespace by default.
Additionally, custom classes can use other namespaces that are
not available to JavaScript code.
In both ActionScript 3.0 and JavaScript, functions can
include parameters. In ActionScript 3.0, parameters can be required
or optional; whereas in JavaScript, parameters are always optional.
The following ActionScript 3.0 code defines a function for which
the one parameter,
n
, is required:
function cube(n:Number):Number {
return n*n*n;
}
The following ActionScript 3.0 code defines a function for which
the
n
parameter is required. It also includes the
p
parameter,
which is optional, with a default value of 1:
function root(n:Number, p:Number = 1):Number {
return Math.pow(n, 1/p);
}
An ActionScript 3.0 function can also receive any number of arguments,
represented by
...rest
syntax at the end of a list
of parameters, as in the following:
function average(... args) : Number{
var sum:Number = 0;
for (var i:int = 0; i < args.length; i++) {
sum += args[i];
}
return (sum / args.length);
}
In ActionScript 3.0 programming, all events are handled
using
event listeners
. An event listener is a function. When
an object dispatches an event, the event listener responds to the
event. The event, which is an ActionScript object, is passed to
the event listener as a parameter of the function. This use of event object
differs from the DOM event model used in JavaScript.
For example, when you call the
load()
method
of a Sound object (to load an mp3 file), the Sound object attempts
to load the sound. Then the Sound object dispatches any of the following
events:
Event
Description
complete
When the data has loaded successfully.
id3
When mp3 ID3 data is available.
ioError
When an input/output error occurs that causes
a load operation to fail.
open
When the load operation starts.
progress
When data is received as a load operation
progresses.
Any class that can dispatch events either extends the EventDispatcher
class or implements the IEventDispatcher interface. (An ActionScript
3.0 interface is a data type used to define a set of methods that
a class can implement.) In each class listing for these classes
in the ActionScript Language Reference, there is a list of events
that the class can dispatch.
You can register an event listener function to handle any of
these events, using the
addEventListener()
method
of the object that dispatches the event. For example, in the case
of a Sound object, you can register for the
progress
and
complete
events,
as shown in the following ActionScript code:
var sound:Sound = new Sound();
var urlReq:URLRequest = new URLRequest("test.mp3");
sound.load(urlReq);
sound.addEventListener(ProgressEvent.PROGRESS, progressHandler);
sound.addEventListener(Event.COMPLETE, completeHandler);
function progressHandler(progressEvent):void {
trace("Progress " + progressEvent.bytesTotal + " bytes out of " + progressEvent.bytesTotal);
}
function completeHandler(completeEvent):void {
trace("Sound loaded.");
}
In HTML content running in AIR, you can register a JavaScript
function as the event listener. The following code illustrates this.
(This code assumes that the HTML document includes a TextArea object
named
progressTextArea
.)
var sound = new runtime.flash.media.Sound();
var urlReq = new runtime.flash.net.URLRequest("test.mp3");
sound.load(urlReq);
sound.addEventListener(runtime.flash.events.ProgressEvent.PROGRESS, progressHandler);
sound.addEventListener(runtime.flash.events.Event.COMPLETE, completeHandler);
function progressHandler(progressEvent) {
document.progressTextArea.value += "Progress " + progressEvent.bytesTotal + " bytes out of " + progressEvent.bytesTotal;
}
function completeHandler(completeEvent) {
document.progressTextArea.value += "Sound loaded.";
Twitter™ and Facebook posts are not covered under the terms of Creative Commons. | https://help.adobe.com/en_US/air/html/dev/WS5b3ccc516d4fbf351e63e3d118666ade46-7fa6.html | CC-MAIN-2021-43 | refinedweb | 2,159 | 58.89 |
This article is Part 4 of the Building iOS Interfaces series which tackles the how and why of implementing iOS designs without prior native programming experience–perfect for Web designers and developers. You can find the previous articles here: Part 1 – Part 2 – Part 3.
In the previous article, we implemented a custom button by going back and forth between Interface Builder and Swift–a process that would quickly become strenuous if repeated over and over, unless you are building a flashlight app with a single button in the UI. Putting aside the fact that repetition is no fun, updating even the smallest details in the future would require going through every single button instance, and that’s untenable. There is a better way, and we’re here to talk about it.
The Proper Way
As we have previously mentioned in passing, subclassing is the process of creating a new class that inherits the properties and methods of an existing one. The subclass can optionally override the superclass's behavior, and that's exactly what we need to do to customize the default look of the UIButton class. Let's take a look at how this works in practice.
Open the Swiftbot project if you happen to have it locally, or download it from GitHub as we previously left it off.
Add a new file to the project by right-clicking the parent group in the project navigator, then selecting New File….
Select Source under the iOS section, then pick Cocoa Touch Class from the template collection.
Name your class RoundedCornerButton, then set UIButton in the Subclass of field. Leave the rest unchanged. On the topic of naming, classes in Swift are named in CamelCase. It's considered good practice to pick a name that describes the purpose of the class.
In the new Swift file, go ahead and delete all the comments–lines starting with //. The end result should look like this:
import UIKit

class RoundedCornerButton: UIButton {

}
The snippet above is the bare minimum needed to create a subclass in Swift. As introduced in the first article, the import UIKit part serves to give us access to the APIs defined in that framework, namely the UIButton class in this case.
Creating a subclass is not enough however, since Interface Builder still considers our button a UIButton. Additionally, our subclass will remain a carbon-copy of its superclass until we add some implementation code.
Classes & Instances
We’ve previously noted that each control is represented by a UIKit class in Swift. What we’ve left out however, is that a class is nothing more than a blueprint defining how a view object should look and behave. In other words, they are of little use on their own.
This is where instances come into play. An instance is an object that was built following the specifications provided by a given class. In our example, the button we added in IB is an instance of the RoundedCornerButton class.
Notice how the UIButton class states that every button should have a buttonType property, without actually setting its value. It's up to the instance to decide what type of button it wants to be.
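To make the blueprint-versus-instance distinction concrete, here is a small standalone sketch. The Robot class and its names are made up for illustration–they are not part of the Swiftbot project–but the mechanics are the same: one class declares what every instance has, and each instance supplies its own values.

```swift
// A hypothetical blueprint: every Robot has a name, but the class
// doesn't decide what that name is -- each instance does.
class Robot {
    var name: String

    init(name: String) {
        self.name = name
    }

    func greet() -> String {
        return "Hi, I'm \(name)"
    }
}

// Two instances built from the same blueprint, each with its own state:
let bot1 = Robot(name: "Swiftbot")
let bot2 = Robot(name: "Objcbot")

print(bot1.greet())
print(bot2.greet())
```

The same relationship holds for our button: RoundedCornerButton is the blueprint, and the button sitting in the storyboard is one instance built from it.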
Now let’s make our button an instance of the new subclass.
In the storyboard, select the button and look for the ID card icon in the Utilities sidebar to the right. This takes you to the Identity inspector, where you can change attributes that are unique to this button instance, such as the class and the identifier.
In the Class field, type the name of the subclass we created earlier. This makes this specific button an instance of the RoundedCornerButton class, meaning that any custom behavior we add in code will be applied to it.
Now that we got this out of the way, let's remove the outlet connection that we've created in the previous article, since we will no longer need direct access to our button instance in the view controller. We can achieve that in several ways, the easiest of which would be heading to the Connections inspector and clicking the x button next to the roundedCornerButton outlet in the Referencing Outlets section.
With the outlet gone, we need to remove all references to the button in our ViewController.swift. Go ahead and delete all the code in the class declaration so that it ends up looking like this:
class ViewController: UIViewController { }
We are now almost done with Interface Builder in this exercise. Before we go back to our subclass however, we need to shed some light on how you typically approach using subclasses as a way to extend the stock UIKit controls.
The Subclassing Game Plan
When it comes to subclassing, the most common–and often challenging–task is figuring out what methods and properties to override and in which order your execute your custom code. If things go wrong, and they will, it’s often because you are overriding the wrong method or doing things in the wrong order.
For UIView subclasses, you typically want your custom styles to be applied as soon as the view is loaded. Common methods to override include:
awakeFromNib(), which is called when the view is loaded from IB.
drawRect(_:), which is called when the view needs to draw itself on screen.
layoutSubviews(), which is called when the view needs to determine the size and position of its subviews.
There are obviously more methods than we can cover in this tutorial and, if you are curious, you can read up on each in detail on the official UIView documentation.
Override
In order to override a method in Swift, we use the
override keyword at the
beginning of the declaration like so:
class RoundedCornerButton: UIButton { override func awakeFromNib() { } }
We overrode
awakeFromNib() since it seems like a good place to start
adding our layer customizations. If your run the app after these changes, you
will notice that there are no noticeable changes save for the sharp
corners. That’s expected since we removed the code that set the
cornerRadius
on our instance’s layer in the view controller.
In the previous code snippet we achieved that with this line:
roundedCornerButton.layer.cornerRadius = 4
Since we are doing this directly from within the button subclass, we can
drop the
roundedCornerButton reference in the declaration above:
class RoundedCornerButton: UIButton { override func awakeFromNib() { layer.cornerRadius = 4 } }
Note that
layer is equivalent to
self.layer in his case,
self being a
reference to the instance. That said,
self is rarely required in Swift, so
it’s safe to drop unless the compiler suggests it.
Go ahead and run the app. Our button should be basking again in its rounded-corner glory.
Now here is the fun part: if you duplicate the button in IB–by alt-dragging it to a new location–the new instance will look identical, without having to manually change its properties in the view controller.
There is one little issue however. Currently we’re setting the background color through IB. That means that we decide later to change all buttons in our app to a new color, we have to go through each button object in IB and manually change the background color attribute.
We can fix that by changing the property directly in the subclass so that all rounded-corner buttons are the same color:
class RoundedCornerButton: UIButton { override func awakeFromNib() { layer.cornerRadius = 4 backgroundColor = UIColor(red: 0.75, green: 0.20, blue: 0.19, alpha: 1.0) } }
If you run the app, both buttons should be using this Tall Poppy color–name courtesy of Kromatic. The same approach could be used for the font, text color, or even brand new behavior such as an in-progress state.
Closing words
Subclassing is a powerful tool to have at one’s disposal when building iOS interfaces. You can still do without it, but it will go a long way in helping you build a sane, scalable, and modular design system. | https://robots.thoughtbot.com/building-ios-interfaces-subclassing-views | CC-MAIN-2017-34 | refinedweb | 1,332 | 60.75 |
*by Davide Prati
In this tutorial we will have a look on how to draw lines with openFrameworks. Most of the code in this tutorial comes from this workshop held by Zach Lieberman. Lines are probably the most primitive abstract form of drawing. They are one of the first things kids learn to draw, and one of the most instinctive way we still use to draw on a surface. But in the end, they are just a bunch of points connected together; openFramework provides a class to easily draw lines connecting point: ofPolyline. Let's start to see how to use it!
In this example we will draw a line on the screen simply dragging the mouse around. In order to do that, we will start creating our container of points. Create a new app with the project generator and add this to your
App.h file:
ofPolyline line;
Now edit the method
setup,
draw,
mouseDragged and
mousePressed as follow:
void ofApp::setup(){ ofBackground(0,0,0); } void ofApp::draw(){ line.draw(); } void ofApp::mouseDragged(int x, int y, int button){ ofPoint pt; pt.set(x,y); line.addVertex(pt); } void ofApp::mousePressed(int x, int y, int button){ line.clear(); }
The interesting things are happening in the
mouseDragged method. When we press the left button of the mouse and we drag it around, the points with the coordinates corresponding to the mouse position are added into the instance
line of the
ofPolyline class. When we call
line.draw(), ofPolyline automatically connects the dots and draws a line on the screen. When we release the mouse,
line.clear() deletes all the points that we have insterted previously, getting ready to draw a new line.
Once that we have the points in the
ofPolyline we can edit them in the
update method, before drawing them. Let's move the points one pixel up or down along the x and y axis. Edit the
update method as follow:
for (auto &vert : line.getVertices()){ vert.x += ofRandom(-1,1); vert.y += ofRandom(-1,1); }
You should see something like this:
Note the
& in the loop. If we would have ommit it, we wouldn't have been able to edit the position of the vertices. The
& is telling to c++ that we are using a reference of the vertice contained in the
ofPolyline, and we need a reference because we want to change the values inside the
ofPolyline. When you simple want to read the values of the point inside the
ofPolyline, do not use
&. When you want to edit them, like in this case, use it.
Let's try something more complex. In this example we are going to create lines with
ofDrawLine an
ofPoint. Create a new app with the project generator and edit the
ofApp.h file as follow:
#include "ofMain.h" class Line { public: ofPoint a; ofPoint b; }; class ofApp : public ofBaseApp{ public: // ... //leave everything as it is // ... vector < ofPoint > drawnPoints; vector < Line > lines; };
In this header file we have defined a new class, the class
Line. This class simply consist of 2 points,
a and
b, this 2 points define where the line begins and where the line ends. We have also defined two new vectors,
drawnPoints, and
lines. In the
App.cpp file we are going to see why we need them:
void ofApp::setup(){ ofBackground(255,255,255); } void ofApp::draw(){ ofEnableAlphaBlending(); ofSetColor(30,30,30,30); for (auto line : lines) { ofDrawLine(line.a, line.b); } } void ofApp::mouseDragged(int x, int y, int button){ for (auto point : drawnPoints){ ofPoint mouse; mouse.set(x,y); float dist = (mouse - point).length(); if (dist < 30){ Line lineTemp; lineTemp.a = mouse; lineTemp.b = point; lines.push_back(lineTemp); } } drawnPoints.push_back(ofPoint(x,y)); }
The
draw() method is pretty easy, we use the
Line class that we have created in the header file to obtain the values of the point
a and
b. Then we use
ofDrawLine to connect them.
ofDrawLine simply draws a line from one point to another. The
mouseDragged method is a bit more complex. Let's start from the easyest part,
drawnPoints.push_back(ofPoint(x,y)). Everytime we drag the mouse, we are saving the position of the mouse in the
drawnPoints vector. This vector is like the history of all the movements of the mouse on the screen. Now let's have a look at the loop that starts with
for (auto point : drawnPoints). In this loop we are taking the current position of the mouse
ofPoint mouse, and we are confronting it with all its previous position. If the distance between the current position and a previous position is less than 30 pixel, we create a
Line lineTemp that connects the position of the mouse with the point in the history vector
drawnPoints which distance is less than 30px. After that we push the
Line in the
lines vector, ready do be drawned on the screen. Try to move the mouse around, you should be able to draw something like this.
Now that we know how to use
ofPolyline, we can combine it togehter with
ofNode, and draw a lines that moves smoothly on the screen.
ofNode is a class that defines a point in a 3D space and can be chained to other nodes. If we make 2 nodes, A and B, and we difne the node A as a parent of B. moving the A node will also move the node B. Let's see how
ofNode and
ofPolyline can play together. First, edit your
App.h file as follow:
ofNode baseNode; ofNode childNode; ofNode grandChildNode; ofPolyline line; ofEasyCam cam;
The
EasyCam class is used to see a scene in 3D space. Dragging the mouse around will allow you to move the camera around your scene. You do not need to worry for now about how does that work,
EasyCam is taking care of everything. Now, edit the
App.cpp file as follow:
void ofApp::setup(){ ofEnableDepthTest(); baseNode.setPosition(0, 0, 0); childNode.setParent(baseNode); childNode.setPosition(0, 0, 200); grandChildNode.setParent(childNode); grandChildNode.setPosition(0,50,0); } //-------------------------------------------------------------- void ofApp::update(){ baseNode.pan(1); childNode.tilt(3); line.addVertex(grandChildNode.getGlobalPosition()); if (line.size() > 200){ line.getVertices().erase( line.getVertices().begin() ); } } //-------------------------------------------------------------- void ofApp::draw(){ cam.begin(); //uncomment these 3 lines to understand how nodes are moving //baseNode.draw(); //childNode.draw(); //grandChildNode.draw(); line.draw(); cam.end(); }
You should see an image like this:
Lets' go through the code. In the
setup method we create a chain of 3 nodes and we assign them a position. Each node is parent to the previous one using
setParent. In the
update method we tell to the
baseNode to rotate 1 degree on the y axis. This will have a percussion also on the 2 other nodes. We also tell to the
childNode to rotate 3 degree on the x axis. These 2 rotation are enough to give to the last node of the chain, the
grandChildNode, an elegant movement around the 2 nodes. In the following lines, we are capturing the points from this movement and we are puttin them in the
ofPolyline. We keep only 200 points in the line, erasing the old ones as soon as new points are pushed in. Try to increase this value to see how the line change. | http://openframeworks.cc/ofBook/chapters/lines.html | CC-MAIN-2017-04 | refinedweb | 1,217 | 74.79 |
Hello!
Tue, 17 Aug 2010 06:26:37 -0400, Kris Maglione wrote:
> The patch was against your script.).
Anyway I included by hand in 'post' the modifications listed in
'patch'. Let us call 'past' the resulting script.
> I take it the unmodified script that I posted works (as it does for
> me)?
It works, undoubtedly. But…
- The command 'sh script' produces some strange outputs in the
terminal when I am typing. This might be normal, since what is
important is that the very last output is the content of the input
line, which is the case so the script works anyway.
- 'past' and 'script' have the same behaviour on what concerns the
input of options.
- 'post' has a different behaviour concerning options.
- Neither the behaviour of 'post' nor the behaviour of 'past' and
'script' are completely satisfactory in my opinion, for opposite
reasons.
What 'satisfactory' means is highly arguable, so I give examples below
of what I consider 'good' and 'bad' behaviours both in 'script' and
'post'. Of course you might consider that the behaviour of 'script'
and 'past' is exactly what it must be. But could you please let me
know if you have the same behaviour as below?
'ls ' List of files in $HOME
'ls -a' Nothing --> Good.
'ls -a ' Nothing as well --> Bad!
'ls -a w' List of files in current dir containing 'w'
'ls ' List of files in current dir
'ls -a' Nothing --> Good.
'ls -a ' List of files in current dir --> Good!
Perhaps, after all, this additional feature that I have tried to
implement is a 'false good idea', that is a Pandora box. But for the
moment I still hope that it can be given a more 'satisfactory'
behaviour, in the above sense, perhaps by changing slightly the syntax
of the 'opt' array (resp. the $opts file) or such.
Regards,
LuX.
PS: Typical intermediate outputs while I'm running 'script', followed
by what I typed in the input line (so the last output is normal):
$ sh wi_cli.sh *** This is the name I temporarily gave to 'script'
read cat $(wmiir namespace)/.proglist
read ls
read awk <$opts '$0 == "lp:", /^$/ {if(/^[ \t]/) { sub(/^[ \t]+/, "");
print }}'
lp -o media=a4 file
$ *** Back to the prompt (normal)
Received on Tue Aug 17 2010 - 15:56:38 CEST
This archive was generated by hypermail 2.2.0 : Tue Aug 17 2010 - 16:00:04 CEST | http://lists.suckless.org/dev/1008/5622.html | CC-MAIN-2021-10 | refinedweb | 397 | 72.16 |
appx windows build errorbertjeuh88 Jun 17, 2016 10:55 AM
Hey guys,
I have a project due next tuesday and I'm experiencing an annoying problem. At first I was building only for android and everything was fine, then I wanted to build for windows and to get the appx file in phonegap build but since I added the line:
preference name="phonegap-version" value="cli-6.1.0"
to get the appx file, the build won't go through. Only for windows, the android build is still working...
Does anyone have an idea on how I can fix this...
You will be my hero.
1. Re: appx windows build errorkerrishotts Jun 17, 2016 12:01 PM (in response to bertjeuh88)
You might try the cli-6.0.0 version or the cli-5.2.0 version (doing so might break some plugins), but if you need a fast build, it might be your best option.
Also, please post the full build log so that we can help narrow down the problem.
2. Re: appx windows build errorbertjeuh88 Jun 17, 2016 1:51 PM (in response to bertjeuh88)
Hey Kerri,
Thanks for your input! I have tried the 6.0.0 version and 5.2.0 version before... They both don't work. As long as I don't change the cli it works.
(now recently I have been uploading newer versions of my app, and for some reason it doesn't update the cli or anything else I change in my config.xml file... very odd)
3. Re: appx windows build errorbertjeuh88 Jun 17, 2016 1:52 PM (in response to bertjeuh88)
This was the only thing I could read in my log of the windows build error...
Build Date: 2016-06-17 20:49:43 +0000
4. Re: appx windows build errorbertjeuh88 Jun 17, 2016 2:18 PM (in response to bertjeuh88)
Sorry so the 5.2.0 standard version always worked... But the file I get to download is a xap file not appx.
5. Re: appx windows build errorryanskihead
Jun 20, 2016 2:15 PM (in response to bertjeuh88)
The 6.1.0 builds should work now... we're seeing intermittent certifcate errors that we haven't yet managed to eliminate. If it crops up, you may have to wait a bit before we re-provision the server. Inconvenient I know, working on eliminating. Windows
6. Re: appx windows build errorcjbrody Jul 8, 2016 4:42 AM (in response to bertjeuh88)
I reported a similar issue at PhoneGap build does not work for Windows appx · Issue #530 · phonegap/build · GitHub but did not get a response.
When can we get this fixed please?
7. Re: appx windows build errorcoolLJ Jul 8, 2016 8:47 AM (in response to bertjeuh88)
This may be a work around thought i would share. I was able to build App for WP8.1 and install it on WP10 -- BTW this was for a business /enterprise app, this may or may not be the same for Store apps.
- Build your app for WP8.1 on PGB then download it to your desktop computer
- Run this power Shell Precompile & sign script. make sure your paths are correct and you can change the name if you want.
- powershell.exe -File "c:\Program Files (x86)\Microsoft SDKs\Windows Phone\v8.0\Tools\MDILXAPCompile\BuildMDILXap.ps1" -xapfilename "c:\Users\username\Desktop\WP8\app-7.xap" npfxfilename "c:\Users\username\Desktop\WP8\Windows8MobileSigningCertificate.pfx" -password mypasswordhere
- You will need to:
- Download WP8.1 SDK
- create a pfx certificate to sign the app
- make an .AET token and installed it first so the app can be installed 2nd.
hope this helps
8. Re: appx windows build errorbenjamino96587917 Aug 10, 2016 1:42 PM (in response to ryanskihead)
Hi ryanskihead,
I'm having the same issue using CLI-6.1.0.
When I specify
<preference name="windows-appx-target" value="8.1-phone"/>
I get this error:
CertUtil: -importPFX command FAILED: 0x80070002 (WIN32: 2 ERROR_FILE_NOT_FOUND)
CertUtil: The system cannot find the file specified.
<preference name="windows-appx-target" value="uap"/>or no preference:
I get this error:
C:\cygwin\tmp\gimlet\2149308\project\build\phone\release\anycpu\AppxManifest.xml(41,6): error APPX1404: File content does not conform to specified schema. The element 'Capabilities' in namespace '... has invalid child element 'Capability' in namespace '.... List of possible elements expected: 'DeviceCapabilityChoice, DeviceCapability' in namespace '... as well as 'DeviceCapability' in namespace '.... [C:\cygwin\tmp\gimlet\2149308\project\CordovaApp.Phone.jsproj] !
10-Aug-16 UPDATE:
Reading about Capabilities & DeviceCapability I understood that DeviceCapability is a 'child' of Capabilities. Then I found that geolocation is a DeviceCapabiliy (as mentioned in the table here), and it was at the top of my plugins list in my `config.xml` file.
I had other plugins that were more of Capabilities function e.g. camera, contacts e.t.c. You can see more here.
So I resolved mine by moving the geolocation plugin to the bottom of the plugins list.
Hope it helps someone else.
9. Re: appx windows build errorbenjamino96587917 Jul 27, 2016 3:34 PM (in response to bertjeuh88)1 person found this helpful
bertjeuh88 The PGB team just added support for cli-6.3.0 version to PhoneGap Build ([](url)).
I rebuilt my app and the error has disappeared.
You may want to give it a try.
Just have a different issue now, it's not installing the vibration plugin (`cordova-plugin-vibration`)
10. Re: appx windows build errorcjbrody Nov 1, 2016 11:14 PM (in response to benjamino96587917)
It works for me in a test on Windows 10 UWP with cordova-plugin-dialogs if I use cli-6.3.0 and specify the CPU architecture.
11. Re: appx windows build erroraalfiann Dec 31, 2016 9:16 AM (in response to cjbrody)
Can I see your config.xml? because i've been specify CPU architecture like this:
<preference name="phonegap-version" value="cli-6.3.0" />
<preference name="windows-appx-target" value="uap" />
<preference name="windows-arch" value="arm" />
But still don't work, build appx is successful but, can't deploy for testing in winphone lumia 535.
Thanks | https://forums.adobe.com/thread/2169150 | CC-MAIN-2017-09 | refinedweb | 1,024 | 66.94 |
Previous tutorial we've done a simple hello world application that prints our name to the OLED screen using U8Glib. This tutorial I will just be focusing on the library itself and it's page looping construct.
The confusing thing with the Page Loop is that it's already in the loop function of Arduino. So we need to be careful about in which loop we need to alter our variables. In library's official wiki page it's very clear.
But what initially confused me here were the functions firstPage and nextPage. For me I would like to think those functions as "page start" and "page end" commands. Because between those functions I will be creating my drawing functions. And what if I have more than one page? Should I keep calling these functions multiple times inside the loop function? I hope not.
Let's say we have 3 sensors connected to our Arduino Uno.
- Time
- Humidity
- Temperature
Also I would like to have another page to print my brand's name. And I want to create different pages for each of those. As a developer, I don't like copying and pasting the same code where I can create a good structure instead.
I would like to use function pointers for my each page. So each screen will have its own function.
First I want to define my functions. I want 4 functions: First I will show the date time, then the temperature, then humidity and finally will show my name/brand etc. For this tutorial values will be hard coded for simplicity but in your real world application you will want to read them from the sensors. Just remember: read the sensor values in the main loop of Arduino not in the picture loop.
void pageTimeDay(); // Page-1 void pageTemperature(); // Page-2 void pageHumidity(); // Page-3 void pageInfo(); // Page-4
Then I want to define my function pointer and related variables that will call the pages one after another. I also defined duration function because I might want each page to have different duration.
const int pageCount = 4; int p; void (*pages[pageCount])() = { pageTimeDay, pageTemperature, pageHumidity, pageInfo }; int duration [pageCount] = { 1000, 1000, 1000, 2000 };
So my main loop will be much simpler.
void loop() { u8g.firstPage(); do { (*pages[p])(); } while( u8g.nextPage() ); delay(duration[p]); p = p+1; if (p == pageCount) p=0; }
So the whole application will look like this. Nice and clean.
#include "U8glib.h" #include
U8GLIB_SSD1306_128X64 u8g(U8G_I2C_OPT_NONE|U8G_I2C_OPT_DEV_0); // I2C / TWI int p; void pageTimeDay(); // Page-1 void pageTemperature(); // Page-2 void pageHumidity(); // Page-3 void pageInfo(); // Page-4 void (*pages[])() = { pageTimeDay, pageTemperature, pageHumidity, pageInfo } ; void setup() { u8g.setFont(u8g_font_unifont); u8g.setColorIndex(1); p = 0; Serial.begin(9600); } void loop() { u8g.firstPage(); do { (*pages[p])(); } while( u8g.nextPage() ); delay(1000); p = p+1; if (p == 4) p=0; } void pageTimeDay() { Serial.write("Time Day"); Serial.println(); u8g.drawStr( 0, 15, "July 15"); return 0; } void pageTemperature(){ Serial.write("Temperature"); Serial.println(); u8g.drawStr( 0, 15, "27 degrees"); return 0; } void pageHumidity() { Serial.write("Humidity"); Serial.println(); u8g.drawStr( 0, 15, "%65"); return 0; } void pageInfo(){ Serial.write("pageInfo"); Serial.println(); u8g.drawStr( 0, 15, "Cuneyt Aliustaoglu"); return 0; } | https://cuneyt.aliustaoglu.biz/en/managing-page-loop-for-u8glib-using-arduino-uno-oled-128x64-i2c/ | CC-MAIN-2019-09 | refinedweb | 534 | 58.69 |
table of contents
NAME¶
perf-buildid-cache - Manage build-id cache.
SYNOPSIS¶
perf buildid-cache <options>
DESCRIPTION¶
This command manages the build-id cache. It can add, remove, update and purge files to/from the cache. In the future it should as well set upper limits for the space used by the cache, etc. This also scans the target binary for SDT (Statically Defined Tracing) and record it along with the buildid-cache, which will be used by perf-probe. For more details, see perf-probe(1).
OPTIONS¶
-a, --add=
-f, --force
-k, --kcore
-r, --remove=
-p, --purge=
Purge all cached binaries including older caches which have specified path from the cache.
-P, --purge-all
-M, --missing=
-u, --update=
-l, --list
-v, --verbose
--target-ns=PID: Obtain mount namespace information from the target pid. This is used when creating a uprobe for a process that resides in a different mount namespace from the perf(1) utility.
SEE ALSO¶
perf-record(1), perf-report(1), perf-buildid-list(1) | https://manpages.debian.org/experimental/linux-perf-5.4/perf_5.4-buildid-cache.1.en.html | CC-MAIN-2022-21 | refinedweb | 168 | 65.01 |
For an internal search, use callbacks to retrieve what the server finds. The callbacks allow you to retrieve the info that would be sent back to a client application were the operation initiated by an external request: LDAP result codes, entries found, and referrals.
You set up the callback data that you want to retrieve. You also write the function that is called back by the server.
This code shows how the example plug-in internal.c uses a callback to retrieve an LDIF representation of the first entry that is found. The entry is found during an internal search with slapi_entry2str() as the callback function.
#include "slapi-plugin.h" struct cb_data { /* Data returned from search */ char * e_str; /* Entry as LDIF */ int e_len; /* Length of LDIF */ }; int test_internal_entry_callback(Slapi_Entry * entry, void * callback_data) { struct cb_data * data; int len; /* This callback could do something more interesting with the * data such as build an array of entries returned by the search. * Here, simply log the result. */ data = (struct cb_data *)callback_data; data->e_str = slapi_entry2str(entry, &len); data->e_len = len; slapi_log_info_ex( SLAPI_LOG_INFO_AREA_PLUGIN, SLAPI_LOG_INFO_LEVEL_DEFAULT, SLAPI_LOG_NO_MSGID, SLAPI_LOG_NO_CONNID, SLAPI_LOG_NO_OPID, "test_internal_entry_callback in test-internal plug-in", "\nFound entry: %sLength: %d\n", data->e_str, data->e_len ); return (-1); /* Stop at the first entry. */ } /* To continue, return 0. */
This callback stops the search at the first entry. Your plug-in might have to deal with more than one entry being returned by the search. Therefore, consider how you want to allocate space for your data depending on what your plug-in does.
With the callback data and function implemented, you are ready to process the internal search. First, allocate space for the parameter block and your callback data, and set up the parameter block with slapi_search_internal_pb_set(). Next, invoke the search with slapi_search_internal_pb(), and also set up the callback with slapi_search_internal_callback_pb(). When you are finished, free the space that you have allocated.
#include "slapi-plugin.h" static Slapi_ComponentId * plugin_id = NULL; int test_internal() { Slapi_PBlock * pb; /* PBlock for internal ops */ char * srch_attrs[] = /* Attr. to get during search */ {LDAP_ALL_USER_ATTRS, NULL}; struct cb_data callback_data; /* Data returned from search */ int rc; /* Return code; 0 means success. */ slapi_log_info_ex( SLAPI_LOG_INFO_AREA_PLUGIN, SLAPI_LOG_INFO_LEVEL_DEFAULT, SLAPI_LOG_NO_MSGID, SLAPI_LOG_NO_CONNID, SLAPI_LOG_NO_OPID, "test_internal in test-internal plug-in", "\nSearching with base DN %s, filter %s...\n", "dc=example,dc=com", "(uid=fcubbins)" ); pb = slapi_pblock_new(); /* Set up a PBlock... */ rc = slapi_search_internal_set_pb( pb, "dc=example,dc=com", /* Base DN for search */ LDAP_SCOPE_SUBTREE, /* Scope */ "(uid=fcubbins)", /* Filter */ srch_attrs, /* Set to get all user attrs. */ 0, /* Return attrs. and values */ NULL, /* No controls */ NULL, /* DN rather than unique ID */ plugin_id, SLAPI_OP_FLAG_NEVER_CHAIN /* Never chain this operation. */ ); if (rc != LDAP_SUCCESS) { slapi_pblock_destroy(pb); return (-1); } /* Internal search puts results into the PBlock, but uses callbacks * to get at the data as it is turned up by the search. In this case, * what you want to get is the entry found by the search. */ rc = slapi_search_internal_pb(pb); rc |= slapi_search_internal_callback_pb( pb, &callback_data, NULL, /* No result callback */ test_internal_entry_callback, NULL /* No referral callback */ ); /* ... get status ... */ slapi_pblock_get(pb, SLAPI_PLUGIN_INTOP_RESULT, &rc); if (rc != LDAP_SUCCESS) { slapi_pblock_destroy(pb); return -1; } /* Free the search results when * finished with them. */ slapi_free_search_results_internal(pb); slapi_pblock_destroy(pb); /* ... done cleaning up. */ return (0); }
Here, you allocate and free callback_data locally. You can manage memory differently if you pass the data to another plug-in function. | http://docs.oracle.com/cd/E19693-01/819-0996/aahhb/index.html | CC-MAIN-2014-52 | refinedweb | 537 | 56.05 |
Part 01 – Before we start
In this series I intend to guide you through writing your first ever computer program.
We will write our own game on the Raspberry Pi, which is a cheap-as-chips computer designed for learning about computers.
Get a Raspberry Pi
To follow along, here’s what you will need:
- A Raspberry Pi (about £25) – I got it from RS
- An SD card (about £10) – be careful – not all of them work. I use: Kingston Technology 16GB
- If your TV supports it, an HDMI cable (about £1) – I got: HDMI to HDMI Connector. If your TV doesn’t support HDMI, get a composite cable, but it won’t look as good or work as well.
- A power supply (about £4), e.g. a smartphone charger (micro USB, 5V, at least 1A) – I got a Micro USB Mains Charger, but my existing HTC Wildfire S charger worked too.
- USB keyboard and mouse (about £7) – I had them lying around, but a quick search suggests this one might be ok: CiT USB Keyboard and Mouse.
(Total cost, very approximately: £47)
Install “Raspbian”
To use the Pi you will need to install some software onto your SD card.
To do this you will need a PC or laptop. If you don’t have one, or you’d prefer not to download and install software to an SD card, check out The Pi Hut. They sell SD cards that already have Raspbian installed on them.
Raspbian is the name of the software we will use to start up and run our Raspberry Pi. You need to download it and install it onto your SD card before you can put the SD card into the Pi and turn it on. To do this, you’ll need a way of writing to the SD card. Lots of laptops (and some desktops) have built-in SD card readers, or you can get a USB reader (I got this one: SD Card Reader USB 2.0).
To install Raspbian “wheezy” (wheezy is the name of the latest version), go to the Raspberry Pi download page and click the link in the Raspbian “wheezy” section next to the words “Direct download”. Follow the instructions on how to install Raspbian to your SD card here: elinux.org/RPi_Easy_SD_Card_Setup.
(There are also some helpful instructions here: reviews.cnet.co.uk/desktops/how-to-get-started-with-the-raspberry-pi-50009845/.)
Start the Pi
Once you’ve got an SD card with Raspian on it, insert it into your Pi (the SD card slot is underneath, which surprised me a bit). Plug the Pi into your TV by connecting the HDMI cable to it and plugging the other end into the TV’s HDMI port. Plug your keyboard and mouse into the 2 USB slots.
Take a deep breath, and plug the power supply into the micro-USB port.
If all goes well, some lights will appear on the Pi, you will be able to switch your TV to HDMI mode and your screen will show some writing and possibly pictures of raspberries. Wait for it all to settle down, and (hopefully) eventually you’ll see the setup screen.
First time setup
The first time your Pi boots it will ask you to do some setup. Read the raspi-config menu items and see whether there’s anything you want to change. You might want to change your keyboard and language settings, but I didn’t need to change anything at all. I just pressed TAB and then right-arrow to move onto the word Finish, then pressed RETURN.
There’s more information about how to set everything up at elinux.org/RPi_raspi-config, and there’s a nice detailed video here: First boot and Raspi-config.
Wait a bit more, and eventually you should see a huge raspberry, with a mouse cursor and desktop. If so, you’re ready for the next part!
Update: a real person is really following this series!
Part 02 – Saying hello
Writing your first ever computer program on the Raspberry Pi. See Part 1 for how to get and set up the Pi.
Today we will find out how to write a computer program, and how to run it.
We’re going to write one of the simplest programs you can write – we’re going to get the computer to say hello to us.
First, we need a text editor to write down our program. Click the weird aeroplane-y thing in the bottom left – that brings up the menu (like the Start button on Windows), then choose Accessories, then Leafpad. Leafpad is the text editor we will be using.
Leafpad will start, and show you an empty page. This is where we will write our program.
Type in exactly this:
print "Hello, world!"
and then click the File menu at the top of the Leafpad window and choose Save As. Click the word “pi” on the left and then click in the empty box next to the word Name, and type the name of our program, which is:
redgreen.py
“redgreen” is the name and the “.py” means this is a program written in the language Python. We’ll be finding out more about Python as we go on.
Click the Save button.
Our program is finished! Now we need to run it.
Click the aeroplane-y thing again, then Accessories, then LXTerminal. A terminal is a program you use to run other programs.
When LXTerminal has started, your cursor will appear next to a $ sign. This means it is ready for you to tell it what to do.
Type exactly this:
python redgreen.py
What this means is run the program called “python”, and pass the name of our program (redgreen.py) to it. This is how you run Python programs.
Now press the RETURN key.
If all goes well, our program will talk back to us, and say what we told it to say:
Hello, world!
Let’s look again at our program.
It’s just one “statement”, a print statement. A statement is something to do.
We pass one “argument” to print, “Hello, world!”. An argument is some information you give to a statement.
“print” doesn’t mean print to the printer, but write to the screen. So our program did exactly what we told it to do – it wrote our message to the screen.
Next time, we’ll map out the whole of our real program – a simple game.
Update: congratulations to sparkboy123 on getting this working!
Part 03 – It’s like a magic spell
Writing your first ever computer program on the Raspberry Pi.
Today we will write the very basic outline of our game. When we’ve finished it won’t actually do anything. In fact, it won’t even run.
Writing a computer program is a lot like doing magic. To write programs well it is really helpful to treat it like magic, and not think about how it works (until you have to…). We’re going to do that – write down in very simple terms what our program will do, but not yet think about how it will do it.
The program we are going to write is a very simple test of your reactions. It will show you either a green circle or a red square. If you see a green circle, you have to press a key as quickly as you can. If you see a red square, you must not press anything.
That’s the whole of the game (for now).
To make this game, we need to start up, then show a ready screen, then wait for a while showing the ready screen, then show a shape and wait for a key press, then finish.
So, let’s write our magic spell. Start up LeafPad just like we did in part 2. Click File, then Open, click on “pi” on the left, then double-click redgreen.py to load up our Hello, world program. Delete everything that’s there, and type exactly this instead:
start()
ready_screen()
wait()
shape()
end()
The brackets after each word mean “do it” – what they really mean is find a “function” with this name, and run it. A function is like a mini-program.
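To see what a function (a mini-program) looks like on its own, here is a tiny standalone sketch, separate from our game. The name say_hello is invented just for this example:

```python
# A made-up function, defined with "def" and run with brackets.
def say_hello():
    return "Hello from a function!"

message = say_hello()  # the brackets after the name mean "do it"
print(message)
```

When we run a function, Python jumps into its mini-program, does what it says, and then comes back.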
So, now our program is finished, right?
Let’s run it. Go to the File menu in LeafPad and click Save, then open LXTerminal just like in part 2 and run the program by typing “python redgreen.py” as before. Here’s what happens:
$ python redgreen.py
Traceback (most recent call last):
  File "redgreen.py", line 3, in <module>
    start()
NameError: name 'start' is not defined
We told python to run a function called “start”, but we haven’t written any functions yet, so it couldn’t find it.
Our spell is cast, but next time we need to start on the ingredients it uses. See you then.
Part 04 – A small black screen …
Part 05 – Say something
Writing your first ever computer program on the Raspberry Pi.
Today we will write some writing on that blank screen we made last time.
But first, a couple of tricks (we are doing magic after all). We’re going to make our program know it is a Python program, without needing to be told. To do this we need to do 2 things.
First, open up LeafPad as normal and add this line at the very (very) top of the file. Make sure there are no empty lines above it:
#!/usr/bin/env python
That’s a “hash” (or, for Americans, “pound”) symbol, followed by an exclamation mark. Note that all the slashes are forward slashes, not backward.
This tells our Raspberry Pi that this is a Python program. Now we need to tell our Pi that this program is allowed to run by itself, instead of its name being passed to the python program like we have been doing before.
To do this, open up LXTerminal as before, and type exactly this (and press Return):
chmod +x redgreen.py
If all of this worked correctly, you should be able to run our program in a new way. Instead of typing python redgreen.py like we were before, we can type this in to LXTerminal:
./redgreen.py
That’s a dot, followed by a forward slash, followed by the name of our program.
This means run the program in this directory (that is the “./” part) called “redgreen.py”. Your Pi will look at redgreen.py and find the line we added that starts with “#!” and know to use Python to run it.
Now let’s get on with writing something on the screen. Go back to LeafPad and change the line starting with screen_size = to be these 3 lines:
screen_width = 640
screen_height = 480
screen_size = screen_width, screen_height
This creates 3 variables – screen_width, screen_height and the one we had before screen_size, which is now made by putting the first 2 together. We’re going to use screen_height later.
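If you're curious how "putting the first 2 together" works, here is a quick sketch, separate from our game. Writing two values with a comma between them makes a "tuple" (a pair):

```python
# A small sketch of how two variables join into one pair,
# the way screen_size is built from width and height.
width = 640
height = 480
size = width, height  # this makes a tuple - a pair of values
print(size)           # shows (640, 480)
print(size[0])        # the first part: 640
```

PyGame's set_mode wants exactly this kind of pair: a width and a height bundled together.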
Just below those 3 lines, type this:
screen = None
ready_text = None
This gets 2 variables ready for us, and makes them empty. We’re going to fill them in inside start.
Change the start function to look exactly like this:
def start():
    global screen, ready_text
    pygame.init()
    screen = pygame.display.set_mode( screen_size )
    font = pygame.font.Font( None, screen_height / 5 )
    ready_text = font.render( "Ready?", 1, pygame.Color( "white" ) )
The global line and the last two lines are the bits we’ve added. The line beginning global tells Python we want to work with those variables we got ready earlier inside this function, even though we created them outside it. (Without saying they were “global” we would be in danger of working with versions of them that only existed while we were inside the function, and disappeared as soon as we left.)
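To see the danger for yourself, here is a minimal sketch of what happens with and without global. The names are invented for this example, not part of our game:

```python
# Without "global", assigning inside a function makes a NEW,
# temporary variable; the one outside is untouched.
score = 0

def set_score_wrongly():
    score = 10  # a brand new variable that only exists in here

def set_score_properly():
    global score  # work with the variable created outside
    score = 10

set_score_wrongly()
print(score)  # still 0 - the outer variable was untouched
set_score_properly()
print(score)  # now 10
```

This is why start says global screen, ready_text before filling them in.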
The font line makes a new font (a font is a typeface, or way of writing text). The first argument we passed was None because we don’t care at the moment which font we use (e.g. “Arial” or “Times New Roman”) – we are happy with the default. The second argument is for the size of the font we want, and we passed in screen_height / 5, which means the value of the screen_height variable we created near the top, divided by 5. The “/” character is how we write division in Python – it is supposed to look a bit like a fraction.
Finally, on the last line we create another variable called ready_text, which contains the “rendered” version of the word “Ready?”, using the font we created, in white. “Render” means we create a picture showing the writing we wanted. We’ll draw this picture onto the screen in a second.
Now that all our preparation is over, we can finally write a fuller version of the ready_screen function. Change it to look like this:
def ready_screen():
    textpos = ready_text.get_rect(
        centerx = screen.get_width() / 2,
        centery = screen.get_height() / 2
    )
    screen.blit( ready_text, textpos )
    pygame.display.flip()
The first 4 lines (textpos up to the closing bracket all on its own) are really all one “line of code” – they are like one sentence of our program, that happens to span multiple lines. In Python if we want to continue a sentence (we call it a “statement”) we just leave a bracket unclosed. Python knows we have finished when we have closed all the brackets.
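Here is a tiny sketch of that rule on its own, away from our game:

```python
# One statement spanning several lines: Python keeps reading
# until every opened bracket has been closed again.
total = (1 +
         2 +
         3)
print(total)  # 6
```

Splitting long statements like this keeps each line short enough to read comfortably.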
This “line” from textpos = up to ) creates a variable called textpos, which contains the place on the screen where we want to put our writing. We look inside ready_text (which is our rendered writing that we created above) for a function called get_rect that calculates a rectangle for us that is the right place on the screen to put the writing. The arguments we pass to get_rect are screen.get_width() / 2 and screen.get_height() / 2, which are telling it that it should calculate the rectangle by putting its centre in the middle of the screen. The middle of the screen is half-way across its width, and half-way down its height, which is why we are dividing the width and the height of the screen by 2.
Something worth noticing here is that the arguments to get_rect have names – we wrote centerx = and centery =. In Python we are allowed to give the names of arguments, or sometimes we can miss them out if we are happy just to put them in the right order. The get_rect function can actually take lots of different arguments, so it needs to know which ones you mean, which is why we named them.
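Named arguments are easy to try out on their own. Here is a sketch with a made-up function (describe is invented for this example):

```python
# Arguments can be given by position, or by name.
def describe(width, height):
    return "%d wide, %d high" % (width, height)

print(describe(640, 480))               # by position - order matters
print(describe(height=480, width=640))  # by name - order doesn't matter
```

Naming the arguments also makes code easier to read later, because you can see what each value is for.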
Finally the last 2 lines do the real work. The screen.blit line tells PyGame to write the ready_text picture (the rendered writing) onto the screen at the position stored inside textpos. pygame.display.flip() is what we do to tell PyGame we’ve finished messing about with the screen, and we’re ready for it to display what we’ve done.
[The word “blit” is an oddity from the olden days which I’m afraid you’ll just have to memorise, and “flip” comes from the fact that behind the scenes there are really two screens – the one we are displaying, and the one we are working on. flip() switches them over, displaying the one we were working on, and making the other one ready to be worked on.]
If you’ve got this far, well done! With any luck, we’re going to see the word “Ready?” on the screen in big letters.
Switch to LXTerminal again and type our new spell:
./redgreen.py
Don’t move the mouse or press anything: a window should appear, with the word “Ready?” written in big white letters. When you press a key or move the mouse over it, it should disappear.
If something goes wrong, check back over the instructions carefully, and compare your file with this one: redgreen.py.
Part 06 – A better class of waiting!
Part 07 – A green circle
Writing your first ever computer program on the Raspberry Pi.
We’re going to write a game that tests your reactions – press a key when you see green, but don’t when you see red.
Today we see some of what we have been waiting for – a genuine bona-fide green circle, made by you!
We’re going to need some random numbers, so edit your program in LeafPad, and add a line, just before import pygame near the top:
import random
This makes the “random” module available to us so we can make some numbers later.
Remember we had a function called “wait”, but it never did anything? It was supposed to wait for a random amount of time before we showed our green or red shape. Let’s write it now. Find the empty wait function and replace it with:
def wait():
    time_to_wait = random.randint( 1500, 3000 ) # Between 1.5 and 3 seconds
    pygame.time.wait( time_to_wait ) # Note bug: can't quit during this time
The first line makes a variable time_to_wait and puts a random number into it. The random.randint function gives us a random number between the two numbers we supplied, so here between 1,500 and 3,000. time_to_wait is a time in milliseconds, so this means between one and a half and three seconds.
After the closing bracket, we have a hash symbol #, and then some writing. This is a “comment”, and it is completely ignored by Python. It’s just for us.
[As time goes on, I hope you will begin to see programming more and more as talking to other people, not just to the computer. It’s fairly easy to write a computer program, but much harder to understand one written by someone else. Most programs live a long time, and people need to understand them to keep them up-to-date, so making them as easy to understand as possible is very important. Comments are one way to help people understand, but in a way they are a last resort – if possible, the code itself should be so easy to understand that you don’t need many comments. Here, I thought that the translation between seconds and milliseconds might be helpful to someone looking at this later.]
The next line uses a function inside PyGame’s time module to wait for the amount of time we give it (in milliseconds, stored in time_to_wait). Note that this is not the same wait function we have seen before, pygame.event.wait. That one waits forever for an event to happen, but this one waits (and can’t be interrupted) for the amount of time we say.
I’ve added another comment to this line saying that there’s a bug in our program: if we write it like this, you can’t actually quit the game by closing the window while we’re waiting. The pygame.time.wait function won’t be interrupted by the window being closed, so we’ll ignore it. This is almost unbearably rude, but don’t worry – we’ll fix it soon (ish).
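If you'd like to see random.randint on its own before we move on, here is a tiny sketch, separate from our game:

```python
# random.randint gives a whole number between the two values
# we supply, including both ends.  Each run may differ.
import random

time_to_wait = random.randint(1500, 3000)
print(time_to_wait)  # somewhere between 1500 and 3000, inclusive
```

Run it a few times and you should see a different number (almost) every time.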
And now for the really exciting part: we’re going to draw a green shape on the screen. Let’s make a function, just above the shape function, called green_shape:
def green_shape():
    green = pygame.Color( "green" )
    centre = ( screen.get_width() / 2, screen.get_height() / 2 )
    radius = screen.get_width() / 3
    screen.fill( pygame.Color( "white" ) )
    pygame.draw.circle( screen, green, centre, radius, 0 )
    pygame.display.flip()
This code makes a variable green holding onto the colour green, one called centre holding the co-ordinates of the centre of the screen, and one called radius holding the size of the circle we want to draw.
Then it uses the fill function on screen to colour in the screen white, and then draws our circle with a call to pygame.draw.circle, using the variables we have prepared as arguments, telling it where to draw the circle, in what colour, and what size.
Finally it uses flip as before to tell PyGame we have finished.
The last piece of today’s jigsaw is just to call the function we created above. Find the empty shape function, and make it look like this:
def shape(): green_shape()
This literally just means run the green_shape function.
Take a deep breath, prepare to be excited, open LXTerminal and run our new program in the usual way:
./redgreen.py
If all has gone well, the ready screen will appear for a couple of seconds, before a white screen with a big green circle on it appears. This will then go away when you press a key.
If something goes wrong, check back what you typed, and compare your version against mine: redgreen.py.
Next time, we’ll find out whether you pressed a key or were too slow!
Part 08 – Success and failure
Writing your first ever computer program on the Raspberry Pi.
We’re writing a really simple game – you have to press a key when you see green.
Today we’re going to wait for a key press. If we get one, we’ll tell the player they did well. If not, we’ll tell them they are a bad person.
We’re going to change the green_shape function first, to make it wait for a key press (or give up waiting) and then tell the player what happened.
Find the green_shape function and add the new bit at the end (everything from the pressed = shape_wait() line onwards):
def green_shape():
    green = pygame.Color( "green" )
    centre = ( screen.get_width() / 2, screen.get_height() / 2 )
    radius = screen.get_width() / 3
    screen.fill( pygame.Color( "white" ) )
    pygame.draw.circle( screen, green, centre, radius, 0 )
    pygame.display.flip()
    pressed = shape_wait()
    if pressed:
        green_success()
    else:
        green_failure()
green_shape is the function that shows a green shape to the player.
This new code does 2 things. First, it calls a function shape_wait (that we haven’t written yet) that waits for a key press. We are expecting this function to give us back an answer, which we will store inside a new variable, pressed.
Second, it checks the value of pressed, and calls a different function in each case. If a key was pressed, this is good (because we’re showing a green shape, so you’re supposed to press a key) so we call the green_success function (which we haven’t written yet either). If no key was pressed because we gave up waiting, we call the green_failure function (which we haven’t written yet!).
That covers everything we want to do today – all we have to do is write those 3 missing functions.
Let’s start with the hardest one – shape_wait. Go up to just above the green_shape function, and type this:

def shape_wait():
    """
    Wait while we display a shape.  Return True if a key
    was pressed, or false otherwise.
    """
    event_types_that_cancel = pygame.KEYDOWN, pygame.MOUSEBUTTONDOWN
    time_to_wait = 2000 # 2 seconds
    finished_waiting_event_id = pygame.USEREVENT + 1
    pygame.time.set_timer( finished_waiting_event_id, time_to_wait )
There are a few things to explain here. First, the writing at the top just below the def line. This is the way we explain in Python what a function does and what it’s for. It’s optional, and we haven’t done it before, but I thought this function was interesting enough for us to provide some explanation. Notice the triple-quotes """ at the beginning and end. That is a way Python allows us to write longer strings of text that cover more than one line. The string starts at the first triple-quote, and ends at the last.
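Documentation strings are worth a tiny experiment of their own. Here is a sketch with a made-up function (add is invented for this example):

```python
# A "docstring" sits just under the def line, between triple-quotes.
def add(a, b):
    """
    Return the sum of a and b.
    Triple-quotes let this explanation span several lines.
    """
    return a + b

print(add(2, 3))    # 5
print(add.__doc__)  # Python keeps the docstring for us to read back
```

Python actually stores the docstring with the function, which is how tools like help() can show it to you.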
After our documentation string, we create a familiar variable event_types_that_cancel that holds on to all the types of event we are interested in – key presses and mouse clicks. Next we remember how long we are going to wait before giving up in another variable time_to_wait.
After that we do something a bit interesting. Up to now we have been dealing with “events” – things that happen such as mouse clicks, key presses and mouse movements, but we have only been responding to them, not creating them. The next 2 lines are how we create our own event, that we want to respond to later.
What we want to do is make an event happen in 2 seconds’ time, so that we can give up waiting when it comes. The way we do that is first create an “ID” for it. This is just a numeric “name” that we can use to talk about the same type of event later. In PyGame the right ID to choose for an event you created yourself is pygame.USEREVENT + 1 (and higher numbers if you need more than one). We don’t know what number PyGame has stored inside its own variable pygame.USEREVENT, and we don’t care – all we care about is that PyGame says if we use numbers bigger than that, we’ll be fine. If we use smaller numbers, we are going to clash with the built-in events like pygame.KEYDOWN that we have already seen.
Once we have an appropriate ID stored inside finished_waiting_event_id we are ready to ask PyGame to create an event that will happen in 2 seconds’ time. We do that by calling the set_timer function inside pygame.time.
Now continue the function by typing all this:

    pressed = False
    waiting = True
    while waiting:
        evt = pygame.event.wait()
        if evt.type == pygame.QUIT:
            quit()
        elif evt.type in event_types_that_cancel:
            waiting = False
            pressed = True
        elif evt.type == finished_waiting_event_id:
            waiting = False
    pygame.time.set_timer( finished_waiting_event_id, 0 )
    return pressed
This is the code that waits for something to happen. It’s quite similar to the loop we saw in part 6, where we were also waiting for something to happen, but it’s slightly more complicated because we have to handle more possibilities.
This function will provide an answer to the code that called it, and the answer is going to be whether or not the player pressed a key. Providing an answer like this is called “returning a value” and we do it by writing a line like the last one here, using the return statement. The first line above creates a variable called pressed, which starts off set to False, meaning they haven’t pressed anything, and somewhere in between it might get set to True, and then the last line returns this answer – True or False for whether or not a key was pressed.
In between we have a loop similar to part 6 – we create a variable called waiting which tells us whether to keep looping, and then we loop using the while line through all the lines indented below it. The inside of the loop (the part that gets repeated) waits for an event to happen with pygame.event.wait and then has a series of if and elif sections, that do different things depending what type of event happened.
First (the if part), we check whether the player closed the window. If so, we call our function quit, that stops everything immediately.
Next (the first elif), we check whether a key or mouse button was pressed. If so, we make sure the value we will return inside pressed is updated to say a key was pressed (i.e. we make it True), and then we set waiting to False so that we will stop looping at the end of this repeat.
Now (the second elif), we check whether what happened was the special event we created earlier when we called set_timer. If so, we need to end the loop (so we set waiting to False), but no key was pressed, so we leave pressed as it was.
Finally, if the event that happened didn’t fit any of our categories (for example it might have been a mouse movement event), we do absolutely nothing because none of the if or elif sections was triggered. We jump straight back to the start of the loop and start waiting for the next event to happen.
So, eventually, either an interesting event happens, or the “we’ve been waiting too long” event we created happens, and we come out of the while loop. The last thing we have to do is cancel the “we’ve been waiting too long” event, just in case it hasn’t happened yet – we don’t want it confusing us later. We do that by calling set_timer again, with the same ID as before, but with 0 for the amount of time to wait – this tells PyGame we’re not interested in that event any more.
Once we’ve done that we return the answer about whether a key was pressed, and we’re done with shape_wait.
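Returning a value, the way shape_wait hands back True or False, can be seen in a tiny standalone sketch. The function is_big is invented for this example:

```python
# A function that prepares an answer in a variable, and then
# "returns" it to whoever called the function.
def is_big(number):
    answer = False
    if number > 100:
        answer = True
    return answer

print(is_big(5))    # False
print(is_big(500))  # True
```

Whoever calls is_big can store the answer in a variable, just like we store the answer from shape_wait inside pressed.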
Next up are green_success and green_failure. These tell the player whether they succeeded or failed – did they manage to press when they saw green?
They’re both quite simple. Type these just above green_shape:
def green_success():
    tick()
    pygame.time.wait( 2000 ) # Can't quit or skip!

def green_failure():
    cross()
    pygame.time.wait( 2000 ) # Can't quit or skip!
If a key was pressed on green, we want to draw a “tick” mark on the screen, so we call a function tick that we’ll write in a moment. Similarly, if a key wasn’t pressed, we will draw a cross.
Drawing shapes is fairly straightforward, but a bit verbose. Just above green_success type these 2 functions:
def tick():
    colour = pygame.Color( "green" )
    w = screen.get_width()
    h = screen.get_height()
    points = [ ( w/4, h/2 ), ( w/2, 3*h/4 ), ( 3*w/4, h/4 ) ]
    screen.fill( pygame.Color( "black" ) )
    pygame.draw.lines( screen, colour, False, points, 10 )
    pygame.display.flip()

def cross():
    colour = pygame.Color( "red" )
    w = screen.get_width()
    h = screen.get_height()
    screen.fill( pygame.Color( "black" ) )
    pygame.draw.line( screen, colour, ( w/4, h/4 ), ( 3*w/4, 3*h/4 ), 10 )
    pygame.draw.line( screen, colour, ( w/4, 3*h/4 ), ( 3*w/4, h/4 ), 10 )
    pygame.display.flip()
Both of these functions get some variables ready, do some maths on them to decide where on the screen to start and end the lines they are drawing, and then draw the lines (after making a black background with screen.fill).
The tick is drawn by passing in a list of 3 points on the screen to the pygame.draw.lines function, and the cross is drawn using two separate calls to pygame.draw.line, one for each line. After we’ve drawn our lines, we call pygame.display.flip as normal to show them on the screen.
With those two functions in place, we’re ready to try it out. Open LXTerminal in the usual way, and type our usual incantation:
./redgreen.py
If all has gone well, you should see the green shape as before, but when you press a key a tick should appear. If you don’t press a key, after a while a red cross should appear.
If that doesn’t happen, check your typing really carefully, and compare your version with mine: redgreen.py.
Next time, we’ll add some writing explaining what you should do at each step.
Part 09 – Lots more words.
Part 10 – Red square
Writing your first ever computer program on the Raspberry Pi.
We’re writing a really simple game – you have to press a key when you see green, and not press a key when you see red.
I’ve been promising for a while that there will be a red square as well as a green circle, and this time we’re going to make that dream a reality.
The code we’ve written so far has this overall structure:
start()
ready_screen()
wait()
shape()
end()
It gets started, tells you to get ready, waits a random time, shows you a shape and collects your keypresses (or not), and then it ends.
Previously, the shape function just showed a green shape every time, by calling a function called green_shape. Not any more – change it to look like this:
def shape():
    GREEN = 0
    RED = 1
    shape = random.choice( [GREEN, RED] )
    if shape == GREEN:
        green_shape()
    else:
        red_shape()
The first 2 lines just make two variables for us called GREEN and RED. They can have any values, so long as they’re not the same, so I’ve chosen 0 and 1.
The reason we’ve made these variables is so that we can make a random choice of one or the other. To do this, we call the choice function from the random module (which we already have listed as imported at the top). choice takes in a list of things to choose from, and returns the one it chose randomly.
So the shape variable will contain the value of either GREEN or RED. We do an if to decide what to do based on which it is.
If we chose GREEN, we do what we used to do, and call green_shape but if we chose RED we will end up in the else part of the if, and call a new function we will call red_shape.
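random.choice is easy to play with on its own. Here is a quick sketch, separate from our game:

```python
# random.choice picks one item from the list we give it.
import random

pick = random.choice(["heads", "tails"])
print(pick)  # either "heads" or "tails"
```

Run it several times and you should see both answers come up, in no particular pattern.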
So, what will red_shape look like? Quite a lot like green_shape actually. Just above the shape function, add this:
def red_shape():
    red = pygame.Color( "red" )
    height = 2 * ( screen.get_height() / 3 )
    left = ( screen.get_width() / 2 ) - ( height / 2 )
    top = screen.get_height() / 6
    screen.fill( pygame.Color( "white" ) )
    pygame.draw.rect( screen, red, ( left, top, height, height ), 0 )
    write_text( screen, "Don't press!", pygame.Color( "black" ), False )
    pygame.display.flip()
    pressed = shape_wait()
    if pressed:
        red_failure()
    else:
        red_success()
Most of this function is taken up with drawing a red rectangle, which we do by working out what size it should be, then calling pygame.draw.rect with the right dimensions and colour. After that we write some text encouraging the player to leave their keyboard alone, and then we do the normal pygame.display.flip to show this on the screen.
Once we’ve drawn the shape, we do something very similar to what we did inside green_shape – we wait to see what happens, by calling the already-existing shape_wait function, and get the answer back from it saying whether or not the player pressed something.
This time, if they pressed something they got it wrong, so we call a new function called red_failure, and if they did nothing they did the right thing, so we call another new function called red_success.
These two functions are also quite simple. Type them in above green_shape:
def red_success():
    tick()
    green = pygame.Color( "green" )
    white = pygame.Color( "white" )
    write_text( screen, "Well done!", green, True )
    write_text( screen, "You didn't press on red!", white, False )
    pygame.display.flip()
    pygame.time.wait( 2000 ) # Can't quit or skip!

def red_failure():
    cross()
    red = pygame.Color( "red" )
    white = pygame.Color( "white" )
    write_text( screen, "Bad Luck!", red, True )
    write_text( screen, "Red means don't press anything!", white, False )
    pygame.display.flip()
    pygame.time.wait( 2000 ) # Can't quit or skip!
These functions re-use lots of existing code – they draw a tick or a cross and then use write_text to tell the player what happened, and then they do the normal flip and wait for a bit.
Try your program – it should now show you a red square about half of the time, instead of a green circle every time, and it should give you feedback about whether you did the right or the wrong thing. Feel free to try it a few times, and make sure you’ve run through all the combinations. If you made a mistake somewhere you may not see it until you actually run the relevant bit of code.
Once you’re happy with that, let’s fix a little bug while we’re here. Somehow I missed a bit from the shape_wait function, so if you press a key on the ready screen, it will register as you pressing the key really quickly when the shape appears. Try running your program and hammering a key when it says “Ready?”. You’ll see it thinks you pressed immediately the shape appears (whether red or green). This is annoying, and could even allow cheating, but we can prevent it by adding a single line to shape_wait, just after the call to pygame.time.set_timer:

    pygame.event.clear()
As we’ve seen before, pygame.event.clear tells PyGame to forget all the events that have happened recently, which prevents this problem.
If you want to check your version against mine, you can find it here: redgreen.py.
We’ve nearly built a fully-working game. We’ve got two main tasks ahead of us: fix the “can’t exit” bug, and allow multiple rounds with a score at the end. We’ll do them in that order, so next time it’s bug-fixing, and some more refactoring to help us do it.
Part 11 – Being less rude
Writing your first ever computer program on the Raspberry Pi.
We’ve nearly finished our game. Next on our list is to fix that bug where you can’t exit some of the time, and make our code a bit tidier in the process.
The first thing I want to do is make the Esc key quit the game. This is fairly normal behaviour, and will help if we run in full screen mode, where there is no close button to click.
Open up redgreen.py in LeafPad as usual, and find the function shape_wait and the line if evt.type == pygame.QUIT:. Replace the whole line with this:
if is_quit( evt ):
We’ve replaced the code asking whether the event was a quit event (i.e. the user closed the window) with a call to a function. Let’s write that function. Just above shape_wait type in this function:
def is_quit( evt ):
    return evt.type == pygame.QUIT
You’ve just done another bit of refactoring. Instead of writing the code directly in the if line, we’ve added a call to the function, and the function does exactly the same thing as we did before: it returns True if the event is a quit event, and False otherwise. If you try the program now (by saving in LeafPad, opening LXTerminal and typing ./redgreen.py) you should see it behaves exactly as it did before.
You may well ask why we did it. The answer is because now we can change the is_quit function to do something extra. Replace it with this:
def is_quit( evt ):
    return (
        evt.type == pygame.QUIT
        or (
            evt.type == pygame.KEYDOWN
            and evt.key == pygame.K_ESCAPE
        )
    )
This is a more complicated bit of logic, saying that we will return True if either of two things is true: EITHER the event is a quit event (as before), OR the event is a KEYDOWN, and the specific key that was pressed was the Escape key (“Esc”) on the keyboard. Notice that the brackets around the “and” part help us know which bits go together – we don’t want to quit for any keypress event, only one where the key is Escape.
If you try your program again, you should find you can press Escape to exit when you’re looking at a red or green shape.
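The or/and logic in is_quit can be tried out with plain True and False values. Here is a sketch with made-up variable names:

```python
# Combining conditions: quit if the window closed, OR if a key
# was pressed AND that key was Escape.
window_closed = False
key_pressed = True
key_is_escape = True

should_quit = (window_closed
               or (key_pressed and key_is_escape))
print(should_quit)  # True
```

The brackets around the "and" part matter: without them, a stray keypress of any key could count as quitting.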
The shape_wait function is quite a useful one, and the next thing we want to do is use it in a few more places. Before we can do that, we need to refactor it to make it a bit more flexible.
Make a new function called timed_wait further up, just before start, and cut the entire body of shape_wait and paste it into timed_wait. So it looks like this:
def timed_wait():
    event_types_that_cancel = pygame.KEYDOWN, pygame.MOUSEBUTTONDOWN
    ... all the rest of shape_wait here ...
    pygame.time.set_timer( finished_waiting_event_id, 0 )
Now change shape_wait to look like this:
def shape_wait():
    """
    Wait while we display a shape.  Return True if a key
    was pressed, or false otherwise.
    """
    return timed_wait()
As usual, we’ve just replaced some code with a call to a function that contains the exact same code, so hopefully our program will work exactly as before.
Now we’re going to make timed_wait a bit more general, while still preserving all the same behaviour. We do this by changing some of the variables we use in timed_wait into arguments we pass in. Change shape_wait to look like this:
def shape_wait():
    """
    Wait while we display a shape.  Return True if a key
    was pressed, or false otherwise.
    """
    press_events = pygame.KEYDOWN, pygame.MOUSEBUTTONDOWN
    return timed_wait( 2000, press_events ) # 2 seconds
and modify timed_wait to accept those arguments (notice I also added a description of what it does):

def timed_wait( time_to_wait, event_types_that_cancel ):
    """
    Wait for time_to_wait milliseconds, but stop early if an
    event whose type is in event_types_that_cancel happens.
    Return True if we stopped because of an event, or False
    if the time ran out.
    """
    finished_waiting_event_id = pygame.USEREVENT + 1
    pygame.time.set_timer( finished_waiting_event_id, time_to_wait )
    pygame.event.clear()
    pressed = False
    waiting = True
    while waiting:
        evt = pygame.event.wait()
        if is_quit( evt ):
            quit()
        elif evt.type in event_types_that_cancel:
            waiting = False
            pressed = True
        elif evt.type == finished_waiting_event_id:
            waiting = False
    pygame.time.set_timer( finished_waiting_event_id, 0 )
    return pressed
Again, after these changes our program should work exactly as before – we’re passing values in as arguments that are exactly what we used to make as variables. But now, timed_wait is a lot more flexible, and we’ll use that flexibility very soon.
But first, we need to make a change to cover the unexpected. Inside timed_wait we’ve made a timer using pygame.time.set_timer and at the end we’ve cancelled it by calling pygame.time.set_timer again. However, if something goes wrong in between where we create the timer, and where we cancel it, it’s possible that something called an “exception” will be “thrown”. When an exception is thrown, the program stops running normally, line by line, and jumps out to somewhere else.
I’m not going to explain any more about exceptions here, but I am going to show you how to make absolutely sure that something will happen, even if an exception is thrown. The way to do that is to use a try ... finally block. We want to make sure our timer is always cancelled, so as soon as we’ve made it, we start a try block, and at the end we say finally. Anything inside that finally block will be run, even if an exception was thrown in the code inside the try block. The changes look like this: ) try: pygame.event.clear() pressed = False waiting = True while waiting: evt = pygame.event.wait() if is_quit( evt ): quit() elif evt.type in event_types_that_cancel: waiting = False pressed = True elif evt.type == finished_waiting_event_id: waiting = False finally: pygame.time.set_timer( finished_waiting_event_id, 0 ) return pressed
The lines in green are new, and the ones in blue are just indented by four more spaces to make them part of the try and finally blocks. Now, we know that even if something goes wrong while we’re waiting, we will always cancel the timer we set up. Yet more good manners!
Now, after all that work, we finally have a timed_wait function that is flexible enough to be used everywhere we want to wait for something. Let’s start with the wait function. Change it to look like this:
def wait(): time_to_wait = random.randint( 1500, 3000 ) # Between 1.5 and 3 seconds timed_wait( time_to_wait, () )
By using our clever timed_wait function instead of the built-in pygame.time.wait we gain some extra politeness: we can now quit the program on the “Ready?” screen by closing the window or pressing the Escape key. Try it!
Notice that we passed in () as the second argument to timed_wait. This argument is called event_types_that_cancel and is normally a list of types of event that will stop us waiting. () is Python’s way of saying an empty list, so we’re saying we don’t want to stop for any normal events (such as key presses) – only for quit events, or when the time is up.
Before we change lots more code to use timed_wait, we are going to make a new variable that we can use in lots of places in the code. Quite a few times, we want to wait for either a key press or a mouse click. We want this when we’re showing a red or green square, and when we’re at the end saying goodbye, and ideally we also want it when we’re telling the user how they did, so they can skip it if they’re impatient. So, right near the top, add a new line just below where we create screen_size:
screen_width = 640 screen_height = 480 screen_size = screen_width, screen_height press_events = pygame.KEYDOWN, pygame.MOUSEBUTTONDOWN
This variable press_events will be our list of normal event types that we consider to be a “press” – essentially, the player doing something. Now that we’ve defined this at the top, we can take out the variable with the same name from shape_wait – it will use the global one instead:
def shape_wait(): """ Wait while we display a shape. Return True if a key was pressed, or false otherwise. """ return timed_wait( 2000, press_events ) # 2 seconds
We can also re-use press_events in the end function, and at the same time call our new timed_wait function:
def end(): screen.fill( pygame.Color( "black" ) ) white = pygame.Color( "white" ) write_text( screen, "Thanks for playing!", white, True ) write_text( screen, "Press a key to exit", white, False ) pygame.display.flip() pygame.event.clear() timed_wait( 0, press_events )
Notice that this time we pass zero as the time to wait – this just means we will never time out on this screen – the zero gets passed in and used in the pygame.time.set_timer call, but passing in zero for the time there means “cancel this event”, and is harmless if the event doesn’t actually exist, so no timer will be set up – we will only stop waiting when the player presses something, which is what we want here.
Now we can make our success and failure functions more polite. Change them all to look like this:
def green_success(): tick() green = pygame.Color( "green" ) white = pygame.Color( "white" ) write_text( screen, "Well done!", green, True ) write_text( screen, "You pressed on green!", white, False ) pygame.display.flip() timed_wait( 2000, press_events ) # 2 seconds def green_failure(): cross() red = pygame.Color( "red" ) white = pygame.Color( "white" ) write_text( screen, "Bad Luck!", red, True ) write_text( screen, "Green means press something!", white, False ) pygame.display.flip() timed_wait( 2000, press_events ) # 2 seconds def red_success(): tick() green = pygame.Color( "green" ) white = pygame.Color( "white" ) write_text( screen, "Well done!", green, True ) write_text( screen, "You didn't press on red!", white, False ) pygame.display.flip() timed_wait( 2000, press_events ) # 2 seconds def red_failure(): cross() red = pygame.Color( "red" ) white = pygame.Color( "white" ) write_text( screen, "Bad Luck!", red, True ) write_text( screen, "Red means don't press anything!", white, False ) pygame.display.flip() timed_wait( 2000, press_events ) # 2 seconds
They all call timed_wait saying wait for 2 seconds, but skip if a key is pressed because the player is impatient to get on to the next round. This change means not only can you skip past these success and failure screens, but also you can quit while they are visible, and the last vestige of rudeness has been wiped out from our game.
Well done – just one job left, which is to allow several rounds, and count the player’s score as they play. We’ll do that next time.
In the meantime, you can fix a bug I made – I typed get_width instead of get_height, which made my circles too big. Change the line inside green_shape that looks like radius = screen.get_width() / 3 to look like this:
radius = screen.get_height() / 3
There we are – much better
You can check your verson against mine here: redgreen.py
See you next time, when hopefully we’ll finish the game!
Part 12 – Scoring, done!
Writing your first ever computer program on the Raspberry Pi.
Today, we finish!
Our game is almost done. All we need to do now is let you play several times, and give you a score at the end.
First, because we’re going to use it lots of times, we need to make the ready_screen function set its background colour properly. Open redgreen.py in LeafPad, and add a single line to the function ready_screen, making it look like this:
def ready_screen(): screen.fill( pygame.Color( "black" ) ) white = pygame.Color( "white" ) write_text( screen, "Ready?", white, True ) pygame.display.flip()
Previously, ready_screen was always the first thing we did, so we got away with not drawing a background colour because it starts off plain black. Now, we need to do it.
Next, let’s do the really interesting part. We want to play the game several times, and whenever we want to do something several times, we need a loop. This time we’ll use a for loop, letting us go through a list of things. Scroll to the very bottom, and change the code to look like this:
# We start from here start() for i in range( 10 ): ready_screen() wait() shape() end()
The new lines are green above, and lines that haven’t changed except being indented by putting four spaces at the beginning are blue.
A for loop lets you run through a list of things, running the same code each time. A for loop always looks like for NAME in LIST where NAME is the name of a new variable, and LIST is a list of things. What we’ve done here is make a list of 10 numbers by calling the range function and giving it an argument of 10, and told Python to put the particular item of the list that we’re working on now into a variable called i.
So, the ready_screen, wait and shape functions will each get called 10 times. Each time they are called, i will be a different number. We’re not using i yet, so all that matters for the moment is that the code runs 10 times. Try it out by opening LXTerminal and typing ./redgreen.py, and you’ll see that you can play the game 10 times, and then it will finish.
Playing 10 times is all very well, but it’s not a lot of fun if I can’t see how well I’ve done at the end. Let’s keep track of our score.
We’ll award the player 1 point for every time they get it right, and no points if they get it wrong. The places where we know which of these has happened are in red_shape and green_shape. Let’s change them to pass back a score (either 1 or 0) depending on what you did:
def green_shape(): ...the rest of green_shape is still here... pressed = shape_wait() if pressed: green_success() return 1 else: green_failure() return 0
def red_shape(): ...the rest of green_shape is still here... pressed = shape_wait() if pressed: red_failure() return 0 else: red_success() return 1
I’ve abbreviated it above, but we’re not changing anything in these functions except at the very bottom, where we’re adding two return lines to each function.
Whenever the player succeeds, we return a score of 1 point, and whenever they fail we return 0 points.
We’re not doing anything with this score yet. We call the green_shape and red_shape functions from inside shape, so first let’s make sure shape passes back the answer to where we need it:
def shape(): GREEN = 0 RED = 1 shape = random.choice( [GREEN, RED] ) if shape == GREEN: return green_shape() else: return red_shape()
shape doesn’t need to do anything special here – just take the answer coming from green_shape or red_shape and use the return statement to pass it back to us.
Now shape is giving us back an answer, we can use it in the main code right at the bottom:
start() correct = 0 for i in range( 10 ): ready_screen() wait() correct += shape() end( correct )
We’ve made a variable called correct that keeps hold of how many correct answers we’ve been given (i.e. the score). It starts off as zero, and every time we call shape we add on the answer that comes back. shape will either return 0 or 1, so correct will increase by either 0 or 1 each time.
The last thing we’ve done here is pass the answer (the player’s final score) into the end function so we can display it. To use this answer, we need to change end a bit:
def end( correct ): print "You got %d correct answers" % correct screen.fill( pygame.Color( "black" ) ) white = pygame.Color( "white" ) write_text( screen, "Thanks for playing!", white, True ) msg = "Score: %d Press a key to exit" % correct write_text( screen, msg, white, False ) pygame.display.flip() pygame.event.clear() timed_wait( 0, press_events )
We changed the def line to allow us to pass in the score, giving it the same name we used below, correct. Then we added a line that prints out the answer into the terminal, just for good measure, and we modified the write_text line, splitting it into 2 parts – creating a variable called msg containing our message, and then using it on the next line.
Twice above we’ve used a nice feature of Python that makes building our own messages quite simple. If you write a string like "Score: %d Press a key to exit" you can substitute a number into it using the % “operator” as we’ve done (an operator is something like + or / that combines 2 things). Where the %d appears in the string, it gets replaced by the number inside the variable you supply (correct in our case). You can also substitute in other strings (using %s) and lots of other things if you want to. This allows us to put the score into a string and then print it on the screen.
If you try your game now you will see it counts how many right answers you got and tells you at the end. Wouldn’t it be better, though, if it told you how you were doing all the way through?
Scroll up to the ready_screen function and modify it to take two arguments and use them to keep us informed:
def ready_screen( go_number, correct ): screen.fill( pygame.Color( "black" ) ) white = pygame.Color( "white" ) write_text( screen, "Ready?", white, True ) go_number_str = "Turn: %d Score: %d" % ( ( go_number + 1 ), correct ) write_text( screen, go_number_str, pygame.Color( "white" ), False ) pygame.display.flip()
The arguments we take are called go_number and correct. correct will be the current score, as we’ve seen before, and go_number is the counter telling us how far we’ve got.
We use a slightly different form of the % operator here to substitute two values into a string instead of one. To do this, we put a list of values on the right instead of just one: ( ( go_number + 1 ), correct ). We need brackets around the outside so that Python knows it is a list and doesn’t just take the first value on its own. When we use a list like this, the values will be substituted in order, one for each %d (or %s or similar) that is in the string. You must always have the same number of %ds in the string as values in the list.
You may be wondering why we have to add one to go_number. We’ll see in a moment.
To be able to provide the two new arguments to ready_screen we need to change the code right at the bottom to look like this:
start() correct = 0 for i in range( 10 ): ready_screen( i, correct ) wait() correct += shape() end( correct )
Remember when we made the for loop I mentioned that i would be a different number each time we ran the code inside the loop? We pass that number in to ready_screen where it will be used as the go_number. We also pass in the current score, correct.
The reason why we needed to add 1 to go_number inside ready_screen is that when you have a loop like for i in range( 10 ), the variable i actually gets the values 0, 1, 2, … with the last value being 9, instead of ranging from 1 to 10 as you might expect. The reasoning behind this is kind of lost in the mists of time, and kind of makes perfect sense, depending how you look at it. Anyway, believe me when I tell you that once you’ve got used to it you’re going to find it warm and comforting, but for now you may find it a bit weird.
And, on that typically strange note, we have finished! Try out your program, and you should find it tells you what go you’re on, and what your score is all the way through.
Something else you might like to do now is make your game run in full-screen mode (like many games). You can do that by changing the start function like this:
def start(): global screen pygame.init() screen = pygame.display.set_mode( screen_size, pygame.FULLSCREEN )
If you have any problems, compare your version with mine here: redgreen.py
I’ve made a slightly extended version of the game that measures your reaction speed and gives you a score based on how quickly you press. In future I may even add more features. If you’d like to follow the project, you can find it here: redgreen on github.
I’ll be doing more series in the future, some for beginners like this one, and some more advanced topics. If you’d like to find out what I’m doing, subscribe to the blog RSS feed, follow me on Twitter or go to my YouTube page and subscribe.
Low cost PCB on PCBWay - only $5 for 10 PCBs and FREE first order for new members
PCB Assembly service starts from $88 with Free shipping all around world + Free stencil
PCBWay 2nd PCB Design Contest | https://projects-raspberry.com/raspberry-pi-game/ | CC-MAIN-2019-09 | refinedweb | 9,444 | 80.82 |
tried to make a testsuite for stand alone macro code run, but get unexplained error message.
See the calling code below, see Testfile and module in linked post.
I am not just right now up to have gui testing for my stuff, but anyway I am interested, if there are limitations. Is testing the gui from bash command line any further problem? How is gui-testing done then typically?
Who can help me?
Tia
Discussion:Discussion:2.2.1 +macro, bash console, path.append in header: "ImportError"
snippet to head of Testfile:
CODE: SELECT ALL
import sys
sys.path.append("/usr/lib/freecad/lib/")
import FreeCAD as App
import FreeCADGui as Gui
CODE: SELECT ALL
python2.7 /home.../Testpointtopost.py
Traceback (most recent call last):
File "/home/.../Testpointtopost.py", line 21, in <module>
import FemGui
ImportError: Cannot load Gui module in console application.
Solution state:+without Gui
CODE: SELECT ALL
#import FemGui
#import pointtopost
Seems that my unittest does not couple with freecad gui python modules. It the tested module imports a gui module then it comes to this error message running the test. Searched for this error message and found that it is from freecad, placed 3 times here in the forum. The problems with it have not been solved,
So how to deal with it? .. I comment import-lines so I have an either unittest setup with broken gui, or comment it out for otherway arround. This isnot at all confortable: If someone could advice me better, thanks.
Code: Select all
File "/home/.../Testpointtopost.py", line 21, in <module> import FemGui ImportError: Cannot load Gui module in console application. | https://forum.freecadweb.org/viewtopic.php?f=10&t=14567&p=116644 | CC-MAIN-2019-22 | refinedweb | 271 | 66.54 |
addon
npm install addon-redux
In order for the React Redux addon to function correctly:: [] }
Further control of the redux state is provided using storybook args. Args can be linked to the redux store using the
ARG_REDUX_PATH key in the
argTypes key of the default CSF export. The value of the
ARG_REDUX_PATH is a dot delimited string representing the path that the arg corresponds to in the store. Integer segments are treated as array indices.
import React from 'react' import App from './App' import { ARG_REDUX_PATH } from 'addon-redux' export default { title: 'App', component: App, argTypes: { name1: { control: { type: 'text' }, [ARG_REDUX_PATH]: 'todos.0.text' } } }; const Template = (args) => <App />; export const All = Template.bind({}); All.args = { name1: 'First Value', completed1: false };
addon-redux currently supports one storybook parameter that can be used to change the redux state on story load,
PARAM_REDUX_MERGE_STATE. This parameter takes a JSON string or object = {}; | https://openbase.com/js/addon-redux | CC-MAIN-2021-39 | refinedweb | 147 | 55.64 |
Re: Simple math program.
- From: Simon <simon_nospam_rigby@xxxxxxxxxxxx>
- Date: Mon, 05 Dec 2005 16:40:06 +0000
The {0} is substituted for the appropriate argument (in this case 5+4). The argument evaluates to 9 and is inserted into the string as position {0}. Why did you expect it to be 1? Not having a go, just interested.
Simon
As a further example
Console.WriteLine("5 + 4 = {0} and 5 - 4 = {1}", 5+4, 5-4);
would result in
5 + 4 = 9 and 5 - 4 = 1
Eric Anderson wrote:
Would someone be kind and explain why and how this works. It comes up with the correct answer, but I dont understand how it does. Wouldn't Console.WriteLine("5+4={0}",5+4); wouldn't it come up as 5+4=1 instead of 5+4=9? Sorry guys im new to all of this but im learning. Thanks for the help in advance!
using System;
namespace SimpleMath
{
class DoMath
{
static void Main(string[] args)
{
int a = 1;
//addition with integers works as expected
Console.WriteLine("5 + 4 = {0}", 5 + 4);
Console.Write("Please press \"enter\" to continue");
Console.ReadLine();
} // end main
} // end class
} // end namespace
-- (\(\ (=':') (,(")(") Eric Anderson
.
- References:
- Simple math program.
- From: Eric Anderson
- Prev by Date: RE: Solution for storing a lot of web app configuration ?
- Next by Date: Re: Strange Exception in default Hashtable constructor
- Previous by thread: Re: Simple math program.
- Next by thread: Solution for storing a lot of web app configuration ?
- Index(es): | http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.csharp/2005-12/msg01016.html | crawl-002 | refinedweb | 249 | 68.06 |
Picking up where we left off in the previous post, we are working in a new project that will accommodate all of the controls we highlight throughout this entire series, with the hub of our navigation being a RadJumpList instance. The next step we have is to add some style to our RadJumpList in order to both take advantage of the built-in functionality and to make it look like a more inviting navigation experience.
Step 1 – Adding an ItemTemplate
One of the huge advantages of any listbox-type control is the ability to style the individual items. RadJumpList is no different, allowing us to quickly add an ItemTemplate to make our items look a little nicer within the UI. In our case, we know that we want to utilize this as navigation, so the items need to be big enough to tap with our finger but small enough to ensure the FriendlyName (the name we will display from our MyMVVMItem) fits nicely and doesn’t run over the edge of the screen. A quick and easy template with size 28 font will do nicely for this:
<
telerikData:RadJumpList.ItemTemplate
>
DataTemplate
Grid
TextBlock
Text
=
"{Binding FriendlyName}"
FontSize
"28"
/>
</
>
We’re already looking a bit better here:
Step 2 – Adding Grouping
One of the things I mentioned previously is the two awesome features that RadJumpList have built-in to the control – grouping and sorting. This enables us to do what I was referring to as the ‘People Hub’ experience, in which we have grouped headers that also have their own functionality for jumping between groups (hence jump-list :D). Adding grouping is actually really easy and can be done 100% in Xaml:
telerikData:RadJumpList.GroupDescriptors
data:PropertyGroupDescriptor
PropertyName
"Category"
SortMode
"Ascending"
Stepping through the code, we first open up the GroupDescriptors on RadJumpList to allow for our PropertyGroupDescriptor definition. In our case, MyMVVMItem has a Category that I split things into, which is perfect for the PropertyName, and since I want things ordered alphabetically I set the SortMode to Ascending. Now we’ve got this looking a lot better:
Step 3 – Utilizing Jump Functionality
Now I just mentioned the ability to utilize jump functionality with this control, so the first question is of course how much code will this take? The answer is zero!
RadJumpList will automatically create a jump screen for you based on your categories. You can of course modify this by changing the GroupPickerItemTemplate or modify the default group headers with the GroupHeaderItemTemplate, but out-of-the-box we have styled these for you based on phone styling and theme choices, meaning they’ll blend seamlessly with your Windows Phone application regardless of the theme you are using. Just so you can see what I mean, here is a look at the same code with dark/blue theme versus light/lime:
Neat, right? :) When you select any of the headers from the jumplist, the control will automatically jump to that group. It’s great getting functionality like that for free!
Step 4 – ItemTap and MVVM Light (to the rescue)
The last thing that we need to do in order to fully utilize RadJumpList as a navigation hub is to enable users to navigate when tapping an item. Makes sense, right?
ItemTap is the event we would normally subscribe to when a user is going to tap an item and have something triggered. We could also do selection/selecteditem/selectionchanged, but I want to take advantage of the fact that we don’t need selection, just to know that an individual item was tapped by the end user. But we’re using the MVVM pattern, so just setting an event won’t quite do the trick as our code-behind is pretty clean right now. This is where MVVM Light comes into play.
As you may have noticed, we added the System.Windows.Interactivity assembly to our project as well as a namespace to utilize that assembly – this will let us set an EventTrigger on our control. That gets us halfway there – EventToCommand from MVVM Light will complete the picture. With EventToCommand, we can easily both bind the event to a command on our viewmodel (the name kind of gives that away!) as well as send either command parameters or the original event arguments to the command. In our case, we want to utilize the command parameters, so our Xaml will look a bit like this:
i:Interaction.Triggers
i:EventTrigger
EventName
"ItemTap"
command:EventToCommand
Command
"{Binding ItemTapCommand}"
PassEventArgsToCommand
"True"
While our ViewModel will now look like this, courtesy of a new RelayCommand and the Messenger built into MVVM Light:
public
RelayCommand<Telerik.Windows.Controls.ListBoxItemTapEventArgs> ItemTapCommand {
get
;
set
; }
{
ItemTapCommand =
new
RelayCommand<Telerik.Windows.Controls.ListBoxItemTapEventArgs>(ItemTap);
}
void
ItemTap(Telerik.Windows.Controls.ListBoxItemTapEventArgs e)
string
navName = (e.Item.DataContext
as
MyMVVMItem).NavigationName;
Messenger.Default.Send<
>(navName,
"NavigationLocation"
);
}
With this code, we set a new RelayCommand called ItemTapCommand, pass the ListBoxItemTapEventArgs to the ItemTap method, and then find the NavigationName of the item tapped to send off for navigation. You’ll get to see more of the details of that in the next post when I do a little maintenance on the app to include things like a RadPhoneApplicationFrame for a nicer user experience, but the end result will be tapping an item and navigating to that demo. Good stuff, right? :)
Check out the up-to-date source code for this project here (including a sneak peak of what that Messenger from MVVM Light is doing with our navName), otherwise stay tuned to more in this WP7 + MVVM series!. | http://www.telerik.com/blogs/windows-phone-7-mvvm-3---more-radjumplist | CC-MAIN-2017-22 | refinedweb | 929 | 57.1 |
[This article was contributed by the SQL Azure team.]
The Open Data Protocol (OData) is an emerging standard for querying and updating data over the Web. OData is a REST-based protocol whose core focus is to maximize the interoperability between data services and clients that wish to access that data. It is being used to expose data from a variety of sources, from relational databases like SQL Azure and file systems to content management systems and traditional websites. In addition, clients across many platforms, ranging from ASP.NET, PHP, and Java websites to Microsoft Excel, PowerPivot, and applications on mobile devices, are finding it easy to access those vast data stores through OData as well.
The SQL Azure OData Service incubation (currently in SQL Azure Labs) provides an OData interface to SQL Azure databases that is hosted by Microsoft. Currently SQL Azure OData Service is in incubation and is subject to change. We need your feedback on whether to release this feature. You can provide feedback by emailing SqlAzureLabs@microsoft.com or voting for it at. Another way to think about this is that SQL Azure OData Service provides a REST interface to your SQL Azure data.
The main protocol to call SQL Azure is Tabular Data Stream (TDS), the same protocol used by SQL Server. While SQL Server Management Studio, ADO.NET and .NET Framework Data Provider for SqlServer use TDS the total count of clients that communicate via TDS is not as large as those that speak HTTP. SQL Azure OData Service provides a second protocol for accessing your SQL Azure data, HTTP and REST in the form of the OData standard. This allows other clients that participate in OData standard to gain access to your SQL Azure data. The hope is that because OData is published with an Open Specification Promise there will be an abundance of clients, and server implementations using OData. You can think of ADO.Net providing a rich experience over your data and OData providing a reach experience.
SQL Azure OData Service Security
The first thing that jumps to mind when you consider having a REST interface to your SQL Azure data is how do you control access? The SQL Azure OData Service implementation allows you to map both specific users to Access Control Service (ACS) or to allow anonymous access through a single SQL Azure user.
Anonymous Access
Anonymous access means that authentication is not needed between the HTTP client and SQL Azure OData Service. However, there is no such thing as anonymous access to SQL Azure, so when you tell the SQL Azure OData Service that you allow anonymous access you must specify a SQL Azure user that SQL Azure OData Service can use to access SQL Azure. The SQL Azure OData Service access has the same restriction as the SQL Azure user. So if the SQL Azure user being used in SQL Azure OData Service anonymous access has read-only permissions to the SQL Azure database then SQL Azure OData Service can only read the data in the database. Likewise if that SQL Azure user can’t access certain tables, then SQL Azure OData Service via the anonymous user can’t access these tables.
If you are interested in learning more about creating users on SQL Azure, please see this blog post which shows how to create a read-only user for your database.
Access Control Service
The Windows Azure AppFabric Access Control (ACS) service is a hosted service that provides federated authentication and rules-driven, claims-based authorization for REST Web services. REST Web services can rely on ACS for simple username/password scenarios, in addition to enterprise integration scenarios that use Active Directory Federation Services (ADFS) v2.
In order to use this type of authentication with OData you need to sign up for AppFabric here, and create a service namespace that use with SQL Azure OData Service. In the CTP of SQL Azure OData Service, this allows a single user, which has the same user id as the database user to access SQL Azure OData Service via Windows Azure AppFabric Access Control, using the secret key issued by the SQL Azure Labs portal. It doesn’t currently allow you to integrate Active Directory Federation Services (ADFS) integration, nor map multiple users to SQL Azure permissions.
Security Best Practices
Here are a few best practices around using SQL Azure OData Service:
- You should not allow anonymous access to SQL Azure OData Service using your SQL Azure administrator user name. This allows anyone to read and write from your database. You should always create a new SQL Azure user, please see this blog post.
- You should not allow the SQL Azure user used by SQL Azure OData Service to have write access SQL Azure OData Service via anonymous access, because there is no way to control how much or what type of data they will write.
- Because the browser will not support Simple Web token authentication natively, which is required for SQL Azure OData Service using Windows Azure AppFabric Access Control, you will need to build your own client to do anything but anonymous access. For more information see this blog post. That said, it is easiest while OData is under CTP to just use anonymous access with a read-only SQL Azure user.
Summary
I wanted to cover the basic of OData and a lay of the land around security. This information will surely change as OData matures and migrates from SQL Azure Labs to a production release. Do you have questions, concerns, comments? Post them below and we will try to address them. | https://azure.microsoft.com/ru-ru/blog/introduction-to-open-data-protocol-odata-and-sql-azure/ | CC-MAIN-2017-09 | refinedweb | 936 | 58.72 |
When a React component is created, a number of functions are called:
If the component is created with React.createClass (ES5), 5 user-defined functions are called.
If it is created with class Component extends React.Component (ES6), 3 user-defined functions are called.
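The first-mount call order can be sketched with a plain-JavaScript toy "framework". The mount function below only illustrates the sequence of the five ES5 lifecycle calls; it is not React's actual implementation.

```javascript
// Toy mount sequence showing the order React calls the five ES5
// lifecycle functions on first mount. Illustration only, not React code.
const calls = [];

const spec = {
  getDefaultProps() { calls.push('getDefaultProps'); return { name: 'Bob' }; },
  getInitialState() { calls.push('getInitialState'); return { count: 0 }; },
  componentWillMount() { calls.push('componentWillMount'); },
  render() { calls.push('render'); return '<div>Hello</div>'; },
  componentDidMount() { calls.push('componentDidMount'); }
};

function mount(spec) {
  const props = spec.getDefaultProps();   // 1st: defaults resolved
  const state = spec.getInitialState();   // 2nd: initial state built
  spec.componentWillMount();              // 3rd: last chance before the DOM
  const output = spec.render();           // 4th: produce the element
  spec.componentDidMount();               // 5th: DOM nodes now exist
  return { props, state, output };
}

mount(spec);
console.log(calls.join(' -> '));
// getDefaultProps -> getInitialState -> componentWillMount -> render -> componentDidMount
```

Each of these five steps is described in turn below.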
getDefaultProps() (ES5 only)
This is the first method called.
Prop values returned by this function will be used as defaults if they are not defined when the component is instantiated.
In the following example, this.props.name will default to 'Bob' if not specified otherwise:
getDefaultProps() { return { initialCount: 0, name: 'Bob' }; }
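The merge semantics can be illustrated in plain JavaScript. The applyDefaultProps helper below is a made-up stand-in for what React does internally: supplied props win, and defaults fill in anything left undefined.

```javascript
// Sketch of how defaults combine with supplied props. Illustration only,
// not React's real merging code.
function applyDefaultProps(defaults, supplied) {
  const merged = Object.assign({}, defaults);
  for (const key in supplied) {
    if (supplied[key] !== undefined) {
      merged[key] = supplied[key];  // a supplied value overrides the default
    }
  }
  return merged;
}

const defaults = { initialCount: 0, name: 'Bob' };

console.log(applyDefaultProps(defaults, {}));
// { initialCount: 0, name: 'Bob' }   -- nothing supplied, defaults used

console.log(applyDefaultProps(defaults, { name: 'Alice' }));
// { initialCount: 0, name: 'Alice' } -- supplied name wins over the default
```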
getInitialState() (ES5 only)
This is the second method called.
The return value of getInitialState() defines the initial state of the React component. The React framework will call this function and assign the return value to this.state.
In the following example, this.state.count will be initialized with the value of this.props.initialCount:
getInitialState() { return { count : this.props.initialCount }; }
componentWillMount() (ES5 and ES6)
This is the third method called.
This function can be used to make final changes to the component before it is added to the DOM.
componentWillMount() { ... }
render()(ES5 and ES6)
This is the fourth method called.
The
render() function should be a pure function of the component's state and props. It returns a single element which represents the component during the rendering process and should either be a representation of a native DOM component (e.g.
<p />) or a composite component. If nothing should be rendered, it can return
null or
undefined.
This function will be recalled after any change to the component's props or state.
render() { return ( <div> Hello, {this.props.name}! </div> ); }
componentDidMount()(ES5 and ES6)
This is the fifth method called.
The component has been mounted and you are now able to access the component's DOM nodes, e.g. via
refs.
This method should be used for:
componentDidMount() { ... }
If the component is defined using ES6 class syntax, the functions
getDefaultProps() and
getInitialState() cannot be used.
Instead, we declare our
defaultProps as a static property on the class, and declare the state shape and initial state in the constructor of our class. These are both set on the instance of the class at construction time, before any other React lifecycle function is called.
The following example demonstrates this alternative approach:
class MyReactClass extends React.Component { constructor(props){ super(props); this.state = { count: this.props.initialCount }; } upCount() { this.setState((prevState) => ({ count: prevState.count + 1 })); } render() { return ( <div> Hello, {this.props.name}!<br /> You clicked the button {this.state.count} times.<br /> <button onClick={this.upCount}>Click here!</button> </div> ); } } MyReactClass.defaultProps = { name: 'Bob', initialCount: 0 };
getDefaultProps()
Default values for the component props are specified by setting the
defaultProps property of the class:
MyReactClass.defaultProps = { name: 'Bob', initialCount: 0 };
getInitialState()
The idiomatic way to set up the initial state of the component is to set
this.state in the constructor:
constructor(props){ super(props); this.state = { count: this.props.initialCount }; } | https://riptutorial.com/reactjs/example/9240/component-creation | CC-MAIN-2021-17 | refinedweb | 483 | 51.75 |
Opened 3 years ago
Closed 3 years ago
#21924 closed New feature (fixed)
added reverse order for admin_order_field
Description
after adding this you can setup something like this:
def age_in_years(self): return # age calculated via DateField self.date_of_birth age_in_years.admin_order_field = '-date_of_birth'
the '-' at the beginning of '-date_of_birth' indicates the reverse order in the changelist.
already made a pull request here:
and patch is here:
missing:
- patch for docs
- test(s)
won't be able to make the docs update and tests within the next week. so everybody is very welcome if you like to do that :)
I added a test and documentation for this new feature.
My pull request is here:
All tests pass under SQLite. This was my first time contributing to the Django project and I think I did everything correctly but if there's anything 'off' please let me know. Thanks. | https://code.djangoproject.com/ticket/21924 | CC-MAIN-2016-50 | refinedweb | 144 | 65.05 |
Since I was asked, I will also throw Jest for testing into the stack composed of next.js and TypeScript. So let’s have a look into how that can be achieved.
First we need to add Jest:
npm i -D jest @types/jest ts-jest
Next we need to update our package.json for jest configs and to also add the run script:
// package.json { ... "scripts": { ... "test": "jest", } ... "jest": { "moduleFileExtensions": [ "ts", "tsx", "js" ], "transform": { "^.+\\.tsx?$": "ts-jest" }, "testMatch": [ "**/*.(test|spec).(ts|tsx)" ], "globals": { "ts-jest": { "babelConfig": true, "tsConfig": "jest.tsconfig.json" } }, "coveragePathIgnorePatterns": [ "/node_modules/" ], "coverageReporters": [ "json", "lcov", "text", "text-summary" ], "moduleNameMapper": { "\\.(jpg|jpeg|png|gif|eot|otf|webp|svg|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$": "<rootDir>/__mocks__/mocks.js", "\\.(css|less)$": "<rootDir>/__mocks__/mocks.js" } } }
As you can see in the jest config, we specified an additional
tsconfig file. So we create this one next:
// jest.tsconfig.json { "compilerOptions": { "module": "commonjs", "target": "esnext", "jsx": "react", "sourceMap": false, "experimentalDecorators": true, "noImplicitUseStrict": true, "removeComments": true, "moduleResolution": "node", "lib": ["es2017", "dom"], "typeRoots": ["node_modules/@types", "src/@types"] }, "exclude": ["node_modules", "out", ".next"] }
Running the test script with
npm run test lets us know, that jest cannot find any test. Makes sense, since we have not yet create one. So let us add a very simple unit test. First we create a function that we want to test:
// components/Button.tsx import * as React from 'react' export function giveMeFive(): number { return 5 } type Props = { buttonText: string } export default (props: Props) => ( <button onClick={e => console.log(giveMeFive())}>{props.buttonText}</button> )
We added a small function, that we also export so we can test it. Next we will add the jest test:
// components/__test__/button.tests.ts import { giveMeFive } from '../Button' test('generateAttributeIds', () => { expect(giveMeFive()).toBe(5) })
When running the test it succeeds and jest runs with TypeScript.
At first more packages… Yay!
npm i -D react-test-renderer
Next we can add a snapshot test (note that we create a tsx file this time):
// components/__tests__/button.snapshot.test.tsx import * as React from 'react' import Button from '../Button' import renderer from 'react-test-renderer' it('renders correctly', () => { const tree = renderer.create(<Button buttonText="Some Text" />).toJSON() expect(tree).toMatchSnapshot() })
Running the test shows again everything is successful. Since it was the first time we were running the snapshot test, a snapshot is created. You can find them within the newly created
__snapshots__ folder.
Now let us change the Button component just a little bit. And pretend it was an accident:
import * as React from 'react' export function giveMeFive(): number { return 5 } type Props = { buttonText: string } export default (props: Props) => ( <button onClick={e => console.log(giveMeFive())}>{props.buttonText}!</button> )
Do you see the difference? No? It is hard to find… But let us run the test once again:
Now we can see there is a
! that was added. Since we really like it, we can now update the snapshot:
npm test -- -u
And run the test again with
npm run test. We can see the snapshot was updated and the new testrun is successful again:
That is it for now, I hope it was useful :)
Du willst wissen wenn es etwas Neues gibt?
Dann melde dich kurz bei meinem Newsletter an! | https://www.manuel-schoebel.com/blog/jest-unit-snapshot-testing-typescript-nextjs/ | CC-MAIN-2018-51 | refinedweb | 538 | 58.89 |
WARNING: - Search in distribution
- MooseX::Meta::Role::Strict - Ensure we use strict role application.
(19 reviews) - 07 Nov 2015 19:49:08 GMT - Search in distribution
- Moose::Exporter - make an import() and unimport() just like Moose.pm
- Moose::Manual::Delta - Important Changes in Moose
- Moose::Manual::Concepts - Moose OO concepts.5 (4 reviews) - 03 Aug 2015 16:12:01.33 - 17 Apr 2015 05:13:17 GMT - Search in distribution
- MooseX::App::Meta::Role::Class::Documentation - Meta class role for command classes
This module makes it easier to build and manage a base set of imports. Rather than importing a dozen modules in each of your project's modules, you simply import one module and get all the other modules you want. This reduces your module boilerplate ...PREACTION/Import-Base-0.014 - 07 Sep 2015 23:48:14 GMT - Search in distribution
See <> for info about multitons....DMUEY/Role-Multiton-0.2 - 12 Nov 2013 03:25:10 GMT - Search in distribution
- Role::Singleton - Add a singleton constructor to your class
GMT - Search in distribution
- Type::Tiny::Manual - an overview of Type::Tiny
Keeping packages clean When you define a function, or import one, into a Perl package, it will naturally also be available as a method. This does not per se cause problems, but it can complicate subclassing and, for example, plugin classes that are i...RIBASUSHI/namespace-clean-0.26 (1 review) - 07 Oct 2015 17:45:14 GMT - Search in distribution
This distribution consists of a single role, Database::Migrator::Core. This role can be consumed by classes which implement the required methods for the role. These classes will then implement a complete database schema creation and migration system....DROLSKY/Database-Migrator-0.11 - 08 Apr 2014 21:16:52 GMT - Search in distribution
Provides two new keywords, "func" and "method", so that you can write subroutines with signatures instead of having to spell out "my $self = shift; my($thing) = @_" "func" is like "sub" but takes a signature where the prototype would normally go. Thi...BAREFOOT/Method-Signatures-20141021 (3 reviews) - 21 Oct 2014 09:14:38 GMT - Search in distribution
A Type::Tiny-based clone of MooseX::Types::LoadableClass. This is to save yourself having to do this repeatedly... my $tc = subtype as ClassName; coerce $tc, from Str, via { Class::Load::load_class($_); $_ }; Despite the abstract for this module, "Lo...TOBYINK/Types-LoadableClass-0.003 - 04 Apr 2014 11:44:38 GMT - Search in distribution
According to UNIVERSAL the point of "DOES" is that it allows you to check whether an object does a role without caring about *how* it does the role. However, the default Moose implementation of "DOES" (which you can of course override!) only checks w...TOBYINK/MooseX-Does-Delegated-0.004 - 10 Sep 2014 22:40:18 GMT - Search in distribution
MSTROUT/Catalyst-Runtime-5.90103 (20 reviews) - 12 Nov 2015 10:19:42 GMT - Search in distribution
When you import a function into a Perl package, it will naturally also be available as a method. The "namespace::autoclean" pragma will remove all imported symbols at the end of the current package's compile cycle. Functions called in the package its...ETHER/namespace-autoclean-0.28 (3 reviews) - 13 Oct 2015 01:27:25 GMT - Search in distribution t...RHOELZ/MooseX-Getopt-Explicit-0.03 - 03 Nov 2015 03:37:18 GMT - Search in distribution | https://metacpan.org/search?q=MooseX-Role-Strict | CC-MAIN-2015-48 | refinedweb | 565 | 53.61 |
1.5 pazsan 1: \ A less simple implementation of the blocks wordset. 1.1 anton 2: 1.52 anton 3: \ Copyright (C) 1995,1996,1997,1998,2000,2003,2006,2007 Free Software Foundation, Inc. 1.7 anton 4: 5: \ This file is part of Gforth. 6: 7: \ Gforth is free software; you can redistribute it and/or 8: \ modify it under the terms of the GNU General Public License 1.53 ! anton 9: \ as published by the Free Software Foundation, either version 3 1.53 ! anton 18: \ along with this program. If not, see. 1.7 anton 19: 20: 21: \ A more efficient implementation would use mmap on OSs that 1.1 anton 22: \ provide it and many buffers on OSs that do not provide mmap. 23: 1.5 pazsan 24: \ Now, the replacement algorithm is "direct mapped"; change to LRU 25: \ if too slow. Using more buffers helps, too. 26: 1.1 anton 27: \ I think I avoid the assumption 1 char = 1 here, but I have not tested this 28: 1.2 pazsan 29: \ 1024 constant chars/block \ mandated by the standard 1.1 anton 30: 1.5 pazsan 31: require struct.fs 32: 33: struct 1.17 anton 34: cell% field buffer-block \ the block number 35: cell% field buffer-fid \ the block's fid 36: cell% field buffer-dirty \ the block dirty flag 37: char% chars/block * field block-buffer \ the data 38: cell% 0 * field next-buffer 1.5 pazsan 39: end-struct buffer-struct 40: 41: Variable block-buffers 42: Variable last-block 43: 44: $20 Value buffers 45: 1.36 anton 46: \ limit block files to 2GB; gforth <0.6.0 erases larger block files on 47: \ 32-bit systems 48: $200000 Value block-limit 49: 1.5 pazsan 50: User block-fid 1.30 anton? 1.1 anton 60: 1.17 anton 61: : block-cold ( -- ) 1.16 jwilke 62: block-fid off last-block off 1.17 anton 63: buffer-struct buffers * %alloc dup block-buffers ! 
( addr ) 64: buffer-struct %size buffers * erase ; 1.1 anton 65: 1.43 anton 66: :noname ( -- ) 67: defers 'cold 68: block-cold 69: ; is 'cold 1.5 pazsan 70: 71: block-cold 72: 1.24 crook 73: Defer flush-blocks ( -- ) \ gforth 1.5 pazsan 74: 1.24 crook 75: : open-blocks ( c-addr u -- ) \ gforth 1.36 anton 76: \g Use the file, whose name is given by @i{c-addr u}, as the blocks file. 77: try ( c-addr u ) 78: 2dup open-fpath-file throw 1.8 pazsan 79: rot close-file throw 2dup file-status throw bin open-file throw 1.50 anton 80: >r 2drop r> 81: endtry-iferror ( c-addr u ior ) 1.36 anton 82: >r 2dup file-status nip 0= r> and throw \ does it really not exist? 83: r/w bin create-file throw 1.48 anton 84: then 1.36 anton 85: block-fid @ IF 86: flush-blocks block-fid @ close-file throw 87: THEN 1.5 pazsan 88: block-fid ! ; 1.8 pazsan 89: 1.10 anton 90: : use ( "file" -- ) \ gforth 1.24 crook 91: \g Use @i{file} as the blocks file. 1.11 anton 92: name open-blocks ; 1.1 anton 93: 1.3 anton 94: \ the file is opened as binary file, since it either will contain text 95: \ without newlines or binary data 1.24 crook 96: : get-block-fid ( -- wfileid ) \ gforth 97: \G Return the file-id of the current blocks file. If no blocks 98: \G file has been opened, use @file{blocks.fb} as the default 99: \G blocks file. 1.1 anton 100: block-fid @ 0= 101: if 1.11 anton 102: s" blocks.fb" open-blocks 1.1 anton 103: then 104: block-fid @ ; 105: 1.20 pazsan 106: : block-position ( u -- ) \ block 1.36 anton 107: \G Position the block file to the start of block @i{u}. 108: dup block-limit u>= -35 and throw 1.26 pazsan 109: offset @ - chars/block chars um* get-block-fid reposition-file throw ; 1.1 anton 110: 1.20 pazsan 111: : update ( -- ) \ block 1.29 crook 112: \G Mark the state of the current block buffer as assigned-dirty. 
1.5 pazsan 113: last-block @ ?dup IF buffer-dirty on THEN ; 1.1 anton 114: 1.20 pazsan 115: : save-buffer ( buffer -- ) \ gforth 116: >r 1.42 pazsan 117: r@ buffer-dirty @ 1.1 anton 118: if 1.5 pazsan 119: r@ buffer-block @ block-position 120: r@ block-buffer chars/block r@ buffer-fid @ write-file throw 1.36 anton 121: r@ buffer-fid @ flush-file throw 122: r@ buffer-dirty off 1.5 pazsan 123: endif 124: rdrop ; 125: 1.20 pazsan 126: : empty-buffer ( buffer -- ) \ gforth 1.5 pazsan 127: buffer-block off ; 128: 1.20 pazsan 129: : save-buffers ( -- ) \ block 1.24 crook 130: \G Transfer the contents of each @code{update}d block buffer to 1.30 anton 131: \G mass storage, then mark all block buffers as assigned-clean. 1.20 pazsan 132: block-buffers @ 1.24 crook 133: buffers 0 ?DO dup save-buffer next-buffer LOOP drop ; 1.1 anton 134: 1.24 crook 135: : empty-buffers ( -- ) \ block-ext 136: \G Mark all block buffers as unassigned; if any had been marked as 137: \G assigned-dirty (by @code{update}), the changes to those blocks 138: \G will be lost. 1.20 pazsan 139: block-buffers @ 1.24 crook 140: buffers 0 ?DO dup empty-buffer next-buffer LOOP drop ; 1.1 anton 141: 1.20 pazsan 142: : flush ( -- ) \ block 1.24 crook 143: \G Perform the functions of @code{save-buffers} then 144: \G @code{empty-buffers}. 1.1 anton 145: save-buffers 146: empty-buffers ; 147: 1.12 anton 148: ' flush IS flush-blocks 1.5 pazsan 149: 1.26 pazsan 150: : get-buffer ( u -- a-addr ) \ gforth 151: 0 buffers um/mod drop buffer-struct %size * block-buffers @ + ; 1.5 pazsan 152: 1.51 anton 153: : block ( u -- a-addr ) \ block 1.24 crook}. 
1.26 pazsan 160: dup offset @ u< -35 and throw 1.5 pazsan 161: dup get-buffer >r 162: dup r@ buffer-block @ <> 1.9 pazsan 163: r@ buffer-fid @ block-fid @ <> or 1.1 anton 164: if 1.5 pazsan 165: r@ save-buffer 1.1 anton 166: dup block-position 1.5 pazsan 167: r@ block-buffer chars/block get-block-fid read-file throw 1.1 anton 168: \ clear the rest of the buffer if the file is too short 1.5 pazsan 169: r@ block-buffer over chars + chars/block rot chars - blank 170: r@ buffer-block ! 171: get-block-fid r@ buffer-fid ! 1.1 anton 172: else 173: drop 174: then 1.5 pazsan 175: r> dup last-block ! block-buffer ; 1.1 anton 176: 1.20 pazsan 177: : buffer ( u -- a-addr ) \ block 1.24 crook}. 1.1 anton 187: \ reading in the block is unnecessary, but simpler 188: block ; 189: 1.28 crook 190: User scr ( -- a-addr ) \ block-ext s-c-r 1.27 crook 191: \G @code{User} variable -- @i{a-addr} is the address of a cell containing 1.21 crook 192: \G the block number of the block most recently processed by 1.24 crook 193: \G @code{list}. 194: 0 scr ! 1.1 anton 195: 1.24 crook 196: \ nac31Mar1999 moved "scr @" to list to make the stack comment correct 1.20 pazsan 197: : updated? ( n -- f ) \ gforth 1.29 crook 198: \G Return true if @code{updated} has been used to mark block @i{n} 199: \G as assigned-dirty. 1.24 crook 200: buffer 1.5 pazsan 201: [ 0 buffer-dirty 0 block-buffer - ] Literal + @ ; 202: 1.24 crook 203: : list ( u -- ) \ block-ext 204: \G Display block @i{u}. In Gforth, the block is displayed as 16 205: \G numbered lines, each of 64 characters. 1.1 anton 206: \ calling block again and again looks inefficient but is necessary 207: \ in a multitasking environment 208: dup scr ! 1.5 pazsan 209: ." Screen " u. 1.24 crook 210: scr @ updated? 0= IF ." not " THEN ." modified " cr 1.1 anton 211: 16 0 212: ?do 1.4 anton 213: i 2 .r space scr @ block i 64 * chars + 64 type cr 1.1 anton 214: loop ; 215: 1.34 pazsan 216: [IFDEF] current-input 217: :noname 2 <> -12 and throw >in ! blk ! 
; 218: \ restore-input 219: :noname blk @ >in @ 2 ; \ save-input 220: :noname 2 ; \ source-id "*a block*" 1.42 pazsan 221: :noname 1 blk +! 1 loadline +! >in off true ; \ refill 1.34 pazsan. 1.39 anton 230: block-input 0 new-tib dup loadline ! blk ! s" * a block*" loadfilename 2! 1.45 pazsan 231: ['] interpret catch pop-file throw ; 1.34 pazsan 232: [ELSE] 1.23 crook 233: : (source) ( -- c-addr u ) 1.2 pazsan 234: blk @ ?dup 235: IF block chars/block 236: ELSE tib #tib @ 237: THEN ; 238: 1.23 crook 239: ' (source) IS source ( -- c-addr u ) \ core 1.24 crook 240: \G @i{c-addr} is the address of the input buffer and @i{u} is the 1.23 crook 241: \G number of characters in it. 1.2 pazsan 242: 1.20 pazsan 243: : load ( i*x n -- j*x ) \ block 1.24 crook 244: \G Save the current input source specification. Store @i{n} in 245: \G @code{BLK}, set @code{>IN} to 0 and interpret. When the parse 246: \G area is exhausted, restore the input source specification. 1.40 anton 247: s" * a block*" loadfilename>r 1.24 crook 248: push-file 249: dup loadline ! blk ! >in off ['] interpret catch 1.31 anton 250: pop-file 1.40 anton 251: r>loadfilename 1.45 pazsan 252: throw ; 1.34 pazsan 253: [THEN] 1.24 crook}. 1.20 pazsan 262: blk @ + load ; 1.2 pazsan 263: 1.24 crook 264: : +thru ( i*x n1 n2 -- j*x ) \ gforth 265: \G Used within a block to load the range of blocks specified as the 266: \G current block + @i{n1} thru the current block + @i{n2}. 267: 1+ swap ?DO I +load LOOP ; 268: 1.28 crook 269: : --> ( -- ) \ gforthman- gforth chain 1.24 crook 270: \G If this symbol is encountered whilst loading block @i{n}, 271: \G discard the remainder of the block and load block @i{n+1}. Used 1.25 anton 272: \G for chaining multiple blocks together as a single loadable 273: \G unit. Not recommended, because it destroys the independence of 274: \G loading. Use @code{thru} (which is standard) or @code{+thru} 275: \G instead. 1.20 pazsan 276: refill drop ; immediate 1.5 pazsan 277: 1.24 crook. 
1.11 anton 284: block-fid @ >r block-fid off open-blocks 1.5 pazsan 285: 1 load block-fid @ close-file throw flush 286: r> block-fid ! ; 287: 1.13 anton 288: \ thrown out because it may provide unpleasant surprises - anton 289: \ : include ( "name" -- ) 290: \ name 2dup dup 3 - /string s" .fb" compare 291: \ 0= IF block-included ELSE included THEN ; 1.5 pazsan 292: 1.4 anton 293: get-current environment-wordlist set-current 1.51 anton 294: true constant block \ environment- environment 1.4 anton 295: true constant block-ext 296: set-current 1.5 pazsan 297: 1.21 crook 298: : bye ( -- ) \ tools-ext 299: \G Return control to the host operating system (if any). 300: ['] flush catch drop bye ; | http://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/blocks.fs?annotate=1.53;only_with_tag=MAIN | CC-MAIN-2019-30 | refinedweb | 1,907 | 88.02 |
Console output messed up using Pymkr plugin "Run"
- Andrea Filippini
Hi all, I'm using a few threads in my script but I have a problem with the console. The first thread that I start is used to manage the WiFi connection as described in the tutorials like this:
[REMOVED AS OF EDIT2]
The problem is that at first boot (at power up) I only get this thread's output. Any other "print" gets lost.
The only way to be able to see the output is to stop and restart the script (I'm using the Atom plugin), so the WiFi stays connected and the output is brief.
How can I solve this? I'm facing serious problems developing.
EDIT:
Thanks to @Innocenzo I can say for sure that the issue is with the Pymkr plugin for Atom. If I use "execfile('main.py')" all the output is printed (even inside the Atom console); this means that there's a bug in the "Run" command
EDIT 2:
Use this code to reproduce the problem (change wifi credential to a valid one). Using the "Run" command the output stops after successfully connecting to the WiFi network.
If you use "execfile()" the output keeps being printed
import _thread
import time
import machine
from network import WLAN

def th_wifi_mng(delay):
    try:
        wlan = WLAN()
        found = 0
        while True:
            if not wlan.isconnected():
                nets = wlan.scan()
                for net in nets:
                    if net.ssid == "yourSSID":
                        wlan.connect(net.ssid, auth=(net.sec, "yourPsw"), timeout=5000)
                        found = 1
                if found == 1:
                    while not wlan.isconnected():
                        machine.idle()  # save power while waiting
            wifi_connected = wlan.isconnected()
            time.sleep(delay)
    except Exception as e:
        print('th_wifi_mng', e)

# WiFi initialization
wlan = WLAN()
if wlan.mode() != WLAN.STA:
    wlan = WLAN(mode=WLAN.STA)

# spawn threads
_thread.start_new_thread(th_wifi_mng, (2,))

while True:
    print("main loop")
    time.sleep(.1)
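For what it's worth, the same two-thread shape can be reproduced on desktop Python: CPython also ships a `_thread` module with the same `start_new_thread()` API as MicroPython, and there the output from both threads shows up normally, which is consistent with the loss happening in the plugin's console capture rather than in the script itself. The worker function, counts, and timings below are made up purely for illustration (they stand in for the WiFi thread):

```python
# Minimal desktop-Python analogue of the script above: one background thread
# plus a main loop, both producing output. Runs on CPython, whose _thread
# module has the same start_new_thread() API as MicroPython's.
import _thread
import time

messages = []  # collect the output here so it can be inspected afterwards

def worker(delay):
    # Stands in for th_wifi_mng(): a background thread that keeps producing output.
    for _ in range(3):
        messages.append("worker tick")
        time.sleep(delay)

_thread.start_new_thread(worker, (0.01,))

for _ in range(3):
    messages.append("main loop")
    time.sleep(0.02)

time.sleep(0.1)  # give the worker time to finish
print(messages)
```

Running this in a plain terminal shows entries from both threads, so if the board-side equivalent only ever shows one thread's prints, the missing output is being dropped somewhere between the UART and the plugin console.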
- Andrea Filippini
@Ralph It seems there's a new bug. (almost) Every time I click the "Sync" button all the files get uploaded. Also I notice something that looks like a piece of checksum being printed like this:
>>> Syncing project (main folder)...
>>>
>>> [1/12] Writing file communication.py
c8474829a403c5fefb5e339b"]]
[2/12] Writing file config.py
[3/12] Writing file globals.py
[...etc...]
[12/12] Writing file utils.py
Synchronizing done, resetting board...
EDIT:
Also, "Run" command on a file doesn't update the file on the flash, resulting in a strange behaviour if the current file is also present in the flash.
Editing the script and running it seems to use the classes from the file on the flash memory, and the "top level" instructions from the editor file :/
EDIT2:
Now the "Sync" end with an error. I rebooted the board, the PC and updated the firmware....
>>> Syncing project (main folder)...
>>>
>>> [1/13] Writing file communication.py
[2/13] Writing file config.py
Synchronizing failed: Write failed. Please reboot your device manually.
EDIT3:
Fixed with the new release
@Andrea-Filippini Thanks for posting your code. Using it I got the same issue and found the bug that was causing it. It's been solved in the latest commit on the develop branch on github. It'll be included in the next release. Until then, feel free to manually install the plugin from the develop branch
hi @Andrea-Filippini can you post your code, or a simplified version of it, so I can try to replicate the problem? The quick usage example from the docs seems to be working for me.
@Andrea-Filippini Use FileZilla or ampy to put your script on the board. Afterwards, from the REPL, use
execfile("myscript.py")
@Innocenzo I can connect to the REPL console via USB cable, but I don't exactly know how to run a python script from there :/
@Andrea-Filippini Try to execute your code on Terminal like Tera Term or Termite. Sometimes the Atom plugins does not print me everything. | https://forum.pycom.io/topic/1091/console-output-messed-up-using-pymkr-plugin-run | CC-MAIN-2017-34 | refinedweb | 641 | 66.74 |
Adding Lua 5.2 to your application
Note: The code in here is adapted from an actual project, however I've not yet had time to verify it doesnt have typos. Google search results are just overflowing with info on old Lua versions that I wanted to dump this to the web now in the hopes of being at least vaguely helpful.
Lua is a really cool, clean little programming language that is easy to embed in your applications. Not only is it under a permissive license, it's ANSI C.
However, recent updates have made most of the documentation about it on the web a bit outdated, so I thought I'd drop this quick tutorial on how to add Lua to your application and do some of the typical inter-operation things with it that youd want to do when hosting scripts in your application.
Building Lua
Building Lua is very easy. After getting the source code (Im using the unoffical Git repository from LuaDist on Github), you duplicate the file lua/src/luaconf.h.orig under the name lua/src/luaconf.h. Then you point Terminal at Luas folder and do
make macosx
(Or if you're not on a Mac, use the appropriate platform name here, you can see available ones by just calling make without parameters in that folder)
This will churn a short moment, and then youll have a liblua.a file. Add that to your Xcode project (or equivalent) so it gets linked in, and make sure the header search paths include the lua/src/ folder. Thats it, now you can use Lua in your application.
cd ${PROJECT_DIR}/lua/src/ if [ ! -f luaconf.h ]; then cp luaconf.h.orig luaconf.h fi make macosx
By specifying ${PROJECT_DIRECTORY}/lua/src/lua.h as the input file and ${PROJECT_DIRECTORY}/lua/src/liblua.a, Xcode will take care to not unnecessarily rebuild Lua if you make your application depend on this target.
Running a Lua script
To use Lua, you include the following headers:
#include "lua.h" #include "lauxlib.h" #include "lualib.h"
(If you're using C++, be sure to wrap them in extern "C" or youll get link errors) Then you can simply compile the following code to initialize a Lua context and run a script from a text file:
lua_State *L = luaL_newstate(); // Create a context. luaL_openlibs(L); // Load Lua standard library. // Load the file: int s = luaL_loadfile( L, "/path/to/file.lua" ); if( s == 0 ) { // Run it, with 0 params, accepting an arbitrary number of return values. // Last 0 is error handler Lua function's stack index, or 0 to ignore. s = lua_pcall(L, 0, LUA_MULTRET, 0); } // Was an error? Get error message off the stack and print it out: if( s != 0 ) { printf("Error: %s\n", lua_tostring(L, -1) ); lua_pop(L, 1); // Remove error message from stack. } lua_close(L); // Dispose of the script context.
The script file would contain something like:
-- this is a comment io.write("Hello world, from ",_VERSION,"!\n")
Calling from Lua into C
Now you can run a file full of commands. But how do you have it call back into your application? Theres a special call for that, lua_register, which creates a new function that actually wraps a special C function. You call it like this:
// Create a C-backed Lua function, myavg(): lua_register( L, "myavg", foo ); // Create a global named "myavg" and stash an unnamed function with C function "foo" as its implementation in it.
to register a C function named foo as a Lua function named myavg. The actual function would look like this:
// An example C function that we call from Lua: */ }
This example function loops over all parameters that have been passed (using lua_isnumber to check theyre numbers, and lua_tonumber to actually retrieve them as ints), which may be a variable number, adds and averages them, and then pushes two return values on the stack (the average and the sum), and returns the number of return values it gave.
Functions in Lua and other oddities
You could now call it like:
io.write( "Average is: ", myavg(1,2,3,4,5) )
from Lua. The funny thing here is, in Lua, there are no functions in the traditional sense. Its a prototype-based programming language, so all functions are closures/blocks/lambdas, and can be treated just like any value, like an integer or a string. To declare a function, lua_register simply creates a global variable named myavg and sticks such a function object in it.
When you declare a function in Lua, it's also really just a shorthand for an assignment statement. So to run a function declared in a Lua file, like:
function main( magicNumber ) io.write("Main was called with magicNumber ", magicNumber, "!") end
you first have to execute it, which will create the global named main and stick a function in it. Only now do you look up the function object from that global and call it, again using lua_pcall like here:
lua_getglobal(L,"main"); if( lua_type(L, -1) == LUA_TNIL ) return; // Function doesn't exist in script. lua_pushinteger(L,5); s = lua_pcall(L, 1, LUA_MULTRET, 0); // Tell Lua to expect 1 param & run it.
The 2nd parameter to lua_pcall tells it how many parameters to expect.
Creating Lua objects from C
Objects are likewise just tables (i.e. key-value dictionaries) where ivars are just values, and methods are functions stored as values. So, to create a new object with methods implemented in C, you do:
// Create a C-backed Lua object: lua_newtable( L ); // Create a new object & push it on the stack. // Define mymath.durchschnitt() for averaging numbers: lua_pushcfunction( L, foo ); // Create an (unnamed) function with C function "foo" as the implementation. lua_setfield( L, -2, "durchschnitt" ); // Pop the function off the back of the stack and into the object (-2 == penultimate object on stack) using the key "durchschnitt" (i.e. method name). lua_setglobal( L, "mymath" ); // Pop the object off the stack into a global named "mymath".
To call this, function, you do it analogous to before, just that you first use lua_getglobal( L, "mymath" ) to push the object on the stack, then lua_getfield to actually push the durchschnitt function stored under that key in the object.
Since functions are closures/blocks/lambdas, they can also capture variables (upvalues). To set those, you use lua_pushcclosure instead of lua_pushcfunction and pass the number of values you pushed on the stack to capture as the last parameter. E.g. if you wanted to pass along a pointer to an object in your program that the session object wraps, instead of stashing it in an ivar, you could capture it like:
// Define session.write() for sending a reply back to the client: lua_pushlightuserdata( L, sessionPtr ); // Create a value wrapping a pointer to a C++ object (this would be dangerous if we let the script run longer than the object was around). lua_pushcclosure( L, session_write, 1 );// Create an (unnamed) function with C function "session_write" as the implementation and one associated value (think "captured variable", our userdata on the back of the stack). lua_setfield( L, -2, "write" ); // Pop the function value off the back of the stack and into the object (-2 == penultimate object on stack) using the key "write" (i.e. method name). lua_setglobal( L, "session" ); // Pop the object off the stack into a global named "session".
and inside the session_write function, youd retrieve it again like:
session* sessionPtr = (session*) lua_touserdata( L, lua_upvalueindex(1) );
Overriding Luas getters and setters with C
And finally, what if you wanted to have properties on this object that, when set, actually call into your C code? You install a metatable on your object, which contains a __newindex (setter) and __index (getter) function:
// Set up our 'session' table: lua_newtable( luaState ); // Create object to hold session. lua_newtable( luaState ); // Create metatable of object to hold session. lua_pushlightuserdata( luaState, myUserData ); lua_pushcclosure( luaState, get_variable_c_func, 1 ); // Wrap our C function in Lua. lua_setfield( luaState, -2, "__index" ); // Put the Lua-wrapped C function in the metatable as "__index". lua_pushlightuserdata( luaState, myUserData ); lua_pushcclosure( luaState, set_variable_c_func, 1 ); // Wrap our C function in Lua. lua_setfield( luaState, -2, "__newindex" ); // Put the Lua-wrapped C function in the metatable as "__newindex". lua_setmetatable( luaState, -2 ); // Associate metatable with object holding me. lua_setglobal( luaState, "session" ); // Put the object holding session into a Lua global named "session".
Where, like before, myUserData is some pointer to whatever data you need to access to do your work in the getter/setter (like the actual C struct this Lua object stands for) and get_variable_c_func and set_variable_c_func are the C functions that you provide that get called to retrieve, and add/change instance variables of the session object.
Note that get_variable_c_func will receive 2 Lua parameters on the stack: The session table itself, and the name of the instance variable you're supposed to get. You return this value as the only return value. set_variable_c_func gets a third parameter, the value to assign to the variable, but obviously doesnt return anything.
Dynamically providing your own globals
Sometimes you want to dynamically expose objects in your application to a script. One easy way to do that is to make them global variables. In our examples above, we did so by manually registering a new global with a table. But if you have lots of objects or they might change often, you don't want to do that.
Luckily, Lua keeps all its globals in an invisible table named _G, which you can put on the stack using lua_pushglobaltable(L). Now, you can create a meta-table with an __index fallback function for that as well. The key your callback gets will be the name of the global, which you can use to look up the object and dynamically generate and push a table for this object.
Note: In the code above, we called lua_setglobal() in the end, which pushed our table off the stack and stuffed it in the _G table. Since were not doing that here, be sure to do a lua_pop( L, 1 ) to remove the globals table from the stack again. Otherwise, your call to lua_pcall() will try to call that table and give the error attempt to call a table value.
And now you know all you need to call Lua from C, and have Lua call your C functions back. | http://orangejuiceliberationfront.com/adding-lua-5-2-to-your-application/ | CC-MAIN-2018-34 | refinedweb | 1,717 | 60.95 |
are multiple issue
- duplicate stack entries
- step over does not step over all clauses
- catch variables are not available
- without setting a breakpoint in filter step-over or step-into does not work at all
using System;
class X
{
static int Main ()
{
try {
throw new ApplicationException ();
} catch (Exception e) when (Foo (delegate { Console.WriteLine (e); })) {
return 1;
} catch (Exception e) when (e is InvalidOperationException) {
Console.WriteLine (e);
int paramIndex = 0;
while (paramIndex < 3) {
paramIndex++;
}
return 1;
} catch (ApplicationException) {
return 0;
}
}
static bool Foo (Action a)
{
a ();
return false;
}
}
I have checked this issue and I am also getting the same behavior
Screencast:
IDE logs:
Environment Info:
=== Xamarin Studio ===
Version 5.9.2 (build 2)
Installation UUID: 6ea47b0d-1852-4aaf-808d-373ff0a5002b
Runtime:
Mono 4.0.1 ((detached/11b5830)
GTK+ 2.24.23 (Raleigh theme)
Package version: 400010043
=== Apple Developer Tools ===
Xcode 6.3 (7569)
Build 6D570
=== Xamarin.iOS ===
Version: 8.10.1.64 (Enterprise Edition)
Hash: e6ebd18
Branch: master
Build date: 2015-05-21 21:55:09-0400
=== Xamarin.Android ===
Version: 5.1.2.1 (Enterprise Edition)
Android SDK: /Users/jatin66/Desktop/Backup
=== Xamarin.Mac ===
Version: 2.0.0.262
This has to be fixed on runtime side. | https://xamarin.github.io/bugzilla-archives/30/30396/bug.html | CC-MAIN-2019-39 | refinedweb | 198 | 50.33 |
I have an instant messaging app, with client written in python.
Code that connects to a server:
def connect(self, host, port, name):
host = str(host)
port = int(port)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host, port))
s.send('CONNECT:' + name)
print s.recv(1024)
return s
def chat(self):
while True:
data = self.socket.recv(4096)
if not data:
pass
else:
print data
The thing is as a socket sends a message it expects a return message from the other side, That seems to be the only reason for that.
however there is a way with
select function to monitor multiple streams, make a list of all the streams you need to handle and use the
select function for it, for the user input use
sys.stdin and all the sockets that you expect to chat with.
check this:
But still the best way to do asynchronous chat will be with
udp, it will work really fine | https://codedump.io/share/m3l594LRQVY1/1/asynchronous-receiving-tcp-packets-in-python | CC-MAIN-2017-13 | refinedweb | 161 | 72.26 |
Hello null-byte!
This is my first post and i need some help.
The thing is followed these steps in the given link succeeded in finding location. Now I'm planning to write a script which does the whole process when we execute the script. First i made a .sh script with following commands:
cd /root/pygeoip-0.1.3
./IPloc.py (This is the script i made after)
In IPloc.py, the following commands are there:
x=raw_input('Insert the IP address:')
import pygeoip
gip=pygeoip.GeoIP('GeoLiteCity.dat')
rec=gip.recordbyaddr('x')
for key,val in rec.items():
print "%s: %s" %(key,val
The results are not coming. But when i manually execute the commands in IPloc.py, there is no error.
This is how the error looks like:
Insert the IP address:64.233.161.99
Traceback (most recent call last):
File "./IPlocfind.py", line 5, in <module>
rec=gip.recordbyaddr('x')
File "/root/NH3/pygeoip-0.3.2/pygeoip/init.py", line 546, in recordbyaddr
ipnum = util.ip2long(addr)
File "/root/NH3/pygeoip-0.3.2/pygeoip/util.py", line 39, in ip2long
return int(binascii.hexlify(socket.inetpton(socket.AFINET6, ip)), 16)
socket.error: illegal IP address string passed to inet_pton
Any help would be appreciated!
1 Response
Nevermind....i found the fix.
There is no need of apostrophe('x').
Share Your Thoughts | https://null-byte.wonderhowto.com/forum/writing-python-script-0174095/ | CC-MAIN-2018-30 | refinedweb | 228 | 64.17 |
.
im useing Xcode on Mac
when i put in the code i am having a red error message pop up telling me i have "conflicting types for ‘main’
how do i trace back to this mistake.
I’m really enjoying these lessons, I’m just starting out and everything thus far has been presented in comprehensive manor. and im glad to see that some of the comments are recent.
thank you
Hi Mikey!
I don’t know anything about Xcode, you’ll have to search for a solution on your own or wait for someone who knows.
Up until then you can use an online IDE like (Select C++14 as language, top right).
I am having issues with the code as it is bringing up x as undefined or is this done on purpose?
#include "stdafx.h"
#include <iostream>
void doNothing(const int &x)
{
}
int main()
{
//define an integer variable named x
int x; //this variable is unitialised
doNothing(x);
//print the value of x to the screen (dangerous, because x is unitialised)
std ::cout << x;
return 0;
}
x definitely isn’t undefined -- it’s defined at the top of main.
Can you post the actual error message you’re seeing, along with the specific line it’s referring to?
Hey Alex, I believe I’ve found a minor typo in the section for l-values and r-values.
When explaining r-values, line 6 of the first example reads:
"x = y; // l-value y evaluates to 7 (from before), which is then assigned to l-value x."
I believe the beginning of the comment should read "r-value y evaluates to 7 (from before)" and not "l-value y evaluates to 7 (from before)".
Sorry if I’m mistaken! I do realize that y had been assigned as a l-value in the past but with it being used on the right side of the assignment, would that have made it a r-value for that line?
Thank you for all the work that you do. I really, really appreciate it!
y is still an l-value, since it has an address and is persisted beyond the expression. However, in this context, it’s evaluated as if it were an r-value to the value stored in l-value y.
Oh, alright, I understand now. Thank you Alex!
somewhere in the text you have said: (the = sign);
i think this will rise misunderstanding. i suggest to fix is as: ("=" sign)
I agree.
If I do that, then an equal number of people will think I literally mean “=”, including the quotes. Just a plain = seems less prone to misinterpretation.
Thank you for this. Been coding for a few months but I stopped doing some tutorials and now when doing some projects I feel like there’s a gap in my knowledge, therefore I’ve come here. This is real helpful, thank you.
Hi, Alex.
I was able to run the Hello World program in chapter 0.6. I was also able to run it using the sample code (just commented out the previous lines) of not initializing. However, after initializing it, I encountered an error. I reverted it to the Hello World code, it produces the same error:
=== Build: Debug in HelloWorld (compiler: GNU GCC Compiler)…
ld.exe cannot open output file bin\Debug\HelloWorld.exe Permission…
error: ld returned 1 exit status
=== Build failed: 2 error(s), 0 warning(s) (0 minute(s), 0 …
What could be the problem?
Thank you! 🙂
It sounds like you’ve lost permission to overwrite HelloWorld.exe. The most obvious way this would happen is if HelloWorld.exe is still running. Make sure the program has closed (if you can’t figure out how, try rebooting). Other possibilities would be virus scanners or anti-malware interfering with writing the file.
Hi Alex;
I put the following code into Visual Studio 2017, and it came up saying, -> C2143 syntax error: missing ‘;’ before ‘return’
Also, how do I do the cool code thing you do in the comments with the colors?
#include "stdafx.h" // Uncomment if Visual Studio user
#include <iostream>
void doNothing(const int &x)
{
}
int main()
{
// define an integer variable named x
int x; // this variable is uninitialized
doNothing(x); // make compiler think we’re using this variable
// print the value of x to the screen (dangerous, because x is uninitialized)
std::cout << x;
Nevermind, I re-pasted it and it works now for some reason.
Put your code between code tags to use the syntax highlighter:
[code]
Your code here
[/code]
I am looking for a data type which accepts fraction as the input and then convert it into a decimal. Is there any?
You can use a float or a double and do something like this:
I will not be initialising the variable. The value will be given by the user during the run-time. Will double work in this case too?
Thanks
Kinda sorta not really. 🙂
A double will store the decimal value, but it won’t convert a user-entered fraction into a decimal. To do that, you’ll need to write your own code.
The initialization method works because the compiler will treat 5/3 as 5 divided by 3, which it will resolve to 1.66666… But it can only do that at compile time.
1) Could you give a hint for writing the code to convert a user-entered fraction into a decimal.
2) I wrote the code in gedit and compiled it in Ubuntu terminal which supports c++11. It prints 1. How can I solve this problem?
3) Do you get annoyed answering so many question everyday!?
1) I’d recommend writing a class for this. Lesson 9.3 contains a sample Fraction class that could do this.
2) Your code is doing integer division. You’d need to do something like this:
We cover doubles and the difference between integer and floating point division in chapter 2.
3) Sometimes. 🙂 But mostly for questions that have already been answered a thousand times. And for the remaining questions, mostly because answering questions takes away time from writing new stuff.
Hello,
I have been doing relatively minor coding in VBA and SQL for a few years now and seriously thinking about taking up C++. So far these tutorials have been great. In this section, even though the syntax is different the concept and output are right in line with VBA and mostly understandable to me. However, I do wonder about possible subtleties. Specifically, can the variables be more than one character. X & Y are common variables to anywhere and I understand why you use them in your examples but is it safe to assume that something more descriptive could be used as well? Say something like strcnt for string count. Int strcnt = 5, for instance. Then, for initialization, is zero always an acceptable value? Are there any instances when a zero would cause an error? I would not think it would cause any errors but would like to be clear on it as it seems that would logically be the safest practice.
Yes, variable names can be multiple characters. Variable naming conventions are covered in a few chapters. Initialization is a valid initialization value for all the fundamental types (like int, char, double, etc…), but may not be valid for user defined types, such as enums, structs, and classes.
Hi again Alex!
I think that you need to mention that variables need to be declared in every function separately. It isn’t there and caused a lot of confusion in 1.4a excersise #5.
I added a note to the lesson indicating something similar.
I’m using Visual Studio Community Ed. 2017. When I’m in Debug, a warning will pop up, unable to do anything else. When I’m in Release, uninitialized variables are initialized to 0.
> When I’m in Debug, a warning will pop up, unable to do anything else
What warning are you getting?
> When I’m in Release, uninitialized variables are initialized to 0.
This may be incidentally true, but is not guaranteed.
I’ve seen many c++ tutorials
all of them disappointed me, until I saw yours.
I love how everything is explained in simple language without skipping stuff or over-complicating anything. Thank you.
You’re welcome!
int main()
{
int x
cout << x;
return 0;
}
It gives an "C4700 uninitialized local variable" error in visual stdio 2015.
You forgot the semicolon after int x.
int x;
std::cout << x + "Abs";
_______________________________________________
Running this in **Release**
output: Abs
w/o string "Abs", output: 0
_______________________________________________
Running this in **Debug**
output: "Error, variable x is being used without being initialized"
________________________________________________
FYI, I am using visual studio community 2017 edition. I thought it was supposed to be the other way around where I get error in **Relase** and not **Debug**. THank you
I would have expected you to get the same error in both cases, since that’s a compiler error, not a runtime error. Not sure why the behavior is different.
Hi,
I’m running the following code below and it is only returning the value 0. I thought that it would print the memory location or something, I’m super confused. I’m using the release build on Visual Studio.
Am I doing something wrong or is this just how C++ works now?
Thanks,
Nathan.
It’s how C and C++ have always worked. Sending a variable to std::cout prints the value that the variable holds. In this case, since you haven’t initialized x, you’ll get an undefined value.
If you want to print the memory location of x for some reason, you can use the & operator to get the address of x:
Ah, yes!
Thank you very much for your help! I’m super new to learning c++. Apologies if I wasted your time, haha.
I just noticed an English language error in the form of a missing "the".
Current: Most of the objects that we use in C++ come in form of variables.
Should Be: Most of the objects that we use in C++ come in the form of variables.
Indeed. Thanks for pointing this out. Fixed!
#include <iostream>
Int main ()
{
Int x;
x=5;
x=6;
std::cout<<"x";
Return 0;
}
It will first assign value 5 to x and then overwrite to 6 right???
Correct.
Hey, can u help me answering this. Which of the following C expressions are l-values? Why?
1. x +2
2. &x
3. *&x
4. &x + 2
5. *(&x +2)
6. &*y
This sounds like a homework assignment. Rather than answering for you, let me give you a hint: if you can take the address of the result, it’s an l-value.
With that knowledge, you should be able to determine your answers experimentally (i.e. try it).
Awesome tutorial. I’m an old man trying to learn programming and I appreciate you for teaching this old dog new tricks!
My dear c++ Teacher,
Please let me say what I understand, and what not, regarding "Initialization vs. assignment".
I understand that C++ will let you both define a variable AND give it an initial value in the SAME STEP. It follows, when a variable is defined and THEN assigned a number, is NOT initialized, is uninitialized. It is defined and assigned (a number) in two steps (statements).
When it is not assigned a number, either by "assignment" or by "initialization", should be said "unassigned".
"Uninitialized" is meant not initialized, but may be defined and assigned in two steps.
With regards and friendship.
We colloquially use the term “uninitialized” to mean the variable has not been given a value yet (by any means). If the variable is later assigned a value, we say the variable is no longer uninitialized, even though we gave it a value via assignment.
Hi,
class variables of for example std::string can have default initialization, see
I know this will be talked about later in this tutorial. People might get the impression that all variables are not initialized at all. Maybe add a note that this is not always the case.
I updated this lesson to note that “most” variable don’t self-initialize. Although it’s certainly true that some types of variable do self-initialize, I don’t think that’s particularly relevant to know at this point in the tutorial, since we don’t cover those types of variables for quite some time. For now, it’s better to act as if no variables self-initialize, and then we’ll cover those other cases later.
My dear c++ Teacher,
Please let me following comment:
First two sentences are:
"C++ programs create, access, manipulate, and destroy objects. An object is a piece of memory that can be used to store values."
It means c++ programs eventually destroy pieces of memory!
By the way let me wish you enjoy a Hawaiian pizza and a pint of Newcastle.
With regards and friendship.
Okay, so I had failures going with the code. It looks like that:
But it was still giving me two errors:
C2065: "cout": undeclared identifier
C2065: "endl": undeclared identifier
How do I solve it? I’ve sifted through the comment section, but still no solution.
#include “stdafx.h” needs to be the first line of the program.
My dear c++ Teacher,
Please let me say you Visual Studio 2017 behavior when variable "x" is uninitialized.
A. After pressing "Build Solution":
1. Seven lines, three of them are Addresses in hard disk. First address is followed by
"warning C4700: uninitialized local variable ‘x’ used"
2. Last line is
"Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped".
B. After pressing "Start Without Debugging":
Console window is appeared just saying (in french) "press any key to continue…"
With regards and friendship.
Yes, your compiler is giving you a warning that you are using an uninitialized local variable. This is not an error though, so the compilation succeeds. Your program may not perform as you expect though.
My dear c++ Teacher,
Please let me send you following program for your quiz, with std::endl after every output, so that numbers be on different lines.
With regards and friendship.
I’ve incorporated this into the quiz. Thanks for the suggestion.
hi,
I was wondering how to initialize strings?
It depends on what kind of string you’re referring to. Strings in C++ are a little more complicated than in some other languages. I talk about std::string in chapter 4, and C-style strings in chapter 6.
Name (required)
Website | http://www.learncpp.com/cpp-tutorial/13-a-first-look-at-variables-initialization-and-assignment/ | CC-MAIN-2018-05 | refinedweb | 2,438 | 73.68 |
Welcome to part 19 of the intermediate Python programming tutorial series. In this tutorial, we are going to introduce the "special" or "magic" methods of Python, and, with that, talk about operator overloading.
Coming back to our Blob World code, I am going to change the code slightly, giving us a
BlueBlob,
GreenBlob and
RedBlob class, all of which inherit from the
Blob class. Notice that none of these classes take in a color parameter at all, and the color itself is actually hard-coded. This allows us to move this class to another file, and doesn't require us to pass a color at all.
RedBlobs will always be red.
blobworld.py
import pygame import random from blob import Blob STARTING_BLUE_BLOBS = 10 STARTING_RED_BLOBS = 3 STARTING_GREEN_BLOBS = 5 WIDTH = 800 HEIGHT = 600 WHITE = (255, 255, 255) BLUE = (0, 0, 255) RED = (255, 0, 0) game_display = pygame.display.set_mode((WIDTH, HEIGHT)) pygame.display.set_caption("Blob World") clock = pygame.time.Clock() class BlueBlob(Blob): def __init__(self, x_boundary, y_boundary): Blob.__init__(self, (0, 0, 255), x_boundary, y_boundary) class RedBlob(Blob): def __init__(self, x_boundary, y_boundary): Blob.__init__(self, (255, 0, 0), x_boundary, y_boundary) class GreenBlob(Blob): def __init__(self, x_boundary, y_boundary): Blob.__init__(self, (0, 255, 0), x_boundary, y_boundary) def draw_environment(blob_list): game_display.fill(WHITE) for blob_dict in blob_list: for blob_id in blob_dict: blob = blob_dict[blob_id] pygame.draw.circle(game_display, blob.color, [blob.x, blob.y], blob.size) blob.move() blob.check_bounds() pygame.display.update())])) while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() quit() draw_environment([blue_blobs,red_blobs,green_blobs]) clock.tick(60) if __name__ == '__main__': main()
blob.py
import random class Blob: def __init__(self, color, x_boundary, y_boundary, size_range=(4,8), movement_range=(-1,2)): self.size = random.randrange(size_range[0],size_range[1]) self.color = color self.x_boundary = x_boundary self.y_boundary = y_boundary self.x = random.randrange(0, self.x_boundary) self.y = random.randrange(0, self.y_boundary) self.movement_range = movement_range def move(self): self.move_x = random.randrange(self.movement_range[0],self.movement_range[1]) self.move_y = random.randrange(self.movement_range[0],self.movement_range[1]) self.x += self.move_x self.y += self.move_y def check_bounds(self): if self.x < 0: self.x = 0 elif self.x > self.x_boundary: self.x = self.x_boundary if self.y < 0: self.y = 0 elif self.y > self.y_boundary: self.y = self.y_boundary
Now, consider that we want these blobs maybe to start having some sort of interaction with eachother. Let's say that the
RedBlobs will harm
BlueBlobs, and that
GreenBlobs are edible, and beneficial, to
BlueBlobs, and maybe add to their size, which can aid them in fighting the
RedBlob attacks. How might we handle this in code? Obviously, we need some sort of collision detection, but then, how might we handle the collision itself in the code? Wouldn't it be nice if, in the event of a collision of a
BlueBlob and
GreenBlob, we could just do a
BlueBlob + GreenBlob, just like that in the code, and things work? At the moment, it wont, but we can actually write code to make it this easy! Modifying our
BlueBlob class, we can start by doing:
class BlueBlob(Blob): def __init__(self, x_boundary, y_boundary): Blob.__init__(self, (0, 0, 255), x_boundary, y_boundary) def __add__(self, other_blob):
Like our "dunder init" method, we can have a "dunder add" (
__add__) method. This will let us define what happens when someone actually uses a
+ operator with our object. This is called "operator overloading." In our case, what should we put here? We obviously must pass some sort of variable that will be added. Python handles this for us in the background, knowing that what comes after the
+ operator is the next parameter. In this example, that will be a blob class object, and we can check it's color attribute. From here, we can handle it however we see fit. I suggest:
def __add__(self, other_blob): if other_blob.color == (255, 0, 0): self.size -= other_blob.size other_blob.size -= self.size elif other_blob.color == (0, 255, 0): self.size += other_blob.size other_blob.size = 0 elif other_blob.color == (0, 0, 255): # for now, nothing. Maybe later it does something more. pass else: raise Exception('Tried to combine one or multiple blobs of unsupported colors!')
This will just be a nice simple way to handle for this, and we're not using any globals here. Also, note the comments. We're choosing to not do anything when blue-colored
Blobs come into contact. Also, when we come into contact with a red blob, note that FIRST, the size of the blue blob will be reduced. Then this new size will be deducted from the red blob. This is different from simply subtracting each other's sizes from the collision, and it might not be what you intend. If none of our conditions are met, we raise an exception, since someone is attempting to use the operator, and there's no handling for it. We'd not want this to pass silently.
Now, we can add blobs together with our custom handling. For example, we could do:)])) print('Current blue size: {}. Current red size: {}'.format(str(blue_blobs[0].size), str(red_blobs[0].size))) blue_blobs[0] + red_blobs[0] print('Current blue size: {}. Current red size: {}'.format(str(blue_blobs[0].size), str(red_blobs[0].size))
Output:
Current blue size: 4. Current red size: 7 Current blue size: -3. Current red size: 10
Alright, so we've got this fancy handling for the
+ operator, but how will we know when to use it? We need some sort of collision detection, which is what we're going to work on in the next tutorial. | https://pythonprogramming.net/operator-overloading-intermediate-python-tutorial/ | CC-MAIN-2022-40 | refinedweb | 939 | 59.9 |
A lightweight & unobtrusive CMS for ASP.NET Core.
Startup.cs, inside
public void Configure()after
App.Init(api);I'm adding Sitemap items. When browsing to
App.Hooks.OnGenerateSitemapis run and the entries are added. However, in the web browser,
OnGenerateSitemap. What I'm doing wrong?
CultureInfoin order to serve the right translation to the
.cshtmlpages. I know how to use the localization features to translate - but I wantto know the mechanics behind it. Yes, I have the code for the
Microsoft.Excensions.Localizationpackage etc, but I might as well ask here.
@kirtikapadiya If you’re building the main content using blocks the workflow is to use smaller blocks, for example a content block for the first paragraph, then a column block with a content block and an image block to get a two column layout with an image.
You could of course write the entire page in a single content block, but that would defeat the purpose of the whole block concept
means where to define this :
using Piranha.Extend.Blocks;
using Piranha.Models;
var page = MyPage.Create(api);
var image = api.Media.GetAll().First(m => m.Type == MediaType.Image);
page.Blocks.Add(new ImageBlock
{
Body = image
});
@kirtikapadiya Think you do this by setting multiple page routes on the page model and then select the page route to use from the editor, Not something I have come to do myself yet just vaquely remember reading something about it. Somthing like this taken from the Docs
[PageType(Title = "Hero Page")] [ContentTypeRoute(Title = "Default", Route = "/heropage")] [ContentTypeRoute(Title = "Start Page", Route = "/startpage")] public class HeroPage : Page<HeroPage> { ... }
Then have relevant Page/Controller setup according to the routes you have declared
@FelipePergher I think something like this should remove a block from the manager
App.Blocks.UnRegister<AudioBlock>();
As for extending a block, I think this would just be a case of having your custom block inherit from the block you want to extend. Something like this
public class AdvancedAudioBlock : AudioBlock { public TextField MetaDescription { get; set; } } | https://gitter.im/PiranhaCMS/Piranha | CC-MAIN-2021-17 | refinedweb | 334 | 57.06 |
Geocoder: Display Maps and Find Places in Rails

By Ilya Bodrov-Krukowski
The world is big. Seriously, I’d say it’s really huge. Different countries, cities, various people, and cultures…but still, the internet connects us all and that’s really cool. I can communicate with my friends who live a thousand miles away from me.
Because the world is huge, there are many different places that you may need to keep track of within your app. Luckily, there is a great solution to help you find locations by their coordinates or addresses, measure distances between places, and find places nearby. All of this location-based work is called “geocoding”. In Ruby, one geocoding solution is called Geocoder, and that’s our guest today.
In this article you will learn how to:
- Integrate Geocoder into your Rails app
- Tweak Geocoder’s settings
- Enable geocoding to be able to fetch coordinates based on the address
- Enable reverse geocoding to grab an address based on the coordinates
- Measure the distance between locations
- Add a static map to display the selected location
- Add a dynamic map to allow users to select a desired location
- Add the ability to find the location on the map based on coordinates
By the end of the article you will have a solid understanding of Geocoder and a chance to work with the handy Google Maps API. So, shall we start?
The source code is available at GitHub.
The working demo can be found at sitepoint-geocoder.herokuapp.com.
Preparing the App
For this demo, I’ll be using Rails 5 beta 3, but Geocoder supports both Rails 3 and 4. Create a new app called Vagabond (well, you don’t really have to call it that, but I find this name somewhat suitable):
$ rails new Vagabond -T
Suppose we want our users to share places that they have visited. We won’t focus on stuff like authentication, adding photos, videos, etc., but you can extend this app yourself later. For now, let’s add a table called places with the following fields:

- title (string)
- visited_by (string) – later this can be replaced with user_id and marked as a foreign key
- address (text) – the address of the place a user has visited
- latitude and longitude (float) – the exact coordinates of the place. The first draft of the app should fetch them automatically based on the provided address.
Create and apply the appropriate migration:
$ rails g model Place title:string address:text latitude:float longitude:float visited_by:string
$ rake db:migrate
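If you want to double-check the generator’s output before migrating, the migration it creates should look roughly like this (a sketch based on Rails’ standard generator conventions, not copied from the article):

```ruby
class CreatePlaces < ActiveRecord::Migration[5.0]
  def change
    create_table :places do |t|
      t.string :title
      t.text :address
      t.float :latitude
      t.float :longitude
      t.string :visited_by

      t.timestamps # created_at / updated_at, used by the index ordering later
    end
  end
end
```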
Before moving forward, let’s add bootstrap-rubygem that integrates Bootstrap 4 into our app. I won’t list all the styling in this article, but you can refer to the source code to see the complete markup.
Gemfile
[...]
gem 'bootstrap', '~> 4.0.0.alpha3'
[...]
Run
$ bundle install
Now create a controller, a route, and some views:
places_controller.rb
class PlacesController < ApplicationController
  def index
    @places = Place.order('created_at DESC')
  end

  def new
    @place = Place.new
  end

  def create
    @place = Place.new(place_params)
    if @place.save
      flash[:success] = "Place added!"
      redirect_to root_path
    else
      render 'new'
    end
  end

  private

  def place_params
    params.require(:place).permit(:title, :address, :visited_by)
  end
end
config/routes.rb
[...]
resources :places, except: [:update, :edit, :destroy]
root 'places#index'
[...]
views/places/index.html.erb
<header><h1 class="display-4">Places</h1></header>

<%= link_to 'Add place', new_place_path, class: 'btn btn-primary btn-lg' %>

<div class="card">
  <div class="card-block">
    <ul>
      <%= render @places %>
    </ul>
  </div>
</div>
views/places/new.html.erb
<header><h1 class="display-4">Add Place</h1></header>

<%= render 'form' %>
Now the partials:
views/places/_place.html.erb
<li>
  <%= link_to place.title, place_path(place) %>
  visited by <strong><%= place.visited_by %></strong>
</li>
views/places/_form.html.erb
<%= form_for @place do |f| %>
  <fieldset class="form-group">
    <%= f.label :title %>
    <%= f.text_field :title, class: "form-control" %>
  </fieldset>

  <fieldset class="form-group">
    <%= f.label :visited_by %>
    <%= f.text_field :visited_by, class: "form-control" %>
  </fieldset>

  <fieldset class="form-group">
    <%= f.label :address, 'Address' %>
    <%= f.text_field :address, class: "form-control" %>
  </fieldset>

  <%= f.submit 'Add!', class: 'btn btn-primary' %>
<% end %>
We set up the index, new, and create actions for our controller. That’s great, but how are we going to grab coordinates based on the provided address? For that, we’ll utilize Geocoder, so proceed to the next section!
Integrating Geocoder
Add a new gem:
Gemfile
[...] gem 'geocoder' [...]
and run
$ bundle install
Starting to work with Geocoder is really simple. Go ahead and add the following line into your model:
models/place.rb
[...] geocoded_by :address [...]
So, what does it mean? This line equips our model with useful Geocoder methods, that, among others, can be used to retrieve coordinates based on the provided address. The usual place to do that is inside a callback:
models/place.rb
[...] geocoded_by :address after_validation :geocode [...]
There are a couple of things you have to consider:
Your model must present a method that returns the full address – its name is passed as an argument to the
geocodedmethod. In our case that’ll be an
addresscolumn, but you can use any other method. For example, if you have a separate columns called
country,
city, and
street, the following instance method may be introduced:
def full_address
[country, city, street].compact.join(‘, ‘)
end
Then just pass its name:
geocoded_by :full_address
Your model must also contain two fields called
latitudeand
longitude, with their type set to
float. If your columns are called differently, just override the corresponding settings:
geocoded_by :address, latitude: :lat, longitude: :lon
Geocoder supports MongoDB as well, but requires a bit different setup. Read more here and here (overriding coordinates’ names).
Having these two lines in place, coordinates will be populated automatically based on the provided address. This is possible thanks to Google Geocoding API (though Geocoder supports other options as well – we will talk about it later). What’s more, you don’t even need an API key in order for this to work.
Still, as you’ve probably guessed, the Google API has its usage limits, so we don’t want to query it if the address was unchanged or was not presented at all:
models/place.rb
[...] after_validation :geocode, if: ->(obj){ obj.address.present? and obj.address_changed? } [...]
Now, just add the
show action for your
PlacesController:
places_controller.rb
[...] def show @place = Place.find(params[:id]) end [...]
views/places/show.html.erb
<header><h1 class="display-4"><%= @place.title %></h1></header> <p>Address: <%= @place.address %></p> <p>Coordinates: <%= @place.latitude %> <%= @place.longitude %></p>
Boot up your server, provide an address (like “Russia, Moscow, Kremlin”) and navigate to the newly added place. The coordinates should be populated automatically. To check whether they are correct, simply paste them into the search field on this page.
Another interesting thing is that users can even provide IP addresses to detect coordinates – this does not require any changes to the code base at all. Let’s just add a small reminder:
views/places/_form.html.erb
[...] <fieldset class="form-group"> <%= f.label :address, 'Address' %> <%= f.text_field :address, class: "form-control" %> <small class="text-muted">You can also enter IP. Your IP is <%= request.ip %></small> </fieldset> [...]
If you are developing on your local machine, the IP address will be something like
::1 or
localhost and obviously won’t be turned into coordinates, but you can provide any other known address (
8.8.8.8 for Google).
Configuration and APIs
Geocoder supports a bunch of options. To generate a default initializer file, run this command:
$ rails generate geocoder:config
Inside this file you can set up various things: an API key to use, timeout limit, measurement units to use, and more. Also, you may change the “lookup” providers here. The default values are
:lookup => :google, # for street addresses :ip_lookup => :freegeoip # for IP addresses
Geocoder’s docs do a great job of listing all possible providers and their usage limits,
so I won’t place them in this article.
One thing to mention is that even though you don’t require an API key to query the Google API, it’s advised to do so because you get an extended quota and also can track the usage of your app. Navigate to the console.developers.google.com, create a new project, and be sure to enable the Google Maps Geocoding API.
Next, just copy the API key and place it inside the initializer file:
config/initializers/geocoder.rb
Geocoder.configure( api_key: "YOUR_KEY" )
Displaying a Static Map
One neat feature about Google Maps is the ability to add static maps (which are essentially images) into your site based on the address or coordinates. Currently, our “show” page does not look very helpful, so let’s add a small map there.
To do that, you will require an API key, so if you did not obtain it in the previous step, do so now. One thing to remember is that the Google Static Maps API has to be enabled.
Now simply tweak your view:
views/places/show.html.erb
[...] <%= image_tag "{@place.latitude},#{@place.longitude}&markers=#{@place.latitude},#{@place.longitude}&zoom=7&size=640x400&key=AIzaSyA4BHW3txEdqfxzdTlPwaHsYRSZbfeIcd8", class: 'img-fluid img-rounded', alt: "#{@place.title} on the map"%>
That’s pretty much it – no JavaScript is required. Static maps support various parameters, like addresses, labels, map styling, and more. Be sure to read the docs.
The page now looks much nicer, but what about the form? It would be much more convenient if users were able to enter not only address but coordinates, as well, by pinpointing the location on an interactive map. Proceed to the next step and let’s do it together!
Adding Support for Coordinates
For now forget about the map – let’s simply allow users to enter coordinates instead of an address. The address itself has to be fetched based on the latitude and longitude. This requires a bit more complex configuration for Geocoder. This approach uses a technique known as “reverse geocoding”.
models/place.rb
[...] reverse_geocoded_by :latitude, :longitude [...]
This may sound complex, but the idea is simple – we take these two values and grab the address based on it. If your address column is named differently, provide its name like this:
reverse_geocoded_by :latitude, :longitude, :address => :full_address
Moreover, you can pass a block to this method. It is useful in scenarios when you have separate columns to store country’s and city’s name, street etc.:
reverse_geocoded_by :latitude, :longitude do |obj, results| if geo = results.first obj.city = geo.city obj.zipcode = geo.postal_code obj.country = geo.country_code end end
More information can be found here.
Now add a callback:
models/place.rb
[...] after_validation :reverse_geocode [...]
There are a couple of problems though:
- We don’t want to do reverse geocoding if the coordinates were not provided or modified
- We don’t want to perform both forward and reverse geocoding
- We need a separate attribute to store an address provided by the user via the form
The first two issues are easy to solve – just specify the
if and
unless options:
models/place.rb
[...] after_validation :geocode, if: ->(obj){ obj.address.present? and obj.address_changed? } after_validation :reverse_geocode, unless: ->(obj) { obj.address.present? }, if: ->(obj){ obj.latitude.present? and obj.latitude_changed? and obj.longitude.present? and obj.longitude_changed? } [...]
Having this in place, we will fetch coordinates if the address is provided, otherwise try to fetch the address if coordinates are set. But what about a separate attribute for an address? I don’t think we need to add another column – let’s employ a virtual attribute called
raw_address instead:
models/place.rb
[...] attr_accessor :raw_address geocoded_by :raw_address after_validation -> { self.address = self.raw_address geocode }, if: ->(obj){ obj.raw_address.present? and obj.raw_address != obj.address } after_validation :reverse_geocode, unless: ->(obj) { obj.raw_address.present? }, if: ->(obj){ obj.latitude.present? and obj.latitude_changed? and obj.longitude.present? and obj.longitude_changed? } [...]
We can utilize this virtual attribute to do geocoding. Don’t forget to update the list of permitted attributes
places_controller.rb
[...] private def place_params params.require(:place).permit(:title, :raw_address, :latitude, :longitude, :visited_by) end [...]
and the view:
views/places/_form.html.erb
<h4>Enter either address or coordinates</h4> <fieldset class="form-group"> <%= f.label :raw_address, 'Address' %> <%= f.text_field :raw_address, class: "form-control" %> <small class="text-muted">You can also enter IP. Your IP is <%= request.ip %></small> </fieldset> <div class="form-group row"> <div class="col-sm-1"> <%= f.label :latitude %> </div> <div class="col-sm-3"> <%= f.text_field :latitude, class: "form-control" %> </div> <div class="col-sm-1"> <%= f.label :longitude %> </div> <div class="col-sm-3"> <%= f.text_field :longitude, class: "form-control" %> </div> </div>
So far so good, but without the map, the page still looks uncompleted. On to the next step!
Adding a Dynamic Map
Adding a dynamic map involves some JavaScript, so add it into your layout:
layouts/application.html.erb
<script src="" async defer></script>
Note that the API key is mandatory (be sure to enable “Google Maps JavaScript API”). Also note the
callback=initMap parameter.
initMap is the function that will be called as soon as this library is loaded, so let’s place it inside the global namespace:
map.coffee
jQuery -> window.initMap = ->
Obviously we need a container to place a map into, so add it now:
views/places/_form.html.erb
[...] <div class="card"> <div class="card-block"> <div id="map"></div> </div> </div>
The function:
map.coffee
window.initMap = -> if $('#map').size() > 0 map = new google.maps.Map document.getElementById('map'), { center: {lat: -34.397, lng: 150.644} zoom: 8 }
Note that
google.maps.Map requires a JS node to be passed, so this
new google.maps.Map $('#map')
will not work as
$('#map') returns a wrapped jQuery set. To turn it into a JS node, you may say
$('#map')[0].
center is an options that provides the initial position of the map – set the value that works for you.
Now, let’s bind a
click event to our map and update the coordinate fields, accordingly.
map.coffee
lat_field = $('#place_latitude') lng_field = $('#place_longitude') [...] window.initMap = -> map.addListener 'click', (e) -> updateFields e.latLng [...] updateFields = (latLng) -> lat_field.val latLng.lat() lng_field.val latLng.lng()
For our users’ convenience, let’s also place a marker at the clicked point. The catch here is that if you click on the map a couple of times, multiple markers will be added, so we have to clear them every time:
map.coffee
markersArray = [] window.initMap = -> map.addListener 'click', (e) -> placeMarkerAndPanTo e.latLng, map updateFields e.latLng placeMarkerAndPanTo = (latLng, map) -> markersArray.pop().setMap(null) while(markersArray.length) marker = new google.maps.Marker position: latLng map: map map.panTo latLng markersArray.push marker [...]
The idea is simple – we store the marker inside the array and remove it on the next click. Having this array, you may keep track of markers that were placed a clear them on some other condition.
It’s high time to test it out. Navigate to the new page and try clicking on the map – the coordinates should be updated properly. That’s much better!
Placing Markers Based on Coordinates
Suppose a user knows coordinates and want to find them on the map instead. This feature is easy to add. Introduce a new “Find on the map” link:
views/places/_form.html.erb
[...] <div class="col-sm-3"> <%= f.text_field :longitude, class: "form-control" %> </div> <div class="col-sm-4"> <a href="#" id="find-on-map" class="btn btn-info btn-sm">Find on the map</a> </div> [...]
Now bind a
click event to it that updates the map based on the provided coordinates:
map.coffee
[...] window.initMap = -> $('#find-on-map').click (e) -> e.preventDefault() placeMarkerAndPanTo { lat: parseInt lat_field.val(), 10 lng: parseInt lng_field.val(), 10 }, map [...]
We pass an object to the
placeMarkerAndPanTo function that contains the user-defined latitude and longitude. Note that coordinates have to be converted to integers, otherwise an error will be raised.
Reload the page and check the result! To practice a bit more, you can try to add a similar button for the address field and introduce error handling.
Measuring Distance Between Places
The last thing we will implement today is the ability to measure the distance between added places. Create a new controller:
distances_controller.rb
class DistancesController < ApplicationController def new @places = Place.all end def create end end
Add a route:
config/routes.rb
[...] resources :distances, only: [:new, :create] [...]
and a view:
views/distances/new.html.erb
<header><h1 class="display-4">Measure Distance</h1></header> <%= form_tag distances_path do %> <fieldset class="form-group"> <%= label_tag 'from', 'From' %> <%= select_tag 'from', options_from_collection_for_select(@places, :id, :title), class: "form-control" %> </fieldset> <fieldset class="form-group"> <%= label_tag 'to', 'To' %> <%= select_tag 'to', options_from_collection_for_select(@places, :id, :title), class: "form-control" %> </fieldset> <%= submit_tag 'Go!', class: 'btn btn-primary' %> <% end %>
Here we display two drop-downs with our places.
options_from_collection_for_select is a handy method that simplifies the generation of
option tags. The first argument is the collection, the second – a value to use inside the
value option and the last one – the value to display for the user inside the drop-down.
Geocoder allows the measuring of distance between any points on the planet – simply provide their coordinates:
distances_controller.rb
[...] def create @from = Place.find_by(id: params[:from]) @to = Place.find_by(id: params[:to]) if @from && @to flash[:success] = "The distance between <b>#{@from.title}</b> and <b>#{@to.title}</b> is #{@from.distance_from(@to.to_coordinates)} km" end redirect_to new_distance_path end [...]
We find the requested places and use the
distance_from method.
to_coordinates transforms the record into an array of coordinates (for example,
[30.1, -4.3]) – we have to use it, otherwise the calculation will result in an error.
This method relies on a flash message, so tweak layout a bit:
layouts/application.html.erb
[...] <% flash.each do |name, msg| %> <%= content_tag(:div, msg.html_safe, class: "alert alert-#{name}") %> <% end %> [...]
By default Geocoder uses miles as the measurement units, but you can tweak the initializer file and set the
units option to
km (kilometers) instead.
More from this author
Conclusion
Phew, that was a long one! We’ve covered many features of Geocoder: forward and reverse geocoding, tweaking options, and measuring distance. On top of that, you learned how to use various types of Google maps and work with them via the API.
Still, there are other features of Geocoder that I have not covered in this article. For example, it supports finding places near the selected location, it can provide directions while measuring distance between locations, it supports caching, and can even be used outside of Rails. If you are planning to use this great gem in your project, be sure to skim the documentation!
That’s all for today folks. Hopefully, this article was useful and interesting for you. Don’t lose your track and see you soon!
- greengiant
- Ilya Bodrov
- Mashab Anwar Qureshi
- Ilya Bodrov
- Mashab Anwar Qureshi
- Ilya Bodrov
- Ilya Bodrov | https://www.sitepoint.com/geocoder-display-maps-and-find-places-in-rails/ | CC-MAIN-2017-13 | refinedweb | 3,119 | 58.89 |
Red Hat Bugzilla – Bug 472767
In booty spec file is missing python-pyblock requirement.
Last modified: 2014-01-12 19:08:03 EST
Created attachment 324476 [details]
spec file patch, python-pyblock requirement
Description of problem:
Import of booty is impossible without installed python-pyblock, but in spec file there's no that requirement.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. run python
2. import booty
3.
Actual results:
>>> import booty
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/booty/booty.py", line 28, in <module>
from bootloaderInfo import *
File "/usr/lib/booty/bootloaderInfo.py", line 37, in <module>
import block
ImportError: No module named block
Expected results:
Successful import
Additional info:
My easy patch solve it.
Ha. Another one:
>>> booty.getBootloader()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/booty/booty.py", line 43, in getBootloader
return x86BootloaderInfo()
File "/usr/lib/booty/bootloaderInfo.py", line 1236, in __init__
grubBootloaderInfo.__init__(self)
File "/usr/lib/booty/bootloaderInfo.py", line 1126, in __init__
bootloaderInfo.__init__(self)
File "/usr/lib/booty/bootloaderInfo.py", line 525, in __init__
from flags import flags
ImportError: No module named flags
booty looks too much dependent on anaconda. So it isn't useful elsewhere? I would like to use booty (or any other package?) for configuring bootloader config file. grubby can't handle xen kernels correctly, so I tried to use booty, but this is not probably right way...
grubby should handle xen kernels correctly; if it's not, please file a bug with details on how.
And yes, booty is (still) pretty intertwined with anaconda. The original idea was to more fully split it out but there was never time or the motivation (due to lack of anything else really stepping up to use. | https://bugzilla.redhat.com/show_bug.cgi?id=472767 | CC-MAIN-2018-30 | refinedweb | 307 | 51.24 |
Created on 2013-09-22 13:22 by grahamd, last changed 2013-10-30 06:36 by grahamd.
The classmethod decorator when applied to a function of a class, does not honour the descriptor binding protocol for whatever it wraps. This means it will fail when applied around a function which has a decorator already applied to it and where that decorator expects that the descriptor binding protocol is executed in order to properly bind the function to the class.
A decorator may want to do this where it is implemented so as to be able to determine automatically the context it is used in. That is, one magic decorator that can work around functions, instance methods, class methods and classes, thereby avoiding the need to have multiple distinct decorator implementations for the different use case.
So in the following example code:
class BoundWrapper(object):
def __init__(self, wrapped):
self.__wrapped__ = wrapped
def __call__(self, *args, **kwargs):
print('BoundWrapper.__call__()', args, kwargs)
print('__wrapped__.__self__', self.__wrapped__.__self__)
return self.__wrapped__(*args, **kwargs)
class Wrapper(object):
def __init__(self, wrapped):
self.__wrapped__ = wrapped
def __get__(self, instance, owner):
bound_function = self.__wrapped__.__get__(instance, owner)
return BoundWrapper(bound_function)
def decorator(wrapped):
return Wrapper(wrapped)
class Class(object):
@decorator
def function_im(self):
print('Class.function_im()', self)
@decorator
@classmethod
def function_cm_inner(cls):
print('Class.function_cm_inner()', cls)
@classmethod
@decorator
def function_cm_outer(cls):
print('Class.function_cm_outer()', cls)
c = Class()
c.function_im()
print()
Class.function_cm_inner()
print()
Class.function_cm_outer()
A failure is encountered of:
$ python3.3 cmgettest.py
BoundWrapper.__call__() () {}
__wrapped__.__self__ <__main__.Class object at 0x1029fc150>
Class.function_im() <__main__.Class object at 0x1029fc150>
BoundWrapper.__call__() () {}
__wrapped__.__self__ <class '__main__.Class'>
Class.function_cm_inner() <class '__main__.Class'>
Traceback (most recent call last):
File "cmgettest.py", line 40, in <module>
Class.function_cm_outer()
TypeError: 'Wrapper' object is not callable
IOW, everything is fine when the decorator is applied around the classmethod, but when it is placed inside of the classmethod, a failure occurs because the decorator object is not callable.
One could argue that the error is easily avoided by adding a __call__() method to the Wrapper class, but that defeats the purpose of what is trying to be achieved in using this pattern. That is that one can within the bound wrapper after binding occurs, determine from the __self__ of the bound function, the fact that it was a class method. This can be inferred from the fact that __self__ is a class type.
If the classmethod decorator tp_descr_get implementation is changed so as to properly apply the descriptor binding protocol to the wrapped object, then what is being described is possible.
Having it honour the descriptor binding protocol also seems to make application of the Python object model more consistent.
A patch is attached which does exactly this.
The result for the above test after the patch is applied is:
BoundWrapper.__call__() () {}
__wrapped__.__self__ <__main__.Class object at 0x10ad237d0>
Class.function_im() <__main__.Class object at 0x10ad237d0>
BoundWrapper.__call__() () {}
__wrapped__.__self__ <class '__main__.Class'>
Class.function_cm_inner() <class '__main__.Class'>
BoundWrapper.__call__() () {}
__wrapped__.__self__ <class '__main__.Class'>
Class.function_cm_outer() <class '__main__.Class'>
That is, the decorator whether it is inside or outside now sees things in the same way.
If one also tests for calling of the classmethod via the instance:
print()
c.function_cm_inner()
print()
c.function_cm_outer()
Everything again also works out how want it:
BoundWrapper.__call__() () {}
__wrapped__.__self__ <class '__main__.Class'>
Class.function_cm_inner() <class '__main__.Class'>
BoundWrapper.__call__() () {}
__wrapped__.__self__ <class '__main__.Class'>
Class.function_cm_outer() <class '__main__.Class'>
FWIW, the shortcoming of classmethod not applying the descriptor binding protocol to the wrapped object, was found in writing a new object proxy and decorator library called 'wrapt'. This issue in the classmethod implementation is the one thing that has prevented wrapt having a system of writing decorators that can magically work out the context it is used in all the time. Would be nice to see it fixed. :-)
The wrapt library can be found at:
The limitation in the classmethod implementation is noted in the wrapt documentation at:
I don't think it was ever intended that decorators be chained together.
The whole point is to control binding behavior during dotted look-up (when __getattribute__ is called) and not in other circumstances (such as a direct lookup in a class dictionary).
Note that classmethods typically wrap regular functions which have both __call__ and __get__ methods. The classmethod object intentionally invokes the former instead of the latter which would unhelpfully create an inner bound or unbound method.
The classmethod __get__() method does:
static PyObject *
cm_descr_get(PyObject *self, PyObject *obj, PyObject *type)
{
classmethod *cm = (classmethod *)self;
if (cm->cm_callable == NULL) {
PyErr_SetString(PyExc_RuntimeError,
"uninitialized classmethod object");
return NULL;
}
if (type == NULL)
type = (PyObject *)(Py_TYPE(obj));
return PyMethod_New(cm->cm_callable,
type, (PyObject *)(Py_TYPE(type)));
}
So it isn't intentionally calling __call__(). If it still doing binding, but doing it by calling PyMethod_New() rather than using __get__() on the wrapped function. Where it wraps a regular function the result is same as if __get__() was called as __get__() for a regular function internally calls PyMethod_New() in the same way.
static PyObject *
func_descr_get(PyObject *func, PyObject *obj, PyObject *type)
{
if (obj == Py_None)
obj = NULL;
return PyMethod_New(func, obj, type);
}
By not using __get__(), you deny the ability to have chained decorators that want/need the knowledge of the fact that binding was being done. The result for stacking multiple decorators which use regular functions (closures) is exactly the same, but you open up other possibilities of smarter decorators.
I'll take a look at this in more detail in the next week or so.
If you have the time, would be great if you can have a quick look at my wrapt package. That will give you an idea of where I am coming from in suggesting this change.
In short, aiming to be able to write decorators which are properly transparent and aware of the context they are used in, so we don't have this silly situation at the moment where it is necessary to write distinct decorators for regular functions and instance methods. A classmethod around another decorator was the one place things will not work as would like to see them work.
I even did a talk about writing better decorators at PyCon NZ. Slides with notes at:
Thanks.
Antoine, do you have any thoughts on this proposal?
Well... I've not written enough descriptor-implementing code to have a clear opinion on this, but this looks quite obscure. I have personally never needed anything like the wrapt library (I've also never used the PyPI "decorator" module, FWIW).
@grahamd: I occasionally have felt the pain of wrapping @classmethod (or @staticmethod). Never enough though to think of how to fix it. I really don't have the stomach to review your wrapt library, but your code looks okay except for style and missing tests. I'd also recommend adding a few words to the docs. (And yes, all of this is your responsibility -- nobody has time to do all that stuff for you.)
Style-wise:
- the continuation line in your patch is not properly formatted;
- either the else block should also use { } or the else clause should be omitted.
Graham, do we have a contributor agreement from you?
I don't believe so. | http://bugs.python.org/issue19072 | CC-MAIN-2014-10 | refinedweb | 1,211 | 56.66 |
The python wrapper for the Basler pylon Camera Software Suite.
Project description
The official python wrapper for the Basler pylon Camera Software Suite.
Background information about usage of pypylon, programming samples and jupyter notebooks can also be found at pypylon-samples.
Please Note: This project is offered with no technical support by Basler AG. You are welcome to post any questions or issues on GitHub or on ImagingHub.
Getting Started
- Install pylon
This is strongly recommended but not mandatory. See known issues for further details.
- Install pypylon:
pip3 install pypylon
For more installation options and the supported systems please read the Installation paragraph.
- Look at samples/grab.py or use the following snippet:
from pypylon import pylon camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice()) camera.Open() # demonstrate some feature access new_width = camera.Width.GetValue() - camera.Width.GetInc() if new_width >= camera.Width.GetMin(): camera.Width.SetValue(new_width) numberOfImagesToGrab = 100 camera.StartGrabbingMax(numberOfImagesToGrab) while camera.IsGrabbing(): grabResult = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException) if grabResult.GrabSucceeded(): # Access the image data. print("SizeX: ", grabResult.Width) print("SizeY: ", grabResult.Height) img = grabResult.Array print("Gray value of first pixel: ", img[0, 0]) grabResult.Release() camera.Close()
Installation
Prerequisites
- Installed pylon
For the binary installation this is not mandatory but strongly recommended. See known issues for further details.
- Installed python with pip
Binary Installation
The easiest way to get pypylon is to install a prebuild wheel. Binary releases for most architectures are available on pypi**. To install pypylon open your favourite terminal and run:
pip3 install pypylon
The following versions are available on pypi:
Additional Notes on binary packages:
- (*) The linux wheels for python 3.4 and 3.5 are not available on pypi.
You can get them from Github Releases.
- (**) The linux binaries are manylinux_2_24 conformant.
This is roughly equivalent to a minimum glibc version >= 2.24.
:warning: You need at least pip 20.3 to install them.
- (***) MacOS binaries are built for macOS >= 10.14 (Mojave)
Installation from Source
Building the pypylon bindings is supported and tested on Windows and Linux.
You need a few more things to compile pypylon:
- A compiler for your system (Visual Studio on Windows, gcc on linux)
- Python development files (e.g.
sudo apt install python-devon linux)
- swig >= 4.0
To build pypylon from source:
git clone cd pypylon pip install .
Development
Pull requests to pypylon are very welcome. To help you getting started with pypylon improvements, here are some hints:
Starting Development
python setup.py develop
This will "link" the local pypylon source directory into your python installation. It will not package the pylon libraries and always use the installed pylon.
After changing pypylon, execute
python setup.py build and test...
Running Unit Tests
NOTE: The unit tests try to import
pypylon...., so they run against the installed version of pypylon.
python -m unittest tests/.... python tests/....
Known Issues
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/pypylon/ | CC-MAIN-2022-21 | refinedweb | 501 | 53.37 |
Ralf Wildenhues wrote:
Current state of libtldl branch-2-0 is inconsistent: libtoolize --ltdl=some/sub/dirwill not work. The include paths in ltdl.h | ltdl.h:#include <libltdl/lt_system.h>| ltdl.h:#include <libltdl/lt_error.h> | ltdl.h:#include <libltdl/lt_dlloader.h> and libltdl/Makefile.am's | AM_CPPFLAGS = -I$(top_builddir) -I$(top_srcdir) will conflict in a package which uses libltdl. We have to decide: Either the `libtoolize --ltdl' argument must end in `libltdl', or we need to provide a flat directory structure.(Note that this stems from the requirement to be able to either use an installed libltdl or a package-internal one, with the repective otherone absent). I like the former better for several reasons: - better forward compatibility from 1.5 (maybe) - supposedly less trouble when within a larger collection of sub-packages. - People much rather like to bury autotools in some directory structure whose top they decide and the innards they could care less. Anybody know packages using libltdl with a different strategy already (which would then conflict)?:
#if HAVE_LTDL # include <libltdl/ltdl.h> #else # include "ltdl.h" #endif and then in our #includes do something similar?I realize that there is Makefile.am stuff to do too to get the -Iflags correct (add a -I$(srcdir) to the AM_CPPFLAGS?), but I don't see why we have to restrict developers in the way you propose. Am I missing something? It is surely possible to fix it so it works "as advertised" by playing with ltdl.m4 and the Makefile.am and .c and .h files in libltdl.
Peter | http://lists.gnu.org/archive/html/libtool-patches/2004-11/msg00169.html | CC-MAIN-2016-50 | refinedweb | 262 | 60.01 |
Overview of Generics in the .NET Framework
[This documentation is for preview only, and is subject to change in later releases. Blank topics are included as placeholders.]
This topic provides an overview of generics in the .NET Framework and a summary of generic types or methods. It also defines the terminology used to discuss generics..
Public Class Generic(Of T) Public Field As T End Class
public class Generic<T> { public T Field; }
generic<typename T> public ref.
Dim g As New Generic(Of String) g.Field = "A string"
Generic<string> g = new Generic<string>(); g.Field = "A string";
Generic<String^>^ g = gcnew Generic<String^>(); g->Field = "A string";
Generics Terminology.
Constraints are limits placed on generic type parameters. For example, you might limit a type parameter to types that implement the IComparer<T> generic interface, to ensure that instances of the type can be ordered. You can also constrain type parameters to types that have a particular base class, that have a default.
Function Generic(Of T)(ByVal arg As T) As T Dim temp As T = arg ... End Function
T Generic<T>(T arg) { T temp = arg; ...}
generic<typename T> T Generic(T arg) { T temp = arg; ...}; Visual Basic, Introduction to Generics (C# Programming Guide),.
See Also
Tasks
How to: Define a Generic Type with Reflection Emit
Reference
System.Collections.Generic
System.Collections.ObjectModel
Introduction to Generics (C# Programming Guide)
Concepts
When to Use Generic Collections
Generic Types in Visual Basic
Overview of Generics in Visual C++
Generic Collections in the .NET Framework
Generic Delegates for Manipulating Arrays and Lists
Advantages and Limitations of Generics
Other Resources
Commonly Used Collection Types
Generics in the .NET Framework | https://docs.microsoft.com/en-us/previous-versions/ms172193(v=vs.100)?redirectedfrom=MSDN | CC-MAIN-2019-43 | refinedweb | 278 | 50.02 |
RationalWiki:Technical support/Archive7
Contents
- 1 URL workaround for watchlist button
- 2 Percent signs in usernames
- 3 Bot flag for User:ZooBot
- 4 Is Capturebot working yet?
- 5 Search broken again?
- 6 Something is broken
- 7 Oct 06: Customisations failing
- 8 Vandal brake
- 9 Messing around with shit
- 10 LQT edit toolbar not loading
- 11 Hiccups in the last coupla hours
- 12 User name character limit
- 13 Blank watchlist
- 14 WIGO: CP talk's edit screen templates re: capturebot
- 15 Ru
- 16 WIGO voting not working?
- 17 Search
- 18 Log in issue
- 19 New error
- 20 Rename
- 21 Slow and trim missing
- 22 Please
- 23 Tiny text
- 24 Disappearing Edits
- 25 {{talkpage}}
- 26 Math tag
- 27 Apache tweaks
- 28 File templates
URL workaround for watchlist button[edit]
My mobile browser won't let me click on the "add/remove from watchlist" star. Is there a url
&action= workaround for this? Or possibly an alternative method?
Radioactive afikomen Please ignore all my awful pre-2014 comments. 10:15, 13 September 2012 (UTC)
- &action=watch? sterilesporadic heavy hitter 11:04, 13 September 2012 (UTC)
- and action=unwatch. -- Nx / talk 11:35, 13 September 2012 (UTC)
- Thanks, both of you! It doesn't work if I tack it on to a .org/wiki/Article_name url, but it does with the .org/w/index.php?title=Article_name urls. But no on ever said workarounds were elegant -__-
Radioactive afikomen Please ignore all my awful pre-2014 comments. 07:46, 16 September 2012 (UTC)
Percent signs in usernames[edit]
Is there an easy way to prevent usernames containing problematic characters from being registered?
A vandal registered the annoyingly long username, "The God-fearing Christian 1% must crush the inferior, sick, starving, morally bankrupt pagans of the 99%." This was eventually rectified, but until then the wiki kept giving me problems, telling me it couldn't find their user talk page (it allowed me to create the page, but gave me an error message when trying to access it) or their contributions page.
Obviously, these problems were solved when Ty renamed them, but I still find it odd that MediaWiki even allows usernames with characters that cause problems like that.
Radioactive afikomen Please ignore all my awful pre-2014 comments. 02:11, 26 September 2012 (UTC)
- Is it both percentage signs that are the problem, I wonder, or just the one at the end? Peter Subsisting on honey 03:54, 26 September 2012 (UTC)
- % is a reserved character in URLs to, ironically, denote special characters. Hence WIGO:World is actually "RationalWiki:What_is_going_on_in_the_world%3F" with %3F standing in for the question mark. Although the URL to that user page does substitute % for %25, I imagine the error continues down into the Media Wiki back-end itself, meaning it returns an error rather than trying to direct to a page with that in. Which means you can't access any page with % in the title.
sshole
10:41, 27 September 2012 (UTC)
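The double-encoding behaviour Armondikov describes can be checked quickly with Python's urllib (a side illustration, not part of the original thread):

```python
from urllib.parse import quote, unquote

# '?' is a reserved character, so it is percent-encoded as %3F:
print(quote("?"))    # %3F

# A literal '%' must itself be escaped, as %25:
print(quote("%"))    # %25

# If the '%' is NOT escaped, the decoder misreads it: a raw "%3F"
# inside a username forms an accidental escape sequence and decodes
# into something else entirely.
print(unquote("1%3F"))   # 1?
```

This is exactly why a page or user name containing `%` breaks naive URL handling: the server sees the `%` as the start of an escape sequence rather than as data.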
- I think it's the server not recognising it more than MediaWiki itself. This is something we should look into, especially if this whole 53%/47% and 99%/1% shit continues.
bomination
10:43, 27 September 2012 (UTC)
- And by "we" I mean, Trent.
pathetic
10:43, 27 September 2012 (UTC)
- You can get it through the "rationalwiki.org/w/index.php?title=[TITLE]" link rather than "rationalwiki.org/wiki/[TITLE]". So the error is whatever parses the latter isn't picking up %25 as a valid character and matching it with the database. That's all I can figure out.
pathetic
10:46, 27 September 2012 (UTC)
- When you type in a url that doesn't have index.php in it (called a short url), the server rewrites them internally and sends them to index.php. The rewrite rules RationalWiki uses are incredibly complex because we need to rewrite rationalwiki.net and rationalwiki.com into rationalwiki.org visibly while keeping the short url, (this doesn't work right, because it strips off the ? at the end, so? is rewritten to, and the redirect takes care of it; try it out), and then in another step, this url is internally, invisibly rewritten into rationalwiki.org/w/index.php?title=TITLE, because we want the user to see the short url in the address bar. The problem is what Armondikov said, % is a special character that's used to encode other character, and isn't encoded properly itself.
- The server module that handles these rewrites, called mod_rewrite, is the most uselessly complicated piece of shit software in existence. This error is likely the result of me trying to work around its shittyness when implementing the complicated rewriting scheme described above. One solution could be to scrap the whole thing and redo it by rewriting to rationalwiki.org/w/index.php/TITLE - this is a different full url scheme than the index.php?title=TITLE scheme that was added to MediaWiki to aid in rewriting and to make default urls without rewriting set up look a bit better - it's handled entirely by mediawiki, not mod_rewrite, so it's a bit easier to set up, but it has its own problems too. -- Nx / talk 08:28, 28 September 2012 (UTC)
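For context, the kind of short-url rewriting Nx describes is normally written with mod_rewrite directives along these lines (a hedged sketch, not RationalWiki's actual rules; the B flag is mod_rewrite's own answer to the %-escaping problem):

```apache
# Sketch only -- NOT the live RationalWiki rewrite rules.
RewriteEngine On
# Rewrite the short url /wiki/TITLE internally to index.php. The B
# flag re-escapes special characters in the $1 backreference, so a
# literal % in a title becomes %25 instead of being re-parsed as the
# start of an escape sequence.
RewriteRule ^/?wiki/(.*)$ /w/index.php?title=$1 [L,QSA,B]
```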
Bot flag for User:ZooBot[edit]
Please.--ZooGuard (talk) 12:05, 1 October 2012 (UTC)
Is Capturebot working yet?[edit]
Just wondering - David Gerard (talk) 10:05, 29 September 2012 (UTC)
- No. People are manually uploading screenshots using the links auto-generated by the capture tag.
- Does anyone know what exactly is the problem? Incompatibility with the latest version of the MediaWiki API? Is there any way to see any error messages, or it requires server access?
- I've had another look at User:Capturebot2. It seems that we have the source code and it runs on Python and Qt. I know a guy who works with Qt and I can ask him for help. Though I doubt that the problem is in the Qt part.--ZooGuard (talk) 10:13, 29 September 2012 (UTC)
- Since it happened after the upgrade, the problem is almost certainly that the pywikipediabot version capturebot is using is outdated and incompatible. Though Pibot works, and if I remember correctly, Capturebot still responds to status commands, so it's just a problem with file uploads. -- Nx / talk 10:31, 29 September 2012 (UTC)
- Pibot died on the 10th of August (a little after the upgrade) and hasn't been seen since. Peter Subsisting on honey 23:30, 3 October 2012 (UTC)
- Who can I thank for pushing the button? Peter Subsisting on honey 23:45, 3 October 2012 (UTC)
- So, I've decided to take a look: User:ZooBot. The first bug I found: the regexp looking for capture links in check() assumes that between the name of the "a" tag and the "href" attribute there are only whitespace chars. Our current version of MW puts the "rel" and "class" attributes there. Someone who knows more about regexps than me should have a look. I made a crude fix that seems to find the links, but the webkit2png script exploded. Now I have to look at that.--ZooGuard (talk) 12:53, 29 September 2012 (UTC)
- I still can't make the webkit2png script work (using the code from User:Capturebot2/webkit2png.py). I don't know if the problem is in it or in my setup. The original error ("exploded") was a stupid syntax error that was trivial to fix. But there seems to be a problem in render(), and I don't know enough about Python/Qt to identify it.
- So, someone please patch the regexp first and see if this fixes Capturebot.--ZooGuard (talk) 16:56, 29 September 2012 (UTC)
Pywikipedia has been updated, bots are running, file uploads are failing. Don't know why, looked into it, don't have time for much anymore. Is the code published somewhere? Other people can give it a shot using their own bot acocunts. Tmtoulouse (talk) 18:57, 29 September 2012 (UTC)
- Acocunts?
ГенгисRationalWiki GOLD member
19:00, 29 September 2012 (UTC)
- I just did that (User:ZooBot). Read my posts above; there are links to the code. The Capturebot code is working (it reports status), but due to some changes, it can no longer recognize <capture> tags. Proposed fix: open capturebot2.py and replace this line:
regexp = re.compile('<a\s*href="[^"]*"\s*class="new"\s*title="(?:Image:|File:)([^&"]*).png', re.UNICODE)
with this:
regexp = re.compile('<a[^>]*\s*href="[^"]*"\s*class="new"\s*title="(?:Image:|File:)([^&"]*).png', re.UNICODE)
Of course, make sure it has the original indentation level.--ZooGuard (talk) 19:19, 29 September 2012 (UTC)
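The effect of the change can be checked in isolation (a quick sketch; the sample anchor tag below is made up for illustration, and the real capture-link HTML may differ in attribute order):

```python
import re

# Old pattern from capturebot2.py: assumes only whitespace between
# the "<a" and its href= attribute.
old = re.compile(r'<a\s*href="[^"]*"\s*class="new"\s*'
                 r'title="(?:Image:|File:)([^&"]*).png', re.UNICODE)

# ZooGuard's fix: [^>]* tolerates extra attributes (rel=, etc.)
# that newer MediaWiki versions insert into the tag.
new = re.compile(r'<a[^>]*\s*href="[^"]*"\s*class="new"\s*'
                 r'title="(?:Image:|File:)([^&"]*).png', re.UNICODE)

# A red link roughly in the MediaWiki 1.19 style (illustrative only):
html = ('<a rel="nofollow" href="/w/index.php?title=File:Example.png" '
        'class="new" title="File:Example.png (page does not exist)">')

print(old.search(html))            # None -- the old pattern misses it
print(new.search(html).group(1))   # Example
```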
Just as I found the fix, Conservapedia seems to have died (can't access it from my browser either). I've updated User:Capturebot2/webkit2png.py. -- Nx / talk 22:36, 29 September 2012 (UTC)
- Also, the regexp fix ZooGuard found, User:Capturebot2/capturebot2.py. -- Nx / talk 22:37, 29 September 2012 (UTC)
- Thank you very much for your work on this! - David Gerard (talk) 10:09, 30 September 2012 (UTC)
Any update on the bot status?--ZooGuard (talk) 14:46, 2 October 2012 (UTC)
- I sent you an e-mail offering to setup access to the bot account using your RW e-mail link, otherwise it has to fit into my schedule. Tmtoulouse (talk) 14:54, 2 October 2012 (UTC)
- I'm really sorry for being pushy. Thank you for your time. As for the email, my Inbox is a mess. :(--ZooGuard (talk) 09:08, 5 October 2012 (UTC)
- Don't have to apologize, but there are several complicating factors to just pasting in something and calling it a day. First, as the only person actually doing tech support on our servers I have to take the time to make sure I have at least a working understanding of what the problem was and the fix, and second even after replacing the code the bot was crashing. This was an issue with the setup, but one that required actually sitting down and debugging for a chunk of time to figure out/fix. If you are interested in directly supporting some of this stuff server side I will be posting more info soon about trying to recruit some sys admins. Tmtoulouse (talk) 14:42, 5 October 2012 (UTC)
Search broken again?[edit]
Free text search, at least. "Lisle" and "Lightner" come up with nothing. "Jason Lisle" comes up with " There were no results matching the query. There is a page named "Jason Lisle" on this wiki." - David Gerard (talk) 22:30, 3 October 2012 (UTC)
Something is broken[edit]
On the Category:Logic page beneath High Priority Articles there is a line which reads Extension:DynamicPageList (DPL), version 1.8.9 : <dpl_log_16>. I have a sneaking suspicion that this is not what was intended.
ГенгисIs the Pope a Catholic?
15:47, 4 October 2012 (UTC)
- Probably means that there are no "high priority articles" in the category "logic". The DPL code for that only uses the manual maintenance category Category:High priority, not the rate/priority template as that's a separate system that only lists talk pages.
15:52, 4 October 2012 (UTC)
Oct 06: Customisations failing[edit]
Not sure what this lot was, but it's all extra customisations, JS, CSS, etc. hiccupping around the same time, so I'm grouping it together - David Gerard (talk) 22:48, 12 October 2012 (UTC)
Disappearing header bits[edit]
The links to "to do" and "new pages" have vanished from the watchlist; the "click here to format this for the voting extension" bit has disappeared from "to do" - David Gerard (talk) 22:06, 5 October 2012 (UTC)
Blocks and Deletions[edit]
So blocks have, for most of the day, been on standard wiki lengths and reasons (and blocked users can't unblock themselves), and I think the deletion reasons are the same. What's with that?--Mikal Harass Follow 00:17, 6 October 2012 (UTC)
- You also can't block yourself (at least when I tried) and I can't get the IP block on me to end for other accounts -.- --Mikal Harass Follow 00:23, 6 October 2012 (UTC)
CSS on the main page is screwed up[edit]
What I said.
00:51, 6 October 2012 (UTC)
Yellow floaty box fail[edit]
SophieWilder 11:38, 6 October 2012 (UTC)
Vandal brake[edit]
Displays <ipbanononly> next to one of the checkboxes -- Nx / talk 19:32, 15 October 2012 (UTC)
- Also, <ipblocklist-username> on Special:VandalBin -- Nx / talk 19:33, 15 October 2012 (UTC)
- AIUI VandalBrake was never quite brought up to 1.19-ness. Trent? - David Gerard (talk) 22:12, 16 October 2012 (UTC)
Messing around with shit[edit]
Having just been rooted on RW, I'm wandering around contemplating the wonders of the server, which is sort of like a cartoon steam engine that's flexing back and forth, making foreboding noises and belching clouds of black smoke. I've done the following:
0. added a shitload of stuff to the Munin monitoring graphs so I know wtf is going on
1. upped memcached from 64MB to 128MB (looking at Munin, I'm not sure it'll do a lot, might try APC too if needed)
2. switched off KeepAlive in Apache (which promptly saved us 400MB)
3. and the biggie: enabled HTML page caching for anonymous users, which should greatly help with the intermittent Reddit-dottings and doesn't break the hit counts at the bottom of the pages.
Let's see how things go - David Gerard (talk) 22:45, 12 October 2012 (UTC)
- We in fact have APC. And its cache hit rate is 99.8%, which can reasonably be described as "optimal" - David Gerard (talk) 23:21, 12 October 2012 (UTC)
- Also just wound PHP-in-Apache allowed memory from 256MB (ridiculous) to 64MB. This shouldn't break anything in MediaWiki (could wind it back to 32MB in theory), and just put free memory over a gig. But let us know if weird shit starts happening on complicated pages - David Gerard (talk) 23:34, 12 October 2012 (UTC)
So basically it is like the TARDIS? Cobbled together from stolen spare parts, and mostly invisible to the user? ħuman
00:21, 15 October 2012 (UTC)
- Not as photogenic. I've just switched KeepAlive back on with a short timeout (2 sec) and only 10 reqs per thread, which should be enough to load CSS/JS/images. We got hit by something a bit under an hour ago that sent the server into swap, in which state it was pretty much unresponsive; I'll see if this will help with that - David Gerard (talk) 22:11, 16 October 2012 (UTC)
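In Apache configuration terms, the tuning described just above would look something like this (a sketch using the numbers quoted in the thread, not the actual live config):

```apache
# Sketch only -- values taken from the thread, not the live config.
KeepAlive On
# Drop idle keep-alive connections after 2 seconds...
KeepAliveTimeout 2
# ...and serve at most 10 requests per connection -- enough to load
# a page's CSS/JS/images without pinning Apache workers for long.
MaxKeepAliveRequests 10
```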
Just asked mediawiki-l for general help. The response from seasoned MediaWiki devs is heartwarming - David Gerard (talk) 23:50, 17 October 2012 (UTC)
LQT edit toolbar not loading[edit]
Try replying to a thread. The loader gif for the toolbar keeps spinning. The request actually completes successfully, so the problem is probably in the javascript. -- Nx / talk 13:50, 13 October 2012 (UTC)
- That's been happening to me for weeks, so I assumed it was an inherent problem with LQT.
Radioactive afikomen Please ignore all my awful pre-2014 comments. 20:42, 13 October 2012 (UTC)
- Yeah, the upgrade broke it. -- Nx / talk 20:47, 13 October 2012 (UTC)
- LQT is officially unmaintained. We basically get to keep both pieces - David Gerard (talk) 21:16, 13 October 2012 (UTC)
- Nx or anyone - do we have a list of what's broken with LQT? I may ask on mediawiki-l and mwusers.com in case anyone else has successfully hacked the damn thing into limping along - David Gerard (talk) 21:43, 13 October 2012 (UTC)
- That, and css and js loading too late, so before the page fully loads, it looks like shit. I just checked, and there's a half-finished LQT 3.0 in git, but I don't know how broken that is, and I don't know if there's a conversion script for the new db schema (probably not). -- Nx / talk 21:46, 13 October 2012 (UTC)
- Or if this one will ever be finished either. Personally I'd look for a conversion script to turn LQT pages into wikitext - David Gerard (talk) 21:49, 13 October 2012 (UTC)
- @Nx: Is that what makes it look like this? (Chatting via LQT is even more difficult than editing MW is normally on my phone. That should not be possible, but it is.)
Radioactive afikomen Please ignore all my awful pre-2014 comments. 22:11, 13 October 2012 (UTC)
David, LQT is not completely dead, it's on life support. The problem is, our version had a few patches, and I don't know where those are (probably somewhere in ~nx) or if Trent upgraded LQT and discarded my patches or what. -- Nx / talk 22:27, 13 October 2012 (UTC)
- Krinkle (MW dev) recommends making sure we get the latest [1]. I'm disinclined to mess with it until I have time to set up rationalbeta.com to be as exactly like rationalwiki.org as possible (libphp rather than cgi, for a start) and then make sure nothing breaks. The idea is not to break the rest of the site by messing with LQT on live. (Mostly I'm waiting for the next Reddit-dotting to see if the minimal messing about I've done so far will hold up.)
- Hmm, is your patched version likely to still be on rationalbeta? - David Gerard (talk) 23:00, 13 October 2012 (UTC)
- Probably not. Is the lqt extension dir still on svn? you could do an svn diff I guess...
- How much disk space does RationalBeta have? Would it be possible to copy the entire database over to beta verbatim? -- Nx / talk 23:09, 13 October 2012 (UTC)
- The master branch has LQT2.0, LQT3.0 is in the lqt-updates branch. -- Nx / talk 23:32, 13 October 2012 (UTC)
- We're not critically short on disk - 94GB used, 71GB free. It's on the to-do list - David Gerard (talk) 23:35, 13 October 2012 (UTC)
- In order to dump the database for back up we have to have atleast the size of the DB free. Tmtoulouse (talk) 23:37, 13 October 2012 (UTC)
- Isn't RationalBeta on a different server?
- Back to the topic at hand, I just checked out MW1.21 alpha and latest LQT and guess what, toolbar doesn't load.... -- Nx / talk 23:40, 13 October 2012 (UTC)
- No everything has been consolidated to one server, with the bots running on my personal cloud. Tmtoulouse (talk) 23:50, 13 October 2012 (UTC)
- Ok, figured it out, LQT has a hardcoded dependency for WikiEditor (the shiny new edit toolbar), and if it's not present, this happens. RW has WikiEditor installed, but either it or LQT or both are out of date. -- Nx / talk 00:02, 14 October 2012 (UTC)
- Meanwhile, LQT3.0 is in a completely unfinished and broken state, enabling it means losing all content on LQT talk pages, and I don't think there's a conversion script. -- Nx / talk 00:22, 14 October 2012 (UTC)
- My psychic powers tell me it'll never be finished either, given LQT2 is in production on WMF wikis despite its experimental status - David Gerard (talk) 00:31, 14 October 2012 (UTC)
- That sounds promising! - David Gerard (talk) 00:31, 14 October 2012 (UTC)
- I'll see what scurvy tricks I can come up with. Things to take slowly and get right - David Gerard (talk) 00:31, 14 October 2012 (UTC)
- Don't try to use LQT style indentation with wikitext, it doesn't work without visual guides. -- Nx / talk 00:53, 14 October 2012 (UTC)
- WHEN I WAS A LAD WE JUST COUNTED THE COLONS. Kids these days - David Gerard (talk) 08:55, 14 October 2012 (UTC)
- LQT is a festering boil on the buttocks of the wiki-world; the sooner it is lanced and the resultant oozing pus is cleansed away, the better. Doctor Dark (talk) 04:36, 15 October 2012 (UTC)
- Your sentiment is 100% accurate and we're stuck with the damn thing - David Gerard (talk) 05:53, 15 October 2012 (UTC)
LQT patches[edit]
One of the patches I made was to prevent someone from editing the sig of a comment without editing the comment text. This was a hack, but it was necessary, because I thought it was possible to edit a sig irreversibly and without any trace. Turns out I was wrong, it does show up in a page's history, though it's not obvious at all, the entry says comment text edited, and it links to a diff that doesn't show anything, because there was no change to the text, only the sig, which is stored separately. It also doesn't show up in Recent Changes, so it's a great vandal target, my patch at least fixed that. I just tested, and it seems the patch is gone. -- Nx / talk 00:58, 14 October 2012 (UTC)
- Tchaaa! Is there a submitted bug for this? - David Gerard (talk) 11:00, 14 October 2012 (UTC)
- Yeah, 36096. -- Nx / talk 18:00, 14 October 2012 (UTC)
- David, please clone the master branch of LQT (keep a backup of the current extension we're using just in case), it should fix the immediate issues, and I'd also like to know if it's compatible with MW19 -- Nx / talk 06:06, 15 October 2012 (UTC)
- Done. Using the handy test thread, it appears worse. (The version installed was the one that was supposed to go with 1.19.) Reverting. I think LQT brokenness is presently not easily fixable, given the updated maintenance version works worse than the one we have already. The devs' reply is "works for us, must be you" - David Gerard (talk) 21:10, 16 October 2012 (UTC)
- Try git master. -- Nx / talk 04:59, 17 October 2012 (UTC)
- That was what I did try (if the above wasn't clear). The one that supposedly goes with 1.19 was what was there before, now it's back to that - David Gerard (talk) 08:00, 17 October 2012 (UTC)
- Ah, I misunderstood you. What was broken with git master? -- Nx / talk 08:04, 17 October 2012 (UTC)
- Multiple spinners, CSS loaded even more unreliably (in a minute's thumping it) - David Gerard (talk) 10:01, 17 October 2012 (UTC)
Hiccups in the last coupla hours[edit]
The server went into swap and thrashed itself to death (not apparently due to lots of hits, it just did it by itself); I restarted Apache, that didn't quite do it, Trent restarted the box itself and it fsck'ed (that's Unix for chkdsk). Everything should be back. This sort of thing is the problem we have, which is underresourcing. Going from the 4GB Linode to the 8GB Linode is the bit that would cost us another $200/mo; Trent's researched this and says this is about the going rate for non-crappy hosting. Expect this to happen from time to time as the box occasionally shudders and farts. (Lucene doesn't help, 'cos it's a fat bastard, but it's also the only search that's any good.) I will be researching and writing a plaintive message to mediawiki-l asking what gaffer-tape we might try in the meantime - David Gerard (talk) 19:48, 17 October 2012 (UTC)
- Actually, sphinx is also very good. There's just the issue of updating the search index (unlike lucene, sphinx needs a complete rebuild of the index every time, but that's offset by the vastly superior rebuilding speed, and the incremental updates feature). There's also a realtime index feature, which doesn't require rebuilding and is always up-to-date, but it is less efficient, especially as the index grows, and it lacks some search features. There's an MW extension for easy integration into MediaWiki. -- Nx / talk 21:05, 17 October 2012 (UTC)
User name character limit[edit]
Could we please have a character limit for user names? When a user name troll or just a random idiot show up with a screed within a name it can really muck up recent changes, especially if they decide to edit. inb4 cries of fascism and sarcastic requests for "nite moad"--"Shut up, Brx." 22:29, 17 October 2012 (UTC)
- What would a reasonable limit even look like? OnTheInternetNobodyKnowsYou'reAGod's own username is 34 characters long.
Radioactive afikomen Please ignore all my awful pre-2014 comments. 23:49, 17 October 2012 (UTC)
- I don't know. I am just trying to open dialogue on what I feel is a minor problem that could use fixing. How about a non-retroactive ban on all user names longer than 25 characters? 40 characters?--"Shut up, Brx." 02:00, 18 October 2012 (UTC)
- If the only problem is aesthetic then there is no problem. Acei9 02:14, 18 October 2012 (UTC)
- Pretty much what ace said. Our username friend is sporadic with account creation and most others are rare occasions of trolling and unless that changes theres not really much other need. --Mikal Harass Follow 02:31, 18 October 2012 (UTC)
- The problem is that it's retroactive, users who have a name longer than the limit won't be allowed to log in. -- Nx / talk 04:50, 18 October 2012 (UTC)
- What? And kill great legacies such as this: User:Mynameisexcessivelylongbutidon'tcarebecauseiamatrollvandalandsockoficewedge,sometimesallthreeatthesametime,actuallyimadethisaccounttotestcharacterlimitsbutwhatever? I won't stand for it!
Radioactive afikomen Please ignore all my awful pre-2014 comments. 05:05, 18 October 2012 (UTC)
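For what it's worth, MediaWiki has configuration settings covering both complaints in this thread; here is a hedged LocalSettings.php sketch ($wgMaxNameChars and $wgInvalidUsernameCharacters are real settings, but the values are arbitrary):

```php
<?php
// Sketch only -- values are arbitrary, not a recommendation.

// Cap usernames at 40 characters. Per Nx's caveat above, this is
// effectively retroactive: existing users with longer names could be
// locked out at login, so audit the user table first.
$wgMaxNameChars = 40;

// Characters forbidden in new usernames (inserted into a regex
// character class). The MediaWiki default blocks only '@'; adding
// '%' prevents the URL-breaking names from the earlier thread.
$wgInvalidUsernameCharacters = '@%';
```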
Blank watchlist[edit]
(from Saloon Bar)
This evening when I click my Watchlist tab all I ever get is a totally blank page -- not even a RW logo or error message. Anybody else, or do computers hate me again? Doctor Dark (talk) 01:16, 13 October 2012 (UTC)
- How many pages do you think you have on your watchlist?--"Shut up, Brx." 01:56, 13 October 2012 (UTC)
- If it had anything to do with size, Human would've mentioned something. He has practically half the site on his watchlist.
Radioactive afikomen Please ignore all my awful pre-2014 comments. 02:06, 13 October 2012 (UTC)
- My guess is it could be related to the changes in memory allocation; drop this note over at tech support. Tmtoulouse (talk) 02:54, 13 October 2012 (UTC)
- Actually, that's alarmingly plausible. Might be memory, might be timeout. Could everyone please try their watchlist? If you get a blank page, how long did it take to return a blank page? - David Gerard (talk) 08:26, 13 October 2012 (UTC)
- Blank page loads promptly, and done. Same thing as yesterday evening. I only watch pages I happen to have edited, no idea how many. Sprocket J Cogswell (talk) 13:00, 13 October 2012 (UTC)
- Hmm, that's disconcerting. Like many here I suspect, my watchlist is the page I usually come to RW on. Mine says at the top "360 pages on your watchlist, not counting talk pages." Of course, you can't see that ...
- (Tech detail: the confusing thing is the Apache logs don't show any "200 0" for Special:Watchlist, at all. So either it's serving something that renders as a blank page, or something before Apache, which should be nothing whatsoever, is showing a blank page. I see a few "200 0" for other pages. A bit over 1% of connections give a 408 request timeout, for example - but the request timeout is 300 seconds, which is quite huge enough.)
- One thing that would really help: to know the IP you're coming from and the approximate time, so I can look in the logs and see what's up. If anyone's getting a blank watchlist, please check your IP and email it to me at dgerard@gmail.com with the time, and I'll see if I can spot it in the logs - David Gerard (talk) 13:31, 13 October 2012 (UTC)
- Nothing in the php error log? It might be something else. -- Nx / talk 13:44, 13 October 2012 (UTC)
- Nothing obvious, though if I get timed error reports I'll look there too. I see 22 memory exhaustion errors in 859,000 server hits in yesterday's log, which is not awful - David Gerard (talk) 14:06, 13 October 2012 (UTC)
- I tried Susan's old DPL-heavy user page after you lowered the memory limit, and it worked; Best of CP also works. Can't think of anything else that would be memory intensive right now. -- Nx / talk 14:10, 13 October 2012 (UTC)
- Make a test account and copy the rows from the watchlist table where the user id is Sprocket's. Then test. Also, btw, do you know on which pages those 22 memory exhaustion errors were? -- Nx / talk 14:25, 13 October 2012 (UTC)
- I'll try the copy-watchlist trick in an idle moment. Dunno what pages, the error log just lists the actual MediaWiki PHP files that errored. S'pose I could correlate with Apache logs, but I'm putting off cleaning the house as it is. Ah, the joy of a new pet (RW) - David Gerard (talk) 14:44, 13 October 2012 (UTC)
- Well, you could do something like this, don't know if it'll conflict with anything MediaWiki does, but if you put it into localsettings, it'll probably be earlier than anything MW is trying to do and will get overwritten when needed. -- Nx / talk 14:56, 13 October 2012 (UTC)
- That is a fantastic idea and I just put it into place - David Gerard (talk) 17:34, 13 October 2012 (UTC)
- 0 reports since I posted the above. Is this still happening? Please work with me here if it is - David Gerard (talk) 17:12, 13 October 2012 (UTC)
- Still happening for me. Sprocket J Cogswell (talk) 17:22, 13 October 2012 (UTC)
- Aha, data! Now to dive in ... - David Gerard (talk) 17:33, 13 October 2012 (UTC)
- ps: if it was in the last few minutes, that was me fecking about to try to log URIs per above - David Gerard (talk) 17:34, 13 October 2012 (UTC)
- Still blank as of a few seconds ago... Sprocket J Cogswell (talk) 17:37, 13 October 2012 (UTC)
- Me too. In response to some earlier questions (1) my watch list is not huge, maybe a few dozen pages (though of course I can't see it now..) and (2) the blank page returns within 5-10 seconds. Doctor Dark (talk) 17:53, 13 October 2012 (UTC)
- BLOODY ARGH nothing in the logs. I'll thump it again after dinner - David Gerard (talk) 18:03, 13 October 2012 (UTC)
- Doctor Dark, please go to Special:EditWatchlist/raw, and if it loads, copy and paste the list somewhere, e.g. a subpage of your userpage. I'd like to try your watchlist, to see if it'll happen to me as well. -- Nx / talk 18:34, 13 October 2012 (UTC)
Copied watchlist[edit]
I copied Doctor Dark's watchlist over to one of my other accounts and it loads for me. Tmtoulouse (talk) 20:02, 13 October 2012 (UTC)
Doctor Dark do you use any gadgets or other java script? Does it error on any browser/computer you use if you have tried more than one? Tmtoulouse (talk) 20:04, 13 October 2012 (UTC)
- Ooh yes, the "other browser" test. Mind you, the user-agents showing the 500 error on Special:Watchlist are pretty diverse in browser and OS - David Gerard (talk) 20:09, 13 October 2012 (UTC)
- My guess is that it isn't a browser per se, but I'm wondering about addons/gadgets/apps etc. Tmtoulouse (talk) 20:12, 13 October 2012 (UTC)
- If it's a 500 error, then there should be something in the php error log. -- Nx / talk 20:13, 13 October 2012 (UTC)
- Edit: LQT hooks into the watchlist to display that "there are new messages" notice, perhaps that's the culprit? -- Nx / talk 20:14, 13 October 2012 (UTC)
- Plausible - LQT throws a few warnings - David Gerard (talk) 20:56, 13 October 2012 (UTC)
- What are they? -- Nx / talk 20:58, 13 October 2012 (UTC)
- Here's the last few days' worth. Jeez, fat bugger with memory. By the way - I think those timestamps are UTC, not EDT (the Apache logs are EDT) - David Gerard (talk) 21:19, 13 October 2012 (UTC)
[11-Oct-2012 02:13:43] PHP Notice: Undefined variable: _SESSION in /home/rationalwiki/public_html/w/extensions/LiquidThreads/classes/View.php on line 363
[11-Oct-2012 02:33:57] PHP Fatal error: Call to a member function getTitle() on a non-object in /home/rationalwiki/public_html/w/extensions/LiquidThreads/classes/View.php on line 87
[12-Oct-2012 17:24:15] PHP Fatal error: Call to a member function getPrefixedText() on a non-object in /home/rationalwiki/public_html/w/extensions/LiquidThreads/api/ApiFeedLQTThreads.php on line 117
[12-Oct-2012 23:57:24] PHP Fatal error: Call to a member function getPrefixedText() on a non-object in /home/rationalwiki/public_html/w/extensions/LiquidThreads/api/ApiFeedLQTThreads.php on line 122
[12-Oct-2012 23:58:19] PHP Fatal error: Call to a member function getPrefixedText() on a non-object in /home/rationalwiki/public_html/w/extensions/LiquidThreads/api/ApiFeedLQTThreads.php on line 122
[13-Oct-2012 00:10:27] PHP Fatal error: Call to a member function getTitle() on a non-object in /home/rationalwiki/public_html/w/extensions/LiquidThreads/classes/View.php on line 87
[13-Oct-2012 01:44:51] PHP Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to allocate 82 bytes) in /home/rationalwiki/public_html/w/extensions/LiquidThreads/classes/Thread.php on line 487
[13-Oct-2012 01:44:59] PHP Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to allocate 83 bytes) in /home/rationalwiki/public_html/w/extensions/LiquidThreads/classes/Thread.php on line 505
[13-Oct-2012 05:25:52] PHP Fatal error: Call to a member function getPrefixedText() on a non-object in /home/rationalwiki/public_html/w/extensions/LiquidThreads/api/ApiFeedLQTThreads.php on line 122
[13-Oct-2012 05:40:22] PHP Fatal error: Call to a member function getPrefixedText() on a non-object in /home/rationalwiki/public_html/w/extensions/LiquidThreads/api/ApiFeedLQTThreads.php on line 122
[13-Oct-2012 06:01:42] PHP Fatal error: Call to a member function getTitle() on a non-object in /home/rationalwiki/public_html/w/extensions/LiquidThreads/classes/View.php on line 87
[13-Oct-2012 12:07:58] PHP Fatal error: Call to a member function getTitle() on a non-object in /home/rationalwiki/public_html/w/extensions/LiquidThreads/classes/View.php on line 87
[13-Oct-2012 12:59:21] PHP Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to allocate 88 bytes) in /home/rationalwiki/public_html/w/extensions/LiquidThreads/classes/Thread.php on line 487
[13-Oct-2012 17:20:44] PHP Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to allocate 32768 bytes) in /home/rationalwiki/public_html/w/extensions/LiquidThreads/classes/Threads.php on line 81
[13-Oct-2012 18:08:25] PHP Fatal error: Call to a member function getTitle() on a non-object in /home/rationalwiki/public_html/w/extensions/LiquidThreads/classes/View.php on line 87
- Doesn't look like there's anything watchlist related in there. LQT modifies the db query so that pages in the thread namespace are left out of the watchlist. If we can get someone who's having the problem to try it before and after, it might be worth trying with this line commented out temporarily in LiquidThreads.php:
$wgHooks['SpecialWatchlistQuery'][] = 'LqtHooks::beforeWatchlist';
- -- Nx / talk 21:30, 13 October 2012 (UTC)
- I think I'm not going to mess with it on a live wiki when I can't duplicate it myself and revert bad changes in seconds, but that would be the next thing if it happens again - David Gerard (talk) 21:41, 13 October 2012 (UTC)
Upping memory[edit]
I've just turned PHP-in-Apache's memory up from 64MB to 96MB. (And there goes our memory headroom *sob*) Doctor Dark, Sprocket, please try again - David Gerard (talk) 21:19, 13 October 2012 (UTC)
- Now I see a watchlist. FWIW, I try to stay away from LQT, because either this Firefox thingy takes its own sweet time parsing it, or somewhat's hosed at server side? I don't know. Different issue. Sprocket J Cogswell (talk) 21:23, 13 October 2012 (UTC)
-
- It works! Sorry for the delay in replying. Doctor Dark (talk) 21:30, 13 October 2012 (UTC)
Human reports a broken watchlist. FUCK. At least I have an IP and time, will try to track down what's going on later - David Gerard (talk) 10:23, 14 October 2012 (UTC)
- [14-Oct-2012 01:05:50] PHP Fatal error: Allowed memory size of 100663296 bytes exhausted (tried to allocate 32 bytes) in /home/rationalwiki/public_html/w/includes/GlobalFunctions.php on line 317 COCK COCK COCK FUCKSOCKS WANKBISCUIT I've upped memory allocation to 112M, will up to 128M if that doesn't fix Human's watchlist (which is probably, indeed, pathological) - David Gerard (talk) 21:45, 14 October 2012 (UTC)
- I was going to mention that I was having watchlist problems again earlier today, but I don't want to be responsible for paying David's diazepam bill. Doctor Dark (talk) 22:21, 14 October 2012 (UTC)
Upped it to 128MB. Chad on mediawiki-l notes that MW is, in point of fact, a fat bastard. PLEASE DO TRY AGAIN, GENTLEPERSONS. (And supply time of blank pages.) - David Gerard (talk) 05:53, 15 October 2012 (UTC)
None in the 14 hours since I did that, so that's nice. I have a stomach bug so will not be attempting anything fancy with a root prompt for now - David Gerard (talk) 18:11, 15 October 2012 (UTC)
Human was still getting a blank page, so I've upped it to 140MB - David Gerard (talk) 22:18, 16 October 2012 (UTC)
And again (and I can see the memory exhaustion in the PHP error log), so upped it to 160MB - David Gerard (talk) 10:00, 18 October 2012 (UTC)
WIGO: CP talk's edit screen templates re: capturebot[edit]
If you begin to edit WIGO:CP Talk, the notice on how to insert Capturebot's tags is followed by "(And then manually screenshot it yourself, as Capturebot is presently down.)". Capturebot is back. Someone should probably remove the instruction to bypass using Capturebot and screencap manually since that's not necessary anymore. I had a dig through the templates list to try and fix it myself, but then I realized that a Wiki template that lives inside of the edit box (i.e., on normal pages) is not going to be able to affect the edit page's UI. I'm guessing it's something in MediaWiki. Ochotonaprincepsnot a pokémon 09:01, 18 October 2012 (UTC)
- For future reference. -- Nx / talk 09:48, 18 October 2012 (UTC)
- I had no idea that was there. Well, that's my something learned for the day, time to go play video games until my thumbs can't move. Thanks for the tip, Nx. Ochotonaprincepsnot a pokémon 11:26, 19 October 2012 (UTC)
Ru[edit]
Now ru.rationalwiki.org redirects to rationalwiki.org. Does somebody know what the problem is?--Mr. B 22:07, 18 October 2012 (UTC)
- We are working on some infrastructure changes to better handle traffic spikes, the likely culprit here, keep reporting any problems you have and will try and get them solved asap. Tmtoulouse (talk) 00:41, 19 October 2012 (UTC)
- Ok.--Mr. B 10:38, 19 October 2012 (UTC)
- One more question. What is the approximate time estimate for the repair?--Mr. B 21:51, 19 October 2012 (UTC)
- It should be fixed as far as I know, its a DNS propagation so maybe hasn't reached you yet. Anyone else not able to access it? Tmtoulouse (talk) 21:59, 19 October 2012 (UTC)
- Mmm... The site is not available from Firefox, and IS available from IE. Thank you. Now, I see that the problem is with my own PC.--Mr. B 22:36, 19 October 2012 (UTC)
- Clear your cache. Tmtoulouse (talk) 22:37, 19 October 2012 (UTC)
Rationalwiki.com[edit]
Dead in the water. I actually enter the .com domain more often than .org. Osaka Sun (talk) 04:32, 19 October 2012 (UTC)
- fixed? 50.130.133.249 (talk) 04:37, 19 October 2012 (UTC)
WIGO voting not working?[edit]
Anyone else seeing problems with the WIGO voting? I couldn't get it to count my up-vote on the (CP) drunk driving entry, and when I tried changing it to a down-vote, it removed someone's up-vote and replaced it with my down, as though I had previously voted on that entry even though I know I hadn't. Wehpudicabok (talk) 06:39, 21 October 2012 (UTC)
- Humm, this might be tech support. I was wondering how the WIGO extension would work through the new Squid proxy and was satisfied when I verified I could up/down things. Is this a consistent failure for you or a hiccup? - David Gerard (talk) 09:22, 21 October 2012 (UTC)
- For me, the drunk driving WIGO on CP is showing +2 atm. If I click "up", nothing happens; if I click "down", it goes to zero. I haven't voted on it before. The one below was showing 77; when I click up it goes to 79 then back down to 77. This seems to be the rule - either nothing happens or it changes by 2. rpeh •T•C•E• 09:40, 21 October 2012 (UTC)
- It changes by 2 because it thinks you have already voted and that you are therefore changing your vote, there's obviously something wrong though because I clicked the latest WIGOWorld up (also standing at 2) and nothing happened while the down arrow went to 0.
ГенгисGum disease
10:37, 21 October 2012 (UTC)
- It doesn't change, it refreshed the value from the server. So if someone votes up at the same time as you vote down, you won't see a change. -- Nx / talk 11:15, 21 October 2012 (UTC)
- Well I'd hate to get too technical with you Nx but if someone votes down surely the up/meh/down bar would change, even if the balance doesn't.
ГенгисOur ignorance is God; what we know is science.
11:27, 21 October 2012 (UTC)
- That's true, but do people notice that? And if someone votes up and you vote down, and assuming there are no neutral votes, it won't change, because the ratio doesn't change. -- Nx / talk 11:35, 21 October 2012 (UTC)
- I voted on the most recent WIGO World for which there were just 2 ups votes and the bar was all green. It didn't change to 3 when I pressed the up but it went down to 0 and equal red/green split when I pressed the down.
ГенгисRationalWiki GOLD member
11:39, 21 October 2012 (UTC)
- 3 down votes came in between the time you opened the page (it's possible that the new caching interferes with that, giving you an older version of the page, though it shouldn't if you're logged in) and voted? -- Nx / talk 11:40, 21 October 2012 (UTC)
- It happens with several different votes. One on the economy had 1-1-1, I gave it a red and there was no change but when I voted meh it went to 2 orange and 0 red, another vote down on one that was big up registered immediately. I am wondering if the cache is holding the most recent and ignoring an incoming of the same type and only acting on it if it is completely different.
Генгисpillaging
11:52, 21 October 2012 (UTC)
- How do test votes go on older WIGO items, not recent ones? I'm seeing what might be others voting at the same time on recent ones, but older ones seem to be behaving as expected - David Gerard (talk) 11:53, 21 October 2012 (UTC)
- One from mid-September that was 100% positive increased OK. But if you've been messing around with backroom stuff maybe there was nothing in the cache for that.
ГенгисIs the Pope a Catholic?
12:06, 21 October 2012 (UTC)
For WIGO addicts like myself, this is getting excruciatingly annoying. Osaka Sun (talk) 19:15, 21 October 2012 (UTC)
- David, I think I know why this is happening. WIGO uses IPs to uniquely identify users. I just tried voting on a few WIGO:CP entries, and it appears as if I had already voted. My guess is that it's getting the proxy's IP, not the user's IP. The code uses getenv ("REMOTE_ADDR"), which isn't correct behind a proxy (I don't know why it works sometimes though, probably bypassing the proxy if it's not in the cache or something). I'll take a look at some core mediawiki code for the correct code to use. -- Nx / talk 19:29, 21 October 2012 (UTC)
My guess is that its pulling the IP address from the squid not the user. Tmtoulouse (talk) 19:27, 21 October 2012 (UTC)
- Hm, EC was resolved automatically, give me a second to find a fix. -- Nx / talk 19:31, 21 October 2012 (UTC)
- Confirmed it's using the proxy IP, the squid passes the user IP using X-Forwarded-For, I will look into it too but if there is any easy command to pull that header info. Tmtoulouse (talk) 19:32, 21 October 2012 (UTC)
- Ok, search for $wgWigo3ConfigStoreIPs is wigo3.php, then put global $wgRequest; before the lines that say (there should be three)
$voter = $wgWigo3ConfigStoreIPs ? getenv ("REMOTE_ADDR") : $wgUser->getName();
- and replace those lines with:
$voter = $wgWigo3ConfigStoreIPs ? $wgRequest->getIP() : $wgUser->getName();
- That should fix it. -- Nx / talk 19:36, 21 October 2012 (UTC)
- Still seems to pull squid ip. Tmtoulouse (talk) 19:51, 21 October 2012 (UTC)
- That's the same code that's used by MediaWiki core, and it uses X-Forwarded-For (the code is a bit more advanced than that though) -- Nx / talk 19:53, 21 October 2012 (UTC)
- Did some quick tests, one window where I'm logged in, and an incognito window logged out, tested by voting down in one window, then up in another, and also by voting up in one, then up in the other. Sometimes it didn't work, but I can't reproduce it now. -- Nx / talk 20:10, 21 October 2012 (UTC)
- Purged the cache, hopefully fixed. Tmtoulouse (talk) 20:13, 21 October 2012 (UTC)
- Then it was the cache. It seems to be working fine now. -- Nx / talk 20:20, 21 October 2012 (UTC)
- BTW, all the extensions we use that you wrote - are they in github or similar, or are the copies on the server basically the copies? (That and the copy of the /w directory I just downloaded.) - David Gerard (talk) 21:42, 21 October 2012 (UTC)
- The latter, and even I don't have a recent copy of them. -- Nx / talk 05:27, 22 October 2012 (UTC)
- Heh. Drop me a line and I'll email you a zip of the ones I just grabbed, just of course ask for any others. We should put this stuff up somewhere with a mediawiki.org page, WIGO is pretty important to the site (and if it works through a Squid, then it's the sort of thing other sites would like) - David Gerard (talk) 05:36, 22 October 2012 (UTC)
- I'm too busy to go into detail but I just tried voting in the Saloon Bar and that still has the same problem.
Генгисmarauding
00:07, 22 October 2012 (UTC)
- Fixed.50.130.133.249 (talk) 00:23, 22 October 2012 (UTC)
Search[edit]
Search isn't working again; only returning exact title matches. This is a recurrent problem. Wèàšèìòìď
Methinks it is a Weasel 11:55, 21 October 2012 (UTC)
- Thought it was Lucene, restarted Lucene, it wasn't, now puzzled. Investigating - David Gerard (talk) 13:23, 21 October 2012 (UTC)
Aware of this too. Tmtoulouse (talk) 19:28, 21 October 2012 (UTC)
- I got no frickin' clue. How did we fix this before? Or did we just wait for it to magically come good? - David Gerard (talk) 19:59, 21 October 2012 (UTC)
- Previous faults where lucene being killed by oom manager. Current fault has something to do with being behind proxies. Investigating. Tmtoulouse (talk) 20:13, 21 October 2012 (UTC)
- Should be fixed. Tmtoulouse (talk) 20:30, 21 October 2012 (UTC)
- \o/ Please email me what you did :-) - David Gerard (talk) 21:41, 21 October 2012 (UTC)
- Nice work Tmt. ₩€₳$€£ΘĪÐ
Methinks it is a Weasel 22:19, 21 October 2012 (UTC)
Log in issue[edit]
Even though I'll log in and check the "Remember my login on this browser (for a maximum of 180 days)" box, I keep getting logged out after a certain number of page views. I'm not sure if it's the site or something on my end, but was wondering if anyone had an idea or is experiencing the same thing. Sam Tally-ho! 07:41, 21 October 2012 (UTC)
- I will put it on the list of things to look into. Tmtoulouse (talk) 23:02, 21 October 2012 (UTC)
- Thanks! Sam Tally-ho! 04:04, 22 October 2012 (UTC)
- I have attempted a fix, let me know if it is still a problem. Tmtoulouse (talk) 17:58, 22 October 2012 (UTC)
New error[edit]
This is a new one, happened a few minutes ago. Evil fascistoh noez 19:22, 22 October 2012 (UTC)
- That's weird, it looks like the error is in the browser. A google search brought up this bug report. Maybe try disabling some extensions/another browser etc. -- Nx / talk 19:28, 22 October 2012 (UTC)
WTF ... Mind you, Trent and I have been messing severely with the servers (servers, plural!) this evening, trying to track down weirdness a few hours ago. So if anything odd has manifested in the past coupla hours, it may well have been me - David Gerard (talk) 22:21, 22 October 2012 (UTC)
Rename[edit]
Could someone please rename me? I would like to be called "The Municipal Hero." Thank you. Good username (talk) 12:07, 23 October 2012 (UTC)
- Done. Evil fascistoh noez 13:49, 23 October 2012 (UTC)
Slow and trim missing[edit]
Just noting I've noticed this and am poking and prodding - David Gerard (talk) 19:08, 25 October 2012 (UTC)
- Ah, site crapped itself in various ways (Apache went nuts, the out-of-memory-killer came out to play and broke stuff). Trent has fixed - David Gerard (talk) 19:33, 25 October 2012 (UTC)
Please[edit]
can somebody with superpowers change my nick back? Otherwise I'll nationalise your servers. --PsyGremlinParlez! 13:23, 2 November 2012 (UTC)
- Done. Evil fascistoh noez 13:35, 2 November 2012 (UTC)
- Ta muchly! --PsyGremlin話しなさい 13:52, 2 November 2012 (UTC)
Tiny text[edit]
Disappearing Edits[edit]
I am having edits disappearing from recent changes (which is why the other day/night I accused Nx of some tech wizardry in blocking me because the block-log kept disappearing on me), not only that talkpage edits don't appear when not logged in then appear when again hen I log in. Weird. Would you like me to screen shot it? Acei9 05:34, 25 October 2012 (UTC)
- "... talkpage edits don't appear when not logged in then appear when again hen I log in." That's because of squid caching - if you are not logged in, you are getting a cached version of the page. I don't know about the Recent Changes issue.--ZooGuard (talk) 06:09, 25 October 2012 (UTC)
- I must have slept through the last wiki-fu class, squid caching? Acei9 06:26, 25 October 2012 (UTC)
- Anonymous users get a cached version of the page, which might be out of date. But what you describe shouldn't be happening. What can happen is that when you're logged out, the most recent changes don't appear on the list. -- Nx / talk 06:32, 25 October 2012 (UTC)
- Just tested that too, and it isn't happening for me. -- Nx / talk 06:33, 25 October 2012 (UTC)
- Anonymous users get a cached version of the page Why?
- When I get this weirdness again I'll take some screen shots. Acei9 06:34, 25 October 2012 (UTC)
- If you read some of the posts above you might see that David Gerard has been given the keys to the toy cupboard and he's been messing around with back room stuff to improve the site's traffic handling. As far as I understand it, part of this has been the installation of Squid which caches pages so that the server is not being continually hit with new requests. This helps smooth the burden and means we aren't being forced to upgrade our hosting package which obviously costs money. (Hope I've got this right.)
ГенгисYou have the right to be offended; and I have the right to offend you.
06:58, 25 October 2012 (UTC)
- Pretty much. Trent put it into place because he had important Ph.D work to procrastinate on and I was sick for a week. RWF is paying slightly more money (three extra small nodes) and the wiki has sped the fuck up. Modulo a few hiccups, as detailed above. The cached page, served to anons, shouldn't be terribly older than the uncached version logged-in users get, but it would explain the behaviour Ace is seeing. Nx doesn't use tech wizardry powers any more, it's all me and Trent now, with Nx merely providing enormously helpful information on where the bodies were buried - David Gerard (talk) 07:24, 25 October 2012 (UTC)
- Thank you gentlemen, I shall cease worrying and go back to my ruminations and recriminations. Acei9 07:34, 25 October 2012 (UTC)
- Well if you are still getting problems like you described we should try and track it down. Screen shots are helpful, but keep us informed. Tmtoulouse (talk) 16:13, 25 October 2012 (UTC)
Any sign of this weirdness still? - David Gerard (talk) 21:49, 4 November 2012 (UTC)
{{talkpage}}[edit]
Something appears to be wrong with this template's "create an archive" function. When I try to use it on Essay_talk:AD's_Beliefs, it just puts me on a page with "Bad Title."--
talk
04:41, 26 October 2012 (UTC)
- I'll take a look, probably a bit later. -- Nx / talk 04:53, 26 October 2012 (UTC)
Math tag[edit]
There is a little question. How to enable a math tag?--Mr. B 23:19, 29 October 2012 (UTC)
- Should be easy enough, the math tag works on English RW, it just hasn't been installed on ru.rw. I'll try it tomorrow night (it's bedtime right now) - David Gerard (talk) 23:48, 29 October 2012 (UTC)
- Thank you!--Mr. B 01:09, 30 October 2012 (UTC)
- I still haven't done this. I'll let you know when I do - David Gerard (talk) 17:33, 1 November 2012 (UTC)
- I've enabled the extension and put in the config identically to en.rw. I have no idea if it works, please test and let me know - David Gerard (talk) 23:20, 1 November 2012 (UTC)
Of course, there's the small detail that it doesn't work on en.rw, e.g. <math>U_e = \int {\frac{E A_0 \Delta L} {L_0}}\, d\Delta L = \frac {E A_0} {L_0} \int { \Delta L }\, d\Delta L = \frac {E A_0 {\Delta L}^2} {2 L_0}</math> gives Failed to parse (unknown error): U_e = \int {\frac{E A_0 \Delta L} {L_0}}\, d\Delta L = \frac {E A_0} {L_0} \int { \Delta L }\, d\Delta L = \frac {E A_0 {\Delta L}^2} {2 L_0}. This has several corrective approaches, I'll apply some and see how we go - David Gerard (talk) 16:41, 2 November 2012 (UTC)
Now working (set $wgTexvc to the binary, not the path):
The above works for me on ru.rw too - David Gerard (talk) 21:48, 4 November 2012 (UTC)
Apache tweaks[edit]
I've set MaxClients 50 in the hope this will keep the server from completely shitting itself in busy moments. From the fact that (a) busy Apache processes seem to be quite large in top (b) the CPU goes right up in said moments (c) MySQL gets busy, I'm slightly suspecting this is not an outside traffic surge, but logged-in editors doing expensive things (e.g. centre-clicking a whole bunch of diffs in a go). Editors should be able to do this stuff, but they may have to wait in a queue for their request to be fulfilled so that the server doesn't actually collapse and need a reboot. Let me know if it gets particularly annoying - David Gerard (talk) 18:44, 30 October 2012 (UTC)
- So far the server's been blissfully happy, even with occasional load peaks and with PHP back to being allowed 256MB memory (Human's pathological watchlist still runs out of memory, but at least it's not more broken than before I touched it) - David Gerard (talk) 21:52, 4 November 2012 (UTC)
- And of course the bloody thing got knocked over this morning. Gah - David Gerard (talk) 14:19, 11 November 2012 (UTC)
File templates[edit]
When I upload a file both the Info and Fair Use templates appear unformatted, displaying hexcodes instead of the CRLF and pipe characters.
ГенгисOur ignorance is God; what we know is science.
10:09, 7 November 2012 (UTC)
- You mean in recent changes? -- Nx / talk 10:40, 7 November 2012 (UTC)
- What precisely do you do, step by step, to get this to happen? Do you have an example upload where we can see this? I'm not clear on what you're describing - David Gerard (talk) 10:41, 7 November 2012 (UTC) | https://rationalwiki.org/wiki/RationalWiki:Technical_support/Archive7 | CC-MAIN-2020-24 | refinedweb | 9,753 | 69.52 |
Linked List Implementation in Java
You are most likely familiar with arrays. They have great access times: essentially instantaneous. The caveat is their inflexibility: they cannot grow or shrink in size, and to store a dynamic number of objects, one must create a new array each time.
Let's say you wanted to store a list of guests currently staying at the hotel. You could use an array, but it would be rather difficult to shrink or expand the number of guests. This is where linked lists come in!
Linked lists are stored as a list of nodes, starting at a head and pointing to the next node in the list. This is an incredibly simple example, so hopefully it is easy to digest!
Nodes
In Java, the nodes for this example look a little something like this:
public class HotelNode {
    String guestName;
    HotelNode next;

    public HotelNode() {
        guestName = null;
        next = null;
    } // HotelNode

    public HotelNode(String guestName) {
        this.guestName = guestName;
        next = null;
    } // HotelNode

    public HotelNode(String guestName, HotelNode next) {
        this.guestName = guestName;
        this.next = next;
    } // HotelNode

    public HotelNode getNext() {
        return next;
    } // getNext

    public String getGuestName() {
        return guestName;
    } // getGuestName

    public void setNext(HotelNode next) {
        this.next = next;
    } // setNext

    public void setGuestName(String guestName) {
        this.guestName = guestName;
    } // setGuestName
} // class
Each node has both a value (in this case a guest name), and a pointer to the next node. There are three constructors- a default, one with a name value supplied, and one with both a name value and a next HotelNode. Then, there are getters and setters for both the value and the next reference.
Feel free to add other instance variables to your node class; in this example I could have included the age of a guest or the room number they stayed in. Don't feel limited to just having a single value and a next reference!
Code for a Linked List Class
Here is a simple linked list. Note that there is only one variable pointer to the head. Also note that the last node has a null value for its next pointer; this signifies the end of the list.
The skeleton of the class is as follows:
// Class to store HotelGuest names; implements several other interfaces,
// but the most important one is List
public class HotelGuestList implements List {
    private HotelNode head;

    // other methods to be implemented
} // HotelGuestList
Traversing the Linked List
In order to traverse the linked list, you must follow the pointers to the next node, which is slower than an array's access.
// n must be less than size() and nonnegative
// returns the node at position n in the list
public HotelNode nodeAt(int n) {
    HotelNode searcher = head;
    for (int i = 0; i < n; i++) {
        searcher = searcher.getNext();
    } // for
    return searcher;
} // nodeAt
Adding and deleting
Adding and deleting elements is considerably easier than in an array. Simply move the pointers accordingly. For instance, let's say that I wanted to add John in between Sally and Tim in the above example.
First, I would make a new node with John as the guest name and the next value pointing at Tim.
Then, I would set Sally's pointer to John's node. Voila! Easy as that. Think about how hard that would be with an array- you'd have to shift all of the indexes to the right or even make a new array altogether and copy all of the values.
This is the implementation:
// adds a new node at a specified index
public boolean add(int index, String name) {
    if (name == null || index > size() || index < 0) {
        return false;
    } // if
    if (index == 0) {
        head = new HotelNode(name, head);
    } else {
        HotelNode newNode = new HotelNode(name, nodeAt(index));
        nodeAt(index - 1).setNext(newNode);
    } // else
    return true;
} // add
Deleting is even simpler. To delete, simply set the previous element another element ahead so that it skips the deleted node. For instance, if I wanted to delete the node at position 5, I would just set the next value of the element at 4 to the element at 6. 5 is lost, and there is no pointer to it. Easy peasy!
// removes node at index and returns content of removed node
// index must be between 0 and size() - 1, inclusive
public String remove(int index) {
    String removed = nodeAt(index).getGuestName();
    if (index == 0) {
        head = head.getNext();
    } else {
        // link the previous node past the removed one (safe for the last node too)
        nodeAt(index - 1).setNext(nodeAt(index).getNext());
    } // else
    return removed;
} // remove
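To see the pieces above working together, here is a condensed, self-contained driver (the class and guest names are illustrative; the real `HotelGuestList` would also implement the rest of the `List` interface):

```java
// Condensed version of the list operations above, runnable on its own.
public class HotelDemo {
    static class Node {
        String guest;
        Node next;
        Node(String guest, Node next) { this.guest = guest; this.next = next; }
    }

    static Node head; // empty list to start

    // walk n links from the head (O(n) traversal)
    static Node nodeAt(int n) {
        Node searcher = head;
        for (int i = 0; i < n; i++) searcher = searcher.next;
        return searcher;
    }

    static void add(int index, String name) {
        if (index == 0) head = new Node(name, head);
        else nodeAt(index - 1).next = new Node(name, nodeAt(index));
    }

    static String remove(int index) {
        String removed = nodeAt(index).guest;
        if (index == 0) head = head.next;
        else nodeAt(index - 1).next = nodeAt(index).next;
        return removed;
    }

    public static void main(String[] args) {
        add(0, "Tim");
        add(0, "Sally");               // Sally -> Tim
        add(1, "John");                // Sally -> John -> Tim
        System.out.println(remove(1)); // prints John; back to Sally -> Tim
        System.out.println(nodeAt(0).guest + " -> " + nodeAt(1).guest);
    }
}
```

Note how no elements are shifted on insert or delete; only two pointer assignments happen, exactly as in the Sally/John/Tim walkthrough.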
Summary
To recap: linked lists are good for adding and deleting elements, which is advantageous for activities such as dynamic storage of changing elements (i.e. guests checking in and out of a hotel). However, they are not as good at retrieval and traversing the items as arrays are. For a linked list, retrieval is O(n) compared to random access in an array.
Hopefully this was an easy way for you to become briefly acquainted with the idea of a linked list in Java! For more reading, go to this OpenGenus article: Java Collections: Linked List
Questions for further consideration:
- Consider if each node pointed to two other nodes. How would this improve search time? Storage?
- Is there a way we can combine arrays and linked lists to further improve search, retrieval, and storage? See hash maps for further reading.
- Java's implementation of a linked list is doubly linked, so that each element has a pointer to the nodes both before and after it. How does this affect traversal speed?
Thank you for reading! | https://iq.opengenus.org/linked-list-implementation-in-java/ | CC-MAIN-2021-43 | refinedweb | 914 | 70.53 |
You must have heard the buzz about IOT. Internet of Things that is.
It is supposed to fix the European economy and cure cancer. The industry heavies are waiting for IOT to solve all our problems ...
Well, in the meantime us regular dudes can actually do something without spending huge amounts of time or cash to get our feet wet in IOT by utilizing ... You guessed it ... the good old PaaS approach.
So today we are going to build an infinitely scalable input point and send in some random temperature readings.
Here's a picture of the overall architecture of a generic IOT solution running on Azure PaaS offering.
We have our gadgets that produce events, optionally some gateways to aggregate messages or transform protocols, Event hubs to catch all the traffic, then some more or less realtime analysis and storage.
So what's new ? ... nothing really.
Other than the fact that now we can do this in an afternoon without building a costly infrastructure first. We do this by utlizing PaaS components and by doing so we get the benefit of world class scalability and security by design. But looking at this purely from a functional standpoint ... it's all stuff that we have been doing for a long ... long time.
Enough talk, let's actually do something...
- Create Service Bus namespace to contain our event hub
- Create event hub to catch the message from field
- Build a small Simulator to send in events
- Create a Stream Processing Job to do some mild analysis
- Store the data for Part II where we will try to do some reporting and analysis
Namespace
The namespace is a manageable unit that can house several different services like Queues, Topics and Event Hubs, which we will be using today. One interesting thing about the namespace is that it controls the scaling of the Services it contains.
Namespace does this via Throughput Units. One TU can take in roughly 1MB per second. Given a 12 hour day that makes about 50GB per day. Not bad.
TU's are also a billable resource. One TU in standard mode costs about 0.0224€ per hour, which totals about 17€ per month. So cost isn't the thing keeping Your company from testing IOT.
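Those figures are easy to sanity-check with a few lines (assumptions taken from the text: 1 TU ≈ 1 MB/s, a 12-hour day, and the quoted 0.0224 €/hour standard-mode rate):

```java
public class ThroughputMath {
    public static void main(String[] args) {
        long bytesPerSecond = 1_000_000L;                       // 1 TU ~ 1 MB/s
        long gbPerDay = bytesPerSecond * 12 * 3600 / 1_000_000_000L;
        // ~43 GB, i.e. on the order of the "roughly 50GB" above
        System.out.println(gbPerDay + " GB per 12-hour day");

        double euroPerMonth = 0.0224 * 24 * 30;                 // quoted TU price, 30-day month
        // ~16 EUR, in the ballpark of the ~17 EUR quoted
        System.out.printf("%.2f EUR per month%n", euroPerMonth);
    }
}
```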
Pay attention to this! This is the point that decides how scalable Your solution's intake will be and how costly it will be to run. With this slider You can easily provision up to 20 TU's for running Your ingress.
Event Hub
An Event Hub is the actual endpoint to which all the gadgets send their events. You can use partitions and partition keys in Your data to slice the data up, but if they are not used the partitions will be used evenly for all data.
Remember to configure a Shared Access Policy for Your Event Hub or the Job will not be able to find it later. I created a policy called "manage" and ticked all the boxes for access.
Stream Analytics Job
A Job is actually a very simple thing connecting the ingress queue with long-term storage and performing some optional analysis on the fly.
The Job's monitoring storage account should reside in the same region as the Job itself or You will be paying for the traffic.
A Job consists of three moving parts:
Input: Here You will reference Your Event Hub.
Output: The long-term storage can be various things; I chose SQL Database in the same region that the Job is in. Remember to enable access for other Azure Services in the DB's config or You will spend some time wondering why the Job is not running.
There needs to be a table in the Sql database that matches the structure that Your "query" produces and it needs to be named in the outputs-config, so here goes:
CREATE TABLE [dbo].[AvgReadings](
[WinStartTime] DATETIME2 (6) NULL,
[WinEndTime] DATETIME2 (6) NULL,
[DeviceId] BIGINT NULL,
[AvgTemperature] FLOAT (53) NULL,
[EventCount] BIGINT null
);
CREATE CLUSTERED INDEX [AvgReadings] ON [dbo].[AvgReadings]([DeviceId] ASC);
Query: This is where the realtime analysis and the overall definition of the data structure happen. Basically You create a query and whatever it produces will be the new datatype and it will be stored in "output".
A simple query could be like this : "SELECT DeviceId, Temperature FROM input" - this would just store the values received into output-table for later usage.
If You want to do some aggregation or math with the data before it is written to storage You can use some clever SQL:
In this case we will just average some values over a 5-second window:
SELECT DateAdd(second, -5, System.TimeStamp) AS WinStartTime,
       System.TimeStamp AS WinEndTime,
       DeviceId,
       Avg(Temperature) AS AvgTemperature,
       Count(*) AS EventCount
FROM input
GROUP BY TumblingWindow(second, 5), DeviceId
Note that "input" needs to refer to the alias name you gave when defining your Job's input.
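To make the tumbling-window semantics concrete, here is a small Python simulation of what that query computes over a batch of timestamped readings. This is plain Python, not Stream Analytics; it just mimics how fixed, non-overlapping 5-second windows group and average the data per device.

```python
from collections import defaultdict

def tumbling_avg(readings, window_seconds=5):
    """Group (timestamp_seconds, device_id, temperature) readings into
    fixed, non-overlapping windows and average per device: the same
    shape TumblingWindow(second, 5) produces."""
    buckets = defaultdict(list)
    for ts, device_id, temp in readings:
        win_start = int(ts) // window_seconds * window_seconds
        buckets[(win_start, device_id)].append(temp)
    return {
        (start, start + window_seconds, device_id): (sum(t) / len(t), len(t))
        for (start, device_id), t in buckets.items()
    }

readings = [(0.5, 1, 20.0), (3.9, 1, 22.0), (6.2, 1, 25.0)]
result = tumbling_avg(readings)
# Window 0-5 for device 1 averages 21.0 over two events;
# window 5-10 holds the single 25.0 reading.
```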
Go to your Job's dashboard and hit Start, and the system should be ready to accept some input.
Creating some events
The most important things in our little Simulator client are:
NamespaceManager to connect to our namespace, with credentials stored in app.config.
EventHubClient for accessing the API.
EventData to house our data structure.
EventHubClient.SendAsync for actually sending.
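The simulator itself is a .NET client (link at the end of the post), but the shape of what it sends is easy to sketch. Below is a hypothetical Python stand-in that serializes one reading into the JSON body an EventData message would carry; the field names DeviceId and Temperature are assumed to match the Stream Analytics query above, and the send step is stubbed out rather than calling the real API.

```python
import json
import random

def make_event_body(device_id: int) -> bytes:
    # The body of an EventData message is just bytes; JSON keeps it
    # readable and lets the Stream Analytics query pick fields by name.
    reading = {
        "DeviceId": device_id,
        "Temperature": round(random.uniform(18.0, 30.0), 2),
    }
    return json.dumps(reading).encode("utf-8")

def send(device_ids):
    # Stand-in for EventHubClient.SendAsync: just collect the payloads.
    return [make_event_body(device_id) for device_id in device_ids]

payloads = send([1, 2, 3])
decoded = [json.loads(p) for p in payloads]
# Every payload carries the two fields the downstream query expects.
```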
Don't worry, there will be a link at the end where you can get the client code.
Running this feeds some events into our pipeline.
Note that the second parameter needs to be the name of your Event Hub inside your namespace.
Querying the database shows that we are indeed getting events in, and that we are able to perform some light operations on them too.
Conclusions and next steps
So, IoT isn't that hard after all.
By using some PaaS Services we were able to make a solution that can ingest millions of messages per second. By using some good old SQL we were also able to enrich and analyze the incoming data stream.
Not bad.
The next part of this article will elaborate on the client side: we will be sending events from a scripting language, probably Python, and we'll use a Raspberry Pi as the client if all goes well.
We will also do some reporting and maybe some Machine Learning.
Stay tuned.
In case you have problems finding your way around the Azure management screens, there is a more step-by-step oriented tutorial available here; it also has the source code for the client used.
Created on 2014-01-11 03:09 by tabrezm, last changed 2014-10-17 16:12 by zach.ware. This issue is now closed.
In pyconfig.h (line 216), there is this line:
#define hypot _hypot
This conflicts with the definition of _hypot that ships with VS2010 (math.h, line 161):
static __inline double __CRTDECL hypot(_In_ double _X, _In_ double _Y)
The result of the redefinition is that the following warning occurs during compilation:
1>c:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\include\math.h(162): warning C4211: nonstandard extension used : redefined extern to static
When compiling with warnings being treated as errors, this will obviously result in failed builds.
The recommendation from Microsoft (see) is to change the definition to:
#if _MSC_VER < 1600
#define hypot _hypot
#endif
How are you compiling to get that warning? I've never seen it, and none of the Windows buildbots do either. Also, which version of Python are you compiling on which version of Windows?
v100 toolset, with compiler setting /W4. Microsoft recommends W4 for all new C++ projects (see). I'm using 3.3.
Sorry, I realize I didn't mention this in the original post. I'm getting this compiler warning when building my Python extension, not when building Python itself.
We have a similar issue in Blender (where Python is embedded), but it's actually causing a crash instead of only a compiler warning:
The code in the Visual Studio 2013 math.h looks like this:
__inline double __CRTDECL hypot(_In_ double _X, _In_ double _Y)
{
return _hypot(_X, _Y);
}
With the #define hypot _hypot that becomes:
__inline double __CRTDECL _hypot(_In_ double _X, _In_ double _Y)
{
return _hypot(_X, _Y);
}
So you get an infinite recursion when calling hypot, and the application crashes. The patch fix20221.patch that was attached here solves the issue.
I'm having difficulty wrapping my head around why the math and cmath modules (both of which use hypot) compile fine, but your extensions don't. Anyone have any insight into why that is?
My extension doesn't compile because I treat warnings as errors. I believe Blender compiles fine, but crashes at runtime because of the infinite recursion (see the second code snippet in Brecht's response).
Sorry, I wasn't entirely clear. By "compile fine", I meant "compiles without warnings, and actually works when you try to use it". Both math and cmath make use of hypot (as defined in pyconfig.h), but don't have any problems.
They also have no problems with your patch applied, but I don't feel I understand the situation enough yet to move the patch forward.
For Visual Studio 2013, here's how to redo the problem. Take this simple program:
#include <Python.h>
int main(int argc, char **argv)
{
return (int)hypot(rand(), rand());
}
And compile it:
cl.exe test.c -I include/python3.3 lib/python33.lib /W4
c:\program files (x86)\microsoft visual studio 12.0\vc\include\math.h(566) : warning C4717: '_hypot' : recursive on all control paths, function will cause runtime stack overflow
I don't know about the conditions to get a warning in VS 2010, never tried that version.
Your test program works for VS2010 as well (/W4 is unnecessary, the default warning level gives the warning), but still doesn't answer the question of why the math module (specifically math.hypot) doesn't show the problem.
I understand why both of your cases *don't* work, I want to understand why mathmodule.c and cmathmodule.c (and complexobject.c, for that matter) *do* work. Attempting to compile mathmodule.c alone results in the warning, and even picking through mathmodule.i as produced by preprocessing to file, it looks like math_hypot should always have the problem.
The fact that math_hypot works when compiled with the rest of Python frankly frustrates me quite a lot, because I can see no reason why it should.
Ok, I had missed that the warnings in your two separate cases were in fact different. I still don't understand why VC++ sees the two cases so differently (throwing different warnings), but it at least explains the different results in your two cases.
I don't see how this change can actually break anything, so I'll go ahead and commit so this makes it into 3.3.5 (and hopefully 3.4.0, but that will be up to Larry Hastings).
New changeset 9aedb876c2d7 by Zachary Ware in branch '3.3':
Issue #20221: Removed conflicting (or circular) hypot definition
New changeset bf413a97f1a9 by Zachary Ware in branch 'default':
Issue #20221: Removed conflicting (or circular) hypot definition
Fixed, thanks for the report and patch!
New changeset 033d686af4c1 by Zachary Ware in branch '3.4':
Issue #20221: Removed conflicting (or circular) hypot definition
Any chance this would be merged to 2.7 as well?
The comment from 2014-01-11 06:33:18 says "2.7" but PC/pyconfig.h still has #define hypot _hypot
New changeset 430aaeaa8087 by Zachary Ware in branch '2.7':
Issue #20221: Removed conflicting (or circular) hypot definition
It grafted very easily, so it turns out to be "yes" :)