text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Hello Coders, This article is a soft and practical introduction to Django Routing System. The sample we will code during this tutorial, in the end, will implement three routes: a default route that shows a classic Hello World, a 2nd route that displays a random number at each page refresh, and the last route will show a random image pulled from the internet. Thanks for reading! - Content provided by App Generator. - Django Routing Sample - the source code (Github/MIT License) - More Django Samples provided with authentication, basic modules What - official website - Django Docs - recommended starting point for every aspire Django developer Let's Code Django Check Python Version - recommended version is Python3 $ python --version Python 3.8.4 <-- All good, we have a 3.x version Create/activate a virtual environment - Unix-based system $ virtualenv env $ source env/bin/activate For Windows, the syntax is slightly different $ virtualenv env $ .\env\Scripts\activate Install Django $ pip install django Create a new Django Project $ mkdir my-django-sample $ cd my-django-sample Inside the new directory, we will invoke startproject subcommand: $ django-admin startproject config . Note: Take into account that . at the end of the command. Setup the database $ python manage.py makemigrations $ python manage.py migrate Start the app $ python manage.py runserver $ $ # Access the web app in browser: At this point we should see the default Django page in the browser: Create a new Django application $ python manage.py startapp sample Add a simple Django Route Let's edit sample/views.py as shown below: def hello(request): return HttpResponse("Hello Django") Configure Django to use the new route - update config/urls.py as below: from django.contrib import admin from django.urls import path from django.conf.urls import include, url # <-- NEW from sample.views import hello # <-- NEW urlpatterns = [ path('admin/', admin.site.urls), url('', hello), # <-- NEW ] In other words, the default route is served by hello method defined in sample/views.py. On access the root page, we should see a simple Hello Word message: New Route - Dynamic content Let's create a new route that shows a random number - sample/views.py. ... from random import random ... def myrandom(request): return HttpResponse("Random - " + str( random() ) ) The new method invoke random() from Python core library, converts the result to a string and returns the result. The browser output should be similar to this: New Route - Random Images This route will pull a random image from a public (and free) service and inject the returned content into the browser response. To achieve this goal, we need a new Python library called requests to pull the random image with ease. $ pip install requests The code for the new route should be defined in sample/views.py. ... import requests ... def randomimage(request): r = requests.get('') return HttpResponse( r.content, content_type="image/png") To see the effects in the browser, the routing configuration should be updated accordingly. # Contents of config/urls.py ... from sample.views import hello, myrandom, randomimage # <-- Updated ... urlpatterns = [ path('admin/' , admin.site.urls), url('randomimage' , randomimage), # <-- New url('random' , myrandom), url('' , hello), ] Here is a sample output - randomly selected from a public service: Thanks for reading! Feel free to AMA in the comments section. More Django Resources - Read more about Django (official docs) - Start fast a new project using development-ready Django Starters Discussion (8) Nice and simple. Keep writing! Ty! More (simple) tutorials will come. 🚀🚀 Loved the simple article! Can someone tell me why '.' was added at the end of startproject command? Django documentation did not have that so was wondering what that does. I am kinda new django and was trying to learn as much as I can. Thank you! The .means to create the product in the current directory. If you use the option to provide a directory name my_projectfor instance, Django will generate something like this: my_project\my_project\settings... In my projects i prefer to isolate the initial Django project in something names coreor config. Anyway, feel free to code the project in your own way. 🚀🚀 You should rename this tutorial to How to display a random cat with Django. P.S. Really nice content! :) .. Thanks for your suggestion. For the next article I will listen your advice and choose a funny title. Thank for writing! Yw! More will come. Feel free to suggest a Django-related topic.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/sm0ke/django-routing-a-practical-introduction-2m6e
CC-MAIN-2021-31
refinedweb
722
59.09
Docs | Forums | Lists | Bugs | Planet | Store | GMN | Get Gentoo! Not eligible to see or edit group visibility for this bug. View Bug Activity | Format For Printing | XML | Clone This Bug version : 0.0.5 Build fails during configure saying it can't find popt.h which exists... configure: error: *** Cannot find popt.h header !!! ERROR: The ebuild did not complete successfully. !!! Function econf, Line 9, Exitcode 1 !!! econf failed about popt.h : root@ghort666:/tmp> stat /usr/include/popt.h File: `/usr/include/popt.h' Size: 13154 Blocks: 32 IO Block: 4096 Regular File Device: 3a00h/14848d Inode: 374141 Links: 1 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2002-10-15 06:08:49.000000000 +0200 Modify: 2002-10-15 06:08:49.000000000 +0200 Change: 2002-10-15 06:08:49.000000000 +0200 ok, what is in config.log: configure:5963: checking for popt.h configure:5973: /lib/cpp -I/usr/include conftest.c >/dev/null 2>conftest.out cpp0: warning: changing search order for system directory "/usr/include" cpp0: warning: as it has already been specified as a non-system directory configure: failed program was: #line 5968 "configure" #include "confdefs.h" #include <popt.h> both lines with cpp0:... are the whole stderr output of the above cmdline, and configure does : ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` and checks if ac_err is null, it isn't because it contains both lines beginning by cpp0:... so configure fails... thinking the output was like "popt.h : not found" I don't know how these configure scripts work, so I've got no idea of where to fix that... Hope you've got all needed infos ... Arnaud gcc 3 issue it looks like I committed a new ebuild with a patch that eliminates the cause of the warnings. This is really a bug in autoconf, though.
http://bugs.gentoo.org/9413
crawl-002
refinedweb
312
71.31
. If the first parameter is an array reference, it will create an attribute for every $name in the::Basics::Recipe5 and the updated value.::Meta::Recipe1. See "TRAIT NAME RESOLUTION" for details on how a trait name is resolved to a class name. Also see Moose::Cookbook::Meta::Recipe3 for a metaclass trait example. The value of this key is the name of the method that will be called to obtain the value used to initialize the attribute. See the builder option docs in Class::MOP::Attribute and/or Moose::Cookbook::Basics::Recipe9. Automatically define lazy => 1 as well as builder => "_build_$attr", clearer => "clear_$attr', predicate => 'has_$attr' unless they are already defined. This may be a method name (referring to a method on the class with this attribute) or a CODE ref. The initializer is used to set the attribute value on an instance when the attribute is set during instance initialization (but not when the value is being assigned to). See the initializer option docs in Class::MOP::Attribute for more information. decision.::Basics::Recipe6. An augment method, is a way of explicitly saying "I am augmenting this method from my superclass". Once again, the details of how inner and augment work is best described in the Moose::Cookbook::Basics:. When you use Moose, you can specify which metaclass to use: use Moose -metaclass => 'My::Meta::Class'; You can also specify traits which will be applied to your metaclass: use Moose -traits => 'My::Trait'; This is very similar to the attribute traits feature. When you do this, your class's meta object will have the specified traits applied to it. See "TRAIT NAME RESOLUTION" for more details. By default, when given a trait name, Moose simply tries to load a class of the same name. If such a class does not exist, it then looks for. If all this is confusing, take a look at Moose::Cookbook::Meta::Recipe3,::Recipe1, which provides an overview of all the different ways you might extend Moose. The init_meta method. init_meta returns the metaclass object for $class. You can specify an alternate metaclass with the metaclass option. For more detail on this topic, see Moose::Cookbook::Extending::Recipe2. This method used to be documented as a function which accepted positional parameters. This calling style will still work for backwards compatibility, but is deprecated. Moose's import method supports the Sub::Exporter form of {into => $pkg} and {into_level => 1}. NOTE: Doing this is more or less deprecated. Use Moose::Exporter instead, which lets you stack multiple Moose.pm-alike modules sanely. It handles getting the exported functions into the right place for you. An alias for confess, used by internally by Moose., unlike Class::MOP, which simply dies if the metaclasses are incompatible. In actuality, Moose fixes incompatibility for all of a class's metaclasses, not just the class metaclass. That includes the instance metaclass, attribute metaclass, as well as its constructor class and destructor class. However, for simplicity this discussion will just refer to "metaclass", meaning the class metaclass, most of the time. Moose has two algorithms for fixing metaclass incompatibility. The first algorithm is very simple. If all the metaclass for the parent is a subclass of the child's metaclass, then we simply replace the child's metaclass with the parent's. The second algorithm is more complicated. It moose@perl.org. You must be subscribed to send a message. To subscribe, send an empty message to moose-subscribe@perl.org You can also visit us at #moose on irc.perl.org. This channel is quite active, and questions at all levels (on Moose-related topics ;) are welcome. This is the official web home of Moose, it contains links to our public SVN repository as well as links to a number of talks and articles on Moose and Moose related technologies. Part 1 - Part 2 - MooseX::namespace. See:: for extensions.. Moose is an open project, there are at this point dozens of people who have contributed, and can contribute. If you have added anything to the Moose project you have a commit bit on this file and can add your name to the list. However there are only a few people with the rights to release a new version of Moose. The Moose Cabal are the people to go to with questions regarding the wider purview of Moose, and help out maintaining not just the code but the community as well. Stevan (stevan) Little <stevan@iinteractive.com> Yuval (nothingmuch) Kogman Shawn (sartak) Moore Dave (autarch) Rolsky <autarch@urth.org> Aankhen Adam (Alias) Kennedy Anders (Debolaz) Nor Berle Nathan (kolibrie) Chris (perigrin) Prather Wallace (wreis) Reis Jonathan (jrockway) Rockway Piotr (dexter) Roszatycki Sam (mugwump) Vilain Cory (gphat) Watson Dylan Hardison (doc fixes) ... and many other #moose folks This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~drolsky/Moose-0.87/lib/Moose.pm
crawl-002
refinedweb
817
65.12
How many times have you wanted a library that manages all objects entering and leaving the windows clipboard? Maybe without having to handle the various types of objects .. Here it is! Nothing could be simpler. This library will handle everything for you through a series of delegates to communicate the appearance of new notes and a thread that has the task of effectively monitor system clipboard. The library needs, for now, a Windows Forms object for the proper management of the delegates. Future versions will replace this part with an interface that will manage the delegates. It will be for those who will implement this interface to manage the Invoke in the form (or XAML view) correctly. // Use these methods to interact directly with the clipboard without using the manager // These methods are a common wrapper namespace "System.Windows.Forms.Clipboard" string[] files = ClipboardManager.GetClipboardFiles(); Image image = ClipboardManager.GetClipboardImage(); string text = ClipboardManager.GetClipboardText(); ClipboardManager.SetClipboardFiles(new string[] { "file1", "file2" }); ClipboardManager.SetClipboardImage(new Bitmap("path")); You can use the library only in a static way. This way you can read and write files, images and text to the clipboard. Please note that in case the clipboard was empty or tried, for example, to read a text in the clipboard when there is a file, static methods will return always null. The use of "Clipboard Manager" as static class is not recommended in cases where there is no certainty of what's in the clipboard. // Use the "ClipboardManager" to manage in a more comprehensive the clipboard // I assume that "this" is a Form ClipboardManager manager = new ClipboardManager(this); // Use "All" to handle all kinds of objects from the clipboard // otherwise use "Files", "Image" or "Text" manager.Type = ClipboardManager.CheckType.All; // Use events to manage the objects in the clipboard manager.OnNewFilesFound += (sender, eventArg) => { foreach (String item in eventArg) { Console.WriteLine("New file found in clipboard : {0}", item); } }; manager.OnNewImageFound += (sender, eventArg) => { Console.WriteLine("New image found in clipboard -> Width: {0} , Height: {1}", eventArg.Width, eventArg.Height); }; manager.OnNewTextFound += (sender, eventArg) => { Console.WriteLine("New text found in clipboard : {0}", eventArg); }; // Use the method "StartChecking" to start capturing objects in the clipboard manager.StartChecking(); // Close the capturing manager.Dispose(); Using the class "Clipboard Manager" as an instance you have the possibility of having, by event, reporting that a new object is entered into the clipboard. This class is very convenient because it allows the user to manage the objects in the clipboard in near real time (the manager operates through a thread and so there may be about a one second delay). 02/10/2012: Version 1.0.0.1 (Resolved image comparison bug + adding test project) 28/09/2012: Version 1.0.0.0 This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) StringCollection CopyTo CheckClipBoard() Quote:--Image/Files will not work as it creates a new instance each time a Get is called. You'll want to try that one out. LastImageFound false OnNewImageFound CheckType.Files General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://www.codeproject.com/tips/467361/using-clipboard-csharp-4-0-wrapper-inside?fid=1789937&df=90&mpp=10&noise=1&prof=false&sort=position&view=none&spc=none
CC-MAIN-2017-04
refinedweb
534
57.06
getitimer, setitimer - get and set value of interval timer [XSI] #include <sys/time.h>#include : - [EINVAL] - The value argument is not in canonical form. (In canonical form, the number of microseconds is a non-negative integer less than 1000000 and the number of seconds is a non-negative integer.) The getitimer() and setitimer() functions may fail if: - [EINVAL] - The which argument is not recognized. None. None. None. None. alarm(), sleep(), timer_getoverrun(), ualarm(), usleep(), the Base Definitions volume of IEEE Std 1003.1-2001, <signal.h>, <sys/time.h> First released in Issue 4, Version 2. Moved from X/OPEN UNIX extension to BASE. The restrict keyword is added to the setitimer() prototype for alignment with the ISO/IEC 9899:1999 standard.
http://pubs.opengroup.org/onlinepubs/000095399/functions/setitimer.html
CC-MAIN-2018-05
refinedweb
121
66.23
With the release of XNA, Microsoft has made available a platform for writing games on the PC and for the Xbox 360. XNA provides more simplified access to hardware than managed DirectX did. The Xbox 360 controllers are also fully supported by the XNA Framework. In this article, I'll be using Visual Studio 2005 and the XNA Framework to create a demonstration form that will display the full state of Xbox 360 controllers connected to the system. Update 11 November 2012 - I've posted another article that makes use of XInput instead of Xna to access the same functionality. XInput gives access to additional information (such as information on the battery) but the version of the code in the other articles requires Windows 8. You can find the article here. This code example uses assemblies from the XNA Framework and you will need to install the XNA redistributables to run the example program. You can download it from here. In creating your own XNA project, remember to add an assembly reference to Microsoft.Xna.Framework and Microsoft.Xna.Framework.Input. Microsoft.Xna.Framework Microsoft.Xna.Framework.Input The XNA Framework is based on a subset of the .NET Compact Framework and was designed to run on Windows XP SP2. At the time of this writing, Windows 2003 and Vista are not officially supported, although the Framework does operate on these systems. The Xbox 360 has an XNA game loader which provides a runtime environment for XNA based games. Since XNA is based on a subset of the .NET Compact Framework, there is no support for the Windows Forms namespace or the network namespace. However, if you are targeting the Windows platform only, you can use System.Windows.Forms and System.Net and other namespaces as needed. System.Windows.Forms System.Net The inputs on the controller are either analog or digital. The digital inputs will only have one of two states to indicate whether or not they are being pressed: ButtonState.Pressed or ButtonState.Released. The digital inputs on the controller are the buttons labelled A, B, X, Y, LB , RB, Back, Start, the buttons under both of the thumb sticks, and the D-Pad. ButtonState.Pressed ButtonState.Released The state of the analog inputs is represented with a floating point number. The triggers on the controller will cause values between 0.0 (the trigger is not pressed) and 1.0 (the trigger is pressed all the way down) to be returned. The two thumb sticks return values between -1.0 and +1.0 for the x and y axis, where 0 is the center point for the axis. The button in the center of the Xbox 360 controller is not accessible. It's only used by the Xbox operating system. The state of the controller is accessed through the Microsoft.XNA.Framework..Input.GamePad class. This class only has three methods (beyond those inherited from System.Object). Those methods are GetCapabilities, GetState, and SetVibration. For this demonstration I will only use GetState and SetVibration. These methods take for their first argument a PlayerIndex value. This is used to specify the controller with which we will interact. Microsoft.XNA.Framework..Input.GamePad System.Object GetCapabilities GetState SetVibration GetState PlayerIndex The GetState method returns a GamePadState struct. GamePadState contains the state of the controller in several sub-structs named Buttons, DPad, IsConnected, ThumbSticks, and Triggers. The Button and DPad structs have members named after the 10 buttons on the controller and the four directions on the DPad that will each be set to ButtonState.Pressed and ButtonState.Released (there is also an enumeration in the System.Windows.Forms namespace, you may need to disambiguate between the two if you are using the System.Windows.Forms with Microsoft.Xna.Framework.Input). GamePadState Buttons DPad IsConnected ThumbSticks Triggers Button DPad ButtonState.Pressed ButtonState.Released System.Windows.Forms namespace //Setting or clearing a checkbox depending on the state of a button on the controller. this.gamePadState = GamePad.GetState(this.playerIndex); this.buttonA.Checked = (this.gamePadState.Buttons.A == Input.ButtonState.Pressed); The Triggers struct contains the members Left and Right that have floating point to represent how far down the trigger is pressed. Triggers Left Right The ThumbSticks member has two members Left and Right. Each one of these is a Vector2, or a group of floating point X and Y values indicating the position of the joystick. A Vector with the values (X:0,Y:0) mean the joystick is in the center position, (X:-1,Y:0) means the joystick is to the far left position, (X:0, Y:1) means the controller is being pressed up, and so on. Note that you will never see the X and Y values in their most extreme state at the same time. ThumbSticks Right Vector2 Vector The Xbox 360 controllers have two motors for vibration. The left motor causes a slow vibration. The right motor causes a quick vibration. The motors can be turned on with the SetVibration method. This method takes a PlayerIndex and two floating point numbers between 0.0f and 1.0f to indicate how strong each motor should vibrate. SetVibration //Turning the slow vibration motor on to 50% maximum GamePad.SetVibration(playerIndex, 0.5f, 0.0f); The example program continually polls the Xbox 360 controller on a timed interval and updates the onscreen state. Checkboxes are used for the digital inputs. The analog inputs are represented with progress bars. The axis for the thumb sticks will be empty in the most negative position, half full in the center position, and full in the most positive position. Checkbox You can also turn the motors on using the numeric up/down inputs at the bottom of the UI. I've only allowed the motors to be on for a limited time and also turn them off when the program is terminating. You'll find that the XNA Framework greatly simplifies interaction with the Xbox 360 controllers. In my next article, I'll show how to render graphics using XNA and using an XNA input device to control onscreen objects. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) A first chance exception of type 'System.IO.FileNotFoundException' occurred in mscorlib.dll<br /> An unhandled exception of type 'System.IO.FileNotFoundException' occurred in mscorlib.dll<br /> <br /> <br /> Additional information: The specified module could not be found. (Exception from HRESULT: 0x8007007E)<br /> <br /> 'J2i.Net.XnaXboxController.vshost.exe' (Managed): Loaded 'H:\WINDOWS\assembly\GAC_MSIL\System.Configuration\2.0.0.0__b03f5f7f11d50a3a\System.Configuration.dll'<br /> The thread 'vshost.RunParkingWindow' (0xae4) has exited with code 0 (0x0).<br /> The program '[3344] J2i.Net.XnaXboxController.vshost.exe: Managed' has exited with code -532459699 (0xe0434f4d). Could not load file or assembly 'Microsoft.Xna.Framework, Version=1.0.0.0, Culture=neutral, PublicKeyToken=6d5c3888ef60e27d' or one of its dependencies. Could not load file or assembly 'Microsoft.Xna.Framework, Version=2.0.0.0, Culture=neutral, PublicKeyToken=6d5c3888ef60e27d' or one of its dependencies. An attempt was made to load a program with an incorrect format. General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/16983/Using-XNA-to-Access-an-Xbox-360-Joystick?msg=4424982
CC-MAIN-2017-17
refinedweb
1,216
58.69
Technically this is Part 2 of Firewall-busting ASN-lookups. However, I said (in Part 1) that Part 2 would be about making a vectorized version and this is absolutely not about that. Rather than fib, I merely misdirect. Moving on… As you can see in Part 1, we have to resort to a system() call to do the TXT record lookup with dig. Frankly, I really dislike that. It’s somewhat sloppy, wasteful of resources and we can do better. Much better (initially, just a little better, tho). R, like most modern interpreted languages, has a C interface. Hadley Wickham goes into far more detail in his epic online (and I’m assuming soon-to-be print) book Advanced R Programming than I will be doing in this post and Jonathan Callahan also has some great in-depth material you should review. You might want to take a peed at some of Dirk Eddelbuettel‘s work, too. This post will (should?) get you jumpstarted with the basics of integrating C & R and will dovetail nicely with Part 2 of the proper series, since we’ll not only be creating a vectorized version of the ip2asn() function but will also be putting it into a proper R package. Peeking Under The Covers Even if you’ve only dabbled with R, you’ve already used traditional C-backed functions, and if you’ve done more extensive computing with R—say, worked with the RMySQL package to connect to a database—then you’ve absolutely used the more “modern”/prevalent Rcpp-backed functions. For example, the mysqlCloseConnection looks like this: mysqlCloseConnection function (con, ...) { if (!isIdCurrent(con)) return(TRUE) rs <- dbListResults(con) if (length(rs) > 0) { if (dbHasCompleted(rs[[1]])) dbClearResult(rs[[1]]) else stop("connection has pending rows (close open results set first)") } conId <- as(con, "integer") .Call("RS_MySQL_closeConnection", conId, PACKAGE = .MySQLPkgName) } <environment: namespace:RMySQL> PROTIP: you can see the source code for any R function by just typing the function name sans-parentheses & parameters at the R console prompt Towards the bottom of the above code listing, you’ll see the .Call("RS_MySQL_closeConnection"...) line which is reaching out to the underlying C/C++ code (in RS-MySQL.c) that makes up the libary. Here’s the definition for that function: /* open a connection with the same parameters used for in conHandle */ Con_Handle * RS_MySQL_cloneConnection(Con_Handle *conHandle) { S_EVALUATOR return RS_MySQL_createConnection( RS_DBI_asMgrHandle(MGR_ID(conHandle)), RS_MySQL_cloneConParams(RS_DBI_getConnection(conHandle)->conParams)); } Now, we’re not working with MySQL in this post, but that’s a fairly familiar tool and provides the framework for us to discuss how we’ll be using C/C++, Rcpp & .Call() to make DNS calls from R in a much more efficient manner. Picking A DNS Library For folks familiar with DNS, you may be thinking that we’re going to build an interface to the trusted old standard BIND libresolv library. While that was an option, we’re going to skip with tradition and use the ldns library from NLNet Labs, makers of the #spiffy Unbound validating recursive caching resolver (which uses ldns). Their ldns implementation has a simple but also robust API which supports IPv4, IPv6, TSIG & DNSEC plus is wicked fast, small and can make synchronous calls (which makes it easier to do a basic port). If you’re running Mac OS X, you’ll need to either use Homebrew or MacPorts or compile the library from source. I prefer Homebrew, and used: brew install ldns For Linux users, you’ll need both the ldns library and the bsd library (the latter primarily for strlcopy). I gravitate towards Ubuntu for Linux and used the following there: sudo apt-get install libldns-dev libbsd-dev You’ll note the lack of a Windows section. Consider this an open offer to anyone on Windows to augment our blog with a Windows version. The Rtools package can help you get started. Hit us up for details on how to join in the fun! We are making the broad assumption that you have the necessary development environment setup on either Linux or Mac OS X. It’s unlikely you’d be this far along in the post if not :-) Starting Small Rather than build an entire R interace to the whole ldns library, we’re going to focus this post on: - Getting a small Rcppexample built - Interfacing with ldnsto retreive a TXTrecord - Building an ip2asn()R function that uses this new capability NOTE: all of the code for this post is in this gist. You can download them all in one fell swoop with: git clone, and we’ll have a proper repository for the full package impementation in later posts. We’ll begin with having you install the Rcpp package. Fire up an R console (or use the RStudio R console pane) and do: > install.packages("Rcpp") Next, create a directory (perhaps ip2asn for this limited example) and put the following code block into the file txt.cpp (or just use the one you cloned above): // these three includes do a great deal of heavy lifting // by making the necessary structures, functions and macros // available to us for the rest of the code #include <Rcpp.h> #include <Rinternals.h> #include <Rdefines.h> #ifdef __linux__ #include <bsd/string.h> #endif // REF: for API info #include <ldns/ldns.h> // need this for 'wrap()' which *greatly* simplifies dealing // with return values using namespace Rcpp; // the sole function that does all the work. it accepts an // R character vector as input (even though we're only expecting // one string to lookuo) and returns a character vector (one row // of the DNS TXT records) RcppExport SEXP txt(SEXP ipPointer) { ldns_resolver *res = NULL; ldns_rdf *domain = NULL; ldns_pkt *p = NULL; ldns_rr_list *txt = NULL; ldns_status s; ldns_rr *answer; // SEXP passes in an R vector, we need this as a C++ StringVector Rcpp::StringVector ip(ipPointer); // we only passed in one IP address domain = ldns_dname_new_frm_str(ip[0]); if (!domain) { return(R_NilValue) ; } s = ldns_resolver_new_frm_file(&res, NULL); if (s != LDNS_STATUS_OK) { return(R_NilValue) ; } p = ldns_resolver_query(res, domain, LDNS_RR_TYPE_TXT, LDNS_RR_CLASS_IN, LDNS_RD); ldns_rdf_deep_free(domain); // no longer needed if (!p) { return(R_NilValue) ; } // get the TXT record(s) txt = ldns_pkt_rr_list_by_type(p, LDNS_RR_TYPE_TXT, LDNS_SECTION_ANSWER); if (!txt) { ldns_pkt_free(p); ldns_rr_list_deep_free(txt); return(R_NilValue) ; } // get the TXT record (could be more than one, but not for our IP->ASN) answer = ldns_rr_list_rr(txt, 0); // get the TXT record (could be more than one, but not for our IP->ASN) ldns_rdf *rd = ldns_rr_pop_rdf(answer) ; // get the character version via safe copy char *answer_str = ldns_rdf2str(rd) ; // Max TXT record length is 255 chars, but for this example // the Team CYMRU ASN resolver TXT records should never exceed // 80 characters (from bulk analysis of large sets of IPs) char ret[80] ; strlcpy(ret, answer_str, sizeof(ret)) ; Rcpp::StringVector result(1); result[0] = ret ; // clean up memory free(answer_str); ldns_rr_list_deep_free(txt); ldns_pkt_free(p); ldns_resolver_deep_free(res); // return the TXT answer string which is ridiculously // simple even for wonkier structures thanks to `wrap()` return(wrap(result)); } The code is commented pretty well and I won’t be covering all of the nuances of the individual ldns calls. Please note that the function has minimal error checking since it is serving first and foremost as a compact example. The full package version will have all i’s dotted and t’s crossed and I’ll make it a point to show the differences between a “toy” example and production-worthy code when we post the package follow-up. The code flow pattern will be the same for most of these API library mappings: - define data types that need to be passed in and returned - convert them to structures C/C++ can handle - perform your calculations/operations on that converted data - clean up after yourself - return a value R can handle To compile that code into an object we can use in R, you need to do the following: export PKG_LIBS=`Rscript --vanilla -e 'Rcpp:::LdFlags()'` export PKG_CPPFLAGS=`Rscript --vanilla -e 'Rcpp:::CxxFlags()'` R CMD SHLIB -lldns txt.cpp The export lines setup environment variables that help R/ Rcpp know where to look for libraries and define the proper compiler flags for your environment. The last line does the hard work of building the proper compilation and linking directives/commands. All three of them belong in a proper Makefile (or your build system of choice). Again, we’re taking a few shortcuts to make the overall concept a bit more digestible. Complexity coming soon! If the build was successful, you’ll have txt.o and txt.so files in your directory. Now, on to the good bits! Interfacing With R Having a compiled object is all well and good, but we need to be able to access the txt() function from R. It turns out that this part is pretty straightforward. Put the following into a file (perhaps ip2asn.R) or use the gist version: # yes, this (dyn.load) is all it takes to expose the function we # just created to R. and, yes, it's a bit more complicated than # that, but for now bask in the glow of simplicity dyn.load("txt.so") # this function should look more than vaguely familiar # ip2asn <- function(ip="216.90.108.31") { orig <- ip ip <- paste(paste(rev(unlist(strsplit(ip, "\\."))), sep="", collapse="."), ".origin.asn.cymru.com", sep="", collapse="") # in essence, we replaced the `system("dig ...")` call with this result <- .Call("txt", ip) out <- unlist(strsplit(gsub("\"", "", result), "\ *\\|\ *")) return(list(ip=orig, asn=out[1], cidr=out[2], cn=out[3], registry=out[4])) } To use this new function, make sure your R session is in the working directory of the library (via setwd()) and do: source("ip2asn.R") ip2asn() ## $ip ## [1] "216.90.108.31" ## ## $asn ## [1] "23028" ## ## $cidr ## [1] "216.90.108.0/24" ## ## $cn ## [1] "US" ## $registry ## [1] "arin" That uses the function default IP address, but you can use any IP (and, it still only works with a single IP address). Kittens and polar bears will suffer greatly if you pass in anything but a single, 100% valid IP address (see, error checking saves wildlife and pets), but it gets the job done without a system() call and sets us up nicely for adding more capability. Wrapping Up We gave you a whirlwind tour of interfacing with R and we’ll be re-visitng this topic in later posts. If any parts were a bit confusing or your setup has some errors, drop a note in the comments here or over at the gist and we’ll do our best to help...
http://www.r-bloggers.com/making-better-dns-txt-record-lookups-with-rcpp/
CC-MAIN-2015-11
refinedweb
1,753
58.82
In the following exercise i want to manipulate a random string input by using functions. Step 1: I want to remove all characters which are not digits, letters or spaces Step 2: I want to replace all spaces with '_' Step 3: I want to convert all numbers to spaces Step 4: I want to replace all 'a' with 'z' and all 'A' with 'Z' For lists i already used the filter function and i am wondering if this function can also be used for string inputs. I am not quite sure how to approach this exercise. Update: I found an approach to solve step 1 and step 3 but i am not quite sure how to put the different functions together in a function which includes every step. Is it possible to call the different functions one after another in the right order in some kind of main function? import Data.Char toUpperStr xs = map toUpper xs -- function to convert lower to upper dropInvalids xs = (filter (\x -> isUpper x || isSpace x || isDigit x)) $ toUpperStr xs replaceBlank [] = [] -- function to replace " " with "_" replaceBlank (x:xs) = if x == ' ' then '_' : replaceBlank xs else x : replaceBlank xs Yes, absolutely! That's one of the beautiful things about Haskell. You can treat Strings as [Char]. In fact, that's what they are! In GHCi, type :i String- you get type String = [Char]. You can easily compose functions. There's an operator for that, (.). So (f . g) xis f (g x). I would improve the code in a few key ways. Firstly, make the replaceBlankfunction more general, so it takes a condition and a replacement function. Secondly, compose all the functions in a "main" function, as you call it. But do not name the main function main! That name is reserved for the IO action of a program. It's also important not to think of the final function as "calling" the other functions. That is imperative terminology, here, we are applying the function(s). Also, why does your dropInvalidscontain a toUpperStr? You never specified the string to be all uppercase in the end. Also also, be sure to declare the type of your functions. In this case, the following would be the correct code:
https://techqa.club/v/q/string-manipulation-haskell-c3RhY2tvdmVyZmxvd3w1NTk5NDUwMg==
CC-MAIN-2020-50
refinedweb
371
72.97
What’s new in Ruby 2.6? I’ve started a newsletter to share my stories and interesting posts I find,, don’t worry I won’t post unwanted or any promotional emails.. Update WOW I never imagined that Matz himself would thank me on stage at Rubyconf in his opening keynote and drive this post forward. Thank you Matz for creating this awesome programming language and this awesome community. Just In Time compilation (MJIT) Vladimir Makarov, who optimized the Hash code at Ruby 2.4 and a core maintainer at the GCC project, and Takashi Kokubun, a Ruby core maintainer who rewrote the Ruby VM from 1.9 to 2.0, have proposed a JIT Compiler to the Ruby VM. Roughly, the idea behind a JIT compiler is to “inspect” the code at run time and try to optimize more intelligently the current running code as opposed to a Ahead Of Time compiler. There were some offers as to how to create the JIT compiler itself, but in an oversimplified explanation it has been decided to reuse available C compilers on the system (gcc or clang) as the compiler itself and turn bytecode into C code and compile it. It’s pretty elegant as the the MJIT implementation finds hot-spots in the code, take the compiled bytecode, uses an ERB template to turn it into a .c file, compile it as a shared object, and point the VM to run the shared object code instead of the bytecode. Some initial benchmarks show significant results, with the Optcarrot (a NES emulator benchmark) being 1.77x slower on Ruby 2.5.3 and 2.48x slower on Ruby 2.0.0. Moreover some more micro benchmarks to MJIT implementation itself show more significant results such as the Mandlebrot benchmark being 1.27x faster, Fibonacci benchmark being 3.19x faster and the const and const2 benchmarks being almost 4x faster. John Hawthorn shows in his early post from ten months ago, internally how it looks like. Endless ranges Ruby introduces the (0..) range and makes these available: ary[1..] # identical to ary[1..-1] (1..).each {|index| ... } # infinite loop from index 1 ary.zip(1..) {|elem, index| ... } # ary.each.with_index(1) { } Array#union and Array#difference There is an easier way to difference and union multiple arrays. [1, 1, 2, 2, 3, 3, 4, 5 ].difference([1, 2, 4]) #=> [ 3, 3, 5 ] ["a", "b", "c"].union(["c", "d", "a"]) #=> [ "a", "b", "c", "d" ] ["a"].union([["e", "b"], ["a", "c", "b"]]) #=> [ "a", "e", "b", "c" ] Array#filter is a new alias for Array#select Much like other common used languages such as Javascript, PHP, Haskell, Java 8, Scala, R, filter was aliased, and this is now possible: [:foo, :bar].filter { |x| x == :foo } # => [:foo] Enumerable#to_h now accepts a block that maps keys to values There are many ways to create a hash out of an array in Ruby, some of them are (1..5).map { |x| [x, x ** 2] }.to_h #=> {1=>1, 2=>4, 3=>9, 4=>16, 5=>25} (1..5).each_with_object({}) { |x, h| h[x] = x ** 2 } #=> {1=>1, 2=>4, 3=>9, 4=>16, 5=>25} Starting 2.6 it is now possible to use a block which eliminates the intermediate array, (1..5).to_h { |x| [x, x ** 2] } #=> {1=>1, 2=>4, 3=>9, 4=>16, 5=>25} Hash#merge, merge! now accept multiple arguments No more jumping hoops doing stuff like this to merge multiple hashes hash1.merge(hash2).merge(hash3) [hash1, hash2, hash3].inject do |result, part| result.merge(part) { |key, value1, value2| key + value1 + value2 } end We can now have variable amount of arguments when merging hashes hash1.merge(hash2, hash3) The #then method Back in Ruby 2.5, the yield_self method was introduced, it made possible to pass a block to any instance and get that instance inside the block as the argument. "Hello".yield_self { |str| str + " World" } #=> "Hello World" It might not sound useful at first but when you look at two methods that are available, Ruby’s tap and Rails’ try, you start to see that it is used quite a lot, moreover it does open the possibility for more readable code like Michal Lomnick shows in his post. "" .yield_self { |url| URI.parse(url) } .yield_self { |url| Net::HTTP.get(url) } .yield_self { |response| JSON.parse(response) } .yield_self { |repo| repo.fetch("stargazers_count") } .yield_self { |stargazers| "Rails has #{stargazers} stargazers" } .yield_self { |string| puts string } Or usual Rails-like controller code events = Event.upcoming events = events.limit(params[:limit]) if params[:limit] events = events.where(status: params[:status]) if params[:status] events can become Event.upcoming .yield_self { |events| params[:limit] ? events.limit(params[:limit]) : events } .yield_self { |events| params[:status] ? events.where(status: params[:status]) : events } # Or even Event.upcoming .yield_self { |_| params[:limit] ? _.limit(params[:limit]) : _ } .yield_self { |_| params[:status] ? _.where(status: params[:status]) : _ } # Or even def with_limit(events) params[:limit] ? events.limit(params[:limit]) : events end def with_status(events) params[:status] ? events.where(status: params[:status]) : events end Event.upcoming .yield_self(&method(:with_limit)) .yield_self(&method(:with_status)) Okay, so yield_self is nice but what about then ? Well the then method is just an alias to yield_self so it makes code even a little bit more readable. Event.upcoming .then { |events| params[:limit] ? events.limit(params[:limit]) : events } .then { |events| params[:status] ? events.where(status: status) : events } # or Event.upcoming .then(&method(:with_limit)) .then(&method(:with_status)) Some people are concerned that it might resemble A+ Promises too much but eventually it was merged as then . Random.bytes There’s already Random.new.bytes(10) # => "\xD7:R\xAB?\x83\xCE\xFAkO" and now there’s Random.bytes(8) # => "\xAA\xC4\x97u\xA6\x16\xB7\xC0\xCC" as Matz pointed, better later than never. Range#=== now uses cover? rather than include? As pointed out by Zverok Kha on reddit, using cover? in case statements now brings possibilities such as case DateTime.now when Date.today..Date.today + 1 'win!' else 'fail' end Another notable speed improvements Proc#callis now around 1.4x faster. - Transient Heap support for Hash has been added. This reduce the memory footprint of short-living memory objects. The benchmark shows reduced memory consumption of short living Hash objects by about 7%. I’ve started a newsletter to share my stories and interesting posts I find,, don’t worry I won’t post unwanted or any promotional emails.
https://medium.com/tailor-tech/whats-new-in-ruby-2-6-a4774f3631c1
CC-MAIN-2019-04
refinedweb
1,066
59.9
You infer the intent in my expression,Although not always without qualification. My heart was torn between function and object, But you have rendered them as one. I've always been lazy by nature,Yet I'm drawn to your conditional strictness. I have emerged downcast from a haze of inheritance,To a higher order Nirvana. Although by any unit of measure,You're not a principled type. I yearn so deeply to functorize you,But you say that all you want is your own private namespace. Who needs inner class,When you have polymorphic abstraction? I was a zero in a bind at the point of no return,Cast anonymously into the void. However my guarded expression was matched,By your irrefutable pattern. Now I cannot begin to list my comprehension,Of the joy I find in your symbolic beauty. I moved a tritone up the octave and I found you,Let us be bound together in harmony forever. Satnam Singh
http://blogs.msdn.com/b/satnam_singh/archive/2009/12/25/distracted-by-abstraction-a-poem-about-f.aspx
CC-MAIN-2014-41
refinedweb
162
66.23
. That’s what we’re going to do in this project. Before proceeding with this tutorial you should have the ESP32 add-on installed in your Arduino IDE. Follow one of the following tutorials to install the ESP32 on the Arduino IDE, if you haven’t already. You might also like reading other BME280 guides: - ESP32 with BME280 Sensor using Arduino IDE - ESP8266 with BME280 using Arduino IDE - ESP32/ESP8266 with BME280 using MicroPython - Arduino Board with BME280 Watch the Video Tutorial This tutorial is available in video format (watch below) and in written format (continue reading). Parts Required To follow this tutorial you need the following parts: - ESP32 DOIT DEVKIT V1 Board – read ESP32 Development Boards Review and Comparison - BME280 sensor module - Breadboard - Jumper wires You can use the preceding links or go directly to MakerAdvisor.com/tools to find all the parts for your projects at the best price! Introducing the BME280 Sensor Module The BME280 sensor module reads temperature, humidity, and pressure. Because pressure changes with altitude, you can also estimate altitude., you use the following pins: - SCK – this is the SPI Clock pin - SDO – MISO - SDI – MOSI - CS – Chip Select To use I2C communication protocol, the sensor uses the following pins: - SCK – this is also the SCL pin - SDI – this is also the SDA pin Schematic We’re going to use I2C communication with the BME280 sensor module. For that, wire the sensor to the ESP32 SDA and SCL pins, as shown in the following schematic diagram. (This schematic uses the ESP32 DEVKIT V1 module version with 36 GPIOs – if you’re using another model, please check the pinout for the board you’re using.) Installing the BME280 library To take readings from the BME280 sensor module we’ll_2<<_3<< After installing the libraries, restart your Arduino IDE. Reading Temperature, Humidity, and Pressure To get familiar with the BME280 sensor, we’re going to use an example sketch from the library to see how to read temperature, humidity, and pressure. >>IMAGE(); } Libraries The code starts by including the needed libraries #include <Wire.h> #include <Adafruit_Sensor.h> #include <Adafruit_BME280.h> SPI communication As we’re going to use I2C communication you can comment the following lines: /*#include <SPI.h> #define BME_SCK 18 #define BME_MISO 19 #define BME_MOSI 23 #define BME_CS 5*/ Note: if you’re using SPI communication, you need to change the pin definition to use the ESP32 GPIOs. by default. As you can see, you just need to create an Adafruit_BME280 object called bme. Adafruit_BME280 bme; // I2C If you would like to use SPI, you need to comment this previous line and uncomment one of the following lines depending on whether you’re using hardware or software SPI. //Adafruit_BME280 bme(BME_CS); // hardware SPI //Adafruit_BME280 bme(BME_CS, BME_MOSI, BME_MISO, BME_SCK); // software SPI setup() In the setup() you start a serial communication Serial.begin(9600); And the sensor is initialized: status = bme.begin(0x76); if (!status) { Serial.println("Could not find a valid BME280 sensor, check wiring!"); while (1); } Printing values In the loop(), the printValues() function reads the values from the BME280 and prints the results in the Serial Monitor. void loop() { printValues(); delay(delayTime); } Reading temperature, humidity, pressure, and estimate altitude is as simple as using: -. Upload the code to your ESP32, and open the Serial Monitor at a baud rate of 9600. You should see the readings displayed on the Serial Monitor. Creating a Table in HTML As you’ve seen in the beginning of the post, we’re displaying the readings in a web page with a table served by the ESP32. So, we need to write HTML text to build a table. To create a table in HTML you use the <table> and </table> tags. To create a row you use the <tr> and </tr> tags. The table heading is defined using the <th> and </th> tags, and each table cell is defined using the <td>and </td> tags. To create a table for our readings, you use the following html text: <table> <tr> <th>MEASUREMENT</th> <th>VALUE</th> </tr> <tr> <td>Temp. Celsius</td> <td>--- *C</td> </tr> <tr> <td>Temp. Fahrenheit</td> <td>--- *F</td> </tr> <tr> <td>Pressure</td> <td>--- hPa</td> </tr> <tr> <td>Approx. Altitude</td> <td>--- meters</td></tr> <tr> <td>Humidity</td> <td>--- %</td> </tr> </table> We create the header of the table with a cell called MEASUREMENT, and another called VALUE. Then, we create six rows to display each of the readings using the <tr> and </tr> tags. Inside each row, we create two cells, using the <td> and </td> tags, one with the name of the measurement, and another to hold the measurement value. The three dashes “—” should then be replaced with the actual measurements from the BME sensor. You can save this text as table.html, drag the file into your browser and see what you have. The previous HTML text creates the following table. The table doesn’t have any styles applied. You can use CSS to style the table with your own preferences. You may found this link useful: CSS Styling Tables. Creating the Web Server Now that you know how to take readings from the sensor, and how to build a table to display the results, it’s time to build the web server. If you’ve followed other ESP32 tutorials, you should be familiar with the majority of the code. If not, take a look at the ESP32 Web Server Tutorial. Copy the following code to your Arduino IDE. Don’t upload it yet. First, you need to include your SSID and password. /********* Rui Santos Complete project details at *********/ // Load Wi-Fi library #include <WiFi.h> #include <Wire.h> #include <Adafruit_BME280.h> #include <Adafruit_Sensor.h> //uncomment the following lines if you're using SPI /* web server port number to 80 WiFiServer server(80); // Variable to store the HTTP request String header; // Current time unsigned long currentTime = millis(); // Previous time unsigned long previousTime = 0; // Define timeout time in milliseconds (example: 2000ms = 2s) const long timeoutTime = 2000; void setup() { Serial.begin(115200); bool status; // default settings // (you can also pass in a Wire library object like &Wire2) //status = bme.begin(); if (!bme.begin(0x76)) { Serial.println("Could not find a valid BME280 sensor, check wiring!"); while (1); } // table(""); } } Modify the following lines to include your SSID and password between the double quotes. const char* ssid = ""; const char* password = ""; Then, check that you have the right board and COM port selected, and upload the code to your ESP32. After uploading, open the Serial Monitor at a baud rate of 115200, and copy the ESP32 IP address. Open your browser, paste the IP address, and you should see the latest sensor readings. To update the readings, you just need to refresh the web page. How the Code Works This sketch is very similar with the sketch used in the ESP32 Web Server Tutorial. First, you include the WiFi library and the needed libraries to read from the BME280 sensor. // Load Wi-Fi library #include <WiFi.h> #include <Wire.h> #include <Adafruit_BME280.h> #include <Adafruit_Sensor.h> The next line defines a variable to save the pressure at the sea level. For more accurate altitude estimation, replace the value with the current sea level pressure at your location. #define SEALEVELPRESSURE_HPA (1013.25) In the following line you create an Adafruit_BME280 object called bme that by default establishes a communication with the sensor using I2C. Adafruit_BME280 bme; // I2C; setup() In the setup(), we start a serial communication at a baud rate of 115200 for debugging purposes. Serial.begin(115200); You check that the BME280 sensor was successfully initialized. if (!bme.begin(0x76)) { Serial.println("Could not find a valid BME280 sensor, check wiring!"); while (1);(); Displaying the HTML web page The next thing you need to do is sending a response to the client with the HTML text to build the web page. The web page is sent to the client using this expression client.println(). You should enter what you want to send to the client as an argument. The following code snippet sends the web page to display the sensor readings in a table.>"); Note: you can click here to view the full HTML web page. Displaying the Sensor Readings To display the sensor readings on the table, we just need to send them between the corresponding <td> and </td> tags. For example, to display the temperature: client.println("<tr><td>Temp. Celsius</td><td><span class=\"sensor\">"); client.println(bme.readTemperature()); client.println(" *C</span></td></tr>"); Note: the <span> tag is useful to style a particular part of a text. In this case, we’re using the <span> tag to include the sensor reading in a class called “sensor”. This is useful to style that particular part of text using CSS. By default the table is displaying the temperature readings in both Celsius degrees and Fahrenheit. You can comment the following three lines, if you want to display the temperature only in Fahrenheit degrees. /*client.println("<tr><td>Temp. Celsius</td><td><span class=\"sensor\">"); client.println(bme.readTemperature()); client.println(" *C</span></td></tr>");*/ Closing the Connection Finally, when the response ends, we clear the header variable, and stop the connection with the client with client.stop(). // Clear the header variable header = ""; // Close the connection client.stop(); Wrapping Up) - ESP32 Data Logging Temperature to MicroSD Card - ESP32 with Multiple DS18B20 Temperature Sensors This is an excerpt from our course: Learn ESP32 with Arduino IDE. If you like ESP32 and you want to learn more, we recommend enrolling in Learn ESP32 with Arduino IDE course. Thanks for reading. 93 thoughts on “ESP32 Web Server with BME280 – Advanced Weather Station” Interesting, but seems a bit overkill to use an esp32 for this as an esp 8266 can do the same. With the esp32”s capabilities one may want to send the data via Bluetooth to ones phone Yes, that’s true. But this is a project example that shows how to use the BME280 and how to display sensor readings on a web server. The idea is that the users adapt this project to fulfill their needs. Hi, Sara How about ESP32 as a client? I found some example but always error to POST data to server that i build. thanks You need to ensure that you’ve setup a proper HTTP POST request. It’s usually a problem with the data size (content length) in the request that might cause that issue. very true and my comment certainly was not meant as criticism. If it came across that way I apologize Interestingly the price difference between the esp8266 and the esp32 – at least where I get them from – has narrowed so much the esp32 is becoming the obvious choice anyway Has anyone else having problems making the Nodemcu Esp 32-S v1.1 board talk to the GY BME/P 280 sensor? I’m trying to program it using Windows 10 Arduino app (latest version) with all the different add on for both boards plus downloaded to esp memory addresses 077 and 078 . Used several boards of both with same result of no response monitored at 9600 and 115200. Using Driver Cp210x for win 64bit ,I set it he driver baud rate to the monitor rate. Is it me or just junk from China? You should always check the I2C address of the sensor, because it can change from board to board… Hi Wf line I had to change the BME280 address. “bme.begin(0x76);” didn’t work, but “bme.begin(0x77);” did Hi, had similar problem with Node MCU board. I rolled back the Adafruit BME library version to 1.1.0 and that solved my problem. Hope it works for you. Great Project, it works fine…Thanks !!! I only have one problem. My WIFI Router shutdown from 00:00 until 06:00 After WIFI lost there is no reconnect so i must reset ESP32 Web Server with BME280 How can i fix the Problem ? You might want to fix the link for the BME280 module. This is a $4 sensor and your link shows $46 Hi Jim. Thanks for noticing. It is fixed now. Thanks for this great example project for the ESP32 and BME280 sensor! Hi These tutorials about the ESP32 are good, but you missing one thing that makes the code look awful! ESP32.also support the same WebServer library to handle request and response Please try it and update the code on these projects Thanks Hi. Thank you for sharing. We definitely have to take a look at that library. Regards, Sara 🙂 Here is the link to the example library on the official repository Hi, Very interesting project !! However, when I compile in the Arduino IDE I get a “function not declared” error for the Adafruit BME280. I have the latest library installed but have noted that there exists numerious versions of the library. In particular the Adafruit_BME280.h and Adafruit_BME280.cpp files are different. I noticed that the constructors for the BME function are different. Which library/version should I use ? Would be great to be able to compile the project and develop it further. Many thanks. Hi George. We’re using the library on the following link: You just need to click the “download” button, and then, follow the installation procedure. You also need to include the Adafruit Sensor library: I hope this helps, Regards, Sara 🙂 Hi Rui, I would like to combine, this server with the “Weather Station – Data Logging to Excel” one. This provides the actual measurments and a long term storage, for statistic purposes for exemple. How should you proceed? Be carefull, I’m not a specialist… Thanks. jlb Hi Jean. Our “Weather Station – Data Logging to Excel” tutorial is written in LUA programming language. So, it is not straightforward to combine the two projects. However, instead of datalogging your readings to excel, you can experiment datalogging to Google Sheets. We have a tutorial on how to datalog to google sheets with the ESP32: I hope this helps. Regards, Sara 🙂 Hi Sara, Thank you very much for your answer. Indeed, it’s a good solution. I’ll try it. I’ve already an IFFT account, I’ve asked for one of your first course… a few years ago. Best regards, Jean-Luc Hello Sara, How to measure the CPU usage and number of data processed per second of ESP32. Is there any method to determine it. In the Arduino IDE, I don’t of any good methods to get that data and details… You might consider using ESP-IDF. Hi there … perhaps you can help me a little bit ?! I wanted to do your porjekt with my hardware, but when i want to comlile the software for my D2 Mini i get this error messege anytime: “WiFi.h:79:9: error: initializing argument 1 of ‘int WiFiClass::begin(char*, const char*)’ [-fpermissive] int begin(char* ssid, const char *passphrase); ^ exit status 1 invalid conversion from ‘const char*’ to ‘char*’ [-fpermissive]” What can I do to solve it ? Thanks for a littel help. CU Kai Hi Kai. It seems that your Arduino IDE is not using the right WiFi library for this example. Because this sketch works as it is. I recommend taking a look at the ESP32 troubleshooting guide bullet 5: I hope this helps, Regards, Sara 🙂 Hi .. Sorry, after i removed the wifi.h libary the system can not finde it anymore. I will not be compiled. I treid also the WiFiEsp Libaries, but they will not work also. I want the create a ESP8266 with BME280 to build it into my Homebridge/Homekit. But i dit not find the right howto till now ;( Thanks a lot Kai Does this allow you to access the weather readings webpage directly or does it have to through a home network? Hello! I never got an answer to my question! Hi kevin. I’m sorry. We receive lots of emails and questions every day. It is very difficult for us to keep track of all questions. Answering your question: in this specific example, the ESP32 connects to your home network. But you can modify the code so that the ESP32 acts as an access point. That way, it doesn’t need to connect to your home network, and you can check the sensor readings by connecting to the ESP32 access point. Here is a tutorial on how to set your web servers with the ESP32 as an access point: I hope this helps and thank you for your interest in our tutorials. Regards, Sara How can I add an image? Hello Juan, this tutorial might help “How to Display Images in ESP32 and ESP8266 Web Server“: How can I make the server page refresh itself every so often? With this web server the sensor readings are updated automatically “ESP32 DHT11/DHT22 Web Server“: I am not getting the Humidity value.All other i will get value. I am using same library which you mention here. Can you help me how to fix ? Hi Rushi. Are you getting any errors? What value do you get for humidiyt? 0? Hi, I want to know if I upload the example BME280 test code successfully, and adjust the baud to 9600. Why cant I see anything on Serial Monitor. Thank you Hi KAI. After uploading the code and opening the Serial Monitor, press the ESP32 on-board RST button. Then, you should get all the information. Or are you getting any errors? Regards, Sara Hi Sara, New problem appeared. It said that cant find a valid BME280. I link the SCL to GPIO22 and SDA to GPIO21, and also link the GND VIN to GND and VIN pins of ESP32. Maybe I need to buy another BME280? Regards, Kai Hi Kai. Can you run this I2C scanner sketch and find the address of your BME280 sensor? Yours may be different than ours. Then, change the address on the following line if it is different: status = bme.begin(0x76); I hope this helps. Regards, Sara Wow I find the address is 0×76 And change the code. Then BME280 works! Thank you! Hi! Very usefull project! Thank you! True, I wanted to ask me to help with the module CJMCU 8128 which contains sensors CCS811 HDC10XX BMP280 Can I use it in this project? Unfortunately, I’m just a beginner and it’s very difficult for me to figure it out. Thanks in advance! Hi. If it contains the BME280 sensor, I think it should work. But I haven’t one of those sensors to experiment. Regards, Sara Hi folks!! I did it successfully….but sometime changes may help if given code not works.. for example;;; (1) if (!bme.begin(0x76)) { Serial.println(“Could not find a valid BME280 sensor, check wiring!”); while (1); } Here, I2C use bme.begin() only… (2) this code does not work… WiFi.begin(ssid, password); while (WiFi.status() != WL_CONNECTED) { delay(500); Serial.print(“.”); } Ans: use following:——— WiFi.softAP(ssid, password); IPAddress IP = WiFi.softAPIP(); Serial.print(“AP IP address: “); Serial.println(IP); Enjoy for rest of your coding……!!! Great tutorial! However, I could get none of the examples for WiFi.h (core) to work…all cause watchdog resets. Changed library and code for ESP8266WiFi.h and works like a charm. Oops! forgot to mention I’m using an esp8266 nodemcu, not an esp32. Redfaced…tried it with an esp32 and yep, it compiled and ran ok as written. Hello, I am using this sketch on an Arduino Nano 33 IoT, and it seems to work fine up to the point where the code is: WiFi.begin(ssid, password); while (WiFi.status() != WL_CONNECTED) { delay(500); Serial.print(“.”); At this point it continually prints …… and goes no further. COuld someone help me why it does not connect to the Wifi? I have tried two different Wifi connections and it connects to neither. KR MGC Hi Miles. That usually means that you didn’t insert your network credentials, there’s a typo on the network credentials, or your board is very far away from your router and it’s not able to pick up wi-fi signal. Regards, Sara hi. why this error: invalid conversion from ‘const char‘ to ‘char‘ [-fpermissive] This is abought wi fi i think. Thanks Hi. Did you change anything in our code? HI. when i upload the code i get this error. Any idea why? Thanks No. The code is copy past and then the error Only insert wifi credentials like usual and inserted the requested libraries HI. On sketch i get the error : invalid conversion from ‘const char‘ to ‘char‘ [-fpermissive] That refers to: WiFi.begin(ssid, password); Hi. In don’t know why you’re getting that error. But, change this const char* ssid = “REPLACE_WITH_YOUR_SSID”; to this: char* ssid = “REPLACE_WITH_YOUR_SSID”; Regards, Sara Hi Sara, i did that and Worked but after upload i get the Error i have mencioned after in last message Sara thanks that is the solution, now it is working! 73 Dick OK. its done, changed sketch const char* ssid = “MEO-abcde”; const char* password = “123456”; to char* ssid = “MEO-abcde”; char* password = “123456”; Now it uploads but new problem that i have been working around with no results and i get this: Soft WDT reset ctx: cont sp: 3fff00f0 end: 3fff02e0 offset: 01b0 I did check and chek the wiring is like you descrive and no readings, any help please? Thanks Need help, when running the bme280test program I get this: Could not find a valid BME280 sensor, check wiring, address, sensor ID! SensorID was: 0xFF ID of 0xFF probably means a bad address, a BMP 180 or BMP 085 ID of 0x56-0x58 represents a BMP 280, ID of 0x60 represents a BME 280. ID of 0x61 represents a BME 680. But when I run I2C scanner program it finds an I2C device at 0x76: Scanning… I2C device found at address 0x76 ! done I don’t know what to try next? anyone have an idea? I tried different BME280-same thing. Thanks guys. Stephen Try the BMP280_sensortest example. I purchased (as advertised & labelled) a “BME/BMP280” sensor and the test program confirms that it is a BMP chip. Quick visual check is the BME has a square case and BMP is rectangular. Unfortunately most amazon sellers show the generic picture of the rectangular BME version, and you end up with the rectangular BMP. These only work with the <Adafruit_BMP280.h> library. Correction: Unfortunately most amazon sellers show the generic picture of the rectangular BME version, and you end up with the square BMP. Didn’t you do an article about using ESP8266 with this sensor but using a webpage with a file called ESP_Chart.php? I built that and worked wonderfully, but I lost the esp8266 sketch file, can you give me a reference to it? thanks. Hi. I think this is the project you are looking for: Regards, Sara Im trying to run this on the exact hardware that you show in the demo. When i try to send the bme280 test code to the esp32 i get the error: src\main.cpp: In function ‘void loop()’: src\main.cpp:66:17: error: ‘printValues’ was not declared in this scope printValues(); How would I resolve this issue? Hi. Copy the printValues() function before the setup(). Cut this:(); } And paste it before the setup(). Regards, Sara That worked perfectly. Thank you! Hi I have a problem. C ++ shows me an error when compiling error: ‘printValues’ was not declared in this scope. Copying the entire function before setup() does not work. How would I resolve this issue? Sorry. Had cut not copy. Now everything works fine. Thanks. I remember seeing two of these being used with a wiring schmatic, but I cant recall which tutorial it was. sketch showed using two bme’s in the same sketch. Could you please direct me to that tutorial? Thanks Nino I found it, had a senior moment,. Thanks again nino Dear Rui/Sara, As I understand it, this ESP32 Web Server with BME280 – Advanced Weather Station can only be read when a mobile phone connects trough the local network. Is there a solution when I would try to connect from my mobile phone while being out of reach of the local network, for instance when I would try to read the temperature in my house in the Netherlands, from a location in e.g. Portugal? A related question: I thought that I’ve seen a book on your website that specifically relates to ESP32 web server based weather stations. But I cannot find that book anymore. Any idea what the title was? I believe it was somewhere around 20 Euro? Thanks very much in advance, Ronald Hi Ronald. This web server is only available on your local network. You need to port forward your router to access it from another location or use a service like ngrok. We have an eBook dedicated to Web Servers: On this page, you can find all our eBooks: Regards, Sara Hi Sara, all clear. Thanks for your quick reply. Best regards, Ronald Hi Sara, Could it possible be that in the HTML the ( end table) is not yet included…. ? Also in the other sketch (BME80 on multiple I2C) interfaces this seems the case. Regards, Piter. Hi Piter. What do you mean? Can you try to explain your issue? Regards, Sara Hello! After learning from and adapting from your examples. I find myself in a strange situation. I uploaded the ESP BME sketch to my ESP32-WROOM-32 for testing a soldered board (after fully developing my application on a solder-less breadboard.) Now, I cannot load any other sketch because of the error: A fatal error occurred: MD5 of file does not match data in flash! I have tried to erase the flash with esptool.py and even reload the firmware with: python esptool.py –chip auto –port /dev/ttyUSB0 –baud 115200 –before default_reset –after hard_reset write_flash -z –flash_mode dio –flash_freq 40m –flash_size 4MB 0x8000 partition_table/partition-table.bin 0x10000 ota_data_initial.bin 0xf000 phy_init_data.bin 0x1000 bootloader/bootloader.bin 0x100000 esp-at.bin 0x20000 at_customize.bin 0x24000 customized_partitions/server_cert.bin 0x39000 customized_partitions/mqtt_key.bin 0x26000 customized_partitions/server_key.bin 0x28000 customized_partitions/server_ca.bin 0x2e000 customized_partitions/client_ca.bin 0x30000 customized_partitions/factory_param.bin 0x21000 customized_partitions/ble_data.bin 0x3B000 customized_partitions/mqtt_ca.bin 0x37000 customized_partitions/mqtt_cert.bin 0x2a000 customized_partitions/client_cert.bin 0x2c000 customized_partitions/client_key.bin But, still, I cannot load any new sketch, though I can reload the existing sketch! Did I accidently load the sketch into the ESP’s SPI instead of the VSPI? Is there a good tutorial on how to completely erase and reprogram the ESP to factory condition? Hi. I’m not sure. I never faced that issue. In the links below you can find some suggestions: – – I hope this helps. Regards, Sara Thanks for your help. I had already found those links. It seems to point to a different way of programming the ESP without using the Arduino IDE. After installing both esp-idf and esp-idf-v4.0.4 directories and other stuff, I am still unable to upload the firmware. I am unable to create a dfu.bin file. Using the Arduino IDE and the WiFi101 updater, I attempted to load the FirmWareUpdater, but I get: Error compiling for board ESP32 Dev Module. So, I’m stuck. Bonjour, Concernant le ESP32 CAM (version DM-ESP32 S) j’obtiens une erreur: “E (25829) wifi:AP has neither DSSS parameter nor HT Information, drop it” lors du lancement du programme Web serveur . Le sketche “scanner” fonctionne parfaitement, mais me renvoie cette erreur sur un seul serveur. Avez vous une idée concernant cette erreur ? Merci par avance de votre aide. Is that sensor directly provide “Absolute Humidity” or “Relative Humidity” ? Hi. It provides relative humidity. Regards, Sara Thanks…. Hi Sara, I used below line instead of what is in the above sketch and able to show degree symbol on web page instead of star ( * ) sign. Hope that this will help others. client.println(” °C”); Sorry it is not possible to write here since it automatically converts to degree sign. it should be & deg ; C without spaces in between. Hi, All I’m getting with this sketch is “Could not find a valid BME280 sensor, check wiring!” Can anybody tell me what I’m doing wrong? Cheers Paul Hi. Check this article: Regards, Sara My network password includes % sign. The compiler throws an expected initializer error due to the % sign. Is there an escape code or some other method to include the password? (I am unable to change the network passcode.) really love to read and test all your work, thank you, it is a great way to learn the esp32. I have one issue with this project, the altitude doesn’t display correct, i know the netherlands for the most part is under sealevel but -187Meters is a bit much. What can I do to change this. Hi. You have to adjust the sea level pressure at your location on the following variable: #define SEALEVELPRESSURE_HPA (1013.25) Regards, Sara Is the BME280 5V or 3.3V The module can run with 3.3V or 5V.
https://randomnerdtutorials.com/esp32-web-server-with-bme280-mini-weather-station/?replytocom=378006
CC-MAIN-2022-33
refinedweb
4,883
75.2
In the following code: /** MyClass. Call {@link #foo} whenever. */ public class MyClass { public void foo() { ... } } When renaming (refactor) the method foo, the javadoc link "{@ #foo}" does not get updated. The same goes for renaming fields and classes. Have you check "Apply Rename on Comments" checkbox? I'm doing "instant rename", i.e. pressing CMD+R (on a Mac) when the cursor is placed on the thing (field, method, etc) I want to rename. No dialog is shown. Also, the checkbox "Apply Rename on Comments" only seems to do a string replace, which may be wrong. It should _analyze_ the Javadoc. Say that I have the following code: /** MyClass is really cool. Call {@link #cool} whenever. */ public class MyClass { public void cool() { ... } } Then I want to rename (refactor) the method "cool" to "stupid". I do not want to update my javadoc to say that "MyClass is really stupid", I only want to update the link. The problem with instant rename is duplicate of issue 102669. The string replace is as designed, so it works in all types of comments. You can turn off some replacements in preview. -> I'm changing the summary and switching this issue to enhancement.
https://netbeans.org/bugzilla/show_bug.cgi?id=126907
CC-MAIN-2016-07
refinedweb
197
78.55
Top: Streams: outmd5 #include <pstreams.h> class outmd5: outstm { outmd5(); outmd5(outstm* outthru); string get_digest(); const unsigned char* get_bindigest(); } MD5, the message digest algorithm described in RFC 1321, computes a 128-bit sequence (sometimes called 'message digest', 'fingerprint' or 'MD5 checksum') from arbitrary data. As stated in RFC 1321, it is conjectured that it is computationally infeasible to produce two messages having the same message digest, or to produce any message having a given prespecified target message digest. MD5 can be viewed as a one-way encryption system and can be used, for example, to encrypt passwords in a password database. The MD5 fingerprint is more often converted to so-called ASCII-64 form in order to conveniently store it in plain text environments and protocols. Thus, the 128-bit binary sequence becomes a 22-character text string consisting of letters, digits and two special symbols '.' and '/'. (Note that this is not the same as Base64 encoding in MIME). In order to compute a MD5 fingerprint you first create a stream object of type outmd5 and then send data to it as if it was an ordinary output file or a socket stream. After you close the stream, you can obtain the fingerprint in ASCII-64 form using the object's get_digest() method. The implementation of MD5 is derived from L. Peter Deutsch's work. This class derives all public methods and properties from iobase and outstm, and in addition defines the following: outmd5::outmd5() creates a bufferless MD5 stream. outmd5::outmd5(outstm* outthru) creates a MD5 stream and attaches an output stream outthru to it. Everything sent to the MD5 stream will also be duplicated to outthru. You may want, for example, to attach perr to your MD5 stream for debugging purposes. string outmd5::get_digest() closes the stream and returns the computed fingerprint in text form (ASCII-64). const unsigned char* outmd5::get_bindigest() closes the stream and returns a pointer to a 16-byte buffer with the binary MD5 fingerprint. Example: string cryptpw(string username, string password) { outmd5 m; m.open(); m.put(username); m.put(password); m.put("Banana with ketchup"); return m.get_digest(); } See also: iobase, outstm
http://www.melikyan.com/ptypes/doc/streams.md5.html
crawl-001
refinedweb
360
54.52
Opened 9 years ago Closed 9 years ago #4095 closed (invalid) 'Writing your first Django app, part 3' is incomplete Description There are few inconsistences in this part of tutorial. In general you seem to forgot about 'detail' method. - Section: "A shortcut: render_to_response()" says: "Note that once we’ve done this in all these views, we no longer need to import loader, Context and HttpResponse." while at this point we still have method 'detail' defined as follows: def detail(request, poll_id): return HttpResponse("You're looking at poll %s." % poll_id) So removing HttpResponse import breaks code. - Sections: "Raising 404" and "A shortcut: get_object_or_404()" are changing 'detail' method so it contains reference to: detail.html file, eg.: return render_to_response('polls/detail.html', {'poll': p}) but there is nothing about creating 'detail.html' in this tutorial. Furthermore, in part IV of the tutorial, there is statement: "Let’s update our poll detail template from the last tutorial, so that the template contains an HTML <form> element:" but we have no 'detail.html' template. Change History (1) comment:1 Changed 9 years ago by Simon G. <dev@…> - Needs documentation unset - Needs tests unset - Patch needs improvement unset - Resolution set to invalid - Status changed from new to closed Thanks for the report, but I'm marking this as invalid for the following reasons: 1) yes, HttpResponse is still needed at that point, but only for another five lines. 2) yes, you haven't created detail.html yet - it's coming. After a brief digression into 500/404 views (which really are quite important), detail.html is created in Use the template system.
https://code.djangoproject.com/ticket/4095
CC-MAIN-2016-07
refinedweb
268
55.64
Subject: Re: [boost] [log] [asio] Conflicting default configs From: Gavin Lambert (gavinl_at_[hidden]) Date: 2015-03-31 03:05:42 On 31/03/2015 13:41, Niall Douglas wrote: > On 31 Mar 2015 at 1:54,. > > Why don't you just use standalone ASIO internally? It has a different > ABI, and I believe is expected to not interact with Boost.ASIO. > > The internal copy can be generated using Chris's special "include all > of ASIO" magic file. Just fire it through a bit of python which > implements only the #include directive or use a STL excluding > preprocessor to generate a single file including all of standalone > ASIO. If you can tweak the namespace it uses to be something inside boost::log rather than whatever it defaults to, that should help avoid collisions between Boost.Log and either Boost.Asio or the user using standalone ASIO themselves. Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2015/03/221199.php
CC-MAIN-2021-25
refinedweb
168
67.45
Typically, the layout code of templated columns is defined at design time, but you might face situations in which using predefined templates is not the optimal solution and you need a dynamic template. For example, if you know in advance that a lot of changes must be applied at run time via events such as ItemCreated and ItemDataBound, there is no reason to define a static template, forcing the control to support a double effort: processing the template first and the changes next. Also, when users can change among different views of the same data, a dynamic template is preferable. Whatever your reason for adding templated columns dynamically, you face the problem of how to create a template programmatically. In such a situation, using an external ASCX file can help. A template for a column property is a class that implements the ITemplate interface. An instance of such an object can be created using the LoadTemplate method of the Page class. LoadTemplate takes only one argument: the name of the text file that describes the template. The file must have an ASCX extension—ASCX is the typical extension of user control files, formerly known as pagelets. You create a file-based template column using the following code: TemplateColumn tc = new TemplateColumn(); tc.ItemTemplate = Page.LoadTemplate("template.ascx"); The template file can be written in any .NET language and not necessarily in the language of the page. The Page.LoadTemplate method can be used to load the layout code for any template property of the column, including EditItemTemplate and HeaderTemplate. The ASCX user control you use to populate the templates of a column defines the HTML and the ASP.NET controls you want to employ. The following code shows the ASCX file that concatenates TitleOfCourtesy, LastName, and FirstName: <%@ Language="C#" %> <%# DataBinder.Eval(((DataGridItem) Container).DataItem, "TitleOfCourtesy") + " " + "<b>" + DataBinder.Eval(((DataGridItem) Container).DataItem, "LastName") + "</b>, " + DataBinder.Eval(((DataGridItem) Container).DataItem, "FirstName") %> The user control must indicate its language, even if the language is the same one being used in the hosting page. The following code shows the same user control written in Visual Basic .NET. You can use this code interchangeably with the previous code. <%@ Language="VB" %> <%# DataBinder.Eval( _ CType(Container, DataGridItem).DataItem, "TitleOfCourtesy") + _ "<b>" + _ DataBinder.Eval( _ CType(Container, DataGridItem).DataItem, "LastName") + _ "</b>, " + _ DataBinder.Eval( _ CType(Container, DataGridItem).DataItem, "FirstName") %> The ability to dynamically load templates from disk can be exploited to build an interesting application that lets users change the view of a templated column. Figure 3-8 shows what I mean. The DataGrid control in Figure 3-8 shows a footer that has been dynamically modified to span all columns. The footer contains a drop-down list with the available views for the templated column. The user selects the desired view and then clicks the Apply link button to enable it. You can place the controls for selecting the view mode anywhere on the page. A good place is in the footer of the templated column. In this case, you can use the <FooterTemplate> tag: <FooterTemplate> <b>View:</b> <asp:dropdownlist <asp:linkbutton <FooterTemplate> By default, the DataGrid control’s footer is disabled, and you turn it on by setting the ShowFooter attribute to true. Usually the footer is a blank row added at the bottom of the grid’s current page and needs a different structure from the rest of the rows. Let’s say you want the footer to span all the columns in the grid to become a table row with a single cell. Once again you have to resort to the ItemCreated event to accomplish this kind of structural manipulation of the grid items. The following code shows the strategy for creating the footer shown in Figure 3-9. if (lit == ListItemType.Footer) { // Remove 1st and 3rd columns in the original schema e.Item.Cells.RemoveAt(0); e.Item.Cells.RemoveAt(1); e.Item.Cells[0].ColumnSpan = 3; // Populate the drop-down list with available views // Each view corresponds to an ASCX file in the current folder DropDownList ddViews = (DropDownList) e.Item.FindControl("ddViews"); ListItem l; l = new ListItem("Ms. Surname, Name", "courtesylastfirst.ascx"); ddViews.Items.Add(l); l = new ListItem("Name Surname - (Ms.)", "firstlastcourtesy.ascx"); ddViews.Items.Add(l); // Select the previously selected element, if any // Need to preserve list state across postback events ddViews.SelectedIndex = Convert.ToInt32(ViewState["ViewIndex"]); } The view in Figure 3-9 is enabled when the user clicks the Apply link button. This button has been assigned a CommandName property. Any clickable element within the body of the DataGrid control raises an ItemCommand event when the user clicks it. This event is handled in much the same way it is for the DataList control, which I covered in Chapter 1. When you handle the ItemCommand event, you distinguish among the various elements that might have fired the event by using the CommandName attribute. In the example of Figure 3-9, I defined a link button with a command name of ApplyView. To handle the button click, you need code such as the following: public void ItemCommand(Object sender, DataGridCommandEventArgs e) { if (e.CommandName == "ApplyView") { DropDownList ddViews = (DropDownList)e.Item.FindControl("ddViews"); String strFile = ddViews.SelectedItem.Value; ViewState["CurrentViewFile"] = strFile; ViewState["ViewIndex"] = ddViews.SelectedIndex.ToString(); UpdateView(); } } The ItemCommand handler is registered through the OnItemCommand attribute of the <asp:datagrid> tag. <asp:datagrid/> You first retrieve the instance of the drop-down list control. The control ddViews is not accessible at the page level because of the implementation of ASP.NET templates. Each template runs in a separate naming container that makes it impossible for the run time processing the ASP.NET page to retrieve the drop-down list control by name within the page scope. So you cannot use ddViews, which is the ID of the drop-down list, as the programmatic identifier of the control in the page code. The ID ddViews makes sense only in the context of the template—a sort of child control with its own namespace—that contains it. To retrieve a valid instance of the drop-down list control, then, you need to resort to the following code: DropDownList ddViews = (DropDownList) e.Item.FindControl("ddViews"); Notice that FindControl called from the Page object or the DataGrid object will not work, because FindControl knows how to locate a control only in the current naming container. In the body of ItemCommand, when the command name is ApplyView, e.Item represents the footer. Calling FindControl within the range of the footer turns out to be successful. In ItemCommand, once you hold a reference to the drop-down list control, you pick up the name of the currently selected view and store it, as well as its index, in the Attributes collection of the grid. The drop-down list’s Value property contains the name of the ASCX file that represents the view, whereas its Text property points to a display name. The name of the file and its index need to be persisted across multiple invocations of the same page to guarantee that the control’s state can be correctly restored. Once you know the name of the file to load the template from, you are pretty much finished. The only remaining work is letting the DataGrid control know about the new template for one of its columns. In our running example, the templated column is the second column. The next code snippet explains how to change the item template dynamically. The full source code for the ColumnView.aspx application is available on the companion CD. TemplateColumn tc = (TemplateColumn) grid.Columns[1]; tc.ItemTemplate = Page.LoadTemplate((String)ViewState["CurrentViewFile"]); grid.DataBind(); The final call to DataBind causes the grid to redraw its user interface, taking into account all the changes to the columns. Unlike many other methods that are expected to read from disk, the Page object’s LoadTemplate method does not support streams and writers. If this support were possible, then in-memory strings could have been used to create dynamic templates. If you don’t want to, or can’t afford to, make your application dependent on external ASCX files, how can you create dynamic templates? Typically, you don’t want to deal with ASCX files because having several file names hard coded in the source is too restrictive. Another good reason to avoid disk-based templates is that it results in the template code being just one of the configuration parameters of the application. In this case, in fact, you might have all this information stored in a centralized medium such as a SQL Server table or an XML file. Is there a way to dynamically create a template from a string? I haven’t found any documentation that says it is possible or how to do it, but it may be that the documentation hasn’t been written yet. After looking at the programming interface of the involved classes, I haven’t been able to discover a way to do this yet. However, nothing really prevents you from creating a temporary file, writing the string that represents the layout, and loading a template from the temporary file. When you create temporary files from within an ASP.NET application, make sure that the file name is really unique for each concurrent session. For this purpose, use the Session ID or create a unique temporary file through the static method, Path.GetTempFileName. Bear in mind that the LoadTemplate method assumes it has been given a virtual path. On the other hand, stream and writer classes require absolute paths and don’t know how to cope with virtual paths. As a result, you come up with the following code to create and load a string-based template. // Create the column object TemplateColumn bc = new TemplateColumn(); // Create the temp file and write the template code String tmp = Session.SessionID + ".ascx"; StreamWriter sw = new StreamWriter(Server.MapPath(tmp)); sw.Write(strLayoutCode); sw.Close(); // Load the template from the temp file and add the column bc.ItemTemplate = Page.LoadTemplate(tmp); grid.Columns.Add(bc); // Delete the temp file File.Delete(Server.MapPath(tmp)); You need to use Server.MapPath to map a URL from a virtual to a physical path. You need to do this only when working with streams and files. When the ASP.NET run time processes a template, it parses the string, extracting the definition of the various controls that actually make it. These controls are then instantiated and added to the Controls collection of the naming container—typically a DataGridItem object. You can implement this pattern yourself by writing a made-to-measure class that inherits from ITemplate. A living instance of this class can then be assigned to any template property such as ItemTemplate. The ITemplate interface has only one method, which is named InstantiateIn. The method is called to populate the user interface of the container control with instances of child controls in accordance with the expected template. You can certainly come up with a flexible and configurable class that reads from an external source the controls to create and bind to data. More simply, though, you might want to write ad-hoc classes—one for each needed template. Figure 3-10 shows just this. The ITemplate class can be defined in the <script> section of an ASP.NET page as well as in separate C# or Visual Basic .NET class files that you link to the project. Another good place for this kind of code is in the code-behind file for the ASP.NET pages that use it. An ITemplate-based class looks like in the following code. class LastFirstNameTemplate : ITemplate { public void InstantiateIn(Control container) {...} private void BindLastName(Object s, EventArgs e) {...} private void BindFirstName(Object s, EventArgs e) {...} } In the body of InstantiateIn, you create instances of controls and add them to the specified container. For DataGrid controls, the container is an object of type DataGridItem. It will be DataListItem for a DataList control. In general, a container is any class that implements the INamingContainer interface. If the control being added to the container’s Controls collection has to be bound to a data source column, then you also register your own handler for the DataBinding event. When the event occurs, you retrieve the text out of the data source and refresh the user interface of the control. When defined for a server control, the DataBinding event handler is expected to resolve all data-binding expressions in the server control and in any of its children. Let’s consider the layout that we repeatedly encountered earlier: <%# "<b>" + DataBinder.Eval(Container.DataItem, "lastname") + "</b>, " + DataBinder.Eval(Container.DataItem, "firstname") %> The following code demonstrates a template class that obtains the same results. public class LastFirstNameTemplate : ITemplate { public void InstantiateIn(Control container) { container.Controls.Add(new LiteralControl("<b>")); Label lblLName = new Label(); lblLName.DataBinding += new EventHandler(this.BindLastName); container.Controls.Add(lblLName); container.Controls.Add(new LiteralControl("</b>, ")); Label lblFName = new Label(); lblFName.DataBinding += new EventHandler(this.BindFirstName); container.Controls.Add(lblFName); } private void BindLastName(Object sender, EventArgs e) { Label l = (Label) sender; DataGridItem container = (DataGridItem) l.NamingContainer; l.Text = ((DataRowView) container.DataItem)["lastname"].ToString(); } private void BindFirstName(Object sender, EventArgs e) { Label l = (Label) sender; DataGridItem container = (DataGridItem) l.NamingContainer; l.Text = ((DataRowView) container.DataItem)["firstname"].ToString(); } } A DataBinding event handler accomplishes two tasks. If programmed correctly, it should first take and hold the underlying data item. Second, it has to refresh the user interface of the bound control to reflect data binding. A reference to the involved control can be obtained through the always declared, but not frequently used, sender parameter. The container that hosts the control is returned by the NamingContainer property of the control itself. At this point, you have all that you need to set up and use another well-known ASP.NET expression: Container.DataItem. The type of the data item depends on the data source associated with the DataGrid. In most real-world scenarios, it will be DataRowView. What remains is to access a particular column on the row and set the control’s bound properties. The full source for the ITemplateClass.aspx application can be found on the companion CD.
http://etutorials.org/Programming/Web+Solutions+based+on+ASP.NET+and+ADO.NET/Part+I+Data+Access+and+Reporting/Templated+DataGrid+Controls/Creating+Templates+Dynamically/
crawl-001
refinedweb
2,371
56.86
StoicismTraditions and Transformations Edited bySTEVEN K. STRANGEEmory University JACK ZUPKOEmory University Contents page viiix List of ContributorsAcknowledgments xi List of AbbreviationsIntroductionSteven K. Strange and Jack Zupko 10 32 52 95 76 108 148 177 132 vi 198 214 12 Stoic EmotionLawrence C. Becker 250 Works CitedName Index 277291 Subject Index 293 Contributors vii viii List of Contributors Abbreviations CAGCCLCSELFDSLSPGPLSVF IntroductionSteven K. Strange and Jack Zupko Stoicism remains one of the most significant minority reports in the history of Western philosophy. Unfortunately, however, the precise natureof its impact on later thinkers is far from clear. The essays in this volume are intended to bring this picture into sharper focus by exploringhow Stoicism actually influenced philosophers from antiquity throughthe modern period in fields ranging from logic and ethics to politicsand theology. The contributing authors have expertise in different periods in the history of philosophy, but all have sought to demonstrate thecontinuity of Stoic themes over time, looking at the ways in which Stoicideas were appropriated (often unconsciously) and transformed by laterphilosophers for their own purposes and under widely varying circumstances. The story they tell shows that Stoicism had many faces beyondantiquity, and that its doctrines have continued to appeal to philosophersof many different backgrounds and temperaments.In tracing the influence of Stoicism on Western thought, one cantake either the high road or the low road. The high road would insiston determining the ancient provenance of Stoic and apparently Stoicideas in the work of medieval and modern thinkers, using the writingsof the ancient Stoics to grade their proximity to the genuine article;this would require paying close attention to the particular questions thatexercised thinkers such as Zeno and Chrysippus, in order to determinethe extent to which later figures contributed to their solutions. The lowroad, on the other hand, would focus less on questions that interestedancient Stoics and more on broader tendencies and trends, looking at theway Stoic doctrines were employed in new settings and against differentcompetitors, becoming altered or watered-down in the process. The1 Introduction the plural. There are essays addressed to how Stoic doctrines were understood in different historical periods and within specific philosophicaltraditions, as well as essays on the way Stoic ideas were transformed byhistorical and political circumstances, a process of appropriation thatcontinues to this day, as the essays by Martha Nussbaum and LawrenceBecker suggest. We hope that the present volume helps to set historicalparameters for further discussion of the traditions of Stoicism, or, rather,of its traditions and transformations.It is well known that the Stoics were the first philosophers to call themselves Socratics, but there has been relatively little study of the influenceof Socrates on individual Stoic philosophers. In The Socratic Imprint onEpictetus Philosophy, A. A. Long investigates the importance of Socratesas portrayed in Platos early dialogues for Epictetus Discourses. Socratesis Epictetus favorite philosopher, whom he treats as a model for his students not only for the Stoic theory of the preeminence of virtue overall other values but also for the practice of life, in self-examination, andin methodology. It is especially in the appropriation of Socrates characteristic method that Epictetus stands apart from earlier Stoics, as Longshows. He is able to provide numerous illustrations from the Discoursesof Epictetus use of the figure and method of Socrates, and especially ofthe striking portrait of Socrates and his protreptic and dialectic in PlatosGorgias. This allows us to see that the theory of preconceptions (prolepseis)or natural notions in Epictetus epistemology can help provide a solutionto some notorious problems about the workings of Socratic elenchus,along the lines of the interpretation proposed by the late Gregory Vlastos.Steven K. Strange, in The Stoics on the Voluntariness of the Passions,provides a fresh reconstruction and defense of Chrysippus view of theemotions and his unitary philosophical psychology. This defense is important, he maintains, if we are to properly understand the dispute betweenPosidonius and Chrysippus about the passions and its implications forStoic ethics and psychotherapy. The rejection of Chrysippus unitary psychology, which holds that the only motivational function of the humansoul is reason, is common to almost the whole tradition of moral psychology, but some of its virtues may have been missed, and its influence,especially on the history of the concept of the will, may have been obscured. Strange argues that the Chrysippean view is that the motivatingfactor in human action is always the judgment of reason, an assent tosomethings being good or bad in relation to the agent, but at the sametime, an emotion either a passion or, in the case of the wise person, agood emotion. This judgment of reason, of course, may be, and often is, false and even irrational (in which case it is also a passion), in thesense that it goes against things that the agent has good reason, and evenknows that he has good reason, to believe. The element of self-deceptionin such passional judgments is crucial. The nature of passions such asanger or grief as due to such passional judgments is illuminated by comparing them with the so-called good feelings, the emotions of the sage,and by examining the Stoic account of incontinence, which turns out tolie much closer to Aristotle than has generally been appreciated.In Stoicism in the Apostle Paul: A Philosophical Reading, TroelsEngberg-Pedersen shows how important it is to read between the lineswhen looking for Stoic influences in a text. For the Apostle Paul, likehis younger contemporary Philo Judaeus, uses Stoic ideas to articulatehis Jewish message, although its powerful, apocalyptic character makesthe influence of Greek philosophy harder to see. Engberg-Pedersenfocuses on Galatians 5:1326 as his proof text, wherein Paul tells theGalatians that they have no need of Mosaic law because Christian faithand possession of the spirit are sufficient to overcome the selfish, bodilyurges that would otherwise enslave them. The influence of Stoicismemerges in the structural similarity between Pauls notion of faith (pistis)and the Stoic conception of wisdom (sophia): just as the Christ-personis free in his obedience to the will of God an obedience that is, paradoxically, self-willed so in oikeiosis the Stoic comes to see himself as aperson of reason, liberated from the body in his agreement with the willof him who orders the universe. In both cases, the person who is trulyfree is able to reject the bonds of external law because the law has in animportant sense been fulfilled in him. Engberg-Pedersen concludes withthe suggestion that Paul is best understood as a crypto-Stoic thinkerbecause, although he did not think of himself as a philosopher, he usedStoic ideas very effectively in presenting the gospel of Christ.In his essay Moral Judgment in Seneca, Brad Inwood investigatesSenecas use of the metaphors of judicial deliberation and legal judgmentto illustrate the concept of moral judgment. He is able to show that this isa particularly rich analogy in Senecas moral thought, which he developsin an original way. It has been argued that Stoics developed differentcodes of moral conduct for sages and for ordinary moral progressors,but Senecas use of the juridical metaphor strongly suggests that thismay be a misinterpretation. For, in a number of places, as Inwood shows,Seneca contrasts a strict or severe judge with a more flexible and mercifulone, and claims that only a sage or wise person could be justified inimposing judicial severity, because everyone else (and indeed the sage Stoics. This makes it difficult to trace Stoic influences. Still, Ebbesen showsthat Stoicism is unmistakably present on several fronts: in the widespreaduse in medieval logic and grammar of the distinction between a signifying thing and thing signified, which shaped the eleventh- and twelfthcentury debate between the thing-people and word- or name-people(a.k.a. realists and nominalists); in certain un-Aristotelian additions toAristotelian logic, such as the properties of syncategorematic words andthe notion of logical consequence, the terminology of which is almost certainly derived from the Stoics; and in the Stoic doctrine of assent, whichemerges in a variety of places, from Peter Abelards account of moralgoodness to John Buridans definition of knowledge. Like the Stoics,scholastic thinkers also had a penchant for crazy examples that testedthe limits of their philosophical systems and, by the late thirteenth century, a conception of the sage-philosopher as embodying the life of virtue.In the end, though, it was what Ebbesen calls their community of spiritwith the Stoics that best explains the preservation and development ofStoic ideas by medieval philosophers in the absence of authoritative textsand teachings.In Abelards Stoicism and Its Consequences, Calvin Normore identifies an important strand of Stoic ethical theory preserved in Abelardsidea that the locus of sin is intention or consent. Just as the Stoics weredrawn by their assumption of a world determined by fate to hold thatmoral responsibility consists in assent, so Abelard, recognizing that theworld cannot be otherwise than God willed it, ascribes moral goodnessand badness primarily to intentions and only secondarily or derivativelyto actions. The prescription is similar in both cases: for the Stoics, weshould live in accordance with nature; for Abelard, we should will whatis objectively pleasing to God. Normore takes Abelards Stoicism to beembodied in the Philosopher of the Dialogue between a Philosopher, a Jew,and a Christian. Although his internalist conception of sin proved unpopular in the thirteenth century, it was kept alive by critics such as PeterLombard, eventually to be taken up again in the fourteenth century byWilliam of Ockham although this time without the crucial Stoic ideathat the actual world is in itself the best possible.The Reformation brought tremendous social and political upheavalto Western Europe, and in its wake Stoic ethics enjoyed a brief, butintense, revival in the late sixteenth and early seventeenth centuries.Jacqueline Lagrees essay, Constancy and Coherence, is addressed tothis encounter between Christianity and Stoic ethics. For Renaissancethinkers inspired by Seneca, constancy was a virtue pertaining to the military man in the heat of battle, not to the private citizen. But everything changed with the appearance in 1584 of the treatise On Constancyby the Flemish philosopher Justus Lipsius. Lipsius showed that Stoicismwas compatible with Christianity and that Stoic constancy manifests itself in the coherence and immutability of truths in the soul of the wiseman, who, in the midst of political turmoil, cleaves to universal law in theform of divine providence. This kind of Stoicism fit with the austere, rationalistic conception of religion being advanced by the reformers. Theresonance was probably not accidental John Calvin had himself published a commentary on Senecas De Clementia in 1532. In any case, Lipsiusmust have struck a chord in many of his readers, weary of decades of religious conflict, because he soon became the most popular author of histime. Other thinkers followed in his footsteps, the most successful beingGuillaume du Vair (15561621). Du Vair took Stoicism in a decidedlymore Christian direction, transforming pagan constancy into Christianconsolation by means of the theological virtues of faith, hope, and charity. But Christians always viewed the paganism of ancient Stoicism with acertain ambivalence. Eventually, Christian Stoicism fell out of favor andthe Stoic virtue of constancy came to be seen as illusory and idolatrous,completely put to shame by the virtue of patience with hope that fortifiedthe Christian martyrs. In the end, modern Stoicism became more of anethical and juridical attitude than a philosophy properly speaking.In his essay On the Happy Life: Descartes vis-`a-vis Seneca, DonaldRutherford looks at the reaction to Senecas work On the Happy Life inDescartes correspondence with Princess Elisabeth, and the light that itthrows both on Descartes attitude to the ancient Stoics and on the influence of their eudaimonism on his ethical thought. Descartes develops thelatter at length in his letters to Elisabeth in critical reaction to Senecaswork, claiming that his ethical theory represents a compromise betweenStoicism and Epicureanism. By identifying happiness with tranquillity anddistinguishing it from virtue and its cultivation, Descartes is led to abandon eudaimonism in favor of a proto-Kantian view, although his claimthat the crucial component in the exercise of virtue is not rationality butfreedom differs both from the Stoics and Kant. Importantly, Descartesalso breaks ethics free from the dependence on divine providence thatone finds in Stoicism, for an important factor in human freedom is thatdivine providence is inscrutable to us, a theme familiar from Descartesrejection of Stoic providentialism in science. Descartes also rejects theStoic ideal of extirpation of the passions, both in this correspondenceand in The Passions of the Soul. of justice and of material aid, which also helps illuminate some obscureareas of Stoic social thought. She closes with some interesting remarksabout the relevance of this dispute for the notion of property rights ininternational justice.In his essay, Stoic Emotion, Lawrence C. Becker reinforces and completes the argument of his important recent book, A New Stoicism, with anaccount of how his contemporary revival of a Stoic ethics would deal withthe important topic of emotion. Becker maintains that the proper Stoicposition, going back to the ancient Stoics, is that ones ethical perspectiveshould be shaped and determined by the best available scientific accountof the natural world and of human nature. He grants that advances inscience require the Stoic to give up some important elements of the ancient Stoic world view, in particular providentialism and a teleologicalconception of the universe, but argues that this does not really undercutthe Stoic approach to ethics and the good life. It may even reinforce it.He compares ancient Stoic accounts of the nature of emotion with thoseof contemporary psychological research and shows that they are not incompatible. Chrysippus claim that emotions are judgments will have tobe modified in the direction of Posidonius claim that there are standingirrational sources of motivation in humans, but this does not undermineStoic cognitive therapy of the emotions. A proper Stoic view of the emotions would incorporate the best available account of their role in humanpsychological health. And such an account might well be more compatible with a fundamentally Stoic approach to emotional life than the morepopular romantic view, which tends to overvalorize the emotions. 1The Socratic Imprint on Epictetus PhilosophyA. A. Long The honorable and good person neither fights with anyone himself, nor,as far as he can, does he let anyone else do so. Of this as of everything elsethe life of Socrates is available to us as a paradigm, who not only himselfavoided fighting everywhere, but did not let others fight either. (1.5.12)Now that Socrates is dead, the memory of what he did or said when alive isno less beneficial to people, or rather is even more so. (4.1.169) A version of this chapter has already appeared as chapter 3 of my 2002 book. Permissionfrom Oxford University Press to reprint this work is gratefully acknowledged. For my excerpts from Platos Gorgias I adopt (with ocasional changes) the translations of T. H. Irwin(Plato 1979). The translations of Epictetus are my own and, except as noted, are from theDiscourses. I also draw on some material included in my article Long 2000; this article formedthe basis for the paper I read at Emory Universitys Loemker Conference on Stoicism inApril 2000. Epictetus Philosophy 11 12 A. A. Long but in the business of life no one submits to such testing and we hate theone who puts us through it. But Socrates used to say that an unexaminedlife is not worth living (1.26.1718). Here as elsewhere (see 3.12.15)Epictetus quotes one of the most memorable of Socrates concludingsentences from the Apology (38a). His remarks about the hatred weextend to anyone who makes us give an account of our lives are a glossnot only on this context of the Apology but also on Socrates explanation of his elenctic mission and the Athenian prosecution to which it hasbrought him. 13 First, Socrates three interlocutors in this dialogue, mutatis mutandis, have great resonance and relevance for Epictetus students: Gorgias,the celebrated professor of rhetoric, who (as presented by Plato) caresnothing for the ethical effects of his words on his audience; Polus, anovereager discussant who cannot defend his conventionalist moralityagainst Socratic challenges; and, finally, Callicles, the ambitious politician, whose might is right concept of justice and extreme hedonismare pitted against Socrates claims for the cultivation of a well-temperedsoul. Epictetus discourses include or allude to contemporary equivalentsto Gorgias, Polus, and Callicles, each of whom, in his own way, is antithetical to the ideal Epictetus offers his students.Second, Socrates in the Gorgias does not simply announce thestartling propositions I have numbered AG. He puts most of them forward within the context of his elenctic discussion with Polus. Shortlybefore Polus replaces Gorgias as Socrates discussant, Socrates tellsGorgias:What kind of man am I? One of those who would be pleased to be refuted[elenchthenton] if I say something untrue, and pleased to refute if someone elsedoes, yet not at all less pleased to be refuted than to refute. For I think that beingrefuted is a greater good, insofar as it is a greater good for a man to be rid of thegreatest badness in himself than to rid someone else of it; for I think there is nobadness for a man as great as a false belief about the things which our discussionis about now. (458a = proposition A) In the course of his discussion with Polus, Socrates advances propositions CF against Polus attempts to defend the value of sheer rhetoricalpower untempered by moral integrity. When Polus objects that Socratesposition is absurd, Socrates gives him a lesson in elenctic discussion, contrasting that with the kind of rhetoric practiced in a court of law wheredefendants are often condemned on the basis of false witnesses. He acknowledges that Polus can adduce numerous witnesses who will attest tothe falsehood of his own position, but he dismisses them as irrelevant tothe kind of argument he and Polus are involved in: If I cant produce you,all alone by yourself as a witness agreeing on the things Im talking about,I think I have achieved nothing of any account in what our discussion isabout. And I dont think youll have achieved anything either unless I,all alone, bear witness for you, and you let all the others go (472bc). Asthe discussion proceeds, Socrates secures Polus agreement to premisesthat support his own position and conflict with Polus initial support forsheer rhetorical power. Although Polus is scarcely convinced, he admits 14 that Socrates conclusions follow from the premises that he himself hasaccepted (480e).Socrates procedure in his final argument with Callicles is similar. Atthe outset he tells Callicles that, unless he can refute Socrates thesisthat doing injustice with impunity is the worst of evils the thesis thatCallicles vehemently opposes Callicles will be discordant with himselfthroughout his life (482b). This prediction anticipates the outcome oftheir argument; for, after eliciting a series of reluctant admissions fromCallicles, Socrates tells him:Those things which appeared true to us earlier in the previous arguments areheld firm and bound down, so I say . . . by iron and adamantine arguments; so atleast it appears thus far. And if you, or someone more vigorous than you, doesntuntie them, no one who says anything besides what I say now can be right. Formy argument is always the same, that I myself dont know how these things are,but no one Ive ever met, just as now, is able to speak otherwise without beingridiculous. (508e509a) 15 by him in turn, he made a habit of testing and examining himself and was forever trying out the use of some particular preconception. That is how a philosopher writes. But trifling phrases like he said and I said he leaves to others.(2.1.3233)5 16 17 The Medea excerpt shares the lesson of the previous passage: that personssuffering from conflicting beliefs will only abandon the conflict when itis convincingly pointed out to them. Both passages endorse the centralSocratic proposition (which I numbered D earlier) that actions are alwaysmotivated by what the agent (however mistakenly and self-deceptively)thinks will be good for him or her. This intellectualist account of humanmotivation is, of course, extremely controversial, but it was absolutelycentral to Socratic ethics and completely endorsed by Epictetus. Theywere equally adamant that what is truly advantageous for any personmust always coincide with what is morally right.Next, I take a discourse (1.11) actually composed as a dialogue betweenEpictetus and an unnamed government administrator, who has come totalk with him. In the course of conversation the man tells Epictetus that hewas recently made so distraught by an illness affecting his young daughterthat he could not bear to remain at home until he got news of the childsrecovery. The point of the ensuing dialogue is to discover whether the father was really motivated, as he professed to be, by love of his daughter.The text is too long to be cited in full, so I offer this translation of sections 515, flagging the most recognizable Socratic features in brackets:epictetus. Do you think you acted correctly [orthos]?father. I think I acted naturally [phusikos]. [This is the belief to be examined.]epictetus. Well, convince me that you acted naturally, and I will convince youthat everything that occurs in accordance with nature occurs correctly.father. This is the way all, or at least most, fathers feel.epictetus. I dont deny that ; the question we are disputing is whether it iscorrect. For by this reasoning we would have to say that tumors are good forthe body because they occur, and that erring is absolutely in accordance with 18 nature because nearly all of us, or at least most of us, err. Show me, then, howyour behavior is in accordance with nature. [Pressure on the interlocutor toclarify his terms.]father. I cant. Rather, you show me how it is not in accordance with natureand not correct. [Confession of ignorance; inducement of aporia.]epictetus. Well, if we were investigating light and dark, what criterion wouldwe invoke to distinguish between them?father. Sight.epictetus. And if the question were about temperature or texture?father.Touch.epictetus. Accordingly, since our dispute is about things in accordance withnature and what occurs correctly or incorrectly, what criterion do you want usto adopt? [Socratic style of analogical or inductive inference.]father. I dont know. [Further confession of ignorance, and aporia.]epictetus. Ignorance about the criterion of colors and smells and flavors maynot be very harmful; but do you think that someone ignorant of the criterionof things good and bad and in accordance with or contrary to nature is onlyslightly harmed?father. No, that is the greatest harm.epictetus. Tell me, is everyones opinion concerning what is fine and propercorrect? Does that apply to all the opinions that Jews and Syrians andEgyptians and Romans hold on the subject of food?father. How could that be possible?epictetus. Presumably it is absolutely necessary that if the Egyptians opinionsare correct, the others are not, and if the Jews are fine, the others are not?father. Certainly.epictetus. Where ignorance exists, there also exists lack of learning and lackof training concerning essentials.father. I agree.epictetus. Now that you are aware of this, in future you will concentrate yourmind on nothing else than learning the criterion of what accords with natureand using it in order to make judgments concerning particular cases. Epictetus now gets the father to agree that love of ones family and goodreasoning (eulogiston) are mutually consistent, with the implication that ifone of them is in accordance with nature and correct, the other must be sotoo. Pressed by Epictetus, the father accepts that abandoning his daughterwas not a well-reasoned act. Could it, then, have been motivated, as heclaimed, by love? Through a further induction, the father is led to agreethat if the childs mother and others responsible for her welfare had actedlike himself, their behavior would not have been loving. Thus, Epictetusconcludes, the father, contrary to his initial belief, was not motivated byan excusably natural love for his daughter, but by erroneous reasoningabout the properly natural and right thing to do. The father ran away, heconcludes, because that was what mistakenly seemed good to him.11 19 The Socratic features of this elenchus are too obvious to need fullarticulation. By the end, the father has been brought to agree that (1) hisprotestations of love for his daughter conflict with the standards of love hewould apply to other people and (2) his initial appeal to the naturalnessof his action is incompatible with what he acknowledges to be naturalin the sense of being the normative and properly rational behavior. Thepoint of the exercise is also thoroughly Socratic not to blame or criticizethe father but to show him how his judgment and will went astray, failingto fit the affection that he had taken to be his motive.In order to understand the rationale behind Epictetus Socratic modeof argumentation, we need to clarify his claim that, although (1) human beings are innately motivated to seek their own happiness and toprefer right to wrong, (2) they typically hold beliefs that conflict withthe attainment of these objectives. Here is one of his clearest statements,taken from the beginning of the discourse entitled The Starting Pointof Philosophy (2.11):To make an authentic start on philosophy and to enter it at the front door, aperson needs to be aware of his own weakness and incapacity concerning absolute essentials. For in regard to a right-angled triangle or a half tone, we cameinto the world without any natural concept; it was through some expert instruction that we learned each of these, and consequently people who dont knowthem dont think that they do. But, on the other hand, who has entered theworld without an innate concept [emphutos ennoia] of good and bad, fine andugly, fitting and unfitting, and of happiness, propriety, and what is due andwhat one should and should not do? For this reason, we all use these termsand try to adapt our preconceptions [about them] to particular instances, saying:He has done well, or as he should, or as he should not; she has been unfortunate, or fortunate; he is just, or unjust. Which of us is sparing in using theseterms? Which of us holds back his use of them until he learns to do so likethose who are ignorant of lines or sounds? (6) The explanation is that we cameinto the world already instructed in this area to some extent by nature, andstarting from this we have added on our own opinion. By Zeus, you are right;I do know by nature, dont I, what is fine and ugly? Have I no concept ofthem? You do.Dont I apply the concept to particular instances? You do.Dont I apply it well?Thats where the whole question lies, and thats where opinion comes to beadded. Because, starting from those concepts they agree on, people get intoconflict as a result of applying them to instances where the concepts dont fit. Epictetus essential point is that everyone is innately equipped with a predisposition to form concepts that furnish the basic capacity to make valuejudgments. Because people are born with this equipment, they tend to 20 think, like his interlocutor here, that they know the specifics of goodness and happiness, or right and wrong, and can therefore make correctvalue judgments in particular cases. When Epictetus draws attention tothe conflicts that arise from misapplication of the natural concepts, heis referring not only to disagreements between persons but also to conflicts or contradictions that arise for the same reason within one person,like Medea.How does Epictetus theory about innate concepts of value relateto the Socratic elenchus, and what does Epictetus mean by positingthese concepts? I will approach these questions by taking the secondfirst.12The innate concepts or preconceptions (prolepseis), as he often callsthem, are explained as follows (1.22.12; and see 4.1.4445):Preconceptions are common to all people, and one preconception does notconflict with another. For which of us does not take it that a good thing is advantageous and choiceworthy, and something to be sought and pursued in everycircumstance? Which of us does not take it that justice is something honorableand fitting? When, then, does conflict occur? In the application of preconceptionsto particular instances, as when someone says:He acted in a fine way; he is courageous.But someone else retorts:No, hes crazy. Here Epictetus makes two big claims concerning peoples innate conceptsof value: first, any two people have the same preconception about thesame item; or to put it logically, they agree about the connotation of aterm such as good. Second, peoples stock of preconceptions formsa mutually consistent set of evaluative concepts or meanings. We mayrecall his comment in the preceding passage about starting from agreedconcepts.These are obviously very bold claims. However, their boldness is tempered by a very important qualification. What gives preconceptions theiruniversality and mutual consistency is their extremely general or formal content: everyone conceives good things to be advantageous andchoiceworthy, and bad things to be harmful and undesirable, irrespective of what they actually take to be good or bad in particular instances.More controversial, it may seem, is the claim that such universal andmutually consistent attitudes also pertain to the moral realm of justice,propriety, and so forth. Epictetus could reasonably respond that people ingeneral do agree on the positive connotations of justice and the negativeconnotations of injustice, taking these concepts or terms quite generally. 21 22 been exhaustively tested, satisfy the first assumption; and in the elenchushe elicits from his interlocutor latent but true moral beliefs that are foundto cohere with Socrates own judgments.Vlastos interpretation of the Socratic elenchus in Platos Gorgias iscontroversial.14 What matters for my purpose is not its exegetical correctness as regards interpretation of the Platonic Socrates, but its affinity toEpictetus methodology. As a Stoic philosopher, he takes himself to havea set of true moral beliefs which he can employ as premises, and he appeals to his interlocutors innate preconceptions as resources equippingthem to endorse those beliefs and thereby recognize the inconsistencyinfecting the particular desires and judgments with which they started.In Epictetus, innate preconceptions play a role very similar to the role oflatent but true beliefs in Vlastos interpretation of elenctic argument inPlatos Gorgias.15The criterial role and natural origin of preconceptions goes back toearly Stoicism, but Epictetus was probably unique in making them equivalent to an innate moral sense. Platonism has previously been suggestedas an influence on him, and that may be so in part; but Epictetus is notinvoking Platos fully fledged theory that a learner after birth can recollectprenatal acquaintance with everlasting truths. The Plato that interestsEpictetus is the author of what we today call the Socratic dialogues.Early in this chapter I noted the importance Epictetus attaches to theSocratic injunction on the worthlessness of living an unexamined life.Both in Plato and in Epictetus elenctic discussion is a methodology thatgets its participants to examine their beliefs by exposing unrecognizedinconsistencies and involuntary ignorance. Platos Socrates regularly askshis opinionated interlocutors to answer questions about what some moralconcept (piety or courage, for instance) is, with a view to subjecting theirresponses to elenctic examination.Epictetus follows suit. He characterizes Socrates as the person who saidthat the beginning of education is the scrutiny of terms (1.17.12), and inhyperbolical but authentically Socratic style he labels anyone who fails toknow what basic values are as going around deaf and blind, thinking heis someone when he is nothing (2.24.19). More particularly, he connectsthe standard Socratic question What is X? with his own diagnosis of theway people typically err: by heedlessly applying their preconceptions toparticular instances (4.1.41; cf. 2.11, quoted earlier):This . . . is the cause of everyones miseries. We differ in our opinions. One personthinks he is sick; no way, but he is not applying his preconceptions. Another thinks 23 he is a pauper, another that he has a harsh mother or father, and another thatCaesar is not gracious to him. This is one thing only ignorance of how to applypreconceptions. For who does not have a preconception of badness, that it isharmful, to be avoided, to be banished in every way? (4.1.4243) We are now in a position to see that Epictetus does not simply parrotthe Socratic elenchus but adapts it to his own didactic purposes, assistedby his special concept of universal and innate preconceptions. He may,as I noted in his refutation of the would-be loving father, use interpersonal dialogue that exactly mimics the procedures and goals of Socraticdialectic, and he offers his students a lesson in how to practice that, aswe shall see. But his preferred procedure in his intimate dealings withstudents is to show them how to practice the elenchus on themselves bygiving them such examples as the one preceding rather than engagingthem directly in dialogue. In just the same way, he urges them to interrogate their own impressions of particular things: For just as Socratesused to say we should not live an unexamined life, so we should not accept an unexamined impression, but should say: Wait, let me see whoyou are and where you are coming from. . . . Do you have your guaranteefrom nature, which every impression that is to be accepted should have?(3.12.15). Epictetus repeatedly expresses his cardinal rule of life in theformula: making correct use of impressions.17 That formula and themodel of the mind to which it belongs are neither Socratic nor Platonic.However, the material I have discussed here shows that Epictetus had anextraordinarily precise understanding of the methodology and goals ofthe Socratic elenchus. His main departure from Platos Socratic practice was in training his students to engage in dialogue with their individual selves and to use this as their principal instrument of moralprogress. 24 25 Have you already thought of entrusting your own body to someone, to lookafter it? Of course.Presumably to somone experienced in physical training or medicine? Yes.Are these your chief possessions, or do you have something else superior toall of them? What sort of thing do you mean?That which uses them, for heavens sake, and judges and deliberates eachthing. You mean the soul, I suppose?Correct; thats just what I mean.For heavens sake, I regard this as far superior to the rest of my possessions.Can you say, then, in what way you have cared for your soul? For its not likelythat as wise and politically prominent a man as you should carelessly and randomlylet your chief possession be neglected and ruined? Certainly not.But have you yourself cared for it? Did you learn to do so from someone, ordid you discover that by yourself ?So now theres the danger that, first, he will say:What business is it of yours, my fine fellow? Are you my master?And then, if you persist in bothering him, he may raise his hand and box yourears. There was a time when I myself was very keen on this activity, before I fellinto my present situation. (2.12.1725) 26 I take him in any case to be saying: try to converse with your interlocutorson their own ground, and even in your use of Socratically leading questions, be careful not to proceed in a peremptory manner that will simplyantagonize them. The advice to engage a big official is almost certainlyironical. That is to say, Epictetus is telling his students that they shouldnttry to outdo Socrates or worry about Callicles taunt of Socrates for preferring conversation with young men to engaging in politics. We shouldthink of Epictetus as recommending his students to use discourse that isappropriate to their interlocutors mind-set and social status.Readers of Plato are familiar with the way Socrates dialectical stylechanges in relation to his discussants; the Gorgias, as it moves from Gorgiasto Polus to Callicles, is a prime example. Similarly we find Epictetus varying his dialectic in relation to the age and background of the individualshe talks to. When he meets Maximus (3.7), an administrator and anEpicurean, he converses with him, as one philosopher to another, anddoes not avoid technicalities. In contrast, when a rhetorician on his wayto Rome for a lawsuit consults him about his business (3.9), he informsthe man at the beginning of their conversation that the kind of advice heis qualified to offer him cannot be provided in a brief encounter.Here we observe Epictetus drawing on the Platonic idea of dialectic asa cooperative undertaking, wherein the questioner or philosopher no lessthan the respondent submits his judgments to examination. Viewed inthe light of this passage, the mock questioning of the prominent Romanwas bound to fail because the conversation lacked the give-and-take andthe mutual respect and encouragement of a properly Socratic encounter. 27 28 29 lessons to teach his students. Given his time and place, as a Greco-Romanphilosopher training Greco-Roman youths, Epictetus had to adapt theSocratic paradigm to some extent. And so his Socrates, like himself, is apaternalist rather than a pederastic mentor, which would not have suitedthe mores of his day. What we are left with, when all due qualificationsare made, is the most creative appropriation of Socrates subsequent tothe works of Plato and Xenophon. Notes1. For the influence of Socrates on Stoicism, see Long 1988 and Sedley 1993,and chapters in van der Waerdt 1994. Socrates importance to Epictetus isbarely mentioned in the classic works by A. Bonhoffer (see note 22). Studiesthat deal with it in some detail include Doring 1979 and Gourinat 2001, whosearticle includes reference to my present paper.2. Zeno is the only Stoic who is attested to have written a work entitled Elenchoi(Diogenes Laertius 7.4), and no form of the word occurs in the survivingfragments of Chrysippus or other Stoics. They did include in their dialecticknowledge of how to discourse correctly on arguments in question and answerform (see Long 1996: 87), as Epictetus himself acknowledges (1.7.3); but bytheir time this specification had become too standard to allude specificallyto Socratic discussion. As for irony, it was officially excluded from the sagescharacter (whose virtues include irrefutability [anelenxia]) and treated as amark of inferior persons (SVF 3:630).3. It has been suggested to me that Epictetus was probably influenced by histeacher Musonius. Maybe. But in the record of Musonius discourses theallusions to Socrates are commonplace and do not in the least recall Socraticdialectic.4. See Jagu 1946: 16165.5. The text continues: either those who are blessed with leisure or those whoare too stupid to calculate logical consequences. With some hesitation, Isuggest that the first group refers to the likes of Plato, who as a literal writerof dialogue understands the progression of the arguments he records, andthe second group to people who merely parrot the form of Platonic dialogues.Epictetus may well be alluding to Socrates definition of solitary thinking aswriting in ones soul (Philebus 39a). Special thanks to David Sedley for adviceon this passage.6. As a further allusion to the Gorgias, I note 4.1.128 where, in reminiscenceof Socrates at 506c, Epictetus formally reviews the premises to which hisimaginary interlocutor has agreed.7. Epictetus paraphrases Gorgias 474a.8. At 3.21.19 Epictetus says that Socrates was appointed by God to hold theelenctic position. For further discussion of this passage and for Epictetusassociation of elenctic with protreptic, see Long 2002: 5457.9. See Long 2002: 98104. 30 10. Medea was one of the Stoics favorite mythological paradigms for a noblenature gone wrong: see Gill 1983. Epictetus refers to her again at 2.17.19and 4.13.15. The lines he cites here are from Euripides, Medea 107980.11. Throughout Epictetus refutation, he is careful to describe the fatherswrongly chosen act by the expression what seemed good to you (hoti edoxensoi). Thus he explicitly registers Socrates fundamental distinction in theGorgias (see proposition F) between doing what one thinks (perhaps mistakenly) will bring about ones good (the universal human desideratum) anddoing what one wants (getting that desideratum).12. The classic study of Stoic concepts (ennoiai) and preconceptions (prolepseis)is F. H. Sandbach, in Long 1971: 2237, who argues for Epictetus novelty intreating the latter as innate ideas in the moral domain. Epictetus devotesan entire discourse (1.22) to preconceptions; see also 2.17.13. See Vlastos 1991: ch. 4, and Vlastos 1994: ch. 1.14. See Benson 1995.15. After first proposing affinity between Vlastos and Epictetus Socrates, Isubsequently discovered that in late Platonism the role I assign to innatepreconceptions in Epictetus adaptation of the Socratic elenchus is playedby common notions (koinai ennoiai). Olympiodorus, in his commentary onPlatos Gorgias, describes common conceptions (from which all demonstrative proof should start) as including God-given foundations for acting rightly(Olympiodorus 1970: 51). According to Olympiodorus, Socrates refutesGorgias by showing that Gorgias admission that an orator may do wrong is inconsistent with Gorgias endorsement of the common notion that an oratorknows what is right (63). Tarrant 2000: 11617 finds that [w]hile Olympiodorus has actually anticipated many of Vlastos claims about Socratesarguments, in particular about the presence of true moral beliefs residingwithin questioner and interlocutor, he explains this with reference to onesawareness (conscious or unconscious) of common notions.16. On the semiofficial status of being Caesars friend, see Millar 1977: 11022. Epictetus satirizes the danger and enslavement it might involve at4.1.4750.17. See 1.1.7; 1.3.4; 1.6.13; 1.7.33; 1.12.34; 1.20.15, with discussion in Long 1996:ch. 12, 27582.18. Here Epictetus engages in a made-up example of Socratic dialogue, to illustrate how to expose a persons contradictory beliefs about malice (phthonos).The text is too condensed to make the argument thoroughly clear. I conjecture that it should have the following structure:The interlocutor starts by taking malice to be pleasure taken in someoneelses misfortunes (cf. Plato, Philebus 48c). Under challenge, he accepts thatmalice is a painful emotion, contradicting his initial claim. He then agreesthat it cannot be a pain aroused by others misfortunes. So he is promptedto redefine malice as pain taken in someone elses good fortunes (cf. SVF1:434), a complete reversal of his starting point.19. I draw on Barnes 1997: 29, for this skillful translation of Epictetus logicaljargon. 31 20. See Plato, Gorgias 485d, where Callicles charges Socrates with twittering ina corner with three or four young men.21. Barnes 1997: 29.22. Bonhoffer 1890 and 1894.23. Fr. 1, cited by Stobaeus (1974: 2:1314. For discussion of this text, see Long2002: 14952. 2The Stoics on the Voluntariness of the PassionsSteven K. Strange 33 Steven K. Strange 34orexisdesire/pursuit(toward futureapparent good) 1. epithumiaA. boulesislust (for futurewish (formerely apparent future realgood)good) eparsiselation (uponpresent apparentgood) ekklisisavoidance(away fromfuture apparent evil) 2. hedoneB. khara3. phobosC. eulabeiadelight (uponjoy (uponfear (againstcautionpresent merely present real future merely(againstapparent good)good)apparent evil) future realevil) sustolecontraction(upon presentapparent evil) 4. lupe[No eupatheiadistress (upon correspondingpresentto present realmerelyevil]apparent evil) and is not merely apparently so. As such, wish belongs to the class ofthe eupatheiai or rational affections in Stoic theory that is, it is one ofthe emotions or analogues of emotion that are possessed solely by thesage or wise person, which in his case have come to replace the normalpassions or pathe , which according to the Stoics are diseased conditionsof the vicious soul, and which they insist on calling by separate names tobring out the crucial difference between the two kinds (see Figure 1).It is nevertheless a species of the same genus, pursuit or desire (orexis),as epithumia or lust, bad or irrational desire (which aim at things thatseem to be good but are not). Wish, as opposed to will, is thus a kind ofdesire, not a faculty, and has no obvious connection with the notion ofchoice; moreover, it is by definition aimed at the genuine good, so thatthe idea of a bad wish or will makes no sense. Indeed, it is only possessedby the fully morally good person: a bad person cannot be said to haveboule sis.3 As far as I am aware, it seems to be Saint Augustine, in his DeLibero Arbitrio, who first clearly introduces the notion of a bad voluntas but even Augustines bad voluntas is aimed at things, namely earthly orcorporeal goods, that are good considered in themselves, just not goodfor the spiritual beings that he considers us to be. Augustine, even if hedoes not as some might want to suggest create the modern notionof the will, is a much less remote ancestor of it than are the Stoics. (Yethis notion of voluntas seems to owe something to the Stoic conception ofboule sis, as well as to Platos and Aristotles.)The Stoic notion of boule sis thus lies fairly far from its descendant,the later and familiar notion of the will. The specific contribution ofStoic moral psychology to the history of the notion of moral choice doesnot seem, therefore, to lie in the term used for its faculty. Of greaterimportance here is the Stoic conception of what makes someones action, 35 36 bad and harmful to ourselves (as well as to others), and are impedimentsto our happiness and freedom, that we should and must strive to andlearn to avoid feeling them.Now, even if we might be inclined to concede the first of these claims,the great majority of us would surely, at least initially, strongly resist thesecond, namely that we would be much better off with all of our passionsextirpated. I do not propose to defend this thesis here, although it iswell worth reflecting upon in the course of a close and careful study ofEpictetus Discourses or Senecas Letters. My concern is rather with the firstclaim, that passions are voluntary and that we therefore can in principlelearn to avoid having them. This seems generally taken to be a ratherstraightforward if paradoxical claim on the part of the Stoics. However,it seems to me to be very closely bound up, in interesting ways, with therationalism and monism of their moral psychology and to merit closerdiscussion for this reason. Moreover, I think consideration of their viewsabout this can throw some interesting light on the general Stoic conception of responsibility and the nature of their compatibilism (by which Imean the compatibilism of Chrysippus). Perhaps the discussion mighteven throw some light on the antecedents of the original notion of thewill. 37 affect the evaluating subject. Hormai, the intentional motions of the soul,are not dispositional states of desire or aversion, but actual occurrentmotions of the soul toward or away from prospective or actual valued ordisvalued objects. The theory can countenance dispositional desiderativestates as tendencies to pursue or avoid certain kinds of objects,8 but hormaithemselves are always actual movements occasioned by the (also actual)positive or negative evaluation of some practical property.The summa genera of passions are four (lust or appetite, fear, delight,and distress), depending on whether the associated object is present orprospective, and conceived as good or bad, with three summa genera ofrational affections (wish, caution, and joy there is no rational affectionhaving as its object a present evil, briefly because the only real evil is vice,which can never be present for a sage) (see Figure 1). Passions differ fromthese latter, fully rational hormai in being disturbances of the soul: they arereferred to by Stobaeus source (SVF 3:378) as flutterings (ptoiai), and asimilar thought may lie behind Ciceros somewhat misleading translationin the Tusculan Disputations of pathos by perturbatio and commotio.9 We maythus think of them as analogous to the disorderly and disturbing motionsof soul that have been eliminated in the competing Epicurean ideal ofataraxia. The usual Stoic term for the analogous state, attained by theirsage, is apatheia or absence of passion. (Dispassionateness is perhapsthe best available translation.) The sage experiences instead of passionsthe calm or orderly motions of the soul that are the eupatheiai (which arestill a type of hormai).Connected with the notion of passions as disturbances within the soulis Zenos description of them as irrational and excessive impulses. According to Diogenes Laertius (7.110; cf. Cicero, Tusculan Disputations 4.11),Zeno defined passion as an irrational or unnatural motion of the soul,and as a horme pleonazousa, an excessive impulse, one that goes beyondthe bounds or reason.10 It appears from the parallel with the Ciceropassage that going beyond the bounds here means turning away from(or, elsewhere, disobeying) right or correct reason.11 This is capturedin Chrysippus famous comparison of a walking person, who is able tostop moving immediately any time he wants, with a runner, who is unableto stop himself right away. This disobedient violation of the appropriatebounds set by reason for the relevant motion of the soul is presumablythe same as the certain voluntary intemperance that Cicero says Zenodeclared to be the mother of all emotions (i.e., passions, Academica 1.39).Passions represent disturbances in the soul more precisely, in the soulshe gemonikon or controlling part precisely because they exceed rational 38 bounds. For the souls he gemonikon is the mind or thought (dianoia), anda disturbance in it is a disturbance of its rationality. We can come to seethis by reflecting on what is meant by the Stoic talk of a motion of thesoul. The Stoics, of course, do not conceive of the soul as an immaterialentity: the soul, and in particular the he gemonikon or controlling portionof the soul, which is also the mind (dianoia), is a bit of pneuma, a mixtureof corporeal air and fire that is concentrated about the main organ ofthe body, which for the Stoics is the heart.So the motion of the soul, under one aspect, is a literal movementwithin this pneuma, which is itself a part of the active element in the universe, which the Stoics identify with Zeus or God. Under another aspect,the motion of the soul is intentional: in the case of appetite and fear,it is a movement of thought toward what is judged to be an apparentlygood object, or away from an apparently bad one. These movements arealso called desire or pursuit (orexis) and aversion (ekklisis). In the caseof delight or pleasure and distress, where the material movement corresponding to the passion is an actual physical swelling (elation, eparsis)or contraction (sustole ) of the soul, presumably again to be consideredas the region of pneuma around the heart, the intentional aspect is anopinion (doxa) about the presence of some good or evil object. So whilehorme quite literally is a motion in the pneuma constituting the he gemonikoneither outward, toward something that is desired or welcomed, or awayfrom something unwanted or rejected, it is also at the same time a movement of thought, as I said earlier. These two aspects of the movementwhich is the horme , the literal, spatial movement toward the object, andthe intentional movement of thought, are not in any way separate: theyare quite literally two ways of looking at the very same phenomenon from the outside and from the inside, as it were, from the third-personand from the first-person perspective, respectively.Note that passions have been defined not only as pursuits and avoidances but as out-of-bounds impulses (hormai).12 This applies in the firstinstance to lust and fear (corresponding to the eupatheiai wish andcaution), which are the primary forms of passion: delight and distress(and joy) are said to supervene (epigignesthai) on these,13 perhaps asthe results of satisfaction of lust (or, respectively, wish) or its opposite.Officially, orexeis and ekkliseis are species of hormai (or, respectively, aphormai), impulses: strictly, they are impulses toward actual, occurrent pursuits and avoidances of immediately available objects, as opposed to onesexpected to be pursued or avoided in the future.14 Most discussionsnaturally focus on the case of orexis or desire (or, sometimes, ekklisis or 39 avoidance), though presumably one could also have an excessive (or, respectively, nonexcessive and fully rational) occurrent impulse toward theobject of some future plan for example, that one will be prepared to dowhatever is necessary when one reaches the age of thirty-five to becomepresident of the United States. Some notion of moral evaluation of future plan or intention, if that is what orousis is, would seem to be required,as Inwood notes, and orousis will clearly also be up to us. Nonrationalanimals also have impulses, but in this discussion we are only concernedwith rational impulses, that is, the impulses of rational animals, all ofwhich are either actual or future pursuits (or avoidances) on the basis ofjudgments.I have focused so far on the standard Old Stoic definitions given in ourtexts of passions as motions, that is, impulses of soul, and have broughtin the intentional element, and the element of judgment, as consequentupon this though, as I have stressed, these should not be seen as twoseparable aspects of the phenomenon but two ways of looking at the samething. Chrysippus, however, notoriously seems to have given definitionsof the passions merely in terms of judgment (krisis) and opinion (doxa) as the judgment, which is also an opinion, that there is a good or evil inthe offing, or now present, which good or evil is of a certain particularsort without making a reference to motion or impulse a part of thisdefinition. For this he was notoriously criticized by Galen, who accusedhim of contradicting the Zenonian definitions of passions in terms of thesouls motion or horme . Galen probably is following Posidonius on thispoint, because it appears from some of the evidence in Galens Doctrinesof Hippocrates and Plato that Posidonius was concerned not only to refutewhat he saw as Chrysippus extreme monistic rationalism in psychologybut to claim Stoic orthodoxy for his new, more moderate (and Platonicallyinfluenced) moral psychology by arguing that it was more in line with themoral psychology of Zeno and Cleanthes,15 the canonical founders ofStoicism, than was Chrysippus own. (To what extent Posidonius alternative, less rationalistic moral psychology might have also involved a rejection of psychological monism is a question I wish to leave to one side.)16I agree with the those scholars who argue that this criticism ofChrysippus definitions is not well grounded. For one thing, it is clearfrom Galens evidence that Chrysippus attempted to defend Zenos definitions in terms of his own. This sort of phenomenon is not unknownelsewhere in Stoicism. Every important Stoic tried to give his own individual definition of the telos or happiness, for instance, but we should notthink that in so doing that they meant to imply that Zenos definitions 40 41 42 and their opposites that is, with every valued and disvalued experience.That emotions are at base desires is not an idea that is new with theStoics: Aristotle defines emotions in terms of desires, for instance in hiswell-known definitions of anger in the De Anima and the Rhetoric. Whatis significant about the Stoic theory is that it allows us to see how alldesires or at least all practical desires, ones capable of being put intoaction (for I could desire that some purely theoretical claim be true,for instance that there be life on Mars)20 can be conceived along thelines of emotions. For all such desires will be either rational or irrationalpursuits or avoidances. Moreover, all actions have such desires as theirmotivations. So the scope and role of the emotions and passions in Stoictheory turn out to be very broad. 43 44 45 7.3 is due to it being Socrates views on akrasia that are there being discussed. But Aristotles physical solution to the Socratic puzzle aboutakrasia is remarkably similar to the Stoics own picture, as I have developed it, especially in attributing akrasia to beliefs that conflict, but notper se (logically).For the Stoics, in cases of (real or apparent) weakness of will, it is notpassion that is opposed to reason: rather, reason is opposing itself, reasonis as it were disobeying reason. If one decides to do something againstones better judgment, as we say, what is occurring is that there are twojudgments in the soul that are opposed to one another, the one declaringthat it is appropriate to perform a certain action A, and the other thatit is not right to do so.26 Logically, these two judgments are, of course,contradictory, but this only entails that they cannot both be entertained,or in Stoic terms assented to, by the same mind at the same time. One willfind oneself wavering between them. Copious evidence in the form of literal quotations from Chrysippus treatise on the passions cited by Galenin the central books of his work On the Doctrines of Hippocrates and Plato(evidence that Galen himself seems often willfully to misunderstand) reveals that Chrysippus tried to show in detail the sorts of psychologicalmechanisms a person may engage in so as to accommodate the presenceof such a contradiction in her practical commitments. He compares thiskind of situation, for instance, with a persons refusal to believe anythingbad about his lover, even though one may possess the incriminating report on unimpeachable authority. That is, one may decide not even toentertain a certain proposition, in the knowledge that if one did entertain it, one would be compelled to assent to it. Chrysippus, as reported byGalen, quotes a fragment of the comic poet Menander: I got my mindin hand and stuck it in a pot.27Chrysippus also seems to have undertaken an elaborate exegesis ofMedeas famous monologue in Euripides play (Medea 102180), whereshe addresses her own thumos or anger at her husband as the motive forher prospective murder of her children, and utters the famous lines:I know what kind of evils I am about to dobut anger [thumos] is strongest among my deliberations.28 46 Epictetus and Chrysippus, is her judgment that it would be an overwhelmingly good or appropriate thing for her to take the most terrible revengeupon her husband for his abandonment of her. Of course, she also thinksthat it would also be a good thing to spare her childrens lives. If she alsothinks that this would be an appropriate action, then we would find her inthe sort of akratic situation mentioned earlier: the overwhelming powerof her positive evaluation of revenge would cause her just to refuse tothink about the wrongness or moral inappropriateness of the murderof the children, at least until after the savage act has been carried out.But this does not seem to be Epictetus analysis of the situation. On hisinterpretation, Medea thinks that these two goods are in conflict but thatrevenge in the present situation is more profitable (sumphoroteron) forher than saving her children (1.28.7). Of course, this belief is false: thegratification of anger is never a real good, only an apparent one, and asEpictetus repeatedly insists throughout his Discourses, real goods actuallycannot conflict: the world and human life are so constructed that truegoods form a single, coherent, mutually realizable structure. But Medea,not being a Stoic and trapped in the confusion of ordinary human moralbeliefs, fails to realize these facts. Given her beliefs, she must do whatshe thinks she must do. Epictetus even pretends to admire her for thestrength of her soul (2.17.21). This is odd, since it seems we should analyze Medeas akrasia as the species of it that Chrysippus called fromweakness rather than from rashness.29To express admiration for Medea may seem a strange thing for a Stoicmoralist to say, but in fact it is quite consistent with the general Stoic attitude about the moral evaluation of wrongdoing. I return to this pointin closing. For now, I want to point to another important feature ofMedeas situation, namely the relative intensity of the evaluative judgment involved in the passion. Chrysippus emphasizes that in the case ofa strong passion at least, the object is taken by the agent as being overwhelmingly good or bad (as being the greatest good or evil). I suggestedearlier that this was the gist of the second in Chrysippus analysis of thetwo judgments that found emotion. We know that in his psychotherapyhe emphasized against Cleanthes the importance not only of trying topersuade people of the Stoic doctrine that nothing outside the mindis actually really good or evil and hence that no pathos can be justified,since all passions involve positive or negative evaluative judgments aboutexternal objects but, first, that external things are less important thanwe ordinarily think, and less than virtue and vice (a view, of course, thatPeripatetics and Platonists agreed upon with the Stoics). 47 48 49 the good appears to us. But Aristotles position in that chapter thatwe can be held responsible for our bad moral characters because we areresponsible for the voluntary actions that went into our habituation thatproduced those characters wont do, because by Aristotles own lightshow the good appears to us, that is, whether we possess a concept of thegood that is at all adequate, already depends on our character. Hence,Aristotle is committed to saying that a person can be held responsible forher conception of the good if she is already assumed to be responsiblefor her character, so that Aristotles argument is circular. It is clear whatthe Stoics way of dealing with this sort of problem will be: to insist onthe basic principle that we are responsible for how the good appears tous. This also means that they reject Aristotles further claim that a viciousperson has irretrievably lost the proper conception of the good and henceis a morally hopeless case. This is not true for the Stoics fortunately so,since all of us, at least nowadays, are vicious fools.The notion of power or possibility appealed to here must be understood within the context of Stoic (i.e., Chrysippean) determinism.Chrysippus holds that everything that happens, happens in accordancewith fate (heimarmene ), and thus must occur in exactly the way that itdoes both because it is causally determined to happen in just that way,and because in so doing it makes a contribution to the overall perfection of the universe (universal teleological determinism). Included ineverything that happens are, of course, all human assents, pursuits andavoidances, and emotions, which are just as fated as is anything else.Nevertheless, these are held by the Stoics to be open to moral evaluationand praise and blame. Important passages from Cicero (De Fato 4044),Aulus Gellius (Noctes Atticae 7.2.114), and Plutarch (De Stoicorum Repugnantiae), supplemented by some others (perhaps, for instance, Origen DePrincipiis 3.16) show us Chrysippus way of constructing a compatibilistdefense against objectors who claim that Stoic determinism underminesthe legitimacy of any attribution of responsibility. I think we are now ina position to see how this defense works. It crucially depends on the voluntariness of the passions, our control over our evaluative attitudes andjudgments.Notes1. Its origins have often but usually unconvincingly been discussed. They havein recent years been the subject of a set of Sather Lectures by Michael Frede,which will hopefully soon be published.2. Cf. Cicero, Tusculan Disputations 4.12, Diogenes Laertius 7.115, and SVF 3:173,also Inwood 1985: appendix 2, p. 237. 50 3. Compare the conditions of the vicious persons described in Platos Republic, books 7 and 9, where boule sis seems to be, even if present, at least notoperative in the preferred way, as a desire for the persons true good.4. In Varros speech in Academica 1.4041.5. This is somewhat of an oversimplification, since we do not know the precisedetails of the Stoic view of practical judgment (cf. Bobzien 1998: 24041);however, it seems safe to assume that it involved assent to or approval of apractical impression, analogous to assent to the truth of an impression inthe theoretical case.6. One could mention here, among others, Frede 1986, Engberg-Pedersen1990, Nussbaum 1994 (and Nussbaum 2001), Brennan 1998, and the relevant section of Long and Sedley 1987.7. See Inwood 1985: 22829. Epictetus makes orexis or desire or pursuit andhorme distinct genera, the former seemingly directed at the good (and awayfrom evil), and the latter directed at appropriate or preferred objects. Iassume that this represents a later development in the Stoic theory.8. Perhaps this what is meant by hexis horme tike at Stobaeus 2.86.17 = SVF 3:169.9. But perhaps commotio is just a translation of kine sis.10. Cf. huperteinousa ta kata ton logon metra, Clement of Alexandria in book 2 ofthe Stromateis (SVF 3:377).11. Cf. Chrysippus in LS 65J4 for both expressions.12. Recall again that the eupatheiai, by way of contrast, are properly restrainedimpulses.13. LS 65A4.14. Here the contrast term to orexis is orousis; on all this, see Inwood 1985,appendix 2.15. Galen 1981: 4.3.2, 4.4.38, and 5.6.3338, respectively.16. See Cooper 1999, following the lead of Fillion-Lahille 1984.17. Cf. also the boiling of the blood around the heart in his physicists definition of anger in Aristotle, De Anima 1.1 fin., corresponding to the intentionaldialecticians definition as a desire for revenge.18. The details are complex and controversial. It is, however, clear that the termprosphatos applied to the value judgment causing an emotion was originallydue to Zeno. The secondary judgment seems to have been brought in byChrysippus to explicate this term of Zenos.19. I assume the accusatives ti and he don are to be retained; cf. also 250.9 and342.30 Muller.20. Cf. LS 53Q4.21. Epictetus (Encheiridion 1 init.) lists as what is up to us is hupole psis, supposition (which includes belief), impulse, desire, and aversion, and what areour doings [erga]. All the items on the list are, in terms of the Old Stoicclassification, either assents or hormai, and thus depend on assent.22. Compare Seneca, De Ira 3.3, and Posidonius as reported by Lactantius, On theAnger of God 17.13 (reporting the contents of the lacuna at Seneca, De Ira 1.2).23. Galen 1981: 5.7, claims that the argument establishes indifferently either thatthere are different parts or different faculties (of reason and appetite, orappetite and spirit) within the soul (cf. 5.7.4950). He does not seem to 24.25.26.27.28.29.30.31.32. 33. 51 3Stoicism in the Apostle PaulA Philosophical ReadingTroels Engberg-Pedersen In 1949 Max Pohlenz, the doyen of early twentieth-century German scholarship on Stoicism, published an article, Paulus und die Stoa, in whichhe discussed the first few chapters of the apostles letter to the Romans andthe Christian historian Lukes account of Pauls speech on the Areopagusin chapter 17 of the Acts of the Apostles. Pohlenz was asking about theStoic credentials of various ideas in the two texts. He concluded that inPaul there was nothing that went directly back to Stoicism. Instead, anyStoic-sounding ideas had come to Paul through Jewish traditions thatwould rather reflect some form of middle Platonism. In Luke, by contrast, there is a direct reminiscence of Posidonius.In 1989 Abraham J. Malherbe, Buckingham Professor of the NewTestament at Yale Divinity School, published a book, Paul and the Popular Philosophers.1 He argued that in a number of individual passagesin the letters, Paul was interacting directly with specific motifs derivedfrom the moral exhortation of philosophers like the Cynics, Stoics, andEpicureans. Paul need not have read, for instance, Chrysippus. But hehad an easy familiarity with the moral discourse of the popular philosophers of his own time, as exemplified to us by his near contemporariesSeneca, Dio Chrysostom, and Epictetus.In Paul and the Stoics (2000), I argued that Paul is relying on centralideas in Stoicism even when he states the core of his own theologicalthought. This development should be of some interest to students ofStoicism. If we follow Pohlenz, we would say that while direct interaction between Christianity and Stoicism did begin in the New Testament,it is only reflected in a relatively late text that may be no earlier thana.d.100. Also, we would point out that the Stoic ideas that Luke ascribes52 53 to his Paul in the major part of the Areopagus speech (Acts 17:2229) are only to be understood as a foil for his distinctively Christianclaims made toward its end (17:3031). Furthermore, the speech isfollowed by partial rejection by some of those present (17:32), including some Epicurean and Stoic philosophers (17:1820). When theyheard about the resurrection of the dead, they laughed (17:32). So,even if Luke relied on Posidonius, the difference between Stoicismand Christianity was apparently felt to be more important than thesimilarity.If we follow Malherbe, we would move the time of direct interaction between nascent Christianity and Stoicism back by about fifty years to Paulsown letters. And we would not presuppose any basic contrast betweenPauls use of the specific motifs of moral exhortation and what we find inhis contemporaries. Instead, we would see him as acting within a sharedcontext. No one belonging to that context said exactly the same thing.But they all drew on a common pool of hortatory ideas and practices,modifying and adapting them for their own purposes.If we are persuaded by my own work, we would agree with Malherbebut now also claim direct interaction between nascent Christianity andStoicism at the earliest stage that is available to us, not only in the hortatorypractices, which in themselves constitute a very important part of thePauline letters but also at the center of his theological ideas. To put thepoint succinctly, Stoicism helps Paul articulate his own message of faithin Jesus Christ.How may we account for the change from Pohlenz to the present?As often happens, a change of this kind is basically one of interest andperspective. Pohlenz was operating within a framework that presupposeda fundamental contrast between Jewish ways of thinking and Greek ones.And Paul belonged with the Jews. Thus, with only superficially disguiseddistaste, Pohlenz could describe Pauls central theological constructionin Romans (the term is Pohlenzs own: Diese theologische Konstruktion) as completely un-Hellenic and claim that Greek ideas mightat most have any influence at the periphery.2 Pohlenzs treatment inthe same article of the Jewish philosopher Philo, Pauls contemporary, isalso highly revealing. Philo does far better than Paul because there is noquestion whether he was influenced by the Greek ideas. He evidentlywas. But how was that kind of engagement at all possible if it had thecharacter of crossing a cultural gap? Answer: Philo was a compromiser(ein Kompromissler)!3 Surely the case of Philo should have led Pohlenz toquestion his own presuppositions. 54 Troels Engberg-Pedersen 55 they share Pauls own perspective and subscribe to any of his truths. Forneither does the reading itself.I emphasize this here because the gap between ancient philosophyand Christianity is still quite often felt to be a wide one. Curiously, that isnot the case on the side of the most enlightened New Testament scholarship. Here there is a strong interest in ancient philosophy and its overlapswith early Christianity. Malherbe is a witness to this. On the other side,that of students of ancient philosophy, the situation is rather different.Quite often, Christian texts are viewed with a degree of distaste and distancing that matches that of Pohlenz. This attitude is wholly understandable in view of the way theologians have traditionally claimed superiorityover philosophy. But it no longer matches the way these texts are treatedby New Testament scholars. To put it bluntly: students of ancient philosophy who approach early Christian texts like the Pauline letters differentlyfrom the way in which they would read, say, Seneca or Epictetus, onlyshow that in this particular respect they are reflecting perspectives thathave been obsolete for thirty years or more.This is not to deny that one needs to perform a number of intellectualsomersaults in order to connect with the Pauline text in a manner that willappear philosophically satisfactory. In addition to placing the truth question in brackets, one must also bracket a wide range of ideas within Paulsown thought world. Who, for instance, will be able to do anything withPauls apocalyptic conviction that Christ will soon reappear in the sky andbring believers, dead or alive, up to his heavenly abode (1 Thessalonians4:1318)? In principle, however, that situation is not different fromthe one encountered by any student of ancient philosophy. Again, whowill be able to do anything with the cosmologies of Plato, Aristotle, orthe Stoics? And who will feel confident that these cosmologies are entirely unrelated to what the philosophers say in the rest of their varioussystems?Intellectual somersaults are required by any student of Paul, and a certain amount of explanation is necessary in the exposition of his thoughtto people who come to it from the outside. It is up to the individual todecide whether the benefit is likely to repay the cost. But there is nobenefit unless one is willing to pay the cost in the first place.This sets the task for the present essay: I shall show by a single examplein what way it is correct to claim that Stoicism helps Paul formulate thecore of his theological thought. That may be historically interesting initself. But if it is just a matter of Paul using Stoicism, there is perhaps littleexcuse for spending much time on him. We shall see, however, that Paul 56 57 analyze. Philo provides an account that is explicitly Stoic and in complete conformity with Stoic dogma. But he also connects that perspectivesmoothly with his basic Jewish outlook, in effect taking law to stand forthe Mosaic law. Pauls handling of the two concepts is more complex andcreative. But here, too, the Stoic perspective can be seen to lie just belowthe surface. Thus there is a difference in the detailed way they use Stoicmaterial. But again the similarity will turn out to be more important. 58 59 Thus, while being thoroughly Stoic, Every Good Man Is Free is also intended to provide the best description of what Judaism is all about. Stoicism is here employed extensively and overtly, though not for itself butto articulate the essence of Judaism. 60 Which law is Philo talking about? Obviously the Stoic universal law butalso the Jewish law. This comes out in section 62 where Philo introduceshis list of examples of good men who are free. The introduction is phrasedin purely Stoic terms. But we already know that the list will culminate inthe long description of the Jewish Essenes. Here is the introduction toPhilos list: [I]n the past there have been those who surpassed theircontemporaries in virtue, who took God for their sole guide [cf. earlier]and lived according to a law of natures right reason [kata nomon tonorthon phuseos logon], not only free themselves, but communicating totheir neighbours the spirit of freedom; also in our own time there arestill men [e.g., the Essenes!] formed as it were in the likeness of theoriginal picture supplied by the high excellence of sages (62).The point is this: Philo, who was a good Jew, took over everything foundin Stoicism concerning freedom and the law and applied it wholesaleto the Jewish law. Taking God for ones sole guide means letting rightreason govern and extinguish the passions. That is the same as livingwith (meta) law, not just in accordance with it. One who does this obeyseverything the law and right reason prescribe or forbid. But he is at thesame time also one who obeys no orders and works no will but his own.For he wills the law and nothing but that. In him the passions have beenextinguished. His is therefore a completely unenslaved character. Heis independent and displays total independence of action. All of this,Philo implies, will come about if one lets oneself be guided by the will ofthe Jewish God as expressed in the Jewish law.Paul too, in the passage we shall discuss, treats of freedom and law. Histreatment is more complex and not at all rigidly in accordance with Stoicdogma, where the two are so closely connected. Paul certainly speaks forfreedom but does not immediately connect that with law. For Paul wasnot merely a good Jew, but one who also believed in Christ Jesus. Andthat meant a certain distance in relation to the Jewish law. At first it willeven appear as if Paul opposed freedom and the (Mosaic) law. Freedom,for him, was freedom from that law. In the end, however, Paul will comeout as being, once more, far closer to Philo than one might immediatelyexpect. Even though Pauline freedom was freedom from following theMosaic law in favor of following Christ, the apostle ends up using thelaw again in the peculiar notion of Christs law, which is the Mosaic lawwith Christ prefixed. And following that law is a matter of freedom and,indeed, in the best Stoic manner of freedom from the passions.In short, just as Pauls use of Greek philosophy is more complex thanPhilos, so is his handling of the Stoic conceptual pair of freedom and 61 law. But the similarity is closer than the difference. Both employ Greekphilosophy (including Stoicism) creatively to articulate their own Jewishor Jewish-Christian message. In this section of the letter Paul provides what New Testament scholarsusually call general paraenesis or moral exhortation that is not specifically directed toward local issues in the congregations to which he iswriting. It is followed in 6:110 by paraenesis that is of more direct relevance to specific issues among the Galatians. And the letter is concludedin 6:1118 by a summary of the argument of the first four chapters, asummary that in certain ways recapitulates a section just before 5:13ff.,5:212. Thus our passage might initially be construed as having a somewhat parenthetical role in the letter: sliced in between two summaries ofall the material that has been presented before the first summary. As weshall see, such a conclusion would be completely mistaken.The letter as a whole is addressed to a group of people who lived somewhere in the interior of ancient Turkey and were non-Jewish. They had 62 been converted to the Christ faith by hearing Pauls message. They hadalso been baptized. And they had received the spirit (pneuma). Later,certain other Christ-believing missionaries of a more strongly Jewish orientation and with roots back to the Jerusalem church had turned uptrying to persuade the Galatians to go the whole Christ-believing way andplace themselves under the Jewish law. Paul will have none of that. Christfaith, baptism, and possession of the spirit should be enough. He therefore employs the first four chapters of the letter to argue the negativecase that the Galatians must not enter under the Mosaic law and try to livein accordance with its rules.That argument is over by 5:1, where Paul states that Christ has liberatedChristians for freedom and urges the Galatians to stick to that. Here, at theclimax of Pauls argument, we encounter the term freedom (eleutheria)of which Philo made so much in his rehearsal of Stoic ideas. Paul goeson to formulate, in 5:6, a kind of general maxim or rule later (6:1516), Paul will himself call it a kanon that also sounds somewhat Stoic: InChrist Jesus neither circumcision [that is, being under the Jewish law] noruncircumcision [that is, not being under it] has any force (or matters),only faith that is active through love. It is difficult not to hear in thefirst part of this rule the point that circumcision and uncircumcision areindifferent in the Stoic sense. In fact, systematic theologians from timeto time do employ the concept of adiaphora. They probably have it fromthis text.The second part of Pauls rule, however, gives the line for the argument of 5:1326. That fact is of great importance. It shows that 5:13ff. isno mere parenthesis. On the contrary, having previously argued purelynegatively and in various intricate ways based on Hebrew scripture whatthe Galatians must not do, he now provides the positive content of life inChrist Jesus already hinted at in 5:6 (a life of faith that is active throughlove). As it turns out in 5:13ff., a life in Christ Jesus is also a life in thespirit. And the passage is intended to show that it is living in the spirit asopposed to living under the law that will yield the kind of life that constitutes the positive content of life in Christ Jesus. The spirit will providewhat the law cannot. Paul even intends to produce a reasoned argumentwhy that is so. Thus 5:1326 formulates the ultimate reason, now stated inwholly positive terms and thoroughly argued for, why Paul produced hisvarious intricate arguments against entering under the law. It is becauseonly the spirit and not the law may yield the proper life in Christ Jesusthat the Galatians must not enter under it. Galatians 5:1326 is no mereparenthesis. 63 64 65 intimately with doing works of the flesh. Usually, works (erga) in Paulare those of the law, in the often used phrase works of the law (erganomou). Here they are of the flesh. Moreover, this is given in explanationof how one will live when one is under the law. What is Paul up to? Ishe suggesting that the law somehow leads people to do fleshly acts? Astrange idea!10A solution suggests itself when one considers the difference in ontologybetween the works of the flesh listed in 5:1921 and the fruit of the spiritlisted in 5:2223.11 In 5:1921 Paul provides what is normally called a vicelist. In fact, he is not speaking of vices proper in the sense of certain badstates of mind and character. Rather, his theme is acts in the form ofact types: illicit sexual activities (porneia), unclean acts (akatharsia), andso forth. By contrast, the list that exemplifies the fruit of the spirit is agenuine virtue list. Love (agape), joyfulness (chara), peacefulness (eirene),patience (makrothumia), and the like are states of mind and character.This difference suggests a valid phenomenological point about livingunder the law. The law consists of rules that tell one what to do (type) andwhat not to do (again type). In the latter group belong the various typesof fleshly act listed in 5:1921. But no matter how much one oneself inprinciple applauds the law and wishes to follow it, the law itself can neversecure that it is actually being followed. The risk is always there that oneacts against the laws rules and against what one oneself basically wishesto do. In other words, there is always a risk of akrasia.It is quite different with living in accordance with the spirit. For thespirits fruit consists of proper states of character, virtues in the full, Greeksense of the term (even though they are not explicitly identified as suchby Paul). If you have the spirit, then you also have those full states ofcharacter, and then you will not be able (psychologically) to act againstthem; you will always and only do the good.If this is the basic distinction that Paul is drawing on, then one can seethe point of his argument in 5:1723 in relation to 5:16. If having thespirit consists in having the full states of character listed in 5:2223, thenif one walks by it (5:16a) and lets oneself be led by it (5:18a), one willin fact never fulfill the desires of the flesh (5:16b). By contrast, if one isunder the law, then since the law is related in the suggested way to theact types listed in 5:1921, one will always run the risk of akrasia, of actingin accordance with the flesh and against the law that one basically wishesto follow. What Paul is bringing out is the phenomenology of having thespirit and living by that in contrast with the phenomenology of livingunder the law. If the flesh is the great opponent, then it is the spirit and 66 not the law that constitutes the solution. For only the spirit will providethe kind of thing that is required to vanquish the flesh.This last point is explicitly made at the end of 5:23 when Paul statesthat the law is not kata the kind of thing (ta toiauta) he has just listed,the spirits virtues. Most New Testament scholars translate kata here asagainst. The law is not against love, joy, peacefulness, and so forth. Thatseems exceedingly lame in itself. Also, did Paul not say in 5:14 that thewhole law was fulfilled in the love command? A few scholars thereforetranslate kata as about. That is far better in itself. But will this not runafoul of 5:14, too? No. For whereas in 5:23 Paul is saying that the law is notabout love (etc.) understood as a state of character, the love commandgiven in 5:14 is in principle a command to do certain types of act, not tobe or become psychically structured in some specific way.So far we may reasonably claim that in 5:1723 Paul has provided arespectable phenomenological argument for the thesis stated in 5:16b.The risk of akrasia, of fulfilling the desires of the flesh, will be overcomeif one lets oneself be led by the spirit. For the spirits fruit is the kind ofthing that will have that effect as opposed to anything engendered bythe law.But then, why is that so? 67 (phthonoi). These types of act directly reflect states or events in the psyche of individuals. Furthermore, they derive from a fundamental desireto give the individual more of any coveted goods than others will get. Withsuch an emphasis on the body and the individual it will be immediatelyclear to anybody who is acquainted with the ancient ethical tradition, andthe Stoic theory of oikeiosis in particular, that the root idea that serves todefine the flesh in Paul is that of selfishness: basing ones external behavior exclusively on ones own individual perspective, which includes that ofones own individual body. Call this the bodily I-perspective. The flesh inPaul corresponds closely with the bodily I-perspective that constitutes theessence of the first stage of Stoic oikeiosis, before the change to the rational, distancing view from above upon the individual.12 This suggestionfits Pauls delimitation of the flesh in 5:24 by reference to passions anddesires (pathemata kai epithumiai). While these may certainly be distinctlybody-oriented phenomena (e.g., epithumia for bodily pleasure), they mayalso include such phenomena as fits of anger that are not so much oriented toward the body as toward the individuals own desires irrespectiveof their object.The contrasting idea of belonging to Christ may be elucidated by acouple of intriguing verses much earlier in the letter, which serve toset the parameters for everything that follows. In 2:1920 Paul says this:(19) For I [ego] have through the law died to the law in order that Imight live for God. I am crucified together with Christ. (20) It is nolonger I [ego] who live: Christ lives in me. To the extent that I now livein the flesh, I live in the faith(fulness) of Gods son who felt love for meand gave himself over for my sake. Putting these verses together with5:24 we may say that belonging to Christ (5:24) is a matter of definingoneself (2:1920), not by the characteristics that would normally serve todefine the specific individual (ego) that one is, but exclusively by Christ:one is a Christ-person, nothing more just as in the Stoic theory ofoikeiosis one comes to see oneself as a person of reason. The I-perspective(2:1920) that is tied to the body (both together constitute the flesh of5:24) has been crucified. The bodily based I-perspective has been setcompletely apart and is allowed no role in defining who one is. How doesthat come about? Through faith (2:20) or by aligning oneself with thefaithfulness (toward God) shown by Christ in his act of giving himselfover to be crucified for the sake of human beings. More on faith later.The details of the Christ myth need not concern us here. What mattersis this: 5:24 taken together with 2:1920 explains why the spirit may bringabout what the law could not. Apparently, having the spirit as a result of 68 having the faith that makes one identify exclusively with Christ meanshaving crucified, eradicated, extinguished the flesh with its passionsand desires: everything, that is, that is based in the bodily I-perspective.In that case, the problem of living under the law the constant risk ofakrasia will in fact be dissolved. No room will by now be left for anyexercise of the bodily I-perspective that lay behind the various types ofvicious acts.Understood in this way in the light of 2:1920, 5:24 explains the difference between living in the spirit and living under the law (5:1823).It therefore also explains why the thesis stated in 5:16b holds, that if theGalatians will live by the spirit, there is complete certainty that they willnot fulfill the desires of the flesh. The spirit and its fruit (full virtuesof character) is the kind of thing that will necessarily issue in the corresponding practices (5:1823). It therefore excludes the possibility offulfilling the desires of the flesh (back to 5:16b). But the reason for this isthat it is based on faith and belonging to Christ in the sense of identifyingcompletely with him (5:24).With all these things settled, it is wholly appropriate that Paul shouldconclude the argument begun in 5:16 by stating in 5:25: If we live bythe spirit [as they of course do], let us also walk by the spirit. This pieceof exhortation brings the argument back full circle to where it began. In5:16a Paul employed an imperative in the second-person plural. In 5:25bhe uses a hortatory subjunctive in the first-person plural. The differencematches the development of the intervening argument. Through thatargument we now know that having the spirit means being in a state thataltogether excludes the risk of akrasia and fulfilling the desires of theflesh. Then the most appropriate form of exhortation is not an injunctionin the imperative, as if some new situation was being envisaged, but ashared reminder of where believers in Christ (we) already are. 69 Further, why does Paul suddenly speak of fulfilling the Mosaic law?Several answers are both possible and apposite. An important one is thatPaul did not see the Christ faith for which he was arguing as being inopposition to Judaism. On the contrary, it was the true Judaism. To matchthis, Pauls Christ faith was a state (of mind) that would indeed fulfillthe Jewish law instead of abrogating it. Another answer pertains moredirectly to the issue treated in the letter as a whole. If Pauls Jewishoriented opponents advocated that the Galatians should take the wholestep and become Jews by adopting the Mosaic law, one central argumentin their armory will have been that only in that way would they have anychance of getting completely away from the kind of sinfulness that Jewsnormally took to be characteristic of non-Jews (Gentiles). Paul takes thisup. Yes, the law should be fulfilled. But the law is fulfilled in love. Andlove is what comes with the Christ faith (as construed by Paul) withoutthe law. Indeed, if one lives primarily by the law, one will only get so faras to live with the constant risk of akratic sinning. Complete freedomfrom that risk requires something different: Christ faith and the spirit.That, to conclude the line of Pauls argument, is the reason why theyshould remain in the freedom from the law for which Christ has liberatedthem (5:1).In this manner Paul succeeds to his own satisfaction, at least in bothhaving his cake and eating it. There is freedom from the law for Paulsnon-Jewish addressees. In that way they will fulfill the law. Indeed, Christfaith and living by the spirit is the only way of fulfilling it.Can we base as much as this on Pauls superficially very paradoxicalclaim in 5:14 about fulfilling the Mosaic law? Indeed, yes. For in 6:2 hemakes the point wholly explicit when he states that by carrying one anothers burdens (cf. be slaves to one another in love), the Galatians willfulfill in full measure [anapleroun] Christs law, or the Christ law,that is, the (Mosaic) law as seen from within the Christ perspective thatPaul has just developed. The law itself cannot make people do it alwaysand everywhere. Christ faith and possession of the spirit can. In actingin accordance with the spirit, Christ people and they alone fulfill thelaw, Christs law. 70 has been required to show this, but the argument is there. What oneneeds to see in the argument is an easy acquaintance with certain basic ideas in Aristotelian and Stoic ethics.13 Let us take note of theseideas.Paul began from the claim (5:6) that in Christ Jesus neither circumcision nor uncircumcision has any force, only faith that is active throughlove. We noted the use here of the basic Stoic idea of adiaphora. We shallalso see in a moment that Pauls notion of faith (pistis) is closely comparable in structural terms with the Stoic one of wisdom (sophia). The ideathat faith is active [energoumene] through love, however, is Aristotelianin its basic ontology of a state (hexis) here the one of love that issuesin acts or actualizations (energeiai). Of course, the Stoics would take overthat bit of ontology and elaborate it further.14 But that is part of the point.Pauls use of the philosophical tradition is not always very distinct, noris distinctness required to claim that he is in fact drawing on it. In thepresent verse, Pauls use of the idea of adiaphora is sufficiently distinct toallow us to claim derivation at least at second hand from Stoicism.His use of the Aristotelian-sounding term energeia is hardly sufficientlydistinct to back up a claim for derivation from Aristotle. But the distinction itself between a hexis and an energeia is of great importance to Paulsactual argument, as we have seen. Pauls philosophy is there, albeit justbelow the surface.Later (5:13), when Paul identifies the Galatians present state as oneof freedom (eleutheria), he is clearly drawing on an idea that had a special force within Stoicism. We have seen this already in our remarks onPhilo, who made so much of the same idea. Pauls further point thatthe Galatians should employ their freedom to enslave themselves to oneanother initially looks distinctly un-Stoic. Did the Stoics (and Philo) notprecisely emphasize the unenslaved character of the good and wise? However, we also saw that Philo was not at all averse to ascribing obedienceto the wise, at the same time that he also stressed that the freedom thatis theirs is distinguished by not obeying any orders and working no willbut its own. There is something like a genuine Stoic paradox here, of anobedience that is totally self-willed.15 It is not at all impossible that Paulis playing on the same idea.In Pauls actual argument from 5:16 onward, the focus on the riskof akrasia and how that may be overcome in full virtues that will alwaysand everywhere be actualized in acts of love certainly reflects knowledgeof the ancient ethical tradition. Is it Aristotelian or Stoic? The theme itself is, of course, absolutely central to Aristotelian ethics. But there is one 71 72 73 conclusionWas Paul a Stoic? That depends. He was and saw himself as an apostleof Christ. And he was not quite and certainly did not see himself as a philosopher. A fortiori he was not a Stoic.I have argued, however, that Paul did make use of notions that are distinctly Stoic to formulate his own message: the adiaphora, oikeiosis, apatheia,and more. He did not directly quote those notions or bring in Stoicismwith flying colors. For that he was probably too preoccupied with hisown agenda. Indeed, elsewhere he contrasts the wisdom [sophia] of thepresent world with his own wisdom (sophia), which was kept apart forthose who are perfect [teleioi] (1 Corinthians 2:6). Nevertheless, Pauldid use Stoic notions to formulate his message.Nor was this use of merely peripheral importance. It lies at the heartof the argument of Galatians 5:1326. And that passage itself lies at theheart of what Paul was trying to communicate in this letter as a whole.In 5:1326 Paul brought out the positive point of the Christ faith he hadpreached to the Galatians. And that point explains why, in the rest of theletter, he argued so strongly against supplementing the Christ faith withadherence to the Mosaic law. The former was sufficient in itself. Adoptingthe latter in addition would only lead in the wrong direction: back to astate where the risk of akrasia had not been eradicated.So, was Paul a Stoic? Not in the direct way stated here, of one who is andsees himself as a philosopher. But in his own, hidden way he was. Oncewe look just below the surface, we see that Paul brings in central Stoicideas and employs them to spell out the meaning of his own message ofChrist. Paul was a crypto-Stoic.23Thus Paul was far closer to Stoicism than Max Pohlenz would allow.Pohlenz never got near to seeing those hidden, but real and important similarities that we have discovered. To get there he should haveattempted to read Pauls texts in some philosophical depth as an exercise 74 well worth undertaking on its own. But his presupposed belief in a basiccontrast between Jewish (religious) and Greek (philosophical) ways ofthinking stood against that.Paul was also more of a Stoic than has been allowed by AbrahamMalherbe. His use of Stoic ideas is in no way peripheral to his own mostcentral concerns. Rather, he uses philosophical ideas, and Stoic ones inparticular, to articulate the meaning of his own core message.In this he is in line with Philo, who did exactly the same. The comparison with Philo is in fact quite revealing. These two Jews, I have argued,are far closer to one another in their use of Greek philosophy than isnormally allowed. Whereas the one flaunts his knowledge, however, andmakes it part of his message, the other hides it away. But it is still there. Notes1. Malherbe 1989 collects a number of his papers on the topic. For other relevant writings, see Malherbe 1986, 1987, 1992, and 1994.2. Pohlenz 1949: 70.3. Pohlenz 1949: 76.4. The point underlies a collection of essays edited by the present writer inEngberg-Pederson 2001. It is well expressed by Philip S. Alexander in one ofthose essays, Hellenism and Hellenization as Problematic HistoriographicalCategories: [W]e need an intellectual paradigm shift so that the presumption now is always in favour of similarity rather than dissimilarity (79).5. All references to Every Good Man Is Free are to the Loeb edition (Philo ofAlexandria 1929), following the numbered divisions of that text.6. The Loeb translation has been slightly modified here.7. For Stoicism cf., for example, Diogenes Laertius 7.88 (quoted later).8. The Loeb translation has been slightly modified here.9. The following account relies on chapters 67 of Paul and the Stoics, in whichI discuss the passage and its position within the letter extensively and inconstant interaction with the relevant New Testament scholarship (EngbergPedersen 2000: 13177, with notes on 32450). It should be noted that thereading I propose is not the communis opinio within the guild (to the extentthat one can speak of such a thing). My claim is that it makes better overallsense of the passage as a whole and its role within the letter than alternativereadings.10. Readers who know their Paul may reply that the apostle seems to say something like this in Romans 5:20. However, Pauls claims in Romans about theeffect of the law need careful sifting. See Engberg-Pedersen 2000: 24046.11. This is one of the points where the present reading of Galatians 5:1326 isheterodox.12. I am relying here on my own account of the initial stages of Stoic oikeiosis asreflected in Cicero, De Finibus 3.16. See Engberg-Pedersen 1990: 6471. 75 13. This is the single most important point where I feel I have been able to addto New Testament scholarship on Paul. It is not that New Testament scholarsknow nothing about Stoicism or Aristotelianism. Some definitely do. Butone very rarely comes across attempts to read Paul (in the light of ancientphilosophy) as if he were a philosopher intent on drawing philosophicaldistinctions like those one finds in the Stoics or Aristotle.14. Cf., for example, SVF 3:104.15. Cf. SVF 3:615 (Stobaeus): [T]he good man alone rules [archei] . . . and thegood man alone is obedient [ peitharchikos] since he follows [is akolouthetikosof ] one who rules [archon].16. I am relying here on my own account of the last stage of Stoic oikeiosis asreflected in Cicero, De Finibus 3.2021, for which, see Engberg-Pedersen1990: 8097.17. Aristotle, Nicomachean Ethics 10.9, 1179b45 and 1179b2631.18. Ibid., 9.8, 1168b31ff.19. Aristotle, Politics 3.13, 1284a14.20. Ibid., 3.13, 1284a1011.21. Diogenes Laertius, Lives of Eminent Philosophers 7.88.22. Ibid.23. I owe this happy formulation to my friend, Sten Ebbesen. 4Moral Judgment in SenecaBrad Inwood We are all familiar with the notion of a moral judgment. In the vocabularyof ethical debate, this term is so common as to be a cliche. While we havedifferent theories about how we make such judgments, it would seem distinctly odd to observe that judgment is a term transferred from anothersemantic domain and to attempt to sort out its meaning by scrutinizing itssource or to impugn the clarity or usefulness of the term on the groundsthat it began its conceptual career as a mere metaphor. Whatever originsthe term may have had, they now seem irrelevant.But is this really so? I want to argue that moral judgment has not alwaysbeen taken as a bland general synonym for moral decisions and that itneed not be; to see that, we can consider uses of the terminology of moraljudgment in which the original semantic sphere for such language (thejudicial sphere) is still relevant to understanding how it is used.1 Onesuch use comes from the Stoic Seneca, and I argue that he did take thenotion of moral judgment as a live metaphor, one that he used to develophis own distinctive Stoic views on moral thinking.That the particular language we use in talking about moral decisionand moral assessment should matter is not surprising. Even for us, this isnot the only way to talk about such matters; we also invoke the notions ofdeduction, calculation, and analysis, for example. Perceptual language is Versions of this chapter were also read to audiences at the University of Toronto and atthe Chicago area conference on Roman Stoicism in the spring of 2000, as well as at theUniversity of Victoria and the University of Calgary in March 2001. I am grateful to a numberof people for helpful comment, but especially to Miriam Griffin who heard the Chicagoversion. 77 78 Brad Inwood judicial experience than its Greek counterparts did, if for no other reasonthen because every paterfamilias held the position of judge and magistratewith regard to his own household.3 But even the lawyer Cicero does not, inmy reading, show such a propensity for thinking and talking about moralassessment and decision in terms of judging and passing judgment.I doubt that the facts support the extravagant claim that Seneca invented the idea of moral judgment. But his elaboration of the metaphorof judges and judging is pervasive and insistent; its use is both original andilluminating. So I do want to suggest that whatever its origins, we find inSeneca an intriguing, influential, and creative exploitation of this notionin the service of his own moral philosophy.4 In this provisional discussionI can neither explore Senecas exploitation of this concept thoroughly,nor can I explore the possibility of its influence on later uses of the idea.It will, I hope, suffice if I draw attention to the interest and complexity ofhis thinking on the topic.The verb iudicare and the noun iudicium are common, and while I hopeto show that Seneca self-consciously uses them to develop his own original views, it would be difficult to start from those terms. In consideringhis usage we would certainly find far too much noise and nowhere nearenough signal. A more effective entree into the topic comes from consideration of the agent noun iudex. For Seneca says some striking thingsabout judges moral judges, in particular and if we can come to anunderstanding of those oddities we will be well on the way to an understanding of his thoughts on the topic of moral judgment more generally.From the outset I want to make a confession, though. The notion of amoral judge equivocates between two distinguishable ideas: the demandson an actual judge to act by relevant moral standards in carrying out hisor her duties as a judge; and the notion that someone making a moraldecision or evaluation is to be conceptualized as a judge. My main interest is, of course, in the latter notion. But the morally proper behavior ofa real judge would tend to show many of the same features as the morallyproper behavior of any moral agent acting on the model of a judge; henceI propose to allow these two ideas to blend together for the purposes ofthis chapter.Several works are of particular importance for Senecas explorationand exploitation of the idea of a moral judge: On Clemency, On Anger, andOn Favors stand out for their close connections, though there does notseem to be a planned coordination with regard to the theme.In On Clemency Seneca naturally deals with the proper behavior of ajudge. For much of what the young emperor whom he is advising will have 79 80 pardon [venia] can be granted to you in a more honourable way. The wise man willspare men, take thought for them, and reform them but without forgiving, sinceto forgive is to confess that one has left undone something which ought to havebeen done. In one case, he may simply administer a verbal admonition without anypunishment, seeing the man to be at an age still capable of correction. In another,where the man is patently labouring under an invidious accusation, he will orderhim to go scot-free, since he may have been misled or under the influence ofalcohol. Enemies he will release unharmed, sometimes even commended, if theyhad an honourable reason loyalty, a treaty, their freedom to drive them towar. All these are works of mercy [clementia], not pardon. Mercy has a freedomof decision. It judges not by legal formula, but by what is equitable and good. Itcan acquit or set the damages as high as it wishes. All these things it does with theidea not of doing something less than what is just but that what it decides shouldbe the justest possible. The wise man is here envisaged as a judge acting in pursuit of the justoutcome in every case. Mercy is a factor internal to the determination ofthe just decision, whereas pardon is external to that decision. The wiseman judges with freedom of decision (liberum arbitrium), not constrainedby the formula that would guide a judge in the courtroom.6 This latitudemakes it possible for his consideration of relevant factors to be basedex aequo et bono rather than on more mechanical considerations. Thereformative goal of punishment remains paramount.Evidently the wise man does not play the role of a severus iudex inhis dealings with others, whether or not he is an actual judge presidingat a tribunal, and we may infer that the stern judge neglects the broadrange of relevant factors because he fails to acknowledge his own humanfallibility and its relevance for his own judgments. The wise man of OnClemency 2 will have become wise after having erred, and awareness ofthat personal history will enter into his subsequent judgments. This is initself an interesting insight into moral judgment, and one that militatesvigorously against some models of moral decision making. One thing ofspecial note, though, is that the insight which applies to actual judgesas much as it does to anyone called upon to condemn or to forgive isdeveloped and expressed in quite explicitly legal language. For we havenot merely the language of the iudex, but also other technical legal termssuch as formula. In the context of advice to Nero, this is not surprising,but its broader implications are brought out by a consideration of similarideas in On Anger.For the relevance of such a personal history to the capacity of the sageto act as a moral judge had been of interest to Seneca for some time. In afamiliar passage of the treatise On Anger (1.16.67) the same collocation 81 The good judge envisaged here is a wise man, for only such a person isfree of the passions relevant to his situation. And the wise man, in dealingwith provocations to anger, will be like that good judge; he will still feelsomething in his soul, a reminder of the passionate and foolish past thathe, like the judge of the treatise On Clemency, has had. Like that judge,he will act with an awareness of his former self and its failings. In judgingothers without anger he will remember his own fallibility.In fact, this whole section of the treatise On Anger (1.1419) is built onthe model of the judge. Consider the description of the aequus iudex at1.14.23:What has he, in truth, to hate about wrong-doers? Error is what has driven themto their sort of misdeeds. But there is no reason for a man of understanding[prudens] to hate those who have gone astray. If there were, he would hate himself.He should consider how often he himself has not behaved well, how often his ownactions have required forgiveness his anger will extend to himself. No fair judge[aequus iudex] will reach a different verdict on his own case than on anothers.No one, I say, will be found who can acquit himself; anyone who declares himselfinnocent has his eyes on the witness-box, not on his own conscience. How muchhumaner it is to show a mild, paternal spirit, not harrying those who do wrongbut calling them back! Those who stray in the fields, through ignorance of theway, are better brought back to the right path than chased out altogether. The prudens here may or may not be a sage yet; but he is certainlysomeone in a position of authoritative judgment who acts under two 82 constraints: he must be fair, using the same considerations for his owncase and others; and he must act in the light of his own fallibility andproven track record for moral error. Anyone who has ever been in needof forgiveness7 must extend to the objects of his judgment the kind ofwell-rounded consideration that makes possible his own forgiveness. Hewill not act in light of what he can get away with (with an eye to the witnessbox, believing that no one can prove that he has erred) but on the basisof true self-knowledge, in honest realization of his fallible character. AsSeneca says in 1.15.3, this judicious attitude is a key to making the educational purpose of punishment succeed. He does not say why this shouldbe so, but it is not hard to see what he has in mind: if the person punishedbelieves that the judge is evenhanded and fair-minded, he or she is morelikely to avoid the kind of recalcitrance often provoked by the perception of a double standard. In the chapters that follow (1.1719) reasonsjudgment is preferred to that of a passion like anger in large measurebecause the rational agent has the judicial quality of holding itself to thesame standard as others, whereas anger is in totum inaequalis, grants itselfspecial standing (sibi enim indulget), and impedes any correction of itsown judgment (1.17.7).Seneca returns to this important feature of fair judging in book 2(2.28). The aequi iudices will be those who acknowledge that no one iswithout culpa. What provokes resentment (indignatio), he says, is theclaim by a judge that he is free of error (nihil peccavi, nihil feci), andthis resentment at hypocritical double standards makes punishment inefficacious. And in considering the unlikely claim that someone mightbe free of crime under statute law, Seneca gives yet another reason forpreferring a broader standard for judgment than merely legal requirements. The iuris regula is narrow, the officiorum regula is a wider andmore relevant standard (and these officia include the requirements for humane and generous treatment of our fellow men). The innocentiae formulais a narrow and legalistic requirement for evaluation, Seneca maintains,and we should take into account in our judgments our own moral selfawareness. If we bear in mind that our own behavior may have beenonly technically and accidentally proper though still proper enough tomake us unconvictable then we will be more fair in our judgments ofthose who actually do wrong (2.28.34). Such a broad and inclusive judgment is again recommended at 3.26.3: if we consider the general stateof human affairs we will be aequi iudices, but we will be unfair (iniquus)if we treat some general human failing as specific to the person we arejudging. 83 84 sends an unbribed judge in to deliver sentence, then we seek out the most worthyrecipient for our goods; we prepare nothing with greater care than the thingswhich dont matter to us. 85 such cases are so extremely difficult (cum difficilis esset incertae rei aestimatio)that we suspend our own judgments and refer the matter to divine iudices.Variable human inclinations cloud human assessments, just as they do thedecisions of judges.In 3.7 he outlines justifications for exempting ingratitude from actuallegal judgment and reserving it for moral judgment. The first three donot bear closely on our theme of moral judgment, but at 3.7.58 the iudexmodel comes into play again.Moreover, all the issues which are the basis for a legal action can be delimitedand do not provide unbounded freedom for the judge. That is why a good caseis in better shape if it is sent to a judge than to an arbitrator, because the formulaconstrains the judge and imposes fixed limits which he cannot violate; but anarbitrator has the freedom of his own integrity [libera religio] and is not restrictedby any bounds. He can devalue one factor and play up another, regulating hisverdict not by the arguments of law and justice but in accordance with the demands of humanity and pity [misericordia]. A trial for ingratitude would not bindthe judge but would put him in a position of complete freedom of decision [sedregno liberrimo positura]. For there is no agreement on what a favor is, nor on howgreat it is. It makes a big difference how indulgently [benigne] the judge interpretsit. No law shows us what an ungrateful man is: often even the man who returnedwhat he received is ungrateful and the man who did not is grateful. There aresome matters on which even an inexperienced judge can give a verdict, as whenone must decide that someone did or did not do something, or when the disputeis eliminated by offering written commitments, or when reason dictates to theparties what is right. But when an assessment of someones state of mind has tobe made, when the only matter at issue is one on which only wisdom can decide,then you cannot pick a judge for these matters from the standard roster someman whose wealth and equestrian inheritance put him on the list. So it is not thecase that this matter is inappropriate for referral to a judge. It is just that no onehas been discovered who is a fit judge for this issue. This wont surprise you ifyou consider the difficulty that anyone would have who is to take action againsta man charged in such a matter. 86 87 I indulge in a slight digression at this point to bring in an interesting parallel to the sort of self-critical modesty in judgment that Senecadisplays here in moral matters. In the Natural Questions a somewhatneglected work with a strong epistemological subtext not advertised inits title16 Seneca shows the same sensitivity. In the fragmentary book 4bSeneca raises a curious Stoic theory about snow (5.1) and at the sameapologizes for introducing a theory that is (shall we say) less than compelling (infirma is Senecas word):I dare neither to mention nor to omit a consideration adduced by my own school.For what harm is done by occasionally writing for an easy-going judge [ facilioriudex]? Indeed, if we are going to start testing every argument by the standard ofa gold assay, silence will be imposed. There are very few arguments without anopponent; the rest are contested even if they do win. An easygoing judge is one, I think, who does not impose the higheststandards on every theory, simply because he or she is aware that in a fieldlike meteorology the demand for demonstrative proof cannot be met.Epistemic humility and pragmatism suggest the wisdom of being a facilioriudex where certainty is not attainable. As in the moral sphere, so here,Seneca works out this essentially liberal notion through the metaphor ofjudging.If one wants a chilling picture of the results to which a rigida sententiacan lead if it is formed by some lesser man, one need only to turn back tothe treatise On Anger. In his discussion of the traits of the aequus iudex inbook 1, Seneca tells the story of one Cn. Piso: he was free of many vices,but he was perversely stubborn and mistook rigor for constantia (1.18.3).Constantia, of course, is a virtue of the sage Seneca wrote a short treatiseon the constantia of the sage and, as we see in the anecdote that follows,(1.18.36) rigor is the vice that corresponds to it:I can remember Gnaeus Piso, a man free of many faults, but wrong-headed intaking obduracy [rigor] for firmness [constantia]. In a fit of anger, he had orderedthe execution of a soldier who had returned from leave without his companion, onthe grounds that if he could not produce him he must have killed him. The manrequested time for an enquiry to be made. His request was refused. Condemnedand led outside the rampart, he was already stretching out his neck for executionwhen suddenly there appeared the very companion who was thought to havebeen murdered. The centurion in charge of the execution told his subordinateto sheathe his sword, and led the condemned man back to Piso, intending toexonerate Piso of guilt for fortune had already exonerated the soldier. A hugecrowd accompanied the two soldiers locked in each otherss embrace amid greatrejoicing in the camp. In a fury Piso mounted the tribunal and ordered them 88 both to be executed, the soldier who had not committed murder and the onewho had not been murdered. What could be more scandalous? The vindicationof the one meant the death of the two. And Piso added a third. He ordered thecenturion who had brought the condemned man back to be himself executed.On the self-same spot, three were consigned to execution, all for the innocenceof one! How skilful bad temper can be at devising pretexts for rage! You, it says,I command to be executed because you have been condemned; you, becauseyou have been the cause of your companions condemnation; and you, becauseyou have disobeyed orders from your general to kill. It invented three charges,having discovered grounds for none.17 89 90 omnibus malis. As he says at 6.2, the happy man is exactly he who is iudiciirectus.20 This remark comes in the midst of his discussion of the role ofpleasure in the happy life, a discussion that culminates in section 9.23with an apt statement of the normal Stoic view on pleasure:21It is not a cause or reward for virtue, but an adjunct [accessio] to it. The highestgood is in iudicium itself and the condition of a mind in the best state, which, whenit has filled up its own domain and fenced itself about at its boundaries is then thecomplete and highest good and wants for nothing further. For there is nothingbeyond the whole, any more than there is anything beyond the boundary. 91 Here again our judgment is an unchangeable inner disposition, cognitive in its function and determinative in the process of regulating actions.It is, in the relevant sense, our perfected hegemonikon, our prohairesis.I want to conclude by emphasizing just two points. First, it really isremarkable that Seneca uses the language of legal judgment to expressthis idea. I concede happily that the noun iudicium does not always carrythe full weight of a live legal metaphor. But in the context of the briefsurvey I have offered of Senecas active and long-term interest in thatmetaphorical field, it seems implausible to suggest that it plays no rolehere even if the nonlegal idea of kanon is also prominently in play inthis letter.And, second, this is a good and effective metaphor with which to work.Consider only the key point of this letter, the notion that the iudicium ofthe sage is unbendable and rigid. Seneca had written elsewhere aboutrigidity of judgment we think of the perverse and passionate iudex Cn.Piso from the On Anger. Yet here judgment in its normative sense is supposed to be rigid and unbending. The merely human judge on the bench,like the ordinary man exercising moral judgment, must not be a severusor rigidus iudex, for reasons we know about from his other discussions ofmoral judgment: human affairs call for the kind of fine evaluations andjudgment calls that lead anyone with a grain of self-knowledge to refrain,to suspend, to wait. On matters so complex that it is wiser (as Seneca saysin On Favors 3.6) to refer them to the gods, only the sage, Zeus intellectual equal, can truly judge. The inflexibility suitable for gods and forthe sage would be mad rigidity for us.25 It is often said that Seneca, like 92 all later Stoics, adopts a double code of ethics, one for the sage and onefor miserable mankind. I have argued before that this is not so.26 WhatSeneca accomplishes in this bold experiment of thinking by means of aliving legal metaphor is to show that, despite all of the differences betweensage and fool, there is still but one norm by which all humans should live.The inescapable fact that we are all moral judges, each according to hisor her abilities, unites us in the shared humanity that Seneca urged soineffectively on Nero in his address On Clemency. Notes1. As Janet Sisson reminds me (in correspondence), the judicial metaphor is alsoused in relatively straightforward epistemological contexts as well, as by Platoat Theaetetus 201. But the issues involved with moral judgment are markedlydifferent, as we shall see.2. In Plato too there are examples of such models. Socrates account in theProtagoras of moral decision as a matter of measurement and calculation isan obvious example of such a philosophical redescription.3. My thanks to Michael Dewar for this observation.4. This nexus of ideas has not been fully explored in Seneca, although I amaware of three helpful discussions. First, Dull 1976, though jejune, nevertheless confirms the realism and legal accuracy of Senecas handling of legalconcepts. (Indeed, his discussion of the exceptio, pp. 37780, would shed useful light on discussions of reservation in Senecas works, although I do notpursue that issue here.) Second, Maurach 1965 makes some tantalizing butunderdeveloped suggestions along the lines I pursue here (the pertinent remarks are on pp. 316ff. of the reprinted version). Closer to my argument isMaria Bellincionis discussion of the judicial metaphor in connection withthe theme of clemency (Bellincioni 1984a). In this paper, I think, her viewof how Seneca uses the metaphor is somewhat one-sided: The sense . . . is,then, always just one: it is an invitation to seek in human relations, such asthey are, the sole authentic justice which is born from an attitude of love(124 of the reprinted version); compare her remarks about Ep. 81 on p. 115,which opposes clementia to the rigidity of the iudex rather too starkly. I argue,first, that the judicial metaphor is more of a conceptual tool for thinkingthrough a range of problems; and, second, that Seneca makes more positiveuse of the notion of a moral judge than Bellincioni allows for. Her thesis is(in outline) that humanitas, love, and forgiveness stand in opposition to therigidity of judging, whereas I think Seneca leaves considerable room for anidealized form of judging that is practicable only for a sage. I am grateful toMiriam Griffin for pointing out the importance of Bellincionis work for mydiscussion. (See too her book, Bellincioni 1984b.)5. tamquam in alieno iudicio dicam I think that Procopes translation is wronghere. I would prefer to translate as though I were speaking at someone elsestrial which he is not really doing, since this issue affects us all. 93 94 18. Compare On Favors 2.14.1: iudicium interpellat adfectus. Also Epistulae 45.34where iudicium is contrasted to externally motivated indulgentia. Tony Longpointed out that sunkatathesis (so important in Stoic analysis of the passions)is originally a legal term for casting a vote at a trial. I have discussed thispassage of On Anger in Inwood 1993.19. In 1.5 the term iudicium is used generically too Seneca avoids technicalprecision and consistency. At On Favors 1.10.5 it is iudicare which is used forunstable opinion in contrast to scire.20. Compare Epistulae 66.32: sola ratio est immutabilis et iudicii tenax.21. See Diogenes Laertius 7.8586, where pleasure is an epigennema.22. See Epistulae 108.21: iudicium quidem tuum sustine.23. In the Oxford Classical Text: rigidari quidem amplius quam intendi potest.24. The invariability of virtue forms the basis for the argument in support of themain proposition under discussion, that all goods are equal. Because theother goods are measured by virtue and (as goods) found to measure upto its standard, they must all be equal with regard to the trait measured bythat absolute standard (in this case, straightness, Epistulae 71.20). See alsoEpistulae 66.32: Ratio rationi par est, sicut rectum recto. Omnes virtutes rationessunt; rationes sunt, si rectae sunt; si rectae sunt et pares sunt.25. I am grateful to Tony Long for directing my attention to what the StoicHierocles says about divine judgments in Stobaeus (1: 63, 6 W): they areunswerving and implacable in their krimata. The virtues on which this rigidityis based are epistemological, of course (ametaptosia and bebaiotes), and sharedwith the sage.26. See Inwood 1999b. 5Stoic First Movements in ChristianityRichard Sorabji 96 Richard Sorabji is fear. All those things are movements of minds unwilling to be moved, and notemotions [adfectus], but preliminary [principia] preludes to emotions.2.2.6 It is in this way that the trumpet excites [suscitare] the ears of a militaryman who is now wearing his toga in the middle of peacetime and the clatter ofweapons alerts [erigere] the camp horses. They say that Alexander put his hand tohis weapons when Xenophantus sang.2.3.1 None of these things which jolt the mind by chance ought to be calledemotions [adfectus], but are things to which the mind is subject, so to speak, ratherthan being active. So emotion is not being moved at the appearances presented bythings, but is giving oneself up to them and following up this chance movement.2.3.2 For with pallor, and falling tears, and irritation from fluid in the private parts, or a deep sigh, and eyes suddenly flashing, or anything like these,if anyone thinks that they are a sign of emotion and a manifestation of themind, he is mistaken and does not understand that these are jolts to thebody.2.3.3 So very often even the bravest man grows pale as he puts on his armor, andwhen the signal for battle is given, the knees of the fiercest soldier tremble a little,and before the battle lines ram each other, the heart of the great commanderjumps, and the extremities of the most eloquent speaker stiffen as he gets readyto speak.2.3.4 Anger must not merely be moved; it must rush out. For it is an impulse[impetus], but there is never impulse without assent of the mind [adsensus mentis].For it is impossible that revenge and punishment should be at stake withoutthe minds knowledge. Someone thinks himself injured, he wills revenge, buthe settles down at once when some consideration dissuades him. I do not callthis anger, this movement of the mind obedient to reason. That is anger whichleap-frogs reason and drags reason with it.2.3.5 So that first agitation of the mind which the appearance of injustice [speciesiniuriae] inflicts [incussit] is no more anger than is the appearance of injusticeitself. It is the subsequent impulse [impetus], which not only receives but approves[adprobavit] the appearance of injustice, that is anger: the rousing of a mind thatprosecutes vengeance with will and judgment [voluntas, iudicium]. There is neverany doubt that fear involves flight and anger impulse [impetus]. See if you thinkanything can be chosen or avoided without the assent of the mind [adsensusmentis].2.4.1 In order that you may know how emotions [adfectus] (1) begin, or (2)grow, or (3) are carried away [efferri], (1) the first movement is involuntary [nonvoluntarius], like a preparation for emotion and a kind of threat. (2) The secondmovement is accompanied by will [voluntas], not an obstinate one, to the effectthat it is appropriate [oporteat] for me to be avenged since I am injured, or it isappropriate for him to be punished since he has committed a crime. (3) The thirdmovement is by now uncontrolled [impotens], and wills [vult] to be avenged, notif it is appropriate [si oportet], but come what may [utique], and it has overthrown[evicit] reason.2.4.2 We cannot escape that first shock [ictus] of the mind by reason, just as wecannot escape those things we mentioned which befall the body either, so as toavoid anothers yawn infecting us, or avoid our eyes blinking when fingers are 97 suddenly poked toward us. Reason cannot control those things, though perhapsfamiliarity and constant attention may weaken them. The second movement,which is born of judgment, is removed by judgment.2 I think Seneca was defending the view of his Stoic predecessor Chrysippus that the emotions are judgments. Distress is the judgment that thereis present harm and that it is appropriate to feel a sinking of the soul.Pleasure is the judgment that there is present benefit and that it is appropriate to feel an expansion of the soul. Fear is the judgment that thereis future harm and that it is appropriate to avoid it. Appetite is the judgment that there is future good and that it is appropriate to reach for it.Sharply distinguished from these judgments are the mere appearancesthat there is benefit or harm in the offing. The mere appearance is notyet a judgment and not yet an emotion because a judgment and, hence,an emotion is the assent of reason to the appearance. Ordinary peoplenot trained in Stoicism may give the assent of reason so automaticallythat they do not realize that assent is a separate operation of the mindfrom receiving an appearance. But Stoicism trains you to stand back fromappearances and interrogate them without automatically giving them theassent of your reason.This account of the emotions as judgments of reason is very intellectualistic. Seneca wants to distinguish sharply from the genuine emotionwhat he calls first movements. The first movements are involuntary.They do not involve the voluntary assent of reason. Seneca wants to show,by contrast, that anger and other emotions can be eradicated by rational means and that is why he wants to distinguish so decisively betweenthe emotion itself, which involves the voluntary assent of reason, and themere first movements, which are admittedly involuntary but are not tobe confused with voluntary emotions.First movements are initial shocks that are caused (Senecas verb in2.3.5 is incussit) by the mere appearance of harm or benefit. Seneca distinguishes physical and mental first movements. His examples of physicalfirst movements are shivering, recoiling, having ones hair stand on end,blushing, vertigo, pallor, tears, sexual irritation, sighing, the eyes flashing, the knees trembling, the heart jumping. Even in the most eloquentspeaker, he adds, the fingers will stiffen. There has been controversy aboutwhat he means by mental first movements, but I think this is revealed by apassage in Cicero in the preceding century, Tusculan Disputations 3.8283,where Cicero talks of bites and little contractions which are independentof judgment and, unlike judgment, are involuntary.3 Cicero does not yet 98 say that these bites and little contractions can precede the judgment, buthe does at least make them independent of judgment. I have arguedelsewhere that bites and little contractions are fully explained by Galenin On the Doctrines of Hippocrates and Plato, books 23, as being sharp contractions of the soul, and the soul, Chrysippus believed, was a physicalentity located in the chest.4 These contractions can be felt. We ourselves,and Galen like us, would reinterpret them as physiological rather thanas movements of a physical soul. Thus reinterpreted, they are familiar toeveryone. We have all experienced sinking feelings when distressed.I believe that Senecas first movements perform four important functions. First, we have seen by distinguishing them from emotions, Senecahoped to dispel the impression that emotions are involuntary. Second,I think that first movements provide an answer to Posidonius objections to the claim that emotions consist exclusively of judgments. Posidonius, as reported by Galen,5 offers examples in which, he says, we haveemotion without the relevant judgments. Sometimes we shed tears eventhough we disown the judgment that there is harm. People can have theiremotions changed by music, which is wordless and therefore does nothing to change their judgments. Animals are not capable of the relevantjudgments and yet they too, Posidonius urges, experience emotions. Itis noteworthy that Seneca, without mentioning Posidonius, has a replyto all three of these alleged examples of emotion without the relevantjudgments. In no case do we have here genuine emotion. We have onlyfirst movements. Seneca explicitly mentions tears and the effects of musicas examples of first movements. Just before our passage, without actuallymentioning first movements, at 1.3.8, he denies that animals experiencegenuine emotions. A further function of his discussion, then, is to ruleout Posidonius alleged examples of emotion not requiring judgment.Third, it is noteworthy that Seneca speaks as if the arts do not arousegenuine emotion but only first movements. In 2.2.34 he considers thetheater, historical narrative, singing, rhythm, trumpets, and painting, andhe says that these arts arouse only first movements. This I think helps toexplain what has proved something of a mystery. Why do not the Stoics,given their enormous interest in the theater, discuss the brilliant theory,provided by Aristotle in response to Plato, the theory of catharsis? According to Aristotle, there is no need to banish from the ideal societythe writers of tragedy and comedy because the stirring up of emotionshas, after all, a good effect, overlooked by Plato, of lightening us by somecatharsis of the emotions. Whatever Aristotle means by catharsis, whichis controversial, he gives the analogy of a medical laxative or emetic. We 99 can now begin to guess why at least the later Stoics did not need to discussAristotles theory further. Seneca had in effect ruled it out by his claim inour passage that the theater does not arouse emotions at all but only firstmovements. In that case, it could not perform the cathartic function thatAristotle postulated. No wonder, then, the later Stoics look for anotherjustification for the theater in society.Fourth, I think Senecas distinction of first movements is useful in thetreatment of unwanted emotions. William James said we do not cry because we are sad; we are sad because we cry.6 This is not entirely true,but there is some truth in it. One can think, I must have been unjustlytreated. Look, I am even crying. Noticing the tears can intensify ouremotion. Conversely, Senecas advice should help us to calm our emotions. We need only say to ourselves, these are just tears, in other words,first movements. So they are irrelevant. The only question that matters iswhether I am really in a bad situation. The distinction of first movementscan be genuinely calming.The term first movement is explicitly used in 2.4.1, although thevaguer term beginnings (principia) is used in 2.2.5. Senecas contemporary in the first century a.d., Philo of Alexandria, the Jewish philosopher,tells us of another name for Stoic first movements: prepassions.7I want to turn now to the brilliant Christian thinker Origen two centuries later than Seneca in the third century a.d., or at least to thefourth-century Latin translation provided by Rufinus of Origens On FirstPrinciples.8 In On First Principles 3.2.2, Origen, in Rufinus translation,explicitly uses the expression first movements but he completely transforms the Stoic idea of first movements by connecting them in 3.2.2and 3.2.4 with the bad thoughts that are discussed in the Gospels ofMark 7:21 and Matthew 15:19. By identifying first movements withthoughts and suggestions, he blurs the sharp distinction between firstmovements and emotions on which the Stoics had insisted. For now firstmovements like emotions are thoughts. In his commentary on Matthew26:3639, Origen uses the same word as Seneca had used at 2.2.5 thatfirst movements are beginnings (principia). But now the word beginnings has become vague because it is no longer so clear whether a beginning of emotion is distinct from emotion or whether it is a little bitof emotion. Origen adds that these first movements are no more thanincitements and that they may be aroused either naturally or, in somecases, by the devil.After Philo and Origen, who both worked in Alexandria, the Stoictradition about prepassion continued in that city. Didymus the Blind 100 and his pupil Jerome, who were both in Alexandria, continue to talk ofprepassion. It is no surprise, then, if Origens idea of first movements wasfamiliar to Evagrius, who came to live in the Egyptian desert in the fourthcentury a.d.Evagrius had been ordained first by Basil of Caesarea and then byGregory of Nazianzus. But in 382 he had an affair with a married womanand had to leave Constantinople. He was first sheltered in Jerusalem byRufinus and Melania, who were the heads of monastic communities inJerusalem for men and women, respectively. After staying with them hewent to join the monks in the Egyptian desert and soon became a semianchorite. The semianchorites lived in solitude for six days of the weekand met their fellow monks only on the seventh day. He had ample timeto study the emotional states that afflict the hermit in the solitude of thedesert. Coming to Evagrius from the Stoics is like moving from Aristotleslogic of individual terms to the Stoic logic of complete propositions, inthat Evagrius studies not the individual types of emotions so much as theinterrelations between emotions. In his Practical Treatise, Sentence 6, it isclear that he is talking about first movements, even though he does notuse the name, for he says that it is not up to us whether bad thoughts affectus but it is up to us whether they linger or stir up emotions (pathe ). Thereare eight bad thoughts, which, using a Stoic term, he calls generic. Theyare thoughts of gluttony, fornication, avarice, distress, anger, depression,vanity, and pride. Having the thoughts is not yet sin but only temptation.Sin, he says, is assent, assent to the pleasure of the thought. These lastpoints are made in Practical Treatise 7475.The most generic thoughts, in which every thought is included, are eight in all.First the thought of gluttony, and after that the thought of fornication, thirdthat of avarice, fourth that of distress, fifth that of anger, sixth that of depression[akedia], seventh that of vanity, eighth that of pride. It is not up to us whether anyof these afflict the soul or not, but it is up to us whether they linger or not, andwhether they stir up emotions [pathe] or not.9For a monk temptation [peirasmos] is a thought arising through the emotional part[to pathetikon] of the soul and darkening the intellect. For a monk sin [hamartia]is assent [sunkathathesis] to the forbidden pleasure of the thought.10 Evagrius says that thoughts of vanity play a special role. For if you have defeated the other seven bad thoughts, you are liable to be defeated in turnby thoughts of vanity. Even if you defeat thoughts of vanity themselves,you may be overcome by vanity at that achievement, so we learn in Practical Treatise 3031. In 58, we learn that the only way of defeating vanity may 101 Which of us has not counted how many pages are left in a boring book?A third demon encourages uncomfortable memories of the homethat the monk has left forever, sometimes, like Evagrius, after an affair,sometimes after a family quarrel. There are harrowing stories of mothers 102 103 pride as the root of all sin, by collapsing depression and distress, and byadding envy. A further transformation occurred, in that depression wasitself transformed into what for Evagrius and Cassian been merely theeffect of depression, mainly sloth. Evagrius depressed monk is not doingthe reading he was supposed to. But in Evagrius the term akedia appliedto the depression, not to its effect, the slothfulness in reading.Augustine had surely read Evagrius. In an earlier treatise written ina.d. 394, On the Sermon on the Mount, he repeats the term from Rufinustranslation of Origen that we are subject to suggestions.15 Augustine alsoexplains that when Christ tells us that a man who looks at a woman to lustafter her has already committed adultery with her in his heart, Christ wasnot condemning titillation. Titillation is involuntary. What is condemnedis only looking at a woman in order deliberately to stir up lust. Jeromerepeats this explanation four years later and adds that the titillation itselfis only a prepassion.Some twenty years later in a.d. 414, in On the Trinity, Augustine returns to the subject of bad thoughts.16 He explains that we all have badthoughts, but they become sins only if one is pleased by the thought, orretains it and willingly revolves it instead of spitting it out. These criteria for sin, of being pleased and retaining, are extraordinarily close toEvagrius. Augustine thinks that if we believe that we do not need forgiveness, we are in fact committing the sin of pride, and he is very criticalof Pelagians for allegedly thinking this way. We can atone for our badthoughts according to On the Trinity by saying the Lords Prayer and asking for our sins to be forgiven. As we ask for forgiveness we must beatour breast, and we must add that we forgive others. Elsewhere, for example, in On the Sermon on the Mount, Augustine connects the request in theLords Prayer for our daily bread with the necessity of taking the breadof communion every day, every day because we are subject to temptationand sin every day.17 Augustine also distinguishes different degrees of sinaccording to whether we assent merely to the pleasure of the thought orwhether we actually act on it. New questions about first movements became possible once they were transformed into thoughts. One could alsoask, Did you put yourself in the way of the thought? Thomas Aquinaslater asks, Did you take pleasure in the process of thinking or in thecontent of the thought?18Augustine has an agenda. In City of God, he defends Aristotles idea ofmoderate emotion (metriopatheia) against the Stoic ideal of freedom fromemotion (apatheia).19 Saint Paul, Augustine points out, bids us rejoice withthem who do rejoice, and Christ was really saddened. The insistence on 104 105 Epictetus. Gellius then introduces a new word and says that it is alrightfor the Stoic sage to experience pavor. Pavor is a word ambiguous betweenreal fear and mere trembling: we might translate it having the jitters.Gellius uses the word pavor twice, the adjective pavidus once, and the verbpavescere once. This new word gives a distinct impression that real fear maybe accepted for the Stoic sage. As a literary word, pavor is ideal, because itpreserves the ambiguity of the situation. Was the Stoic sailor allowed onlyto tremble or to feel real fear? But as a philosophical word, pavor was adisaster because a philosopher needs to know exactly which of the two,fear or trembling, is what the Stoics allow. Pallor is what is allowed to theStoic, but the alliteration was no doubt an added attraction for Gellius,when he moved to pavor.It is a great pity that Augustine read Aulus Gellius, the philosophical journalist, instead of Seneca, who would have been absolutely clear.Augustine says in City of God 9.4, that he will tell Gellius story a little moreclearly (planius). He repeats the word pavescere but disambiguates it thewrong way. He says that it is alright for the Stoic sailor to have the jitters(pavescere) with real fear (metu) and to experience contractions (contrahi)with real sadness (tristitia). Augustine moreover uses the word passions(passiones) three times to describe what is allowed to the Stoic sage. Heuses another word for genuine fear (timor), and he wrongly claims thatthe Stoic sailor failed to set his life at naught (nihili pendere). The twovery different concepts of pallor and pavor would have been related inAugustines mind because he tells us in City of God 6.10 that pallor andpavor were recognized as two gods. In fact, however, Gellius easy slidebetween pallor and pavor was to prove disastrous in Augustine.There is a further mistreatment of first movements by Augustine inCity of God 14.19. Augustine is there discussing one of the exceptionalemotions of which he disapproves, namely lust. He explains that lust isunlike anger because it is not under the control of the will, whereas angeris. Augustine fails to notice the point that is so clearly made in Seneca,On Anger 2.3.2, that angry flashing of eyes, sexual irritation, and pallorare all on the same footing. All three of them are mentioned by Senecaas involuntary first movements. It makes no difference that each of themcan lead to voluntary behavior. In this regard anger is still no differentfrom lust. All three involve first movements that are not under the controlof the mind.Why does Augustine misrepresent the Stoic theory of first movements?In City of God 9.4, his aim is to dismiss the Stoics. He was further writingin the tradition that came from Origen which blurred the distinction 106 Notes1.2.3.4.5.6.7. 8.9.10.11.12.13. 107 14. See Jerome, Letter 133, to Ctesiphon (CSEL 56: 246); Against the Pelagians,Prologue (PL 23: 496A); On Jeremiah 4.1 (CSEL 59: 22021).15. Augustine, De Sermone Domini in Monte 1.12.3334 (PL 34: 1246).16. Augustine, De Trinitate 12.12 (PL 42).17. Augustine, De Sermone Domini in Monte 2.7.25 (PL 34).18. Thomas Aquinas, Summa Theologiae IaIIae, q. 74, a. 8.19. Augustine, De Civitate Dei 14.9 (PL 41).20. Augustine, Confessiones 4.4 (7) 9 (14) (PL 32).21. Aulus Gellius, Noctes Atticae 19.1.22. For a more detailed account of this transformation, see Sorabji 2000. 6Where Were the Stoics in the Late Middle Ages?Sten Ebbesen Where were the Stoics in the late Middle Ages? The short answer is:everywhere and nowhere.Stoicism is not a sport for gentlemen; it requires far too much rigorous intellectual work. Most of Western history consists of gentlemenscenturies. But there were the couple of centuries, the fourth and thethird b.c., in which the ancient philosophical schools were created, andthere were the three centuries from a.d. 1100 to 1400, when medievalscholasticism flourished centuries that produced a considerable number of tough men ready to chew their way through all the tedious logicalstuff that disgusts a gentleman and to make all the nice distinctions thata gentleman can never understand but only ridicule, distinctions necessary to work out a coherent, and perhaps even consistent, picture ofthe structure of the world. In this respect, a good scholastic and a goodStoic are kindred spirits. As an attitude to doing philosophy, Stoicism iseverywhere in the late Middle Ages.Also, bits and snippets of Stoic doctrine were available in a large number of ancient writings and could inspire or be integrated in scholastic theory. Further, some works with a high concentration of Stoicismwere widely known, notably Ciceros Paradoxes of the Stoics, De Officiis, andSenecas Letters to Lucilius. Finally, Saint Paul was a crypto-Stoic ethicist,1and so were several of the church fathers.In another way Stoicism is nowhere. After all, the philosophical authority par excellence was Aristotle, whose works were available, whereas themedievals were like us in not possessing a single work by one of the majorStoics, and they had not a Hans von Arnim to collect and organize thefragments from secondary sources. A well-educated man might know108 109 that the Stoics thought everything was governed by fate and that virtue isthe highest good, and little else. He might have some general idea of thehistory of ancient philosophy, but it would be a rather different historyfrom ours. To him Democritus and Leucippus might be Epicureans, andSocrates the founder of the Stoic school, while the leading Peripateticswould be Aristotle, Porphyry, and Avicenna.Real knowledge of Stoicism had become extinct already by a.d. 500,the time, that is, of Boethius, the man who more than anyone else decidedthe direction scholastic philosophy was to take. Simplicius, who was acontemporary of Boethius, apparently had access to just about any pieceof older Greek philosophical literature available anywhere in the empire,but he could find none or very few of the writings of the Stoics.2It can make a scholar cry to read such remarks of Boethius as these inhis larger commentary on Aristotles De Interpretatione:Here Porphyry inserts much about the dialectic of the Stoics and the otherschools, and he has done the same in his comments on several other parts ofthis work. We however, shall have to skip that, for superfluous explanation rathergenerates obscurity <than clarity>.3Porphyry, however, inserts some information about Stoic dialectic, but since that[i.e., Stoic dialectic] is unfamiliar to Latin ears . . . we purposely omit it.4 Even when Boethius had access to an account of Stoic doctrine, he considered it irrelevant and decided not to hand it down to the Middle Ages.Thus, no schoolman had a clear idea about what Stoicism had beenall about, and very few Stoic tenets were known as such. In this senseStoicism was nowhere.In a fundamental sense all philosophy in the medieval West had beenAristotelian since its birth in the Carolingian age. Out of the study ofPorphyrys Isagoge, Aristotles Categories and De Interpretatione, Boethiuscommentaries and monographs, and Priscians grammar developed apeculiar sort of Aristotelianism with a strong emphasis on logic and linguistic analysis. In Abelards youth, about the year 1100, this native Western philosophical tradition was already highly developed. Between 1130and 1270 the whole Corpus Aristotelicum became available. The philosophical landscape changed, but the way philosophers worked remained trueto the native tradition: all scholastic work is marked by the analytical approachdeveloped in its early phase.I see a connection between the scholastics analytic approach andStoicism. I also find Stoic inspiration in the development of some centralpieces of doctrine relating to propositions, arguments, and intentional 110 Sten Ebbesen objects and in a radical trend in ethics that made liberty a key concept. Ishall use some Stoic divisions to structure the main part of this essay. 111 fundamentally the same ways as those that took care of human rationality and its products. Hence, although we may not grasp what God oris means in the true proposition God is God, we must assume thatit obeys standard rules of inference, so that it is equivalent to God isGod by virtue of Godhood, while God is Godhood must be consideredill-formed and untrue. This conclusion nearly earned Gilbert a condemnation for denying the unity of God.8Now, what has this to do with Stoicism? This: the Porretans spheresof nature, culture, and reason were clearly inspired by the Stoic divisioninto natural science, ethics, and dialectic, in Porretan language facultasnaturalis, moralis or ethica, and rationalis. The Porretan understanding offacultas moralis as comprising not only human action and its evaluationbut any sort of evaluation and any sort of human product was novel anddid not have a long future. The addition of a facultas theologica was anothernovelty, for which a better future was in store. But, anyhow, Gilberts wasan original development of a Stoic piece of thinking.The understanding of nature as the sensible world with the constituents mentioned in the biblical account of creation was not equallyoriginal. This equation provided a setting in which certain pieces of Stoicphysical theory could be used. Several twelfth-century thinkers made ittheir aim to lay the foundations of a science of nature by sketching a cosmogony that required no direct divine intervention after the creation ofthe four elements. Once the elements had been created, the laws of nature had been established, and the world would gradually articulate itselfuntil reaching its present state. Men like Thierry of Chartres (a contemporary of Gilberts) read the biblical Genesis as a succinct account of sucha cosmogony. One common feature of their theories was the assignmentof a special role to fire as the active element in the cosmogonic process.To some extent, this was a revival of the Stoic pur technikon/ignis artifex,and even the very expression was occasionally used. Thierrys type of natural philosophy disappeared or receded into the background after thetwelfth century, and ignis artifex was not to become an important conceptagain until the Renaissance.9 112 things corporeal than did most philosophers, whereas their list of incorporeals was short: time, place, the void, sayables.10The Stoic division was used, but not explained, in a text every medievalphilosopher knew. In the prologue to his Isagoge, Porphyry asks thesefamous questions about genera and species: (1) Do they subsist or dothey repose in mere secondary conceptualizations? (2) If they subsist,are they corporeal or incorporeal? (3) Do they subsist in separation orin and about sensible things?11The first two questions presuppose Stoic ontology. They ask: (1) Aregenera and species tina or outina somethings or nothings? (2) Supposingthey are somethings, are they bodies or incorporeals? The third questionreflects the perennial debate between Platonists and Aristotelians.Although a sketchy account of the Stoic notion of somethings andnothings was in principle available in Senecas Letter 58, this was largelyunknown to the schoolmen, and no other source available to them carried the information. When, in the fourteenth century, John Buridan seriously considers whether something quid or aliquid might function asa genus generalissimum, he betrays no awareness of the Stoic theory.12 Neither are those medieval theories directly linked to the Stoics which makeuniversals and other objects of thought similar to Stoic outina; we evenfind people who call some entia rationis nothing (nihil).13 But it seemsthat by using the Stoic division Porphyry had smuggled in a conceptualpossibility that some people were to develop.Moreover, at least after about 1140, the medievals knew Chrysippusoutis, that is, the nobody argument, which runs:If someone is in Megara, he is not in Athens.Man is in Megara, ergo man is not in Athens. The argument shows the absurdity that a theory incurs if it allows there tobe a universal man and treats him as a somebody. While never becomingimmensely popular in the West, it did play a certain role in teaching people the dangers of hypostatizing universals. Commentaries on AristotlesCategories and Sophistical Refutations seem to have been important vehiclesin the arguments long travels from Chrysippus to the West.14The distinction between real corporeal things and incorporeal quasithings was not very relevant in the Middle Ages when everybody acceptedthe existence of real incorporeal things. Already Boethius had a problemwhen a revered author used the distinction. Cicero, in his Topics 5.2627,divides things into those which are, that is, corporeal things; and thosewhich are not, that is, intelligible things. Boethius comments lengthily to 113 show that, strictly speaking, it is false to say that incorporeal things are notand, to excuse Cicero, resorts to the expedient of claiming that Cicerodoes not in this place express his own opinion but just one commonlyheld by uneducated people.15John Scotus Eriugena in the ninth century seems to have had theCiceronian passage in mind when he divided nature into things that areand things that are not, the chief representative of the latter class beingGod.16 The result, of course, is most un-Stoic.We seem closer to Stoic thought when a text from the second half ofthe twelfth century text lists things that do not fall into any Aristoteliancategory: Universals, then, are neither substances nor properties, buthave a being of their own, as is also the case with enunciables, times,words [voces, possibly an error for loca, places] and voids [inania, themanuscript has fama, rumor].17 Apart from universals, this text mentions four noncategorial quasi things, at least two, probably three, andpossibly four of which also occur in the Stoic list of four incorporeals(sayable, time, place, void). An enunciable is something very much likea Stoic complete sayable (lekton).A longer list occurs in a Categories commentary from about 1200:There are several noncompound terms that neither signify substance nor are contained in any category due to their meaning, such as (1) any name that containseverything, (2) any name of the second imposition, (3) any name imposed asa technical term, (4) any name that denotes an enunciable, a truth, e.g., and(5) time, (6) space, (7) place, (8) void, and any other extracategorials.18 Elsewhere the same author tells us that the void (inane) has a being ofits own (suum esse per se),19 and the same is said about enunciables in yetanother near contemporary text.20One might think that book 4 of Aristotles Physics could have inspiredtwelfth-century speculation about the ontological status of place, the void,and time, but the book was virtually unknown at the time and the textsthat speak about these quasi things having a being of their own contain no echos of the Physics. Anyhow, an Aristotelian connection is quiteimpossible as regards the enunciables; only the Stoics had a somewhatsimilar notion, namely, that of a sayable. Sadly, I have not succeeded inidentifying any text available in twelfth-century Latin Europe that couldhave transmitted the Stoic list of not-being somethings. Hence we may befacing a case of parallel development, similar problems giving rise to similar ideas. But at least the development was encouraged by some genuineinheritance. 114 115 un-aristotelian logicTo most scholastics Aristotelian logic equaled logic. Any later additions,including their own products, were just extensions of Aristotles work,nothing really new. A modern observer may find it hard to share theschoolmens self-perception on this point.Among the strikingly un-Aristotelian features of scholastic logic is thegreat attention paid to syncategorematic words. The class includes quantifiers, modal operators, exclusives like only, exceptives like besides,the verbs is, begins, ceases, the conjunction if and several othermembers.Another strikingly un-Aristotelian feature is a strong interest in conditionals and a two-part analysis of arguments, eventually issuing in thegenre of consequentiae, that is, special treatises about molecular propositions consisting each of two propositions joined by the particles therefore or if-then (ergo, si). The two parts of a consequence were called theantecedent and the consequent. Categorical syllogisms might be viewedas consequences with a molecular antecedent the conjunction of thetwo premises and an atomic consequent (the conclusion). A frequentlycited criterion for validity of a consequence was this: [I]f the opposite of 116 117 A related requirement for the antecedent and consequent was formulated by the people whom Sextus characterizes as those who judgeby inclusion (emphasis): Those who judge by inclusion say that thatconditional is true whose consequent is potentially contained in theantecedent.30 Modern scholars have not been able to agree on whothese inclusion-people were. Michael Frede in his book on Stoic logiccomforted himself with the remark that, anyhow, this definition of a trueconditional appeared to have been of little historical importance becausenobody else mentions it.31Personally, I feel tempted to ascribe it to Posidonius, because it maybe seen as an attempt to flesh out what cohesion means, the answerbeing: semantic inclusion of consequent in antecedent and Posidoniusclaimed that (one class of) relational syllogisms hold kata dunamin,axiomatos possibly because he viewed an argument such as a:b = c:d.a:b = 2:1 c:d = 2:1 as implicitly contained in the axiom Those betweenwhich there is the same ratio universally, between those all the particular ratios arealso the same.32Be that as it may, the historically unimportant cohesion definitionof a true conditional turns up almost verbatim in the scholastic period,when very many adhered to the view that it is a good consequence,when the consequent is included [or: understood] in the antecedent.33Once again, however, it has not been possible to demonstrate a directhistorical connection between the ancient Greek and the medieval Latinformulation of the view. Parallel development, perhaps.Special treatises on consequentiae only appeared about the year 1300,but the notion had been around for a long time, and so had the rulesI discussed earlier. Already about 1100 there was a debate whether asyllogism is one molecular proposition, and if so, whether this meant thattrue instances of p, q, therefore r and true instances of p, therefore qcould be treated alike as depending for their truth on a topical maxim.Some Westerners saw every consequence, including syllogisms, as onemolecular proposition. This was already proposed by Abelards teacher,William of Champeaux.34 Abelard disagreed. In his discussion of theopposing view, he tells us that its champions claim that subcontinuative conjunctions glue sentences together just as well as continuative conjunctions.35 Now, what is that? Thats a recycled piece of Stoicdialectic! The Stoics had a healthy interest in conjunctions, and a classification allowing for primary and secondary types in some classes. Thus wefind the synaptic conjunction if (ei) and the parasynaptic conjunctionas (epei), the diazeuctic conjunction or (which expresses exclusive 118 sunemmenonsunaptikos sundesmosdiezeugmenondiazeuktikos sundesmossumbama parasunemmenonparasunaptikos sundesmosparadiezeugmenonparadiazeutikos sundesmosparasumbama The system was not meant for more than two brothers, but in the caseof sumbama parasumbama there was a third one, and this littlest brotherwas called elattone e parasumbama less than by-companionship.Careful distinctions accompanied by a systematic terminology wereone of the hallmarks of Stoicism. Thus, the use of nominal suffixes hadbeen regulated. An action understood as a state of someones center 119 of command (hegemonikon) and hence corporeal was praxis; one understood as the result of acting and hence incorporeal was a pragma. Similarly, other -ma words were used for incorporeal objects of wish andthe like. Now, careful distinctions and bold linguistic creativity with thegoal of having a systematic terminology were also characteristic traits ofmuch of scholasticism. Was there a historical connection? There probably was. The remains of Stoic terminology in grammar helped build atradition for being systematic. Boethius De Topicis Differentiis, one of themost widely read logic books in early scholasticism, provided anotherlink.In the beginning of that work Boethius introduces a distinction between argumentum and argumentatio. An argumentum is the sense and forceof an argumentatio, which, in turn, is the expression of the argumentum.38Wherever Boethius learned this distinction, I have little doubt that it isbased on the Stoic distinction between incorporeals with a - name andcorporeal entities with a - name.39As I have already indicated, an enuntiabile also called a dictum is a close relative of the Stoic sayable (lekton). It is that-which-aproposition-states, and it is a truth or a falsehood. It was the sort of mentalexercise they got when reading about arguments versus argumentationsthat taught the medievals how to make a distinction between an enunciation and its content, the enunciable.Abelard, who may have been the inventor of the dictum, has ea quaepropositionibus dicuntur as one possible interpretation of the Boethianargumentum.40 It is just possible that he derived the term dictum fromSenecas 117th letter to Lucilius,41 which contains some important information about sayables and offers dictum as one possible Latin translation.The rival term among the scholastics, enuntiabile, may be a creation ofAdam of Balsham, who belonged to the generation after Abelard,42 andit may owe something to Augustines use of dicibile.43The reinvention of something similar to a Stoic sayable was facilitatedby the presence in Boethius of a modified version of the Stoic definitionof an axioma: [A] proposition is an utterance that signifies a truth or afalsehood.44 This definition called for an answer to the question whatsort of thing are those truths and falsehoods signified by propositions?The answer, we have already seen, was They are not things in anyordinary sense; they do not really add to our ontological inventory. Thiswas a momentous decision, to introduce quasi things besides genuinethings. The move was repeated many times over for the next couple ofcenturies. Universals could be considered quasi things, for instance; or 120 121 on a deictic foundation for descriptive terms is one we know from the oldStoa. It does not go well with the Aristotelian notion of substance, butthough few were willing to drop traditional substances, one may, I think,talk of a crisis for substances in the fourteenth century.But we have not finished with the sayable-like entities and nonentities.John Buridan, in the fourteenth century, makes much of the differencebetween entertaining a proposition and accepting it. In his philosophy ofknowledge and science this means that knowledge is certain and evidentassent supervening on a mental proposition, by which we assent to it withcertitude and evidence.50 Knowledge, then, is a propositional attitude,like opinion.51 As in Stoicism, the basis is an assent, a sunkatathesis asthe ancients said.52 We need no more-or-less hypostatized essences to bethe objects of our knowledge. For the object of knowledge to have therequisite immutability, all we need is to take the relevant propositionsto be conditionals, or to take them as having an omni- or atemporalcopula.53The idea of separating assent to a proposition from understanding itis obviously transferable to ethics, and a move in that direction had beentaken as early as the twelfth century.In his Ethics, Peter Abelard tries to establish what is required for something to be morally wrong, what it takes to be a peccatum, or error as Ishall call it. He considers various candidates for the criterion of whetheran action is an error: vicious disposition, a desire (voluntas) to do thewrong thing, and wrong done, and eliminates each of them in turn. Theprimary bearer of moral predicates is the agents intention, his consciousacceptance of acting in some way. Abelards elimination of the other candidates for primary bearer of moral predicates culminates in his fantasy ofa monk who is trussed up with rope and thrown into a soft bed containinga couple of attractive women. The monk experiences pleasant feelings.That is not a good thing for a monk, but it makes no sense to say: whatis evil here is pleasant feelings, or: what is evil here is a mans naturalinclination to experience such feelings in such circumstances, or: what isevil here is his desire for we want to praise people who can resist desiresto do wrong things. So, if the monk is to be blamed in this case, it canonly be if he accepts having those pleasant feelings; if they occur againsthis internal protests, he is blameless.54Abelards word for acceptance is consensus. He may owe it toAugustine, or even to Saint Paul.55 The similarity to the Stoic sunkatathesisis striking. To err or sin, then, is to consent to something that is not properor allowed,56 and what is not proper is acting in some way. Abelard does 122 not discuss the ontological status of this object of consent, but later inthe century, at least, some would say that the object must be a dictum the content of a proposition, that is, for the monk my having pleasantfeelings.57Two centuries later after Abelard, John Buridan makes it a centralpoint of his ethics that man has the freedom to accept (acceptare), refuse,or postpone the decision about a certain course of action presented tothe will by the intellect. He does not explicitly say that the object ofacceptance refusal, or postponement is a mental proposition, but givenhis other philosophy it can hardly be anything but a proposition of theform, This is to be done.58 The old Stoics and at least some scholastics also share a predilection foroutrageous, paradoxical theses. The twelfth century shared with the Hellenistic age the trait of having competing philosophical sects, and eachof them made a list of remarkably strange theses that they were willingto defend: No noun is equivocal, No animal is rational or irrational,and so on. Most of the twelfth-century theses have to do with logic.63 TheStoic ones are ethical: All errors (sins) are equal and Only the sage isa king.The use of crazy examples and the flaunting of paradoxical thesesare symptoms of an intellectual fanaticism common to the Stoa and the 123 medieval schoolmen. They always try to carry the discussion to the extremes, hoping to construct a totally consistent philosophy, and theyhappily sacrifice a lot of commonsense views to achieve that goal. 124 Boethius ends his diatribe with speaking of the philosophers love for anddelight in the first principle, concluding:This is the life of the philosopher, and whoever has not that life does not havea right life. By philosopher I mean any man who lives according to the rightorder of nature and who has acquired the best and ultimate end of human life. 125 The first principle, which I have spoken about, is God the glorious and most high,who is blessed for ever and ever.70 conclusionAlthough clad in Aristotelian garbs, all variants of scholastic philosophydiffered markedly from Aristotles. None was Stoic either. But there wasa community of spirit with the Stoics, which made it possible for variousStoic rationes seminales preserved in ancient texts to develop into freshorganisms, themselves organic parts of different totalities of theory fromthat of the parent plants. Stoicism was nowhere and everywhere in theMiddle Ages but it was everywhere in a more important sense than theone in which it was nowhere.Notes1. Cf. Engberg-Pedersen 2000 and Chapter 3 in this volume.2. Simplicius, In Categorias Aristotelis (ed. Kalbfleisch 1907 in CAG 8.334.23):tois Stoikois, hon eph hemon kai he didaskalia kai ta pleista ton suggrammatonekleloipen. 126 3. Boethius, In Librum Peri Hermeneias 2a (ed. Meiser 1880: 71): Hoc locoPorphyrius de Stoicorum dialectica aliarumque scholarum multa permiscetet in aliis quoque huius libri partibus idem in expositionibus fecit, quodinterdum nobis est neglegendum. Saepe enim superflua explanatione magisobscuritas comparatur.4. Ibid. (201): Porphyrius tamen quaedam de Stoica dialectica permiscet: quaecum Latinis auribus nota non sit, nec hoc ipsum quod in quaestionem venitagnoscitur atque ideo illa studio praetermittemus.5. Among the sources transmitting the classification was Isidorus much-usedOrigines (Etymologiae) 2.24.38. For early medieval divisions of philosophy,see Iwakuma 1999.6. Boethius, In Categorias Aristotelis, PL 64: 159AC.7. Among other things, this might allow us to say that what unites the sciencesthey called dialectic is exactly that they speak the second-imposition language.See Pinborg 1962: 160.8. The account given here represents my interpretation of the whole Porretanproject. Documenting it is not possible within the space of this chapter. Themain sources are Gilberts commentary on Boethius Opuscula Sacra (ed.Haring 1966), the Compendium Logicae Porretanum (ed. Ebbesen, Fredborg,and Nielsen 1983), and Evrard of Ypres Dialogus (ed. Haring 1953). Forsecondary literature, see in particular Nielsen 1982, Jolivet and Libera 1987,and Jacobi 1987.9. On Stoicism in twelfth-century physics, see Lapidge 1988. For Thierrys cosmogony, see Haring 1971. A brief version of the same cosmogony occurs inAndrew Sunesens Hexaemeron (3.14681516) from about 1190 (ed. Ebbesenand Mortensen 198588).10. Sextus Empiricus, Adversus Mathematicos 10.218 (= SVF 2:31: 117): ton deasomaton tessara eide katarithmountai hos lekton kai kenon kai topon kai chronon.11. Porphyrius, Isagoge (ed. Busse 1887) (= CAG 4.1: 1.1012): peri ton genon te kaieidon to men <1> eite huphesteken eite kai en monais psilais epinoiais keitai <2> eitekai huphestekota somata estin he asomata <3> kai poteron chorista he en tois aisthetoiskai peri tauta huphestota. In Boethius Latin rendition (Aristoteles Latinus 1.67;5.1014): de generibus et speciebus illud quidem sive subsistunt sive in solisnudis purisque intellectibus posita sunt sive subsistentia corporalia sunt anincorporalia, et utrum separata an in sensibilibus et circa ea constantia.12. Buridanus, In Metaphysicen Quaestiones 4.6 (ed. 1518: f. 17rbva).13. See, for example, Lambertini 1989.14. Cf. Ebbesen 1981: 1:4649, 203204; 2:465, 471; 3:199. Variants of the argument continued to be used until the fourteenth century at least.15. Boethius, In Topica Ciceronis (PL 64: 1092D1093A): Sed id sciendum est,M. Tullium ad hominum protulisse opinionem, non ad veritatem. Nam utinter optime philosophantes constitit, illa maxime sunt quae longe a sensibussegregata sunt, illa minus, quae opiniones sensibus subministrant. . . . Sed, utdictum est, corporea esse, et incorporea non esse, non ad veritatem sed adcommunem quorumlibet hominum opinionem locutus est.16. Eriugena, Periphyseon 1 (ed. Sheldon-Williams 1968: 36). This is the openingremark of the work: Saepe mihi cogitanti diligentiusque quantum uires 17. 18. 19. 20. 21.22. 23.24. 127 suppetunt inquirenti rerum omnium quae uel animo percipi possunt uelintentionem eius superant primam summamque diuisionem esse in ea quaesunt et in ea quae non sunt horum omnium generale uocabulum occurritquod graece, YCIC latine vero natura uocitatur.Ars Meliduna, ch. 2 (ed. Iwakuma, forthcoming), f. 219rb: Non sunt ergouniversalia substantie nec proprietates sed habent suum esse per se, sicut enuntiabilia, tempora, et voces et fama. The proposed emendationsare mine. I feel rather confident that fama is a scribal error for inania,whereas voces may be a correct reading, though it is tempting to changeit into loca since locus and inane both appear in Anonymous DOrvillensislist of extracategorial words. In ch. 1, the Ars Meliduna has a long discussion of the ontological status of voces, and it seems possible that the author would claim that (in a certain sense) they have a sort of being oftheir own.Anonymous DOrvillensis, In Categorias Aristotelis (ed. Ebbesen 1999a: 273):sunt enim plurimi [sc. termini incomplexi] qui nec significant substantiamnec continentur in aliquo praedicamento significatione, ut nomen omniacontinens et nomen secundae impositionis et nomen ab artificio impositumet nomen appellans enuntiabile, ut verum, et tempus spatium et locusinane, et omnia extrapraedicamentalia.Ibid. (314): Hoc nomen locus tres habet acceptiones. Dicitur enim locus inane, ut: Socrates est in aliquo loco, movetur ab illo, locus ille a quomovetur intelligatur vacuus talem habens partium dispositionem qualemhabet Socrates, ita quod ibi non subintret aer, licet hoc sit impossibile; ettalis locus dicitur inane. Dicitur etiam locus aliquod superficiale cui aliquidsuperponitur, ut pratum. Dicitur etiam locus substantia ut domus, scyphuset alia huiusmodi concava quae infra sui concavitatem aliquid continent.In nulla harum significationum accipitur hic. Quia prout dicitur inane necest substantia nec qualitas nec res praedicamentalis sed suum habens esseper se.Anonymous, Ars Burana (ed. de Rijk 196267: 2:208): dicendum est deenuntiabili, sicut de predicabili, quod nec est substantia nec accidens necest de aliquo predicamentorum. Suum enim habet modum per se existendi.Et dicitur extrapredicamentale . . .On the twelfth-century schools, see Ebbesen 1992 and the other articles inVivarium 30/1.The main source for the ancient debate is Simplicius, In Categorias Aristotelis(ed. Kalbfleisch 1907 = CAG 8.913). It appears from his account that thequestion had already been raised by the time of Alexander of Aigai in thefirst century.This is what the moderate nominalist, Anonymus DOrvillensis, does about1200 (ed. Ebbesen 1999a).Compendium Logicae Porretanum (ed. Ebbesen et al. 1983): I. De terminis, II.De propositionibus, III. De significatis terminorum, IV. De significatis propositionum. Ars Meliduna (ed. Iwakuma, forthcoming): I. De terminis, II. Designificatis terminorum, III. De propositionibus, IV. De dictis propositionumsive de enuntiabilibus. 128 25. Auctoritates (ed. Hamesse 1974: no. 34 [i.e., Analytica Priora] 7, p. 309):Quando oppositum consequentis repugnat antecedenti, tunc consequentiafuit bona. Cf. ibid., no. 14: Quandocumque ex opposito consequentis infertur oppositum antecedentis, tunc prima consequentia fuit bona. Cf. alsoIncerti Auctores (ed. Ebbesen 1977), qu. 90: Dicendum ad hoc quod nonsequitur, quia oppositum consequentis potest stare cum antecedente, andqu. 99: Item oppositum consequentis non potest stare cum antecedente,ergo prima consequentia fuit bona per artem Aristotelis.26. Diogenes Laertius, Lives of Eminent Philosophers 7.73 (ed. Hicks 197980:180): sunemmenon oun alethes estin hou to antikeimenon oui legontos machetaitoi hegoumenoi. Cf. Sextus Empiricus, Outlines of Pyrrhonism 2.111: hoi de tensunartesin eisagontes hugies einai phasin sunemmenon hotan to antikeimenon toien autoi legonti machetai toi en autoi hegoumenoi, Those who introduce coherence, say that a conditional is sound when the opposite of its consequent isinconsistent with its antecedent.27. Akolouthoun or hepomenon are likely Greek models. For akolouthoun speaksthe fact that Boethius uses consequentia for akolouthia/akolouthesis and thatakolouthei is attested with the proper sense in Stoic logic. On the otherhand, substantival use of akolouthoun is not attested. For hepomenon speaksthe fact that this is used by others as a noun meaning consequent.Alexander of Aphrodisias, In Analytica Priora, CAG 2.1: 178 (= FDS 994,p. 1268) uses the pair hegoumenon/hepomenon in a passage relating Chrysippean doctrine. Boethius uses consequi and derivatives for the akolouthein family in his translations of Categories, De Interpretatione, and Topics, but for hepesthai in Sophistici Elenchi (and once in Topics); for references, see the relevant volumes of Aristoteles Latinus. Cicero uses consequi and antecedere inrelated senses. Boethius may have been inspired by the locus ab antecedentibus et consequentibus in Ciceros Topics. At one point in his career, Boethiusused praecedens/consequens (Hyp. Syll. I, ed. Obertello 1969: 221 = PL 64:835D836A).28. Apuleius, Perihermeneias (ed. Thomas 1938: 191): est et altera probatiocommunius omnium etiam indemonstrabilium, quae dicitur per impossibile appellaturque a Stoicis prima constitutio vel primum expositum. quodsic definiunt: Si ex duobus terminum quid colligitur, alterum eorum cumcontrario illationis colligit contrarium reliquo. veteres autem sic definierunt:Omnis conclusionis si sublata sit illatio, assumpta alterutra propositione tollireliquam.29. Ebbesen 1981: 1.26 (Sextus Empiricus, Outlines of Pyrrhonism 2.14653, Adversus mathematicos. 8.42937 [SVF 2:79, no. 240]).30. Sextus Empiricus, Outlines of Pyrrhonism 2.112: hoi de tei emphasei krinontesphasin hoti alethes esti sunemmenon hou to legon en to hegoumenoi periechetaidunamei.31. Frede 1974: 90.32. Galen, Institutio Logica 18 (ed. Kalbfleisch 1896: 4647).33. Illa consequentia est bona in qua consequens includitur [or: intelligitur]in antecedente. See, for example, Incerti Auctores, Quaestiones super LibroElenchorum, qu. 47 (ed. Ebbesen 1977: 98): illa consequentia est bona 34. 35.36.37.38. 39. 40.41. 42.43.44.45.46.47.48. 49. 50. 129 130 51. In other places Buridan operates with a knowledge that has both things anda proposition for their object, that is, knowledge becomes a triadic relationbetween knower, thing known, and proposition known. Cf. Willing 1999.52. For other propositional attitudes among the Stoics, one might think of theverbal moods. Cf. Ildefonse 1999: 16970.53. Conditionals: see William of Ockham. Omni- or atemporal: see Buridanus,Summulae 4.3.4 (ed. de Rijk 1998: 47). Cf. Buridans discussion in his Quaestiones super decem libros Ethicorum VI.6. See also Ebbesen 1984: 109, n19.54. Abelard, Ethica (ed. Luscombe 1971: 20).55. Cf. Marenbon 1997: 260 n31. See also Romans 7:16.56. Cf. Marenbon 1997: 260 with footnotes.57. Cf. the discussion of Deus vult Iudam furari in Andrew Sunesen 8.482331(ed. Ebbesen and Mortensen 198588: 230).58. Buridans Quaestiones super decem libros Ethicorum VI.6 is curiously ambiguous:acceptare and assentire both occur, but the object of assentire isapparentiis.59. FDS 845. Similar examples, using a donkey with or without one foot, occur inthe Sentences commentary of Roger Rosetus (fourteenth century), of whichan edition is being prepared by Olli Hallamaa of Helsinki.60. Cicero, De Officiis 2.23.90 (from Hecaton).61. See, for example, Peter of Auvergne, Sophisma VII, in Ebbesen 1989: 157.Socrates desinit esse non desinendo esse etc. Ista oratio probatur facta positione sc. quod Socrates sit in paenultimo instanti vitae suae.62. Ps.-Buridan = Nicholas of Vaudemont, Quaestiones super Octo Libros PoliticorumAristotelis VI.7 (Paris 1513; rpr., Frankfurt am Main: Minerva, 1969) f. 92ra.See emended text in Cahiers de lInstitut du Moyen Age Grec et Latin 56 (1988):19495.63. Nullum nomen est aequivocum was a tenet of the Melun school; see Ebbesen1992: 63. An anonymous list of tenets of the Albricani (discovered by Yukio 67.68. 69. 70. 71. 131 7Abelards Stoicism and Its ConsequencesCalvin Normore Abelards Stoicism 133 That a difference in the reason for which an act is done can affect itsmoral character shows that it is not merely intrinsic features of the actthat matter but leaves open what extrinsic features might be relevant.As preparation for his own positive view, Abelard insists that the moralsignificance of an act cannot derive solely from what he calls the voluntasfrom which it is done either. To show this, he borrows Augustines example 134 Calvin Normore of the slave who kills his master to avoid being beaten but who does it as Abelard, like Augustine, insists without a voluntas for doing it: [F]orbehold there is an innocent man whose cruel lord is so moved by ragethat he chases him with a drawn sword to kill him. Eventually, the man,though he flees for a long time and avoids his own death as much as hecan, forced and unwilling kills the lord lest he be killed by him. Tell me,whoever you are, what bad will he would have had in doing this.5 Afterconsidering several possible defenses for the view that there is some badvoluntas involved, Abelard concludes: [S]o it is evident that sometimessin is committed entirely without bad will; it is clear from this that it oughtnot to be said that sin is will.6 Abelard goes on to conclude that not everysin is voluntary either because There are those who are wholly ashamedto be dragged into consent to concupiscence or into a bad voluntas andare dragged by the infirmity of the flesh to velle this which they do not atall velle to velle.7 What then remains? In the Ethics Abelard insists that:Vice is that by which we are made prone to sin, that is, are inclined to what is notfitting so that we either do it or forsake it. Now this consent we properly call sin,that is, the fault of the soul by which it earns damnation or is made guilty beforeGod. . . . And so our sin is contempt of the Creator and to sin is to hold the creatorin contempt, that is, not at all to do on his account what we believe we ought todo on his account or to forsake on his account what we believe is to be forsaken.8 Sin is contemptus dei. But how does one commit it? The passage justquoted suggests that one sins by not acting in a certain way, but this iselliptical. As Abelard goes on a few pages later:We truly then consent to that which is not permitted when we do not at all drawback from its accomplishment but are wholly ready, if given the chance, to doit. Anyone who is found in this disposition [proposito] incurs the fullness of guilt;nor does the addition of the carrying out of the deed add anything to increasethe sin. On the contrary, before God the one who strives to do it as much as hecan is as guilty as the one who does it as much as he can.9 135 136 action. If the moral value of an intention does not derive from the moralvalue of the act that it is an intention to perform, then from what does itderive? By making the moral value of acts derivative on the moral valueof intentions to perform them, has not Abelard made it impossible toassign a value to acts at all?Abelards procedure appears to be ungrounded. In contrast, moral theories that first assign a moral value to acts and then use that to assigna value to intentions are typically grounded. For example, in The Theoryof Morality, Alan Donagan first works out an account of which acts it ispermissible to perform and then connects this to an account of intention through the principle: It is impermissible to intend to do what itis impermissible to do.15 Donagans account of the permissibility of actions makes no reference whatever to intentions and so avoids circularity.Abelard can, it is true, accept the principle that it is permissible to intendjust what it is permissible to do, but, because the permissibility of acts is,for him, derivative upon the permissibility of intentions, it cannot providethe basis from which we compute the permissibility of an intention. Anindependent specification of the permissibility of an intention is needed.One might seek a beginning for such an independent specification inAbelards remark that for an intention to be good, what one intends mustbe in fact pleasing to God. This would do, provided there was a specification of what was pleasing to God that did not depend on specifying whatis good. But is there?Abelard was infamous for the claim that God was not able to do anything other than what God did do. Abelards argument was simple. Because God acted for the best, this was the best. Because this was the best,anything else would be less good. But God cannot do the less good; therefore God cannot do anything other than what God does do.16 If we takethis at face value, we see at once that if we suppose that the best is justwhatever most pleases God, then Abelards argument reduces to the claimthat whatever most pleases God necessarily most pleases God. This maybe defensible but it cannot be defended on the grounds that Abelardhimself invokes. For example, it cannot be reconciled with this passagefrom the Dialogue :So since plainly nothing is done except with Gods permitting it indeed nothingcan be done if he is unwilling or resists and since in addition its certain thatGod never permits anything without a cause and does nothing whatever exceptreasonably, so that both his permission and his action are reasonable, surelytherefore, since he sees why he permits the individual things that are done tobe done, he isnt ignorant why they should be done, even if theyre evil or are 137 evilly done. For it wouldnt be good for them to be permitted unless it weregood for them to be done. And he wouldnt be perfectly good who would notinterfere, even though he could, with what wouldnt be good to be done. Rather,by agreeing that something be done that isnt good to be done, he would obviouslybe to blame.So obviously whatever happens to be done or happens not to be done has areasonable cause why its done or not done.17 138 good acts depend on morally good intentions that are grounded in statesof affairs that have no moral value but are metaphysically good. Thesemetaphysically good states can, the proposal goes, be identified withoutany recourse to morality at all.Can we find the distinction this proposal needs in Abelard?Abelard is very sensitive to the logic of the word bonum. At the end ofthe first book of the Dialogue where the Christian and the Philosopherseem to have just about arrived at a consensus, he has the Christian saythat when it is used adjectively (as in good horse or good thief ),the word bonum has its signification affected by the noun to which it isattached. Thus to be a good man is not the same thing as to be a goodworker even if the same thing is man and a worker. The Christian pointsout that we apply the term bonum both to things (res) and to dicta andrather differently. We could say for example that it is bad that there existsa good thief.At this point, Abelards moral theory intersects with his ontology andhis logic. In his ontology Abelard has individual res and individual forms there is nothing else. But Abelard is happy to speak about two other items,which he usually calls statuses and dicta. Being a human is a status, andwe are all humans because we agree in this status. But a status is not athing, and there is nothing in which we agree. That we are all humanis a dictum but it too is not a thing. I dont really know exactly howto fit actions into Abelards metaphysics (despite the discussion of facereand pati in the Dialectica), but I am reasonably certain that these too arenonthings. My evidence for this is that, when Abelard gives examples ofcausation (which he expressly claims is a relation among nonthings), theexamples he gives are of actions thus, committing a murder is a causeof being hanged, he says. So whether or not actions just are statuses, theyare like them in many ways.The primary adjectival uses of the word bonum are to things undera status and to dicta. We speak of good people, good horses, and goodthieves, and we say that it is good that freon levels in the atmosphere havestopped rising so quickly. Especially important for our purposes is the statusof being human.We say a man is good for his morals, Abelard has the Christian say,18and his morals are a matter of the mans intentions. Hence to be a goodman is to have good intentions or, as the Christian sometimes puts it, toact well (as contrasted with doing good).This brings us back to an earlier worry: what is it to have a good intention or to act well? 139 In this passage, Abelard apparently assumes what I will call the Principleof the Concomitance of All Goods (PCG) the view that real goods neverconflict.Now Abelard expressly accepts the PCG. In his Dialectica, for example,he writes: However, truth is not opposed to truth. For it is not the casethat as a falsehood can be found to be contrary to a falsehood or anevil to an evil so can a truth be opposed to a truth or a good to a good;rather to it [the good] all things good are consona et convenientia.20 PCGis a principle of which various corollaries figure prominently in Abelardslogic and metaphysics. For example, Abelard is distinctive among twelfthcentury logicians for rejecting the so-called locus ab oppositis, that is, thetopical rule that enables one to infer from the presence of one feature tothe absence of its contrary. The consequences of this for ethics emergein another passage very late in the Dialogue where the Christian says:As was said, we call a thing good that, while its fit for some use, mustnt impedethe advantage or worthiness of anything. Now a things being impeded or lessened would indeed be necessary if through its contrary or lack the worthinessor advantage would necessarily not remain. For example, life, immortality, joy,health, knowledge, and chastity are such that although they have some worthinessor advantage, it plainly doesnt remain when their contraries overtake them. Sotoo any substances whatever are plainly to be called good things because, whiletheyre able to impart some usefulness, no worthiness or advantage is necessarilyhampered through them. For even a perverse man whose life is corrupt or evencauses corruption, could be such that he were not perverse, and so nothings beingmade worse would be necessary through him.21 The upshot of this passage, as I read it in conjunction with the endorsement of PCG in the Dialectica, is that we have a test for telling metaphysicalgoods from nongoods. Metaphysical goods are elements that necessarilycan coexist together. Evils on the other hand may coexist, but do notnecessarily coexist. 140 God, the Supreme Good, always and necessarily acts for the best onAbelards view. The best for which he acts is not a moral best but a metaphysical best; he creates the maximal collection of necessarily compatiblethings and forms the metaphysically best world.22 Moreover, he makesthis collection in such a way that each item in it has a reasonable causeof its coming to be in it.For a human being to intend well is to intend to do what is pleasingto God, that is, to intend to bring about a situation (an eventus rei inAbelards terminology) which you believe to be part of the metaphysicallybest world and which is, in fact, part of the metaphysically best world, andto intend to do this in a way which you believe to be and which in fact isreasonable. For a human being to act badly is to intend to bring abouta situation that you believe to be inconsistent with the metaphysicallyperfect world, either because you believe the intended effect to not havea place in such a world or because you believe the way you intend to bringit about not to be a reasonable one. An intention is neither good nor badif it is an intention to bring about something that you believe to be part ofthe metaphysically best world and to be reasonably brought about as youintend but which in fact is not. On this account, metaphysical perfection isspecifiable independently of intentions, and so the circle that threatenedAbelards account is broken.I will now try to suggest that the ethical picture attributed to Abelardso far is Stoic, both in content and inspiration.If I guess correctly, much of the account just related will seem familiar to those interested in Stoicism. The division of items into goods, evils,and indifferents, and the Principle of the Concomitance of All Goods arewell-known Stoic themes. But I want to make a stronger claim that thecentral feature of Abelards ethics, his view that the locus of sin is consentor intention is a Stoic view. Since this feature of Abelards ethics also becomes a central feature of fourteenth-century Franciscan ethical theory,it is, I suggest, plausible to think that through Abelard an interesting aspect of Stoic ethical theory is transmitted. The first hurdle my claim mustclear is the suggestion that by Abelards time, at least, there is nothingdistinctively Stoic about the suggestion that sin is a matter of consent andintention. After all, does not Abelard himself quote Augustine in supportof his doctrine?Like Abelard, the Stoics face the problem of accounting for moralresponsibility within the best of all possible worlds. Let me stress theimportance of this. Stoic determinism is also a providentialism: on the 141 Stoic view this is not only the only possible world, but also the best possibleworld. There is a wealth of evidence for the Zenonian equation of Fate,Providence, and Nature, and for the Chrysippean equation of Fate, Providence, and Zeus.23 In such a setting, to live in accordance with nature isto accept Fate. It is to identify with what must (and so will) happen.Such a stance raises very peculiar problems for moral responsibility.Dio will kill Theo if and only if that is for the best. Thus, if Dio kills Theoit is good, even best, that Theo be killed by Dio. How then can we blameDio? Both Abelard and the Stoics have this problem and both respond inthe same way by making moral praise and blame not a matter of what isdone but of the description under which it is done. In Abelards exampleof the two men who cooperate to hang a criminal, the hanging of thecriminal justly is a good; what is bad is the revenging of oneself. Theexecutioner and the vengeful man both hang the criminal, and bothconsent to hang the criminal, but only the vengeful man consents torevenging himself and only he is blameworthy.Abelards solution to this common problem is to make consent orintention the locus of moral responsibility. The ancient Stoic solution isto locate moral responsibility in sunkatathesis which Cicero translatedas adsensio and English writers translate as assent.First, then, let me try to make plausible the identification of Abelardianconsent and intention with Stoic assent so as to defend the claim that itwould be reasonable for Abelard to think his consent a Stoic notion. HereI appeal to Brad Inwoods authority:[I]f one wants to know what someones intentions are, one simply checks to seewhat hormetic propositions receive assent. If there is a quarrel about what anagent was intending to do, the dispute is in principle soluble. Did Dion intend towound Theon when he threw the javelin across the playing field? We need onlyask what proposition he assented to in performing that action.24 Inwood admits that, although there is no ancient source for the claim thatthere are strictly hormetic propositions, there is plenty of theoretical justification for thinking the Stoics distinguished them. And what would sucha hormetic proposition, assent to which leads directly to horme (impulse), 142 be like? Inwoods example is that of a man with a sweet tooth who in thepresence of a piece of cake forms the hormetic proposition, It is fittingfor me to eat cake.26What now of Abelard? Usually he speaks of consent not to dicta butto acts. X consents to take his sister in marriage, or to build a house forthe poor, or to kill his master. But Abelard also insists that there is beliefinvolved. Thus an intention isnt to be called good because it appearsgood, but more than that, because it is such as it is considered to be that is, when if one believes that what he is aiming at is pleasing to God,he is in addition not deceived in his evaluation.27 And he sometimesdescribes sin to be scorn for God or consenting to what one believesshouldnt be consented to.Because believing that your intention is one you should (or shouldnot) have is a matter of accepting or assenting to a dictum, it seems clearthat assent is involved in Abelardian consent.28Grant me for a moment that Abelards consent or intention is closeenough to our best current understanding of Stoic theory on this matterthat, if Abelard had had our sources, it would not be unreasonable forhim to think that the view was Stoic. Still the question remains whetherit is plausible to think that he thought of it as a Stoic doctrine.In the Ethics, Abelard proposes the doctrine that the locus of sin is consent as his own, but in the Collationes and in particular in the Dialogue of aPhilosopher with a Christian, Abelard introduces it first as the Philosophersposition. The Philosopher claims:For certain things are called goods or evils properly and so to speak substantially.For instance the virtues and vices themselves. But certain things are so called byaccident and through something else, like actions that are our deeds. Althoughtheyre indifferent in themselves, nevertheless theyre called good or evil from theintention from which they proceed. Frequently, therefore, when the same thingis done by different people, or by the same person at different times, the samedeed is nevertheless called both good and evil because of the difference in theintentions. On the other hand, things that are called goods or evils substantiallyand from their own nature remain so permanently unmixed that what is oncegood can never become evil, or conversely.29 This is the first occurrence in the Dialogue of the claim that intention isthe locus of moral culpability, and it follows shortly a passage in whichthe Christian refuses to take a stand on the issue. It seems plain then thatin the Dialogue Abelard is marking this as a distinctive position of thePhilosopher. 143 144 the Lombard takes up the question of what a sin is and canvasses threeopinions:Because of the ambiguity occasioned by these words, widely divergent opinionson sin have been observed. For some say that the will alone is sinful, and notexternal acts; others, that both the will and acts are sinful; still others reject both,saying that all acts are good and from God and exist by divine authorship. Evil,however, is nothing, as Augustine says in his Commentary on the Gospel of John (16):All things are made through him, and without him nothing is made, that is, sin isnothing, and when men sin they are not producing anything.33 The first of these opinions is Abelards (with the terminological shift justmentioned). Lombard returns to it in d. 40:Next, regarding acts, it seems that we must also ask whether they should beconsidered good or evil in view of their ends, just like the will. For although somebelieve that all things are naturally good insofar as they exist, we should not callall things good or praiseworthy absolutely, but some are said to be absolutely evil,in the same way as others are called good. For those acts are absolutely and trulygood which are accompanied by good reasons and intentions, that is, which areaccompanied by a good will, and which tend to good ends. But those acts are saidto be absolutely evil which are done for perverse reasons and intentions.34 Lombard does not endorse Abelards view, but he does treat it with respect, and as commenting on his Sententiae became the standard way atheologian showed his mettle in the mid-thirteenth century, Abelardsview became one of those every theologian had to consider and discuss.Finding thinkers who attack Abelards view of this matter during thethirteenth century is easy, but finding thinkers who endorse it is muchharder, and I have not been able to discover any of whom I am confident before Peter John Olivi. But the view does appear very explicitly inOckham. Here is one of the first occurrences in Ockham in book 3 of hisCommentary on the Sentences:If you were to ask what the goodness or badness of the act adds beyond thesubstance of the act that is merely by an extrinsic denomination called goodor bad, such as an act of the sensitive part or, similarly, an act of the will, I saythat it adds nothing that is positive, whether absolute or relative having being inthe act through any cause, that is, distinct from the act. Rather, the goodness isonly a connotative name or concept, principally signifying the act itself as neutraland connoting an act of will that is perfectly virtuous and the right reason inconformity with which it is elicited.35 In his biblical Question on the Connection of the Virtues, and in his Quodlibeta, Ockham adopts the same strategy. He argues that all exterior actsand many interior acts can be called good or bad only derivatively, in 145 Notes1.2.3.4.5.6.7.8. 146 9. Ibid. (14).10. Ibid. (22).11. That said, it must be admitted that, as Blomme (1958: 12844) andLuscombe (1971: 4243, n2) have pointed out, Abelard usually speaks ofconsent when he talks of sin and of intention when he talks of merit. Ido not think that this is philosophically significant since he is quite explicit that When the same thing is done by the same man at differenttimes, his action is said to be now good and now bad on account ofthe diversity of his intention (Ethica I; ed. Luscombe 1971: 52), whichpretty clearly suggests that intentions account for bad actions as well as forgood.12. Abelard, Ethica I (tr. Spade 1995: 20).13. Abelard, Ethica I (ed. Luscombe 1971: 54).14. Ibid.15. See Donagan 1977: 127.16. Abelard himself attributes the argument to Plato in the Timaeus (28a; seeTimaeus a Calcidio translatus, ed. Waszink 1962: 20, lines 2022) in the secondDialogue, between the Philosopher and the Christian (tr. Spade 1995: 145).17. Abelard, Dialogus II (tr. Spade 1995: 145; ed. Marenbon and Orlandi 2001:21617).18. Abelard, Dialogus II (ed. Marenbon and Orlando 2001: 204).19. Ibid. (2046): Quantum tamen michi nunc occurrit, bonum simpliciteridest bonam rem dici arbitror, que cum alicui usui sit apta, nullius rei commodum uel dignitatem per eam impediri necesse est. E contrario malamrem uocari credo, per quam alterum horum conferri [conferri//impediri]necesse est. Indifferens uero, idest rem que neque bona est neque mala,illam arbitror per cuius existentiam nec ulla bona [ulla bona//illa] deferri[deferri//conferri] neque impediri necesse est, sicut est fortuita motio digitiuel quecunque actiones huiusmodi. Non enim actiones bone uel male nisisecundum intentionis radicem iudicantur, sed omnes ex se indifferentes suntet, si diligenter inspiciamus, nichil ad meritum conferunt, que nequaquamex se bone sunt aut male, cum ipse uidelicet tam reprobis quam electis equeconueniant. Square brackets indicate adoption of a variant reading. Thetranslation in the text is mine, although it is based on Spade 1995: 141. Ihave treated the scope of some modal expressions differently from the wayProfessor Spades translation does.20. Abelard, Dialectica IV.1 (ed. de Rijk 1970: 469, l.1720): veritas autem veritati non est adversa. Non enim sicut falsum falso vel malum malo contrariumpotest reperiri, ita verum vero vel bonum bono potest adversari, sed omniasibi bona consona sunt et convenientia.21. Abelard, Dialogus II (tr. Spade 1995: 147).22. See ibid.; Abelard, Dialogus II (ed. Marenbon and Orlando 2001: 14042):For since nothing happens without a cause, because God arranges all thingsfor the best, what is it that occurs that makes a just person have to grieve orbe sad, and insofar as he can to go against Gods arrangement for the best,as if he thinks it has to be corrected? (Cum enim nichil sine causa, Deocuncta optime disponente, fiat, quid accidit, unde iustum tristari uel dolere 23.24.25.26.27.28. 29. 30.31.32.33. 147 8Constancy and CoherenceJacqueline Lagree 149 150 Jacqueline Lagree 151 fortune that defeats my fortune.17 Even so, the goods of fortune are nottruly goods because there is no good except honor; they are all, at most,preferred indifferents. They are also unpredictable advantages, destinedby their very nature to change whomever they favor frequently and inunforeseeable ways. The sage therefore learns to separate himself fromfortune in order to take refuge in his inner citadel.To summarize Senecas main points:1. Constancy is a virtue proper to the sage alone. To be sure, this is trueof every virtue, but it is especially true in this case because constancyis virtue par excellence. It exhibits the specific quality of the wisdomthat is the coherent life lived in accordance with nature and reason(homologoumenos zen/convenienter vivere). In this sense, constancy isidentified with wisdom. It testifies to the particular excellence ofthe sage, to his divine status, beyond the reach of the blows offortune. This firmness of character is what corresponds externallyto the solidity and internal coherence of the representations in hissoul.2. Constancy is the virtue that responds to the onslaughts of fortune.It represents the stability of the sages soul when faced with theabsolute exteriority of fortune, which signifies the changeability ofevents outside us, their never-ending vacillation from favorable tounfavorable and back again. No compromise is possible betweenthe sage and fortune: vincit nos fortuna nisi tota vincitur (fortunevanquishes us unless it is utterly vanquished). To defeat fortune isto demystify it: to show that it is only a mistaken personificationof temporal change, which has the least reality. The door to thesages home is wide open, but fortune cannot enter there becauseit can only take up residence where something is attributed toit,18 and fortune is nothing. On the other hand, as soon as weconfuse what is disagreeable with what is evil (incommodum withmalum),19 that is, as soon as we attribute any reality to this phantom,it besieges our soul and fills it with troubles joy and sorrow, desireand fear all of which stem from our inability to live truly in thepresent.3. The true name of fortune in the Stoic system is destiny. But although Seneca thoroughly examines providence and destiny in DeProvidentia, Naturales Quaestiones, and De Beneficiis, he utters not aword about them in De Constantia. That is because constancy isnot considered there in relation to freedom but in the context of 152 Jacqueline Lagreenegative interpersonal relations (injury and offense), especially inthe struggle for political power.20 This emerges in the powerfulbond between constancy, providence, and destiny in Neostoicism. neostoic constancyThe flowering of treatises on constancy at the end of the Renaissancecan be attributed to a frightening political landscape of religious war,pestilence, and famine. Here one needs not only personal consolation as when one is bereaved, in exile, or faced with impending death21 butwell-being [salut] in both the political and philosophical senses of theterm.22 How does one keep ones peace of mind when ones only salvationappears to be a flight that is for all intents and purposes impossible?For so many years now we have been tossed about by the tempest of civil war;we are buffeted by winds of trouble and sedition from every direction, as on astormy sea. If I seek quietude and peace of mind, I am deafened by the soundof trumpets and armed conflict. If I take refuge in the gardens and countryside,soldiers and assassins drive me back into the town. It is for this reason that I havedecided to flee.23 But where to go? What part of Europe is peaceful today?24 And how doesone flee passions of the soul? Can one flee from oneself?25 Strengthenyour spirit and fortify it, replies Langius, that is the true way of finding inner calm in the midst of troubles, peace in the midst of armedconflict.26To better understand the force and originality of Lipsius treatment ofconstancy, we should compare it with that of Montaigne, who a numberof years before (157375) had written a short lecture on constancy inthe context of his reflections on military life.27 The appeal to Stoicism ishere a matter of circumstance. What does Montaigne say? The game ofconstancy is played principally to tolerate inconveniences patiently whenthey have no remedy.28 Constancy is the virtue of a soldier faced with theenemy, not at all characteristic of everyday life.29 Alluding to the impassibility attributed to the Stoic sage, Montaigne remarks that his feelingfear and suffering is not out of the question, provided that his judgmentremains safe and intact, and that the foundation of his discourse does notsuffer injury or any alteration, and that he gives no consent to his terrorand suffering.30 In short, constancy is a virtue of brave and military men,not of private individuals. But the situation becomes quite different afterthe publication of Lipsius treatise and its revival by Guillaume du Vair. 153 Lipsius DiscoveryI do not dwell on Lipsius philological writings, which are vast and wellknown.31 Lipsius not only gave to his contemporaries scholarly editionsof Seneca, which were made more authoritative through references, intheir notes, to his two treatises on Stoic doxography, the Guide to StoicPhilosophy (Manuductio ad Stoicam Philosophiam) and the Natural Philosophyof the Stoics (Physiologia Stoicorum) (Anvers 1604); he also provided themwith a newer and more modern version of the Stoic system. In thesetwo works, which are filled with long quotations, Lipsius tries to explainthe Stoic system thoroughly and show that for the most part, it is notincompatible with Christianity while also offering a sure path to wisdom.To be sure, it should be said that this is possible only by a certain amount ofborrowing from Neoplatonism, but it should also be recognized that theancient Stoic school was always borrowing ideas that originated in otherschools, and that this practice quite normal in the life of a philosophicalschool did not necessarily water down its teachings. In the Manuductioand Physiologia, Lipsius proposes a Stoicism close to that of the churchfathers,32 that is, as acceptable with some reservations,33 whereas in thetreatise De Constantia he advocates a Stoicized Christianity. Because I wishto analyze the points of convergence and divergence between Stoicismand Christianity at the end of the Renaissance, it is certainly a good ideato focus on the latter work and its impact. Lipsius De ConstantiaIn point of fact, Lipsius properly philosophical output comes down to thetreatise De Constantia, since the Politics, despite its historical importance,is only a well-organized cento of quotations. De Constantia contains onlya few quotations and a structured discussion linked by a central theme.Above all, it is a work written for himself for a man who has suffered lossin the midst of public calamity34 and a work to which anyone shouldbe able to return for encouragement. Still, why look to the Stoics for alesson in virtue rather than the Bible? Lipsius gives a number of differentreasons: (1) in De Constantia, he rejects the help of religion in a centuryrife with controversy and quarrels but lacking in true piety;35 (2) he offers three arguments in the Manuductio:36 (a) philosophy in the serviceof theology prepares the way for Christianity; (b) the ancient philosophers (prisci philosophi) are capable of transmitting to us something ofthe wisdom of antiquity (prisca theologia), since they were closer in theirorigins to God; (c) at night and when the sun is no longer shining, we 154 must navigate by the stars it being conceded that the night of faith isthe result of these controversies.37Let us now turn to the nature of constancy: what it is distinct from, aswell as what it presupposes, what it requires, and what it produces. 155 cultivation of judgmentIf everything begins with judgment, then emotion is merely weak opinion,as Cicero says.52 To guide his young friend and restore the equilibrium ofhis soul, Languis proposes a fourfold remedy:53 public evils are (a) sent byGod; (b) necessary, because they have been fated to happen; (c) useful;and (d) less serious than one might at first imagine.54 Corresponding tothese are four requirements: knowledge, discernment, perspective, andcomparison. KnowledgeTo the age-old question, Whence evil? (unde malum?), Lipsius gives thetraditional answer: From God (a deo). How? Necessarily. On the oceanof life, the man who refuses to sail with the winds of the universe refusesin vain because he must either do so or be dragged along.55 When a manknows that the ultimate source of what he sees as evil is both good andrational, he can admit that freedom is obedience to God, the formula 156 DiscernmentThe first false opinion to be uprooted is the confusion between goodsand preferred indifferents, or again, between evils and dispreferred indifferents. False goods and false evils (falsa bona et falsa mala) are external,fortuitous things affecting our well-being that do not properly concernour soul and its good, that is, virtue or honor.58 Among these, public evils(war, pestilence, famine, tyranny, etc.) are without a doubt more seriousthan private evils (suffering, poverty, infamy, death, etc.) because theyhave grievous effects on a greater number of people and cause greaterturmoil, but especially because they elicit passionate, virulent, and pernicious responses that masquerade as virtue. PerspectiveThe third discipline of judgment involves putting what happens to usinto perspective and ascribing particular misfortunes to global politicalupheaval, a vision of history that insists upon the rise and fall of empiresand, at a higher level, that involves a cosmological vision revealing a worldthat totters everywhere. In nature, as in history, everything eventuallywears out. Everything changes, even what is thought to be unchangeable,such as in the celestial order, because new stars and new lands have beenobserved.59 All the elements are invoked, one after another: the transformation of stars (fire), atmospheric changes (air), the movement of theocean and the flooding of rivers (water), earthquakes and the swallowing up of islands (earth). War between men is written into the immensecosmic framework of war between elements. So, too, it goes with the riseand fall of empires: Nations, like men, have their youth, their maturity,and their old age.60 To judge correctly, you must vary your point of viewand change your perspective. Lipsius revives Machiavellis distinction between top-down and bottom-up perspectives:61 viewed from the bottom,there is much in the vanity of human affairs to make one weep; viewedfrom the top, that is, from the perspective of God and Providence, onewill recognize that everything is governed in a planned and unchangeable order.62 Guillaume du Vair uses another image, that of the GreatPyramid of Egypt, which travelers approach from different directions.63 157 Each one sees only the side facing him and, without walking around it andthereby realizing that its three faces form one body, resumes his journeyconvinced that he alone has seen it as it really is.When these people came to reflect on this sovereign power directing and governing the universe, which they had hitherto considered in its effects, each wascontent to view it from afar and conceive of it in whatever way it first appeared tohim. The one who perceived order and a regulated sequence of causes pushingthemselves into existence one after another called it Nature and believed that itis responsible for everything that happens. The one who saw a number of thingshappen that were foreseen and predicted that they could in no way be avoidedcalled the power that produced them Destiny and fatal necessity and judgedeverything to be subordinate to it. Another still, who saw infinitely many eventsthat he could not make sense of and that seemed to happen without cause calledthe power from which they originated Fortune and reckoned that all things aregoverned in this way.64 Nature, destiny, and fortune are only imperfect names, representing onesided views of a unique divine Providence that is, as it were, the geometric projection of these perspectives. In the final analysis, fortune andnecessity are no more than two faces of an omnipotent deity, the oneinscrutable, the other capable of being foreseen by men.65 ComparisonWe can see a thing without its disguises by putting it into its proper perspective. Once we have learned not to see our misfortunes as evils, theirproper perspective will show that they are less serious than we first imagined and that they have beneficial effects not apparent at first glance.Change is the law of the universe;66 discord is necessary for harmony;67what frightens us, like the masks that terrify children, is the false image ofthings we make for ourselves.68 When we compare actual evils to ancientcalamities, we find that they are neither so great nor so serious.69 Consequently, we will be able to move on to the other, more positive, aspect ofthis recovery of the self, that is, to the cultivation of the will. cultivation of willThe will belongs to us. As an expression of our freedom, it participatesin secondary causes inscribed in the providential system of cosmic order:God carries along all that is human by the force of destiny, without takinganything away from its particular power or movement.70 True freedomlies not in pointless rebellion against God which nevertheless remains 158 possible for us71 but in obedience to God, like the man who ties hisboat to a rock and uses the rope to pull himself to shore rather than tobring the rock to himself.72In the end, one must concede that the finality of evil has a certainpurposefulness for man and the world at the same time.73 For man, thisutility is captured in three words: practice, correction, and punishment.74Calamities serve to restrict our access to the goods we seek: they lash,restrain, and strengthen the weak, as well as punishing the wicked directlyor through their acquaintances and descendants.75 For the world, theykeep the population under control.76 However, these bitter remediesare less effective than the shock treatment one feels once one becomesaware of their necessity. It is no use to protest, since one cannot rebel butonly consent to an outcome that is inescapable.77 Faced with the searchfor the ultimate cause of the evils that afflict us, Lipsius reaches for theremedy of learned ignorance: [W]here divine and transcendent thingsare concerned there is only one kind of knowledge that of knowingnothing (in divinis superisque, una scientia, nihil scire).78 159 160 161 Here we notice immediately that consolation has been substituted for constancy, and that men persuade themselves of something instead of actuallyknowing it. Nature, Destiny, and Fortune are but three faces covering thesummit of the pyramid named Providence.99 Finally, a more moral visionreplaces the cosmic vision of the fall of empires:Consider, if you will, the fall of any empire and of all the great cities; compareits rise to its fall; you will see that its worthy ascendance was helped along by itsvirtue and that it was also assisted in its endeavors by this holy Providence; on theother hand, you will concede that its fall was just and that its vices made its ruinby divine justice virtually inevitable.100 162 despair and hope alike:104 So then, since we are so doubtful about thingsin the future and misled by our hopes and fears, what steps can we taketo resolve our fear of the future that will cause us to abandon our presentduty?105 True constancy is nourished by the three theological virtues offaith in a good and provident God and belief in the immortality of thesoul106 a common notion,107 central to philosophy,108 which is specifiedand reinforced by religion; hope, indeed, assurance in eternal life afterdeath;109 and finally, charity, because beyond compassion, there is a dutyto relieve the suffering of those who are close to us.110 Constancy is a moralposition taken by and for oneself; consolation is concerned with others.Finally, the exhortation to virtue in du Vair opens into eschatologicaltime because future history will inevitably know the same vicissitudes offortune and the return of the same public disasters. The examination ofall the perfections a man seeks goodness, wisdom, power and authority,truth, eternal being, creative power (children, works, discoveries), justice,perseverence in achieving ones goals, affluence in life, contemplation,joy in oneself111 shows that they are but human reflections of the divineperfections.The Christian reading of Stoicism expressed in du Vair can be summarized as follows: the sage and God are no longer identical, butanalogous;112 constancy is determined not only by the recognition oforder in the world but by the hope of eternal life; and, therefore, constancy becomes a virtue with regard to weak theoretical determinations.In fact, in the final analysis, the ultimate basis of du Vairs exhortation topatience is no longer philosophical but theological, and the treatise concludes with pages invoking the incarnation, resurrection, redemption,and eternal life. 163 Christian Seneca does not include any reference to Seneca and seems arather traditional, pious discussion.114 Hall simply praises the pagans forhaving done a better job of repudiating Fortune than Christians whowere so illuminated by the light of the Gospel that a casual observerwould be unable to tell what is pagan in our practice.115 He uses Stoicexhortations to constancy as a means of combating religious inconstancycaused by the desire for novelty,116 which leads the superstitious personfrom Rome to Munster, 164 to satisfy us. This is an ability that disarms fortune, an antidote that neutralizesall poisons, a catalyst that turns all metals into gold,124 a wisdom that expressesthe hidden aspects of self-love, a general maxim that is applicable to everythingthat happens.125 165 Actually, Goulart remains faithful to the scholastic model, which sees philosophy as the maidservant of theology. Thus, when philosophical reasonclaims to liberate itself from theological and ecclesiastical authority, it islike a roughly educated maidservant who speaks more aristocraticallythan the wise mistress of the house,147 a situation that would not be tolerated in any well-ordered family. It is because everything is going badlyand because men lack fear that they must read Seneca, in order to subjectthemselves to reason although this must still be ordered to divine wisdom. For otherwise, it is no longer reason but a wild animal unceasinglyand senselessly fighting against its mistress, wanting to deprive her of herauthority.148 166 The impassive sage, relying only on himself and not on God, has madehimself a prisoner in his own inner fortress. In the final analysis, Stoicismlooks to be a philosophy of hyperbole that can have admirers but nodisciples. Furthermore, when these philosophers utter these lofty words,they are in my opinion imitating orators who, using hyperbole, lead us tothe truth by lying to us and persuade us to do what is difficult by making usthink we can do the impossible.156 Must this kind of Stoicism be entirelycondemned? Certainly not more than tropes of orators. It is valuable tothe extent that it persuades us to do what is difficult:All their pompous speeches managed only to prop up the spirit in its domain, lestit succumb to bodily weakness; they authorized its power in words more eloquentthan true, thinking that in order to bring us to the rational point of view, theyhad to lift us higher, and that, in order not to assign any superfluous function toour senses, they had to deny them what they needed.157 167 168 Notes1. This is what Spanneut suggests (1973: 387): Stoicism a philosophy fortimes of misfortune? Certainly.2. The De Constantia of Lipsius appeared in some eighty editions andtranslations.3. See Charron, De la sagesse II, ch. 3; Descartes, Traite des passions de lame(1649), art. 153.4. I have written many other works for other people; this book I have written primarily for myself. The former were for my reputation, the latter formy own well-being (De Constantia, preface to the first edition, tr. Du Bois,1873: 117).5. I have sought consolation for public evils; who has done so before me? (DeConstantia, preface to the first edition, p. 116).6. Cicero, Academica 2.47.145.7. Cf. Bernard Joly, Rationalite de lalchemie au XVII`e si`ecle, Collection Mathesis(Paris: Vrin, 1992).8. In contrast to the conception of a Pascal or Malebranche, for example.9. This very brief treatise was written between 1572 and 1578.10. Eymard dAngers 1964.11. Notice in Seneca the difference in tone between the Ad Marciam de Consolatione or Ad Helviam and the De Constantia Sapientis.12. Andronicus, Peri pathon, SVF 3:270. See also Diogenes Laertius, Lives of Eminent Philosophers 7.93, in SVF 3:265. Goulet translates this as endurance. 169 13. Pierre Hadot translates the Latin dogma as discipline, a term designating aprinciple based on a rule of life. See his analysis of the three disciplines ofassent, desire, and impulse in Hadot 1992: esp. 5962.14. Seneca thoroughly explores this Epicurean theme with specific examples inDe Tranquillitate Animi.15. Diogenes Laertius, Lives of Eminent Philosophers 7.157; Plutarch, De Communibus Notitiis contra Stoicos 46.16. Seneca is more insistent about this in De Beata Vita.17. Teneo, habeo quicquid mei habui. . . . vicit fortuna tua fortunam meam(Seneca, De Constantia 7.56).18. Ibid., 15.5.19. These two notions are carefully distinguished by Cicero in De Finibus 3.21.69.20. Whence the insistence on the fact that constancy is a manly virtue: Defendthe post you have been assigned by nature. Which post, you ask? That of aman [viri] (Seneca, De Constantia 19.4).21. This can be seen in the three consolations of Seneca or even in BoethiusConsolation of Philosophy, which also plays a very important role as a source ofthe treatises discussed later.22. Recall that well-being [salut] a term the Pythagoreans used in closing theirletters above all signifies health and equilibrium in mind and body.23. Iactamur jam tot annos ut vides, bellorum civilium aestu: et, ut in undosomari, non uno vento agitamur turbarum seditionumque. Otium mihi cordiet quies? tubae interpellant et strepitus armorum. Horti et rura? miles etsicarius compellit in urbem (Lipsius, De Constantia I, 1: 133).24. Ibid., I, 1: 135.25. So you are going to flee your country. But tell me seriously, in fleeing, areyou not fleeing yourself ? Make sure that adversity does not arrive at yourdoorstep and that you do not bring with you, in your heart, the source andcause of your grief (ibid., I, 2: 137).26. Et firmandus ita formandusque hic animus ut quies nobis in turba sit et paxinter media arma (ibid., I, 2: 13435).27. The first eighteen essays of book I of Montaignes Essais pertain to militaryquestions.28. Montaigne, Essais I.12: 45.29. This explains the choice of the examples referred to: the Lacedaemonians,the Scythians (i.e., warrior peoples), and the case of the siege of towns (ofArles by Charles Quint; of Mondolphe by Lorenzo di Medici).30. Montaigne, Essais I.12: 46.31. For discussion, see Lagree 1994.32. Actually, he refers to this explicitly as a guarantee in the preface to DeConstantia.33. Included here, of course, would be the prohibition against suicide and therehabilitation of repentance.34. In this dialogue, Lipsius makes no mention of any loss that affected himpersonally, although he suffered terribly from the loss of his library.35. De Constantia, preface to the second edition, p. 127. 170 36. Manuductio I, 3.37. The comparison of God (or the Supreme Good or First Being) to the sun wasstandard since the Platonists. Lipsius extends the metaphor, comparing theteaching of the philosophical schools to the glimmering of the stars. Cf. hisintroduction to the Physiologia: Only when set apart from our religion willthese stars be able to shine (Sole religionis nostrae seposito, et hae stellae poterantlucere).38. Constantiam hic appello rectum et immotum animi robur non elati externisaut fortuitis non depressi (De Constantia I, 4: 148).39. The Latin term robur signifies the oak tree and by extension its firmness, orthe firmness of iron. Lipsius makes this clear in what follows: I said oak, andby that I mean the souls innate firmness (robur dixi et intelligo firmitudineminsitam animo).40. rerum quaecunque homini aliunde accidunt aut incidunt voluntariam etsine querela perpessionem (De Constantia I, 4: 150).41. Ibid., I, 5: 157. On this point Lipsius is closer to Marcus Aurelius than toSeneca.42. Ibid., I, 6: 15961.43. Providence is defined as the everlasting and vigilant care by which Godsees, knows, and is present to all things, directing and governing them in anunchangeable order that pays no heed to our concerns (pervigil illa et perpescura qua res omnes inspicit, adit, cognoscit et cognitas immota quadam et ignota nobisserie dirigit et gubernat) (ibid., I, 13: 210).44. digestio et explicatio communis illius Providentiae distinctae et per partes(ibid., I, 19: 251).45. Ibid., I, 8: 175. The comedian Polus had perfected a role in which he playeda grief-stricken man who brings an urn containing the ashes of his dead sonon stage and fills the theater with his tears and wailing.46. Ibid., I, 9: 177 and ff.47. Ibid., I, 9: 179.48. Think of the different cities that successively figured in Lipsius universitycareer: Louvain, Jena, Leiden, and Louvain again.49. Du Vair, Philosophie morale des Stoques: 93; Traite de la constance et de la consolation, pp. 3436.50. De Constantia I, 12: 201.51. inclinationem animi ad alienam inopiam aut luctum sublevandum(ibid., I, 12: 203).52. Cicero, Tusculan Disputations 4.6.53. This is probably reminiscent of the Epicurean tetrapharmakon for freeing thesoul from fear.54. De Constantia I, 13.55. Aut sequere, aut trahere (ibid., I, 14: 217). This is clearly an allusion toSeneca: Fate leads the willing, and drags along the unwilling (ducunt volentem fata, nolentem trahunt) (Epistulae 107.10).56. Seneca, De Vita Beata 15.7; cited in Lipsius, De Constantia I, 14: 217.57. See Hadot 1992.58. De Constantia I, 7: 165. 171 59. In that very year [1572], a star arose whose brightening and dimming couldclearly be observed. And, although it is difficult to believe, we could see withour own eyes that something could be born and die even in the heavens(ibid., I, 16: 225).60. Ibid., I, 16: 227, a passage reminiscent of Lucretius.61. See the dedication of The Prince to Lorenzo di Medici: As those who drawmaps stand low on the plain in order to view the mountains and high places,and perch themselves on the latter in order to take in the low places.62. De Constantia I, 17: 233.63. It is conceivable that Leibniz had this example in mind when he comparedthe point of view of the monads with that of travelers approaching the samevillage from opposite directions.64. Du Vair, Traite de la constance et de la consolation, p. 91.65. [I]n this Nature, in this Destiny, in this Fortune, taken together, there shinesthrough in contrast to human ignorance, this wise and unsurpassed divineProvidence, even though our acquaintance with it is more in keeping with ourfeeble understanding than with its incomprehensible grandeur and majesty(ibid., p. 92).66. De Constantia I, 16.67. I do not conceive of any ornament in this immense machine without thevariety and vicissitudes of things. . . . Satiety and boredom always accompanyuniformity (De Constantia II, 11: 343).68. Ibid., II, 19: 401.69. Ibid., II, 20: 405. In order to minimize the present troubles, Lipsius offers anumerical comparison of human losses during various ancient and modernwars.70. Sic Deus fati impetus humana omnia trahit sed pecularium cujusque vimaut motionem non tollit (ibid., I, 20: 261).71. Quia arbitrium saltem relictum homini quo reluctari et obniti deo libeat:non vis etiam qua possit (Because at least man is left with choice, by whichhe is able to try to resist and struggle against God, even though he does nothave the power to succeed) (ibid., I, 20: 265).72. Ibid., I, 14: 217.73. Ibid., II, 6: 309.74. See ibid., II, chs. 89 and 10.75. Lipsius returns at this point to a theme from the Bible and from PlutarchsOn the Delay of Divine Justice: the joint responsibility of mankind in the midstof evil and punishment.76. De Constantia II, 11: 341.77. Ibid., I, 21: 269.78. Ibid., II, 13: 354.79. Manuductio III, 5: the wise man is always joyful (a chapter that concerns thethree types of Ciceronian constancy).80. Homer, Iliad 8.19; see also De Constantia I, 14.81. It is characteristic of the Stoics to join everything up and connect them likelinks in a chain, so that there is not only an order but a coherent, orderlysequence of things (Manuductio III, 1; cf. Lagree 1994: 98). 172 82. This is according to Seneca, Epistulae 67.10, as cited in the Manuductio, in thechapter entitled, All Virtues Are Equal. Courage has endurance, patience,and toleration as its species; prudence is connected to constancy.83. De Balzac, Socrate Chretien, Discours III: 265.84. According to Diogenes Laertius (Lives 7.117), the apathy of the Megariansis viewed as foolish or bad, that is, a hard and implacable sensitivity.85. Closed-off from the rest of the world and sheltered from external affairs, Iam wholly preoccupied with the single aim of subjecting my subdued spiritto right reason and to God, and all human affairs to my spirit (De ConstantiaII, 3: 297).86. Lipsius, Monita et exempla politica I, 7.87. Cf. Seneca, De Brevitate Vitae 9.1; De Vita Beata 6.2; Marcus Aurelius, Meditations 2.14; 3.10; 4.47.88. De Constantia I, 17; Lagree 1994: 138.89. Letters from Leibniz regarding Descartes in Philosophische Schriften (ed.Gerhardt), 4:29899.90. Simon Goulart, uvres morales et melees, 12.91. Recall that in 1532 Calvin published an edition, with commentary, of SenecasDe Clementia.92. I here distance myself somewhat from the typology of Spanneut andEymard dAngers, which I endorsed in my 1994 book (pp. 1617) in order todistinguish (1) the Christian Neostoicism of Lipsius; (2) the ChristianizedStoicism of Hall and Grotius; (3) the Stoicizing Christianity of Cherbury;(4) humanism in the Stoic references of Descartes or Francis of Sales;(5) the freethinking movement; and (6) the anti-Stoicism of Pascal andMalbranche.93. Cf. also Pierre Charrons systematic reworking of the doctrinal content ofMontaignes Essais in De la sagesse (1604).94. The whole earth is home to the wise man, or, as Pompey said, he must atthe very least think that his home is wherever his freedom is (Traite de laconstance et de la consolation, p. 36). See also: Who taught us that we wereborn to stay in one place? . . . The whole earth is home to the wise man, orrather, his home is no place on earth. The home to which he aspires is inheaven. He is only passing through here below, as if on a pilgrimage, stayingin cities and provinces as if they were rooms in an inn (Philosophie morale desStoques, p. 93).95. This was a common Epicurean and Stoic theme. Cf. Seneca: There is inextreme suffering this consolation: as a rule, one ceases to sense when thesensation becomes too intense (Epistulae 18.10).96. Traite de la constance et de la consolation, p. 50.97. Ibid.98. Ibid., p. 88 (emphasis added).99. [I]n this Nature, this Fortune, this Destiny taken together, and shiningthrough human ignorance, is this wise and excellent divine Providence,which is in any case acknowledged more in keeping with our feeble understanding than according to its incomprehensible grandeur and majesty(ibid., p. 92). 173 174 175 140. The four causes of bad living are the apprehension of death, bodily pains,heartache, and pleasure (Vie de Sen`eque 11).141. What I approve least of all in him, or, rather, what I am not able to approve,is the excessive praise he gives to his sage, raising him above even the gods.Then, on several occasions further on, he indicates that this sage shouldalso be able to give himself over to death and free himself from the bondsof life on his own authority, without the permission of the sovereign rulerand accompanied by an uncharacteristic fear and mistrust of the doctrineof eternal providence, which has it that we should keep our hopes high evenwhen things seem desperate (ibid., u, iiii).142. Ample discours sur la doctrine des Stoques uvres morales et melees(Geneva, 1606), t. III, p. 317.143. Vie de Sen`eque, u, iii.144. Ibid., 12 (The criticisms are addressed to Seneca).145. Like Casaubon, who first showed that the alleged correspondence betweenSeneca and Saint Paul was a poorly executed forgery. See the preface toGoularts 1606 edition of Seneca.146. A summary of his translation of De la tranquillite de lame (1595), p. 225(rpr. in uvres morales et melees). The remedy proposed by Seneca is harshlycharacterized as a band-aid for a paralytic.147. Ibid., p. 226 b.148. Ibid.149. Pascal, Entretien avec M. de Sacy; Francis of Sales, Traite de lamour de Dieu,I, 5; Malebranche, Traite de morale I, 1 and 8; also La recherche de la verite,IX`e e claircissement. See also Guez de Balzac: It is possible to resolve theStoic paradox and make their proud philosophy more human. But when allis said and done, I choose not to become involved in the affairs of Zeno orChrysippus. I do not feel obliged to defend all of the foolish things they havesaid about their sage. I remain a Stoic only as long as Stoicism is reasonable.But I take leave of it when it begins to talk nonsense (Socrate chretien, Discours II)(emphasis added).150. Senault, De lusage des passions, II.ii.2, p. 216.151. Ibid., I.iii.5, p. 113 and ff. The former criticism was also voiced by Spinozain the preface to Ethics IV.152. Stoic philosophy has conspired to cause the death of our passions. But thisproud sect did not consider that by destroying them it also brought aboutthe death of all the virtues because the passions are the seeds of virtue, andfor the little trouble it takes to cultivate them, they yield pleasant fruits (Delusage des passions I.iv.1, p. 118).153. Ibid., I.iv.1, p. 119.154. In their desire to produce gods they have only raised idols (ibid., I.i.1,p. 45). Thus our proud Stoics, having raised their sage up to the heavensand granted him titles to which not even the fallen angels pretended duringtheir rebellion, reduce him to the level of animals, and, unable to make himimpassible, they try instead to make him stupid. They blame reason for beingthe cause of our disorders. They complain about the advantages nature hasproduced for us and would like to lose memory and prudence so that they 176 155.156.157.158. 159.160.161. 162. 163. 164.165.166.167.168.169. 170.171.172. Jacqueline Lagreenever have to envisage future evils or think about past ones (ibid., I.ii.5,p. 93).Ibid., II.iii.3, p. 245.Ibid., II.vi.6, p. 346.Ibid., II.vi.6, p. 346.The Stoic philosophy, which does not consider an undertaking gloriousunless it is impossible, wanted to prohibit any commerce between the mindand body, and with uncharacteristic passion tried to separate the two partsthat make a single whole. It forbid to its disciples the use of tears, and,breaking up the holiest of all friendships, it wanted the mind to be insensitiveto bodily pain. . . . This barbarian philosophy had some admirers but never any truedisciples; its advice mires them in despair; all those who wanted to followits maxims were misled into vanity and were unable to protect themselvesagainst pain (ibid., II.vi.4, p. 334; emphasis added).Ibid., II.ii.3, p. 219, and I.iv.1, p. 117, respectively.Socrate chretien, Discours III, p. 265.These philosophers are austere only because they are too virtuous: theycondemn penitence only because they love fidelity and if they find faultwith the repentance, it is because they presuppose the crime. . . . Their zealdeserves some pardon (Senault, De lusage des passions II.vi.6, p. 347).On self-knowledge, see Jean Abbadie, Lart de se connatre soi-meme ou larecherche des sources de la morale (Rotterdam, 1693); on the tranquillity ofthe soul: Claude Cardinal Bona, Manuductio ad coelum medullam continensSanctorum patrum et veterum philosophum (Cologne, 1658), and Alfonse Antoine de Sarasa, Ars semper gaudiendi ex principiis divinae providentiae et rectaeconscientiae deducta (Anvers, 1664).Daniel Heinsius, De stoica philosophia oratio (1626); Georges MacKenzie, Religio stoici, with a friendly addresse to the Phanaticks of all sects and sorts (Edinburgh:1665); and Jacob Thomasius Lipsius, Exercitatio de stoico mundi exustione(1676).On this subject, see Olivo 1999 and Mehl 1999.See the classic studies of Eymard dAngers (195152 and 1964), MichelSpanneut (1957, 1964, and 1973), or, more recently, Taranto (1990).Carraud 1997.Descartes to Elisabeth, 4 August 1645; AT IV 26566 (AT = Descartes196474).AT VII 22, 3023, 1.AT VII 24, 1820, and a little further on: And this was consequently ableto deliver me from all the repentance and remorse that has usually agitated the conscience of these weak and vacillating spirits, who sometimes[inconstamment] allow themselves to treat as good what they later judge tobe evil.AT IV 265, 1620.AT IV 266, 2429.Discourse on Method III, AT VII 26, 1922. 9On the Happy LifeDescartes vis-`a-vis SenecaDonald Rutherford 178 Donald Rutherford 179 180 Seneca is less definite about whether the positive affects that attend theexercise of virtue can be considered goods at all. In the preceding passage 181 he appears to deny it; elsewhere in De Vita Beata he is more accommodating. But even if such affects are regarded as goods, Seneca is adamant thatthey form no part of the highest good, and hence contribute nothing tothe happy life:Not even that joy [gaudium] that arises from virtue, though a good, is part ofthe absolute good [absoluti boni], any more than delight [laetitia] and peace ofmind [tranquillitas], even if they arise from the finest causes: for though theseare goods, they are the sort which attend the highest good but do not bring it toperfection. (15.2) 182 So, Descartes concludes, virtue, which is the bulls-eye, does not cometo be strongly desired when it is seen all on its own; and contentment,like the prize, cannot be gained unless it is pursued (AT IV 276/CSMK262).The position Descartes outlines for Elisabeth breaks decisively with akey assumption of ancient eudaimonism. In the same letter, Descartesmaintains that on his account the positions of Zeno and Epicurus canboth be accepted as true and as consistent with each other, providedthey are interpreted favourably (AT IV 276/CSMK 261).15 Zeno andthe Stoics have correctly identified the supreme good as virtue. Epicurus,on the other hand, was right to think that happiness consists in pleasure in general that is to say, contentment of the mind (ibid.). Yetin bringing Stoicism and Epicureanism together in this way, Descarteseffectively abandons the framework within which both theories are developed. For both Stoics and Epicureans, the aim of ethical inquiry isto articulate the content of happiness (eudaimonia), which is identifiedas our supreme good and final end: that for the sake of which all elseis sought, which itself is not sought for the sake of anything else.16 ForEpicureans, the supreme good is freedom from bodily pain and mentaldisturbance (Diogenes Laertius 10.136); for the Stoics, it is living in accordance with virtue or, equivalently, living in agreement with nature(Diogenes Laertius 7.8789). By disengaging the notion of happinessfrom that of the supreme good, Descartes lays the foundations of a theory that is distinct from both these ancient views. The difference can beexpressed in terms of a distinction between the intension and extensionof the word happiness. Stoics and Epicureans operate with the sameconcept of happiness: it is the supreme good, and final end, of action.Where they differ is over the type of life in which happiness is realized whether a life of virtue or a life of pleasure. By contrast, Descartes beginswith a different understanding of happiness. For him, happiness meansa kind of pleasure. [A]lthough the mere knowledge of our duty mightoblige us to do good actions, he writes, yet this would not cause us to enjoy any happiness if we got no pleasure from it (AT IV 276/CSMK 261).Although this may make Descartes sound like a follower of Epicurus, heundercuts this inference by identifying the supreme good in a way thatmore closely resembles the Stoics understanding of it. Yet he does notembrace the Stoics account of happiness.Where does this leave Descartes? By identifying virtue as the supremegood and maintaining that it is our final end in the sense of whatwe ought to set ourselves as the goal of all our actions (AT IV 275/ 183 CSMK 261), Descartes develops a theory that tracks an important dimension of Stoic ethics. Descartes agrees with the Stoics that a virtuouscharacter is what reason commands us to pursue; and that, if it is achieved,our life will be complete from an ethical point of view. Furthermore, although he relies on a different conception of happiness, Descartes agrees,at least nominally, with the Stoics that virtue is necessary and sufficientfor happiness. That virtue is necessary for happiness is explained by thefact that only it can supply us with a contentment that is solid (AT IV280/CSMK 262), that is, one that lacks the instability of bodily pleasureand is independent of fortune. Only if our contentment derives from asource that is within us and within our power, namely virtue, can we rely onit never to be destroyed. The case for the sufficiency of virtue rests on thepremise that virtue is naturally productive of contentment. According toDescartes, we cannot ever practice any virtue that is to say, do what ourreason tells us we should do without receiving satisfaction and pleasurefrom doing so (AT IV 284/CSMK 263).17 If, as Descartes assumes, theactivity of virtue is naturally productive of intellectual joy (pleasure thatdepends only on the soul), and if this joy is (as he believes) strong enoughto outweigh negative affects such as pain and sadness, then we have onlyto continue acting virtuously in order to be happy. In order that oursoul should have the means of happinesss, he writes in the Passions ofthe Soul,it need only pursue virtue diligently. For if anyone lives in such a way that hisconscience cannot reproach him for ever failing to do something he judges tobe the best (which is what I here call pursuing virtue), he will receive from thisa satisfaction which has such power to make him happy that the most violentassaults of the passions will never have sufficient power to disturb the tranquilityof his soul. (art. 148; AT XI 442/CSM I 382) Setting aide Descartes use of the word happiness, a Stoic could accept much of the preceding argument. As we have seen, for Seneca theactivity of virtue is naturally linked to positive affective states. What Senecadenies is that these states contribute anything to our happiness, since, forhim, happiness is identified with the summum bonum, and pleasure formsno part of that. This way of putting it may make it seem that Descartes andSeneca in the end disagree only about the meaning of a word. More thanthis, though, is at stake. The Stoics refuse to include any kind of pleasureas part of happiness, for that, they believe, would undermine the claimof virtue to be desirable for its sake alone and, hence, to be our highestgood. The highest good consists in judgment itself, Seneca writes, and 184 in the disposition of the best type of mind, and when the mind has perfected itself and restrained itself within its own limits, the highest goodhas been completed, and nothing further is desired; for there is nothingoutside the whole, any more than there is something beyond the end(9.34). To allow pleasure to be a part of happiness would imply thatvirtue was not unqualifiedly complete as an end: it would be desired notonly for its own sake but also for the sake of the pleasure it produces.Descartes failure to be moved by this conclusion is an indication of hisdistance from classical eudaimonism. In his view, as the supreme good,virtue is what we ought to pursue in preference to any other good, andwhat we have greatest reason to pursue, but not necessarily what we pursue for its sake alone. That the practice of virtue is naturally productiveof contentment only makes virtue that much more desirable. In no waydoes it compromise the claim of virtue to be the supreme good. virtue as perfectionDescartes account of the relation between virtue and happiness relies onhis particular understanding of virtue as a perfection. Descartes explainsthe matter to Elisabeth in this way:[A]ll the actions of our soul that enable us to acquire some perfection are virtuous,and all our contentment consists simply in our inner awareness of possessing someperfection. Thus we cannot ever practice any virtue that is to say, do what ourreason tells us we should do without receiving satisfaction and pleasure fromdoing so. (AT IV 28384/CSMK 263) Descartes accepts the traditional view that perfection is the intrinsic goodness of a being, that perfection or goodness comes in degrees, and thatthe ranking of degrees of perfection is determined by their proximity tothe limiting case of God, the supremely perfect being. Descartes takesit for granted that the perfection of the soul is greater than that of thebody, and that consequently the soul is a source of pleasure that is not onlymore reliable but also intrinsically more desirable than that derived fromthe body, since it is based on an object of greater perfection. Descartesascribes the ability to make such discriminations to reason, whose use isthus essential to the practice of virtue.18 The soul possesses greater perfection than the body because it comes closer to realizing the perfectionof God. Why this is so has crucial consequences for Descartes accountof virtue. As he maintains in the Fourth Meditation, we are in no way morelike God that is, more perfect than in our possession of a free will.19 185 Hence the correct use of this will is our greatest virtue and the source ofour greatest happiness:[F]ree will [le libre arbitre] is in itself the noblest thing we can have, since it makesus in a way equal to God and seems to exempt us from being his subjects; and so itscorrect use [son bon usage] is the greatest of all the goods we possess; indeed thereis nothing that is more our own or that matters more to us. From all this it followsthat nothing but it can produce our greatest contentment. (AT V 85/CSMK326)20 186 agreement with its own nature (conveniens naturae suae) and that amongall Stoics it is agreed that such a life requires assenting to nature itself[rerum naturae]. Wisdom, he writes, consists in not straying from nature, and in being directed by its law and pattern [legem exemplumque].23Responding to this passage, Descartes claims that he finds Senecas statements very obscure. To suggest that wisdom is acquiescence in theorder of things, or, as a Christian would have it, submission to the willof God, explains almost nothing. The best interpretation he can puton Senecas words is that to live according to nature means to live inaccordance with true reason (AT IV 27374/CSMK 260). But then, hebelieves, Seneca still owes us an account of the knowledge that reasonmust supply in order for us to be able to act virtuously.In a subsequent letter to Elisabeth, Descartes summarizes the knowledge he thinks we require for this purpose. It consists of a surprisinglysmall class of truths, the most important of which are: the existence ofan omnipotent and supremely perfect God, on which the existence ofeverything else depends; the immateriality of the soul; that we are but asmall part of a vast universe; that we have duties to larger social wholesof which we are parts; that passions often distort the goodness of theirobjects; that pleasures of the body are more ephemeral and less reliablethan pleasures of the soul (AT IV 29195/CSMK 26567). The propositions Descartes advances do not provide specific directives for action;they do not dictate what we ought to do in any particular circumstance.Instead, they are best seen simply as facilitating right action, by removing impediments to it (anxiety about the future, fear of death) or savingus from obvious errors (ignoring the good of others, giving priority tobodily goods). That the content of morality is underdetermined by theknowledge on which it depends is made clear by Descartes final proposition, which instructs us to defer to the laws and customs of the land whenit is not obvious how we ought to act: Though we cannot have certaindemonstrations of everything, still we must take sides, and in matters ofcustom embrace the opinions that seem the most probable, so that wemay never be irresolute when we need to act. For nothing causes regretand remorse except irresolution (AT IV 295/CSMK 267).24The last remark reflects an abiding feature of Descartes ethics. Thetruths that compose the contents of Cartesian wisdom lay down a set ofgeneral guidelines for how to use our freedom correctly. They do notguarantee that, when faced with a choice, we will know with certaintywhat we ought to do. As Descartes writes in his next letter to Elisabeth, itis true that we lack the infinite knowledge which would be necessary for a 187 perfect acquaintance with all the goods between which we have to choosein the various situations of our lives. We must, I think, be contented witha modest knowledge of the most necessary truths such as those I listed inmy last letter (AT IV 308/CSMK 269). The crucial thing is that we dowhatever we can to ascertain the best course of action, appealing if necessary to law or custom, and that we then will decisively. This creates animportant disanalogy for Descartes between the theoretical and the practical. In both cases, we have a responsibility to correct our understandingbefore committing our will. Only in the case of theoretical judgment,however, is it reasonable to suspend the will if we lack the knowledgeneeded to be fully confident of our decision.25In the Fourth Meditation, Descartes advances this as a rule for avoidingerror: if we lack the clear and distinct ideas needed to be certain ofthe truth of a proposition, we should withhold assent from it. In thecase of action, he denies that this is possible: As far as the conductof life is concerned, I am very far from thinking that we should assentonly to what is clearly perceived. On the contrary, I do not think weshould always expect even probable truths (AT VII 149/CSM II 106).In acting, the essential thing is that we will in the right manner, allowingwisdom to guide our action so far as it can. It is not necessary that ourreason should be free from error, he tells Elizabeth; it is sufficient ifour conscience testifies that we have never lacked resolution and virtue tocarry out whatever we have judged the best course (AT IV 26667/CSMK258). Resolution, or firmness of judgment, is critical, for it is the lack ofthis above all that poses a threat to our tranquillity, or contentment ofmind.26In this, again, we hear an echo of Stoic views, for example, Senecasdescription of the happy life in Letter 92: What is the happy life? Peacefulness and uninterrupted tranquillity. Greatness of mind will bestow this,and constancy [constantia] which holds fast to good judgment (Epistulae 92.3). Absent from Descartes position, however, is the Stoics understanding of happiness as living in agreement with nature. FromCleanthes on, the Stoics advance the view that to live virtuously is forthere to be an agreement, or conformity (homologia, convenientia), between the rational principle that governs the will of an individual and theuniversal law or divine will that governs nature as a whole (DiogenesLaertius 7.8688). To achieve happiness, the Stoics argue, it is not enoughsimply that ones actions exemplify the traditional virtues of moderation,courage, and justice. In addition, those actions must be chosen for theright reason; and this requires wisdom, wherein one understands that 188 virtuous actions have value because they alone of all human things embody the divine reason immanent within nature.27 The basic message ofStoicism is that by regulating our actions according to the universal lawthat governs nature as a whole, and finding value only in what conformsto that law, we are able to avoid the suffering that afflicts the lives of thosedriven by desire and passion.28 In this way, we can through our own effortsbecome happy after the manner of a god: self-sufficient, independent offortune, and perfectly tranquil.29Descartes promises almost exactly the same benefits from his ethicsas do the Stoics; however, he denies that these depend on our acknowledging a conformity between our reason and that of God. Nothing welearn by reason entitles us to say that we have understood the world asGod understands it;30 and virtue, as the perfection of our will, does notrequire this. Descartes flatly denies that we have any insight into theprinciples that govern Gods will or furnish reasons for Gods acting.31Lacking such knowledge, virtuous action cannot be identified with acting in agreement with nature, or with the universal law that is Godswill. This does not mean, of course, that we should not act under theguidance of reason. Although we are limited in our knowledge, we areobliged to rely on reason as the basis for the correct use of our free will.32In this, for Descartes, consists our virtue, our greatest perfection, and ourhappiness. 189 how properly to use our freedom, the attribute by which we most closelyapproximate Gods perfection.Given the intellectual gulf that separates Descartes and the Stoics,one might conclude that they have very different conceptions of moralphilosophy: Descartes is a modern thinker who transforms received viewsabout mind and world, and as such we should expect from him an ethicaltheory that is fundamentally unlike that of the Stoics.33 In this final section, I argue for a different way of approaching Descartes position. Ofcourse, Descartes is a modern thinker, whose philosophical and theological views divide him from the Stoics. Nevertheless, if we are interested inunderstanding the kind of comprehensive philosophical system Descartesenvisioned, I believe we are helped by thinking about that system in relation to the Stoics. Although Descartes disagrees with the Stoics on keypoints of doctrine, his broader understanding of the structure of ethicaltheory and its relation to metaphysics and natural philosophy mirrorsthat of the Stoics.To see this, consider the following set of five propositions, which collectively define Descartes perfectionism:1. True happiness, or blessedness, can in principle be achieved withinthis life through the exercise of the natural powers of a humanbeing.2. Such happiness can be fully explained in terms of the activity ofvirtue, which presupposes the ordering of the will by wisdom.3. Wisdom sufficient for happiness requires the acquisition of specificintellectual knowledge, particularly knowledge of God and nature.4. Virtue is a good that is always within our power and independentof fortune.5. As a consequence of the exercise of virtue, we enjoy the most desirable affective states lasting joy or contentment and do soindependently of fortune.These five propositions establish a close connection between Descartessethical theory and the Stoics eudaimonism. Although Descartes rejectsthe eudaimonist principle that happiness is the highest good, he retains the Stoics assumption that happiness is what all human beingschiefly desire, and he regards the main task of ethics as the instruction (and disciplining) of the will in how best to achieve happiness. ForDescartes, the basis of this happiness is, as it is for the Stoics, the activity of virtue, and virtue itself is perfected through the acquisition ofwisdom, or what Seneca describes as scientific knowledge [scientia] of 190 the divine and the human (Epistulae 89.45).34 Finally, like the Stoics,Descartes links the unconditional goodness of virtue, and its relation tohappiness, to the fact that it is always within our power and independentof fortune; and he associates the exercise of virtue with the enjoymentof a pleasing affective state, which he (but not the Stoics) equates withhappiness.The propositions most central to Descartes perfectionism are propositions 2 and 3. Although Descartes emphasizes that virtue is a perfectionof the will, it is a perfection that can be realized only when the will isused in conjunction with reason. Virtue is a firm and constant resolution to carry out whatever reason recommends (AT IV 265/CSMK 257).Furthermore, Descartes leaves no doubt that while native reason, or thegood sense (le bon sens) with which all human beings are born, is theappropriate starting point for the attainment of happiness, such happiness can be guaranteed only if reason itself has been perfected throughthe acquisition and proper ordering of intellectual knowledge. As he argues in the preface to the French edition of the Principles, we are broughtto the highest perfection and felicity of life by his principles and thelong chain of truths that can be deduced from them (AT IXB 20/CSMI 190); the ethical theory that tops the tree of philosophy presupposesa complete knowledge of all the other sciences and is the ultimate levelof wisdom (AT IXB 14/CSM I 186).Here we see the deepest affinity between the aspirations of Descartesphilosophy and Stoicism. Setting aside the points on which I have distinguished them, Descartes propounds an ethical theory in which theperfection of reason (and hence of the will) through the acquisition ofknowledge is critical to the attainment of happiness. Like the Stoics, hebelieves that only the person who has acquired wisdom can enjoy the fullfruits of happiness. It goes without saying that Descartes understandingof the content of this wisdom is different from that of the Stoics. Still,it is significant that Descartes initial criticism of Seneca focuses not onthe main theses of his ethical theory (e.g., the sufficiency of virtue forhappiness), but on the requisite intellectual knowledge that he believesSeneca has failed to provide: all the principal truths whose knowledgeis necessary to facilitate the practice of virtue and to regulate our desiresand passions, and thus to enjoy natural happiness (AT IV 267/CSMK258).In his letter to Elisabeth of 15 September 1645, Descartes documentsthese principal truths whose knowledge is essential for happiness. Intheir scope the propositions correspond closely to Senecas description 191 192 the soul and the body depend entirely on the passions, so that personswhom the passions can move most deeply are capable of enjoying thesweetest pleasures of this life (AT XI 488/CSM I 404). But Descartesalso makes an appeal for the goodness of the passions on teleologicalgrounds: [T]hey are all ordained by nature to relate to the body, andto belong to the soul only in so far as it is joined with the body. Hence,their natural function is to move the soul to consent and contribute toactions which may serve to preserve the body or render it in some waymore perfect (AT XI 430/CSM I 376). In a final irony, then, Descartesreclaims the idea of a purposiveness internal to nature, and associates itwith the passions, whose sole function is to dispose the soul to want thethings which nature deems useful for us, and to persist in this volition(AT XI 372/CSM I 349).37In upholding against the Stoics the goodness of the passions, Descartesinadvertently gives a new meaning to the Stoic formula of the end. A lifein which we allow ourselves to be guided by the passions, properly regulated, will be for Descartes (though not for the Stoics) a life accordingto nature. We can make sense of this idea only if we see Descartes asoperating with a fundamentally different conception of nature; however,in this case his disagreement with the Stoics cannot be framed in termsof a simple contrast between ancient and modern views. For Descartes,as against the Stoics, the relevant sense of nature is not the immanentrationality of the universe but what God has bestowed on me as a combination of mind and body (AT VII 82/CSM II 57). Thus, rather thanbeing grounded in the eternal law that is divine reason, nature and purpose (and hence the goodness of the passions) are explained as particularproducts of Gods will, whose ends are beyond human reason. It is a factabout human nature that the passions serve a beneficial function, butthere is no deeper explanation for this than that God chose to make itso. The case of the passions highlights the danger in casting the opposition between Descartes and the Stoics in overly simple terms. AlthoughDescartes is one of the architects of the new science of the seventeenthcentury, his differences with the Stoics are often as much a reflectionof his voluntarist theology as they are of distinctively modern views innatural philosophy.My primary concern, however, has been to stress a deeper bond between Descartes ethics and that of the Stoics. When we consider theirphilosophies from the point of view of their principal goal, the attainment of happiness, we find a striking commonality of purpose. Philosophy teaches us how to live happily, to attain contentment, and it does so by 193 194 8. While Descartes himself directs Elisabeth to the provisional morality elaborated in part 3 of the Discourse, there are important differences between thetwo sets of rules that reflect his different goals in the two works. The rules ofthe Discourse were framed, Descartes writes, so that he might live as happilyas possible, while remaining indecisive in his theoretical judgments aboutnature. To this end, he proposed a provisional moral code consisting of justthree or four maxims. . . . The first was to obey the laws and customs of mycountry, holding constantly to the religion in which by Gods grace I hadbeen instructed from my childhood. . . . The second maxim was to be as firmand decisive in my actions as I could, and to follow even the most doubtfulopinions, once I had adopted them, with no less constancy than if they hadbeen quite certain. . . . My third maxim was to try always to master myselfrather than fortune, and to change my desires rather than the order of theworld. . . . Finally, to conclude this moral code . . . I thought I could do no better than to continue with the [occupation] I was engaged in, and to devotemy whole life to cultivating my reason and advancing as far as I could in theknowledge of the truth, following the method I had prescribed for myself(AT VII 2227/CSM I 12224). The crucial difference between the two setsof rules comes in the formulation of the second rule. The rule presented toElisabeth repeats the injunction to act with a firm and constant resolution;however, this is now linked to the recommendations of reason, which furnishes the positive knowledge of metaphysics and natural philosophy thatDescartes believes himself to have established. Also noteworthy is the absenceof the provisional moralitys first rule, prescribing deference to the laws andcustoms of ones country. As I discuss later, this feature of Descartes positiondoes not vanish completely, but it does acquire less prominence given thenewfound authority of reason. Cf. Sorell 1993: 28688.9. Cf. AT IV 277/CSMK 262; AT V 8283/CSMK 32425.10. In quoting from De Vita Beata, I have relied on the translations in Seneca196365 and Seneca 1994, which I have sometimes modified.11. See also De Tranquillitate Animi 2.4. Gisela Striker suggests that Seneca goesbeyond other Stoics in stressing the pleasing character of the positive affects (eupatheiai) that attend a virtuous character. See Striker 1996:188. I aminclined to read loosely the statement at De Vita Beata 4.2 that, for the virtuous person, true pleasure [vera voluptas] will be the disdain of pleasures.In Epistulae 23 and 59.14, Seneca reaffirms the orthodox Stoic distinctionbetween voluptas and gaudium.12. Descartes does not reject the importance of bodily health, or freedom frompain [aponia], but, consistent with his dualism, reinterprets its significancefrom the perspective of the mind: I can conclude that happiness consistssolely in contentment of mind that is to say, in contentment in general.For although some contentment depends on the body, and some does not,there is none anywhere but in the mind (AT IV 277/CSMK 262).13. Cf. Gueroult 1985: 2:184186.14. See also his letter to Elisabeth of 6 October 1645: If I thought joy thesupreme good, I should not doubt that one ought to try to make oneself joyful at any price. . . . But I make a distinction between the supreme 15. 16. 17.18. 19.20. 23. 24.25. 195 good which consists in the exercise of virtue, or, what comes to the same,the possession of all those goods whose acquisition depends upon our freewill and the satisfaction of mind which results from that acquisition (ATIV 305/CSMK 268). A less careful formulation appears in a later letter toQueen Christina: [T]he supreme good of each individual . . . consists onlyin a firm will to do well and the contentment which this produces (AT V82/CSMK 324).See also his letter to Queen Christina of 20 November 1647 (AT V 83/CSMK325). In his letter to Elisabeth, Descartes extends this judgment to a thirdmain view about the supreme good and the end of our actions, that ofAristotle, who, he says, made it consist of all the perfections, as much ofthe body as of the mind. However, Descartes immediately sets this viewaside, on the grounds that it does not serve our purpose (AT IV 27576/CSMK 261).Stoics and Epicureans both accept Aristotles claim in the Nicomachean Ethics1.7 that the highest good is an end complete without qualification, thatis, what is always desirable in itself and never because of something else(1097a30b5); and they identify this end with happiness. For the Stoics, seeArius Didymus in Stobaeus 2.77.1617 (LS 63A); for the Epicureans, Ciceroin De Finibus 1.29 (LS 21A), 1.42.Cf. Passions of the Soul, arts. 91, 190.The true function of reason . . . in the conduct of life is to examine andconsider without passion the value of all the perfections, both of the bodyand of the soul, which can be acquired by our conduct, so that since we arecommonly obliged to deprive ourselves of some goods in order to acquireothers, we shall always choose the better (AT IV 28687/CSMK 265).See also Principles of Philosophy I.37; Passions of the Soul, art. 152.In the original, the final sentence reads: . . . dou` il suit que ce nest que deluy que nos plus grands contentmens peuvent proceder. CSMK translatesluy as free will. In my view the passage makes better sense if we interpretthe pronoun as referring not to le libre arbitre but to son bon usage.See, for example, Seneca, Epistulae 124.1112, 2324.In the Passions of the Soul, Descartes identifies the recognition and properuse of our free will with the virtue of generosite (art. 153), which he describesas the key to all the other virtues and a general remedy for every disorderof the passions (art. 161; AT XI 454/CSM I 388).Elsewhere Seneca describes wisdom (sapientia) as the human minds goodbrought to perfection (Epistulae 89.4). Thus there is established an equivalence between the perfection of the human mind and its assent to theuniversal law (or right reason) of nature.In this qualified way, deference to custom remains part of Descartes matureethical theory. See note 3.Descartes alerts us to this point in the First Meditation when feigning thehypothesis of a malicious demon: I know that no danger or error will resultfrom my plan, and that I cannot possibly go too far in my distrustful attitude.This is because the task now in hand does not involve action but merely theacquisition of knowledge (AT VII 22/CSM II 15). 196 34.35.36. 37. 197 10Psychotherapy and Moral PerfectionSpinoza and the Stoics on the Prospect of HappinessFirmin DeBrabander Perhaps one of Stoicisms greatest points of appeal, prominent in its resurgence in the early modern period, is its assertion that happiness is attainable by any rational individual. Moreover, this happiness is, as Senecadepicts it, a this-worldly salvation: the rational individual can aspire toa perfect happiness, a tranquillity impervious to any and all assaults ofFortune. Such is the virtue of Stoic ethics famously celebrated in the sixteenth century by Justus Lipsius, who, exasperated by the conflicts ragingwithin the Christian tradition and the horrific wars accompanying them,looked to Stoicism for an alternate source of moral sustenance and theprospect of genuine respite from public tumult to be sure, nothing lessthan an enduring peace of mind.2 According to the Stoic model, sucheminent tranquillity, which entails perfecting ones intellect, is foundedon a specific collection of doctrines: an immanentist theology wherebyGod and the universe are rational in nature and can be perfectly apprehended by the human mind; virtue that is readily indicated in naturalimpulse; a diagnosis of the passions, the primary obstacle to virtue, interms that immediately invoke their susceptibility to remedy; and, finally,198 199 psychotherapy as the means to happiness, a means that is subject to individual agency and responsibility.Of all the great classical philosophers, Alexandre Matheron remarks,Spinoza is the one whose teaching best lends itself to a point-by-pointcomparison with Stoicism.3 Indeed, Spinoza emulates the structure ofStoic moral perfectionism, notoriously asserting that God is the immanent cause of nature God just is nature, in fact and subject to humanapprehension as a result. He also maintains that a beings conatus or natural striving for self-preservation is the basis of virtue, and attributes acognitive element to the passions that so disturb the lives of men, making psychotherapy the centerpiece of his ethics. And yet, despite thesestriking similarities, Spinoza ultimately defies Matherons assertion by diverging definitively from the Stoic project when he rejects the possibilityof absolute control of the passions, that is, moral perfection. Spinozasonly explicit reference to the Stoics in the whole of the Ethics consists ina criticism of them on precisely this point. In fact, the rejection of perfection is central to psychotherapy, as Spinoza understands it. Animatingthe whole of the Ethics is Spinozas intense desire to remind us all that weare part of nature, and that vain hopes to the contrary are responsiblefor a large part of human suffering. Thus, the spirit of his philosophy isdirectly opposed to Senecas account of the wise man whose virtue hasplaced him in another region of the universe and who has nothing incommon with you.4Why does Spinoza appeal to the Stoic model of ethics if he rejects itsultimate conclusion? What leads to this remarkable end? To explain theirdivergence, I survey the parallel foundations of psychotherapy in Spinozaand the Stoics,5 examining the seeds of difference already planted inthem, and paying close attention to how Spinozas rejection of perfectionism is borne out by the very principle of his psychotherapy, and thetherapeutic role played by this rejection.According to the Stoic model, perfectionism is founded first of allon the intelligibility of God and the universe. Thanks to their essentiallyrational nature, God and the universe are intellectually accessible and,in fact, can be made wholly transparent to the human mind. God justis the inner workings of nature, according to the Stoics, the logos thatpervades nature and internally directs the manner in which it unfolds.6God is intellectually accessible by virtue of his immanence in the world.Because he pervades nature, God is inseparable from nature, and onlyconceptually distinct from it.7 The immanentism of Stoic theology is but 200 Firmin DeBrabander a small step from monism, and the words of Diogenes Laertius suggestthis when he reports that Zeno declared the substance of God to be thewhole world and the heavens.8 According to the Stoic formula, insofar asit grounds the intelligibility of God and the universe, divine immanencein and identification with nature is a condition of moral perfectionism.With his infamous monism, Spinoza posits this primary element of perfectionism. Whatever is, is in God, Spinoza affirms, and nothing can beor be conceived without God (Pr.15, I: 40).9 Because he is nothing lessthan the whole of Nature that surrounds us and motivates us, SpinozasGod is likewise eminently accessible to human understanding.Furthermore, the nature of God or the nature of Nature is intelligiblethanks to its internal logic. Nature is intelligible because it operates in adetermined manner, according to the Stoics. One set of things followson and succeeds another, Chrysippus explains, and the interconnexion is inviolable.10 For Spinoza, too, the interconnection among naturalevents is inviolable, and infinite in extent. No finite individual thing canexist or be determined to act, Spinoza says, unless it is determined toexist or act by another cause that is a finite individual thing, which inturn is determined by another finite individual thing, and so on ad infinitum (Pr.28, I: 50). Such inviolability is the basis for the intelligibilityof the universe and God; any single event can be understood withinthe logical order of which it is a part, and that event in itself providesintellectual access to the same order. Spinoza and the Stoics disagree,however, about the nature of this logical order. The Stoics maintain thatthe universe is providentially ordered, whereas Spinoza rejects teleologyand insists rather that all things are connected in the order of efficientcausality. Nature has no fixed goal, he declares, and all final causesare but figments of the human imagination (App., I: 59) especiallydangerous figments, in Spinozas view, suggestive of the superstition thatincites anxiety and conflict.Every single event and thing is purposive, according to Stoic doctrine.If we were to perfect our intellects, we would discern that such purposiveness is familiar to ordinary human wishes and aspirations. In fact, theStoics go so far as to assert that human beings occupy an exalted placein the universe and that things and events are fashioned especially fortheir benefit. According to Chrysippus, bed-bugs are useful for wakingus . . . mice encourage us not to be untidy.11 Accordingly, understanding delivers tranquillity insofar as it includes a vision of ourselves as theobject of divine solicitude.12 The Stoic sage can endure the assaults offortune because he discerns the purposiveness hidden in such assaults 201 and sees that they are ultimately to his advantage. Spinoza agrees withthe basic idea that anxiety is eased by meditation upon the universe asabsolutely determined, but he must insist that it rests upon a differentdynamic. After all, Spinozas universe is indifferent to particular humanconcerns. The universe offers no comfort to the teleological tendenciesof the human mind. The Stoics might well wonder how Spinozas uncompromising antianthropocentric view of things can produce joy, as Spinozawill insist that it does.13Because nature is infused with providential logos, the Stoics trust thatnatural impulse (horme) informs us of the proper human end and thatthe path to happiness is readily disclosed as a result. Specifically, theyidentify the impulse for self-preservation as the basis of virtue.14 Thatvirtue is readily indicated in natural impulse constitutes a further fundamental premise of moral perfectionism, since virtue is readily discernibleby this account. If we would agree with nature, as is our telos, accordingto Zeno, we must heed our natural impulse for self-preservation, according to our proper nature.15 Humans agree with nature when they pursueself-preservation rationally. Thus, agreement with nature involves understanding, an apprehension of the cosmic logos that affords harmony withthat logos, that is, homologia.16 I agree with nature when, after discerningnatures providential plan, I pursue reasonable ends that is, ends appropriate to my nature and I desire what actually occurs. In this manner,virtue conquers anxiety and disappointment.17Spinoza agrees with the Stoics that virtue is grounded in the naturalimpulse or conatus to preserve ones being, which he identifies as nothingless than the very essence of a living thing (Pr.7, III: 108). Specifically,the virtuous life involves an intellectually illuminated conatus, for to actin absolute conformity with virtue is nothing else in us but to act, tolive, to preserve ones own being (these three mean the same) underthe guidance of reason (Pr.24, IV: 16667). And furthermore, at onepoint in the Ethics a passage Matheron calls le moment Stocien de lEthique Spinoza invokes the Stoic agreement with nature, suggestingthat understanding nature occasions acquiescence in its plan. Once weunderstand, he says, we can desire nothing but that which must be,whereupon the endeavor of the better part of us is in harmony with theorder of the whole of Nature (App.32, IV: 200).Spinoza and the Stoics hold that the passions constitute the primaryobstacle to the telos, or agreement with nature. Furthermore, they definepassions in terms of irrational cognition, which suggests at once how thepassions are susceptible to remedy a remedy to be administered by 202 203 204 view, so interpreted, is certainly repugnant to Spinoza since he enthusiastically rejects a faculty of free will (Pr.48, II: 95). However, a distinctfaculty of free will is alien to orthodox Stoic doctrine, which maintainsthat the soul is unitary in character and nature. In this respect, Spinozascritique is slightly misguided, perhaps guilty of reconstructing the Stoicposition through Descartes. While the Stoics insist only upon freedomof judgment as opposed to freedom of the will, this principle of theirpsychotherapy, which affords absolute command over the emotions, isequally repugnant to Spinoza. For Spinoza maintains that the conatus, ornatural desire, internally informs and motivates our judgments indeed,he argues that desire is already implicit in any cognition (Pr.49, II: 96).We do not endeavor, will, seek after or desire anything because we judgea thing to be good, Spinoza explains, but we judge a thing to be goodbecause we endeavor, will, seek after and desire it (Sch.Pr.9, III: 109).Contrary to the Stoics, we are not free to manipulate our judgment. Asfor the possibility of psychotherapy, Spinoza announces that since thepower of the mind is defined solely by the understanding, . . . we shalldetermine solely by the knowledge of the mind the remedies for theemotions (Pref., V: 203).In and of itself, knowledge does not preclude the emergence of passion, since nothing positive contained in a false idea can be annulled bythe presence of what is true, insofar as it is true (Pr.1, IV: 155). Spinozaillustrates this claim by pointing out that although we may learn the truedistance of the sun from us, this knowledge does not dispel our impression that it is only two hundred feet away. Imaginings do not disappearat the presence of what is true insofar as it is true, he contends, butbecause other imaginings that are stronger supervene to exclude thepresent existence of the things we imagine (Sch.Pr.1, IV: 15556). Thislogic applies to passions as well, since they amount to ideas, albeit confused ones. Thus, an emotion cannot be checked or destroyed exceptby a contrary emotion which is stronger than the emotion which is to bechecked (Pr.7, IV: 158). An emotion founded on something we imagine to be present, for example, is stronger than an emotion founded onsomething absent (Pr.9, IV: 159), and an emotion referring to somethingthat is merely possible is eclipsed in power by an emotion referring tosomething inevitable (Pr.11, IV: 160). Accordingly, no emotion can bechecked by the true knowledge of good and evil insofar as it is true, butonly insofar as it is considered as an emotion (Pr.14, IV: 161). Knowledgecan treat the passions only insofar as it exerts emotive force in its ownright. 205 206 upon which they are founded are likewise irreducible. Spinozistic therapyis a matter of transforming a mind that is predominantly passive into onethat is predominantly active, detaching the minds focus from inadequateideas and attaching it instead to adequate ideas derived from reflectionupon determinism.Spinoza states, a passive emotion ceases to be a passive emotion assoon as we form a clear and distinct idea of it (Pr.3, V: 204). Admittedly, this sounds as if therapy will involve transforming the passions,but Spinoza quickly adds that the more an emotion is known to us, themore it is within our control, and the mind is less passive in respect ofit (Cor.Pr.3, V: 204) (emphasis added). Transforming a passion wouldmean eradicating it, as the Stoics have it, but Spinoza only suggests thatwe can subject it to a degree of control and, in turn, reduce the degree towhich it has a hold on our mind. The very act of understanding a givenpassion produces a rational emotion that subdues that passion, an emotion that can be understood through my nature alone and which may becounted active in this respect. The mind becomes less passive and moreactive by the method of separating and joining. If we remove an agitationof the mind, or emotion, from the thought of its external cause, and joinit to other thoughts, Spinoza explains, then love or hatred towards theexternal cause, and also the vacillations that arise from these emotionswill be destroyed (Pr.2, V: 203). I can subdue a passive affect by detaching it from the idea of the external cause upon which it is founded, andjoining it to ideas of other causes.Apprehending the necessity of things provides those ideas that fashion the mind with greater power over the passions, such as commonnotions or things that are common to all things (Pr.38, II: 87). Emotions founded on common notions are, if we take account of time, morepowerful than those that are related to particular things which we regard as absent (Pr.7, V: 206) because ideas of the common properties ofthings are ideas of things we regard as being always present (Pr.7, V: 206)(emphasis added). In other words, common notions produce emotionsof superior endurance. Conceiving things as necessary, that is, as determined, also aids the power of the mind by providing the idea of a greaternumber of causes for the occurrence of some perceived thing. I see themas part of a vast network of causes, and, as Spinoza explains, an emotionthat is related to several different causes . . . is less harmful, and we sufferless from it . . . than if we were affected by another equally great emotionwhich is related to only one or to a few causes (Pr.9, V: 207). On theone hand, if we detach the mind from the image of something as being 207 208 209 210 211 212 213 11Duties of Justice, Duties of Material AidCiceros Problematic LegacyMartha Nussbaum 215 man born in Haiti can expect to live only 53 years, with correspondinglydiminished expectation of other central human goods.What do our theories of international law and morality have to sayabout this situation? By and large, very little. Although we have quitea few accounts of personal duties of aid at a distance,2 and althoughin recent years theorists such as Charles Beitz and Thomas Pogge havebegun to work out the foundations for a theory of material transfersbetween nations as part of a theory of global justice,3 we have virtuallyno consensus on this question, and some of our major theories of justiceare virtually silent about it, simply starting from the nation-state as theirbasic unit.4 Nor has international law progressed far in this direction.Although many international documents by now do concern themselveswith what are known as second-generation rights (economic and socialrights) in addition to the standard political and civil rights, they typicallydo so in a nation-state-based way, portraying certain material entitlementsas what all citizens have a right to demand from the state in which theylive. Most of us, if pressed, would admit that we are members of a largerworld community and bear some type of obligation to give material aidto poorer members of that community. But we have no clear picture ofwhat those obligations are or what entity (the person, the state) is thebearer of them.The primitive state of our thinking about this issue cannot be explainedby saying that we have not thought at all about transnational obligations.For we have thought quite a lot about some of them, and we have bynow sophisticated theories in some areas of this topic that command awide consensus. Theories of the proper conduct of war and of properconduct to the enemy during war; theories about torture and cruelty topersons; theories even about the rape of women and other transnationalatrocities; theories about aggressive acts of various other sorts toward foreign nationals, whether on our soil or abroad all these things we haveseen fit to work out in some detail, and our theories of international lawand justice have been dealing with them at least from the first centuryb.c., when Cicero described the duties of justice in his work De Officiis,perhaps the most influential book in the Western tradition of politicalphilosophy. Ciceros ideas were further developed in the Middle Agesby thinkers such as Aquinas, Suarez, and Gentili; they were the basis forGrotius account of just and unjust war, for many aspects of the thoughtof Wolff and Pufendorf, and for Kants thinking about cosmopolitan obligation in Perpetual Peace.5 By now we understand many nuances of thistopic and have a rich array of subtly different views for example, on such 216 Martha Nussbaum That is how Cicero wants us to think about duties of material aid acrossnational boundaries: we undertake them only when it really is like givingdirections on the road or lighting someones torch from our own: that is,when no significant material loss ensues. And, as we all know, that is howmany of us have come to think of such duties.It is important to understand just how central Ciceros work was tothe education of both philosophers and statesmen for many centuries.For both Grotius and Pufendorf, who quote Cicero with enormousregularity,7 it was the obvious starting point, because its arguments couldbe expected to be known to the audience for whom they were writing.The same is true of Kant in the political writings: he shows his familiaritywith Cicero in many ways. Adam Smith, who usually footnotes with carethe Greek and Roman philosophical texts he cites, simply assumes his audiences familiarity with Ciceros De Officiis, feeling that he doesnt evenneed to tell them when he is quoting huge chunks verbatim. Thus, inA Theory of Moral Sentiments, we find a sizable chunk of book 3 simply introduced into Smiths own prose without any mention of the author, theway we might do with Shakespeare or the Bible, feeling that to mentionthe source would be to insult the learning of the audience.8 English gentlemen typically had Tullys Offices on their desks to get them througha difficult situation, or at least to display their rectitude. And they took 217 Cicero with them when they went visiting (as Kant notes, a favorite euphemism for colonial conquest).9 African philosopher Kwame AnthonyAppiah records that his father Joe Appiah, one of the founding politicalleaders of the Ghanaian nation, kept two books on his bedside table: theBible and Ciceros De Officiis.10 The book really was a kind of biblical textfor the makers of public policy round the world. What I argue here isthat in one important respect this bible was more like the serpent in thegarden.I believe that Cicero was a pernicious influence on this topic. But Ialso think that his arguments are of considerable interest worth studying not only to discover how we went wrong but also in order to thinkbetter about what we want to say. We usually take on Ciceros conclusionswithout remembering, hence without criticizing, the arguments that ledto them, and so we lack self-understanding about a very fundamentalpart of our own current situation. I propose to begin here to supplysuch a critical account; and I suggest that Cicero himself provides us withsome of the most important resources for such critical argument. Healso gives us, along with many inadequate arguments for his distinction,some much more plausible arguments that we might use to defend amoderate asymmetry between the two types of duties but not one thathas the strong anticosmopolitan consequences that he believes he hasdefended.I begin by outlining Ciceros distinction between the two types ofduties, and asking what explicit arguments Cicero uses to support thedistinction. Then I suggest that the resulting position is regarded as acceptable by Cicero and his audience in large part because of a sharedview that derives from Stoicism, concerning the irrelevance of materialgoods for human flourishing. I then argue that the distinction does notcohere internally, even if one should accept this Stoic doctrine; and, second, that we ought not to accept it. We then have to ask which Ciceronianarguments remain standing, and whether they give us any good ways ofdefending the distinction between the two types of duties.There is one more reason for focusing on what Cicero says aboutthis question. Cicero, more than any other philosopher who discussedthis question, was immersed in it in a practical way. The De Officiis waswritten in 44 b.c., while Cicero was hiding out in the country, trying toescape assassination at the hands of the henchmen of Antony and theother triumvirs who succeeded several months after the completion ofthe work. The work, dedicated to his son who is studying philosophy atAthens, argues that philosophy is essential for public life and also that 218 219 provoked by a wrongful act.11 This is the most basic way in which Cicerothinks about justice and injustice, and it proves fundamental to everythinghe says in what follows.Second, justice requires using common things as common, privatepossessions as ones own. The idea that it is a fundamental violation ofjustice to take property that is owned by someone else goes very deep inCiceros thought, in a way that is explained by, but also explains, his fierceopposition to Julius Caesars policies of redistribution of land. Here hesays that any taking of property violates the law of human fellowship(21). The account of the relevant property rights and their origin isremarkable for its obscurity and arbitrariness:Nothing is private by nature, but either by long-standing occupation (as whensome people at some point came into an empty place), or by conquest (as whenpeople acquired something through war), or by law or treaty or by agreement orby lot. Hence it comes about that the Arpine land is called that of the Arpinates,the land of Tusculanum that of the Tusculans. The account of private propertyis of a similar kind. Hence, because, among the things that were common bynature, each one has become someones, therefore let each person hold ontowhat falls to his lot [quod cuique obtigit, id quisque teneat]. If someone tries to getsomething away for himself,12 he violates the law of human fellowship. (21) Cicero clearly thinks that a taking of private property is a serious injustice, analogous to an assault. But nothing in this passage explains whyhe should think this, or why he should think that there is any close relation between existing distributions and the property rights that justice would assign. The argument distinguishes several different ways inwhich natures common stock could be appropriated. They look morallydifferent, and yet Cicero makes no moral distinction among them. Itseems as if he is saying, because they are all rather arbitrary anyhow, theneach person may as well start with his own share, and we shall defineproperty rights from that point, rather than looking back to the modeof acquisition. But once he has distinguished between agreement andconquest in war, between law and mere chance or lot, he invites us tonotice that he has not said nearly enough to explain his strong preference for existing distributions. I return to this issue in the chapters finalsection.Having introduced the two types of injustice, Cicero now observes thatthe failure to prevent an injustice is itself a type of injustice; this importantpassage concerns us later in the chapter. Describing the causes of bothtypes of injustice, he remarks that people are frequently led into immoralaggression by fear (24), by greed (25), and by the desire for glory and 220 empire (26). The last, he notes, is the most disturbing, since it frequentlycoexists with great talent and force of character; he gives Julius Caesar asa case in point.Cicero is very clear that justice requires us to use our adversaries withrespect and honesty. Trickery of any sort is to be avoided (33). Furthermore, even those who have wronged you must be treated morally, forthere is a limit to vengeance and punishment (34). Punishment seemsto Cicero sufficient if the wrongdoer is brought to repentance and otherpotential wrongdoers are deterred. Anything that goes beyond this isexcessive.Cicero now turns from these general observations to the conduct ofwarfare. From now on he does not distinguish assault from propertycrime: and, of course, war standardly mingles the two subcategories of injustice. About the waging of war, he insists first that negotiated settlementis always preferable to war, since the former involves behaving humanly(and treating the other party as human), whereas the latter belongs tobeasts (34). So war should be a last resort when all negotiation has failed.Cicero offers as a good example the ancient Roman fetial law, whichinsists that all warfare be preceded by a formal demand for restitution(37). And, of course, war is justified, in his view, only when one has beengrievously wronged by the other party first. In general, war should alwaysbe limited to what will make it possible to live in peace without wrongfulacts (35). After conflict has ended the vanquished should be given fairtreatment, and even received into citizenship in ones own nation wherethat is possible (35).During conflict, the foe is to be treated mercifully: for example, Cicerowould permit an army to surrender unharmed even after the batteringram has touched their walls (35); in this he is more lenient than traditional Roman practice. Promises made to the enemy must be faithfullykept: Cicero cites with honor the example of Regulus, who returnedto a terrible punishment because he had promised the Carthaginianshe would return (39).13 Even a powerful and egregiously unjust enemyleader should not be murdered by stealth (40). Cicero ends this sectionby reminding his readers that the duties of justice are to be observed evento slaves (41).In general we might say that Ciceronian duties of justice involve an ideaof respect for humanity, of treating a human being like an end ratherthan a means. (That is the reason that Kant was so deeply influencedby this account.) To assault ones enemies aggressively is to treat themas a tool of ones desire for wealth or power or pleasure. To take their 221 property is, in Ciceros eyes, to treat them, again, as simply tools of onesown convenience. This underlying idea explains why Cicero prefers theinjustice of force (vis) to the injustice of deception (fraus). The formeris the act of a lion, the latter of a fox (41): [B]oth are most foreign tothe human being, but deception is more worthy of hatred presumablybecause it more designedly exploits and uses people.In book 3 Cicero returns to the duties of justice, elaborating on hisclaim that they are the basis for a truly transnational law of humanity.Since the useful frequently conflicts with the honorable, he writes, weneed a rule ( formula) to follow. The rule is that of never using violenceor theft against any other human being for our own advantage. Thispassage, more rhetorical than the book 1 account, is the text that mostdeeply influenced Grotius, Smith, and Kant:Then for someone to take anything away from another and for a human beingto augment his own advantage at the cost of a human beings disadvantage, ismore contrary to nature than death, than poverty, than pain, than all the otherthings that can happen to his body or his external possessions.14 For to beginwith it removes human fellowship and social life. For if we are so disposed toone another that anyone will plunder or assault another for the sake of his ownprofit, it is necessary that the fellowship of the human kind, which is most of allin accordance with nature, will be torn apart. Just as, if each limb had the ideathat it could be strong if it took the strength of the adjacent limb away for itself,the whole body would necessarily weaken and perish, so too, if each one of usshould take the advantages of others and should snatch away whatever he couldfor the sake of his own profit, the fellowship and common life of human beingsmust necessarily be overturned. (2122) The point is, presumably, that the universal law condemns any violationthat, should it be general, would undermine human fellowship. KlausReich has found in this passage the origins of Kants formula of universallaw.15 Whether this is right or wrong, we certainly should see a strongsimilarity between Ciceros argument and Kants idea.Cicero now calls this principle a part of nature, that is the law ofpeoples, and also natures reason, which is divine and human law. Henotes that it is also widely recognized in the laws of individual states.We should all devote ourselves to the upholding of this principle asHercules did, protecting the weak from assault, a humanitarian act forwhich he was made into a god. In general:If nature prescribes that a human being should consider the interests of a humanbeing, no matter who he is, just because he is human, it is necessary that accordingto nature what is useful for all is something in common. And if this is so, then we 222 are all embraced by one and the same law of nature, and if that is so, then it isclear that the law of nature forbids us to do violence to [violare] anyone else. Butthe first claim is true, so the last is true also. (3.27) Cicero remarks that it is absurd for us to hold to this principle when ourfamily or friends are concerned, but to deny that it holds of all relationsamong citizens. But, then, it is equally absurd to hold to it for citizensand deny it to foreigners. People who make such a distinction tear apartthe common fellowship of the human kind (28). (Hercules, his salientexample of natures law, was a cosmopolitan in his aid to the weak.)16This section makes it very clear that Ciceros duties of justice are fullycosmopolitan. National boundaries are morally irrelevant, and Cicerosternly reproves those who think them relevant. At the core of Cicerosargument is an idea of not doing violence to the human person and,when we add in the distinction from book 1 (and the Hercules example)of not allowing people to be violated when you can help them. Violareincludes physical assault, sexual assault, cruel punishments, tortures, andalso takings of property. Cicero now links to that idea of humanity asan end the idea of a universal law of nature: conduct is to be tested byasking whether it could be made into such a law. Cicero clearly wants theworld citizen to be Hercules-like in his determination to create a worldwhere such violations of humanity do not occur, a world that accords withnatures moral law. The law of nature is not actual positive law, but it ismorally binding on our actions, even when we are outside the realm ofpositive law.This is the material in Cicero that became the foundation for modern international law. Grotius De Lege Belli atque Pacis is, we might say, acommentary on these passages. Kants Perpetual Peace also follows themvery closely.17 Particularly influential was Ciceros moral rigor, his insistence that all promises be preserved; in the form of the Grotian maximpacta sunt servanda, this is the basis for modern conceptions of treatyobligation although of modern thinkers only Kant follows Cicero allthe way to his praise of Regulus. 223 that these duties, too, are basic to human nature, but there are manyconstraints. We have to make sure our gifts do not do harm; we have tomake sure we do not impoverish ourselves; and we have to make sure thegift suits the status of the recipient. Distinctions that we may legitimatelytake into account under the last rubric include the recipients character,his attitude toward us, benefits previously given to us, and the degree ofour association and fellowship (1.45). Duties are strongest when all ofthese intersect; but throughout there is a role for judgment as to whatseems weightier (45). If other things are equal, we should help the mostneedy (49).As if introducing an independent consideration which he neverclearly ranks against the preceding Cicero now says that human fellowship will be best served if the people to whom one has the closest ties(ut quisque erit coniunctissimus) should get the most benefit. He now enumerates the various degrees of association, beginning with the species asa whole, and the ties of reason and speech that link us all together. Thisall-embracing tie, he now says, citing Ennius, justifies only a type of material aid that can be given without personal diminution (sine detrimento).Examples are allowing a foreigner to have access to running water andfire; giving advice to anyone who asks. But, he says, because theres aninfinite number of people in the world (infinita multitudo) who might possibly ask us for something, we have to draw the line at the point Enniusmentions.Cicero then discusses other bonds that do, in his view, justify somesubstantial giving: the bond of nation and language; of the same state;of ones relatives; various degrees of familial propinquity; and, finally,ones own home. In no case, it is important to note, does his argumentfor the closeness of the connection rest simply on biology or heredity;at least one relevant feature, and usually the central one, is some aspectof shared human practices. Citizens are said to share a forum, temples, porticoes, roads, laws, rights, courts, elections. Families are heldtogether by blood, but also by the shared task of producing citizens, andby goodwill and love: for it is a great thing to have the same tombs ofancestors, to use the same religious rites, to have common burial places(5455). (It is of considerable practical importance to Cicero to showthat family ties are not merely blood ties, because adoption, remarriage,and other common features of Roman life had made family lines lookquite different from bloodlines.) Cicero does not make it clear whetherour duties are greater to those who are closer to us in these various sharedobservances. 224 225 226 at this point to their view of providence: Zeus asks us to concern ourselves with the distribution of material goods, even though, strictly speaking, such things have no real importance. In general, these things arepreferred, their opposites dispreferred; it is therefore appropriate topursue them, though not to grieve when one cannot attain them. MarcusAurelius says that the Stoic wise person will view people who weep overlost externals as similar to children who weep over a lost toy: he will helpthe child regain the toy (the needed externals), but he will know all thewhile that it is only their own foolish immaturity that makes them careabout such things. Cicero, unable to take up Stoic teleology because ofhis own epistemic skepticism, takes a line more like Marcus: if peopleare really good they dont mind the loss of externals, so, by implication,if they do mind them that shows they are morally defective.20 That doesnot mean that we should not aid them but it does color our sense ofwhy that aid is needed, and what its limits might be. 227 If this is so, then one rationale for the distinction between the twotypes of duties disappears. If humanity is owed certain types of treatmentfrom the world, it would seem it is owed good material treatment aswell as respect and noncruelty. If the worlds treatment does not matterto humanity, then it would seem that torture, rape, and disrespect areno more damaging, no more important, than poverty. It is incoherentto salve ones conscience on the duties of material aid by thinking abouttheir nonnecessity for true flourishing and, at the same time, to insist sostrictly on the absolute inviolability of the duties of justice, which are justother ways of supplying human beings with the external things they need.To see how fascinating this Stoic incoherence can be, let me digress toconsider Senecas letter on slavery,21 which is rightly regarded as one ofthe formative progressive documents on this topic. Its general argumentis that slaves have human worth and dignity and therefore are due certain sorts of treatment suited to that human worth and dignity. Senecasimaginary interlocutor keeps saying, He is a slave (Servus est). Senecakeeps on replying, No, he is a human being (Immo homo est). But to what,precisely, does Seneca think humanity entitles this human being? Both alot and a little. A lot, in the sense that Seneca is prepared to make quiteradical changes in customs of the use of slaves. Slaves are to be reasonedwith and made partners in the planning of the household. They are to sitat our table and eat with us. All cruelty and physical abuse is absolutelybanned. Especially radical is an equally absolute ban on using the slaveas a sexual object: for intercourse with slaves was such an accepted partof the conduct of life, where male owners were concerned, that it was notdefined as adultery under law,22 and the only other person we know whoobjected to it was the Stoic philosopher Musonius Rufus.23What, however, about the material conditions of the slave his lack ofself-ownership, his inability to own material goods, in short the institution of slavery? This, it seems, Seneca never thinks to question. And hisrationale for this quietism is what we might by now expect: slavery doesno harm, because the only important goods are the goods of the soul.The interlocutor utters his scornful He is a slave one last time, towardthe end of the letter. But this time Seneca does not reply that the personis human and had therefore to be treated thus and so: He is a slave.But maybe he has a free soul. He is a slave. Will this do him any harm?[hoc illi nocebit?] Show me anyone who is not a slave: one person is a slaveto lust, another to greed, another to ambition, all to hope, all to fear(47.17). 228 But this tack confounds the person who had just been thinking that thetreatment of people does matter, who had just been agreeing with Senecathat it is entirely wrong to use a human being the way one uses a beast ofburden (47.5). For how can it be wrong to neglect or fondle or terrorize oreven beat a slave, if all that matters is the free soul within and that cannotbe touched by any contingency? How can it be wrong to treat a slave like abeast, if it is a matter of indifference whether one is a slave or a freeman?Seneca would like to say that humanity requires respectful treatment, andyet that it does not: for obviously enough, the entire institution is an insultto humanity, because it treats a free soul as an unfree possession. This waswell known to Seneca and to his contemporaries. There was no coherentStoic defense of the institution available to him, although in fact mostStoic philosophers did support it.24 Seneca therefore falls back, at thispoint, on the familiar point about the external goods, and the familiarparadox that only virtue makes one truly free. But that maneuver doestoo much work if it does any at all, for it negates the importance ofeverything that has been argued up to this point. If it is really true thatthe only important form of slavery is internal slavery to passion, and ifwe accept the Stoic thesis that these passions are always in our control,then there is no reason to think that the lot of the abused and insultedslave is any worse than the lot of the slave who sits down with Seneca atthe dinner table.I believe that much modern thought about duties suffers from thissame incoherence.25 We allow that there are certain things that are sobad, so deforming of humanity, that we must go to great lengths to prevent them. Thus, with Cicero and Seneca, we hold that torture is aninsult to humanity; and we now go further, rejecting slavery itself. Butto deny people material aid seems to us not in the same category at all.We do not feel that we are torturing or raping people when we denythem the things that they need in order to live presumably becausewe do not think that these goods are in the same class. Humanity canshine out in a poor dwelling, and we tell ourselves that human dignityhas not been offended by the poverty itself. Poverty is just an external:it does not cut to the core of humanity. But, of course, it does. The human being is not like a block or a rock, but a body of flesh and bloodthat is made each day by its living conditions. Hope, desire, expectation,will, all these things are shaped by material surroundings. People canwonderfully rise above their conditions, but that does not mean that theconditions themselves are not important, shaping what they are able to doand to be. I believe that the Stoic idea of the invulnerability of the will to 229 contingencies and related Christian ideas about the soul lie behindthese judgments. At least the Christian version is consistent, holding thatno sort of ill treatment in this life affects ones salvation. (Interestingly,many thinkers are inclined to draw the line at rape, feeling that it sulliesthe soul despite its unwilling character. Dante, for example, puts PiccardaDonati in the lowest rung of paradise simply for having been raped.) TheStoic version lies closer to the ordinary thought of many of us, whenwe express horror at crimes against humanity but never consider thatfailures of material aid might be such crimes. 230 231 deal for others. That intuitive idea is very central in our thinking whenwe suppose that the recognition of duties of material aid would imposea great burden on our nation, while the recognition of duties of justicewould not. I have already cast doubt on the positive-negative distinctionby pointing out that real protection of people against violations of justiceis very expensive: so if we really are serious about protecting people inother parts of the world against wrongdoing, we will have to spend a lotof money on the institutions that do the protecting. But someone maynow say, if we decide not to spend this money, violations may occur, butat least the violators wont be us. We can consistently draw a line, if notprecisely where the old line between justice and material aid went, at leastbetween acting and refraining. If we refrain from cruelty, torture, and thelike, then we are doing no wrong, even if we are unwilling to spend ourmoney on people at a distance, even where justice issues themselves arein play.To this argument the best reply was given by Cicero himself. In thisvery section of book 1 of the De Officiis, he wrote:There are two types of injustice: one committed by people who inflict a wrong,another by those who fail to ward it off from those on whom it is being inflicted,although it is in their power to do so. For a person who unjustly attacks anotherunder the influence of anger or some other disturbance seems to be laying hands,so to speak, upon a colleague; but the person who does not provide a defense oroppose the injustice, if he can, is just as blameworthy as if he had deserted hisparents or his friends or his country. 232 compelled. But it would have been fairer that this be done willingly; for a rightaction is only just if it is done willingly. There are also some people who eitherbecause of keenness to protect their estates or through some hatred of humanbeings say that they mind their own business and dont seem to be doing anyoneany harm. They are free from one type of injustice, but they run into the other;for they abandon the fellowship of life, inasmuch as they do not expend on it anyzeal or effort or resources. (1.2829) 233 234 of assault against which one has duties to protect ones fellows, or else hewould have to rewrite completely the section on benevolence. But whynot? It seems quite unconvincing to treat the two types of harm asymmetrically. Furthermore, even Ciceros limited point has implications for thetopic of benevolence, which he does not notice. For even to protect ourneighbors from assault will surely require, as I have argued, massive usesof our own material resources, of a type that he seems to oppose in thatlater section.At this point, then, we must part company with Cicero, viewing thediscussion of passive injustice as highly suggestive but underdeveloped.Clearly Cicero did not see its importance for his later discussion of benevolence. This is perhaps not surprising, given the speed at which he waswriting, with death looming ahead of him, and given his intense focus onjustifying the philosophers choice to serve the public realm.The important point is that Cicero is right. It is no good to say, I havedone no wrong, if, in fact, what one has done is to sit by when one mighthave saved fellow human beings. That is true of assault, and it is trueof material aid. Failures to aid when one can deserve the same chargesCicero addresses to those who fail to defend: laziness, self-preoccupation,lack of concern. Cicero has let in a consideration that is fatal to his ownargument and to its modern descendants.One more rescue will now be attempted. Cicero, it will be said, isperfectly consistent when he applies his doctrine of passive injustice onlyto the sphere covered by the duties of justice. This is so because passiveinjustice is a failure to ward off an assault or aggression. But lack of materialgoods is not an assault or aggression. Nothing Cicero has said commitshim to the view that it is also passive injustice not to supply things thatpeople need in order to live. And, indeed, it seems likely that some suchintuitive idea lies behind Ciceros way of arguing here. Moreover, thissame intuitive idea is in many modern peoples minds when they thinkof what justifies humanitarian intervention.Of course, as we have already insisted, even to protect people againstassaults takes money. So this distinction cannot really help us defendCiceros original bifurcation of duties. But let us see whether there iseven a limited coherence in Ciceros doctrine, so understood. We maythink of assault or aggression in two ways. In one way, assault is somethingthat hits people from outside, through no fault of their own. But in thisway of thinking, many natural events look like assaults: floods, famines,depradations of many kinds from animals and the natural world.32 Cicerolets himself in for this extension by his reliance on the example of 235 236 by another person or persons wrongful act. Given that hunger is typically caused not so much by food shortage as by lack of entitlement tofood, it is a thoroughly human business, in which the arrangements ofsociety are profoundly implicated.35 With poverty, this is even clearer. Justbecause it is difficult to decide whom to blame, that does not mean thatno wrongful act has occurred and that no response need be made. Mostnations have the capacity to feed all their people, if they had a just systemof entitlements. Where war is concerned, we sometimes understand thatwe can judge a wrong has taken place without being exactly clear on whodid it: we dont always require that there be an easily recognizable badguy such as Hitler or Saddam Hussein or Milosevic, before we undertake an act of humanitarian intervention, or declare war on a nation thathas wronged another (though clearly the presence of such a bad guygets Americans going more easily witness the failure to intervene in thegenocide in Rwanda).Moreover, we should at least consider that some of the wrongdoersmay be ourselves. Through aid, we can feed all the worlds people; wejust dont. Of course, in the context of the present argument it wouldbe question begging to assert that the failure to give material aid acrossnational borders is iniuria. So too, however, would be the assertion thatour failure to aid involves no iniuria. At the very least, we should concedethat the question of our own moral rectitude has not been resolved.On any understanding of the distinction between aggression andnonaggression, then, Ciceros refusal to extend his analysis of passiveinjustice to failures to give material aid looks unconvincing. If aggression is catastrophe, there are many natural and social catastrophes thathave no clear bad guy; if aggression is wrongful action, there is almostcertain to be wrongful action afoot, when people are starving and indeep poverty, even though we cannot easily say whose wrongful action itis. And, yet, most of us do continue to think in something like Cicerosway, feeling that it is incumbent on us (maybe) to save people from thugsand bad guys but not incumbent on us to save them from the equallyaggressive depradations of hunger, poverty, and disease. Hercules knewbetter.I have argued that Ciceros distinction is not fully coherent, even toone who accepts the Stoic doctrine. And yet it also gets a lot of mileagefrom that doctrine, because Stoic moral theory permits us to salve ourconscience about our failure to aid our distant fellows, telling ourselvesthat no serious harm has befallen them. Let us, therefore, turn our attention to that doctrine. 237 238 239 what is left?We have removed some of the main props for Ciceros distinction ofduties into two kinds, one strict and one less strict. Let us now considerhis remaining arguments.A great advantage of Ciceros discussion is that it does not simply assume that national boundaries are of obvious moral relevance; nor doeshe rely on mysterious ideas of blood and belonging that frequently substitute for argument in these matters. Instead, he believes that we needto point to some feature or features of our fellow citizens that justify differential treatment. Indeed, even in the case of family he does not fallback on an allegedly obvious relevance of consanguinity: perhaps theprevalence of adoption of heirs in the Roman middle and upper classeshelps him avoid a pitfall of some modern discussions. So although I amcritical of some of his specific arguments, I think we should applaud 240 Property RightsCicero will insist that there is a fundamental part of justice itself thathas implications for material redistribution. For, as we recall, he definedjustice partly in terms of respect for property rights, understood as justified by the luck of existing distributions. He argued that once property is appropriated, no matter how, taking it away is the gravest kindof violation. Clearly it is his purpose to use that argument to opposeany state-mandated redistributive policies, such as Caesars attempts atland reform. But this argument has implications for the entire issue ofbenevolence: for if I have a right to something, and it is egregiously badfor someone to take it away from me, then it would seem peculiar to saythat I even have a strict moral duty to give it away to someone else.Thus modern Ciceronians might grant everything I have said aboutthe unfortunate problems in Ciceros distinction of duties and yet holdthat property rights are so extremely important that by themselves theyjustify making the duties of benevolence at best imperfect duties. I believethat both Richard Epstein and Robert Nozick would take this line.On the other hand, any such thinker who starts off from Cicero isbound to notice the thinness and arbitrariness of his account of theserights. Why should it actually be the case that each should hold whatfalls to the share of each, and if anyone takes anything from this, heviolates the law of human association? Why not say, instead, that suchclaims to ownership are always provisional, to be adjudicated along withclaims of need? By emphasizing need himself, as a legitimate source ofmoral claims, Cicero has left himself wide open to this objection.Here we should say that Ciceros highly partisan politicking distorts hisphilosophy. His Stoic forebears, as he well knows, thought all propertyshould be held in common;40 he himself has staked his entire career onan opposition to any redistributive takings. That he skates rather rapidlyover the whole issue of how property rights come into being, neglecting 241 242 Thick FellowshipCiceros most interesting claim for the republic is that our participationin it makes claims on our human faculties that other more distant associations do not. We share in speech and reason in a variety of ways when weassociate with our fellow citizens, thus confirming and developing our humanity in relation to them. This is not the case with the foreign national,unless that person is a guest on our soil. For this reason, Cicero thinks,we owe the republic more material aid than we do to foreign nations andnationals. The idea is presumably that we have reasons to make sure thatthe institutions that support and confirm our humanity prosper.One might complain, first, that Ciceros point was already of dubiousvalidity in his own time, because already Rome had complex civic andpolitical ties with many parts of the world, and non-Italians were notyet, though some of them later became, Roman citizens. His son was offstudying philosophy in Greece; his philosophical descendant Seneca wassoon to be born in Spain. North Africa, Gaul, and Germany, though oftencrudely caricatured in imperialist writings,43 were known to be the homesof people with whom Romans had many forms of cultural and humanexchange. So citizenship and fellowship were not coextensive even then.In our day, when we develop and exercise our human powers, weare increasingly associating with people elsewhere. Networks such as the 243 AccountabilityWe might read Ciceros previous argument to make, as well, the followingpoint. Our own republic is ours. One of the forms of association that weshare, in that fine institution of the republic that Cicero is strugglingto preserve, is mutual accountability, as well as accountability of publicpolicy to citizens. This might be said to give us some reasons to use ourmoney on a form of government that had this desirable feature. Does itgive us reasons to support republican government all over the world, ordoes it give us special reasons to focus our material aid on our own? Herewe might combine the accountability point with the points about need,dependency, and gratitude, and say that our own has an especially strongclaim on our resources.I think that there is something in this argument. But it also suggeststhat at least some of our resources might be well used in supporting otherinstances of republican government. Its main point is that institutions ofa certain type are good protectors of people, because of their responsiveness to peoples voices: this makes them good ways of channeling dutiesof aid. But once again, this is compatible with the duties themselves beingfully general.44 Certainly the argument does not get us anywhere near 244 245 is that the difficulty of these problems does not mean that we should fallback on the Ciceronian doctrine, with its multiple evasions. It means thatwe should continue our work. Notes1. All data in this paragraph are from United Nations Development Program1998.2. Most such theories take their start from the classical utilitarian tradition; aninfluential recent statement is Singer 1972; see also Kagan 1989 and Murphy1993. Murphy has pursued these issues further in Murphy 2000. Anothereffort in this tradition less successful, I believe is Unger 1996. On limitsto these personal duties of benevolence, see Nagel 1991 and Scheffler 1982.3. Beitz 1979; Pogge 1989. For promising combinations of institutional andpersonal duties, see Goodin 1988; Shue 1988; Nagel 1991; see also Shue1980; Goodin 1985.4. A salient example, much discussed, is Rawls 1993.5. On Kants debt to Cicero, see Nussbaum 1997.6. I translate the Latin of the De Officiis myself throughout, starting from MichaelWinterbottoms excellent Oxford Classical Text (Cicero 1994). The best translation is in the excellent annotated version of the work by Miriam Griffin andEileen Atkins (Cicero 1991). See also the commentary on the work by AndrewR. Dyck (1996).7. By count of my research assistant Chad Flanders, ninety citations or close paraphrases of De Officiis in Grotius De Iure Belli atque Pacis, eighty in PufendorfsDe Iure Naturae et Gentium (1688); most of these citations are to the portionsof the work I am about to discuss. Both authors are also extremely fond ofSeneca. (Caveat lector: some English translations, especially of Grotius, omitmany of the citations, feeling that the text is top-heavy with them.) For Grotiustremendous influence on the foundations of modern international law, seeLauterpacht 1946; on Kants influence, see Teson 1992: 53, 55. Grotius, andthe closely related arguments of Vattel and Bynkershoek, all had a major influence on eighteenth- and nineteenth-century jurisprudence in the UnitedStates. A LEXIS search shows 74 U.S. Supreme Court cases that refer toGrotius, 176 that refer to Vattel, and 39 that refer to Bynkershoek, all before1900; the reliance on these texts seems to be genuiine. (Search of LEXIS,Genfed Library, US File, January 1998: I owe this information to my colleague Jack Goldsmith, who informs me that a similar reliance is evident indiplomatic correspondence and political argument.)8. Adam Smith, The Theory of Moral Sentiments (Smith 1982: first edition 1759,sixth edition 1790), III.3.6: [A]nd who does not inwardly feel the truth ofthat great stoical maxim, that for one man to deprive another unjustly of anything, or unjustly to promote his own advantage by the loss or disadvantageof another, is more contrary to nature, than death, than poverty, than pain,than all the misfortunes which can affect him, either in his body, or in hisexternal circumstances. This, as will be seen, is a verbatim citation of III.21. 246 9. See Perpetual Peace, Third Definitive Article of a Perpetual Peace: Cosmopolitan Right Shall Be Limited to Conditions of Universal Hospitality, p. 106 inKant 1991:If we compare with this ultimate end the inhospitable conduct of the civilised statesof our continent, especially the commercial states, the injustice which they display invisiting foreign countries and peoples (which in their case is the same as conqueringthem) seems appallingly great. America, the negro countries, the Spice Islands, theCape, etc. were looked upon at the time of their discovery as ownerless territories; forthe native inhabitants were counted as nothing. In East India (Hindustan), foreigntroops were brought in under the pretext of merely setting up trading posts. This ledto oppression of the natives, incitement of the various Indian states to widespreadwars, famine, insurrection, treachery, and the whole litany of evils which can afflictthe human race. . . . And all this is the work of powers who make endless ado abouttheir piety, and who wish to be considered as chosen believers while they live on thefruits of iniquity. 10. Appiah 1996:23. Appiah actually says Cicero and the Bible, but in thiscontext there is only one text of Cicero that is likely to have had this privilegedplace.11. I so translate iniuria; one should avoid saying injustice, so that the definitiondoes not seem circular, but also avoid saying something morally neutral, likeprovocation, since iniuria clearly means something morally inappropriate.12. There is a textual problem here, and Winterbottom obelizes the first half ofthis sentence; but the sense not the argument! seems clear.13. The example of Regulus is very important to Cicero in De Officiis: he discusses it at greater length at 3.99111, arguing against various people whowould try to reconcile the conflict between virtue and expediency, or to urgethat Regulus ought to have followed expediencey. Marcus Atilius Regulus,a prominent Roman politician and military leader, was captured by theCarthaginians in 255 b.c. Later he was sent to Rome to negotiate a peace(or, in some versions, the return of Carthaginian prisoners); he promised toreturn after executing his mission. When he arrived, he urged the Senate todecline the peace terms; but he kept his promise to return. The story goesthat he was placed in the sunlight with his eyelids stapled open, dying anexcruciating death by both starvation and enforced sleeplessness. (Sourcescharacterize the torture in various ways, but all agree on the exceedinglypainful character of the death, exquisita supplicia, as Cicero says [3.100] andcompare the summary of the lost book 18 of Livy; for other references inCicero and elsewhere, see Dyck 1996: 61920.) Romans considered Regulusstory a salient example of honorable behavior, definitive of a national normof virtue (see Horace Odes 3.5), although modern scholars note that the storymay have been invented to defuse criticism of torture of Carthaginian prisoners at Rome (see Howard Hayes Scullard, Regulus, Oxford Classical Dictionary, 2nd ed. [Oxford: Oxford University Press, 1970], 911, and the brieferarticle in the 3rd ed.[1966]: Andrew Drummond, Atilius Regulus, Marcus,(207); they follow Polybius in holding that Regulus died in Carthaginian captivity and never went on an embassy to Rome (see Dyck 1996: 619). Horacesuse of the story is exceedingly colonialistic and chauvinistic, with vilification 14. 15.16. 247 of the barbarus tortor and praise of the virilis voltus (manly face) of the hero,the chaste kisses of his proper Roman wife. (The context in which the storyis introduced is anxiety about the dilution of warlike Roman blood by intermarriage with barbarian peoples.) Cicero standardy uses the story as anexample of the victory of virtue over expediency: see also De Finibus, defending the Stoic ideal of virtue against Epicurean hedonism: Virtue cries outthat, even while tortured by sleeplessness and hunger, he was happier thanThorius getting drunk on his bed of roses (2.65). In more recent times,the example, however extreme, still fascinates. Turners painting Regulus isnotorious for containing, it would appear, no representation of the centralfigure; the reason is that the viewer is placed in the position of Regulus,struck again and again by a hammering implacable sun.It is quite unclear in what sense death and pain could be said to be contraryto nature; even to a Stoic, for whom the cosmos is thoroughly good, deathitself will therefore have to be understood as a good, when it occurs. AndStoics energetically opposed the thesis that pain is intrinsically bad. EricBrown suggests that the Stoics can defuse this problem by distinguishingtwo viewpoints: from the point of view of Providence, nothing is contrary tonature; from a local viewpoint, things like death are contrary to nature, inthe sense that they mean the end of some natural organism. I am not sure:for the local perspective is not accurate, according to a strict Stoic account.Marcus and other writers insist again and again that we must meditate onthe naturalness of our own death.Reich 1939.See Dyck 1996: 529: The example of Hercules, a pan-Hellenic hero, breaksdown the boundaries of individual states and emphasizes the common needsand interests of all human kind. He compares Tusculan Disputations 1.28 andDe Finibus 3.6566.See Nussbaum 1997.Probably Cicero does not allow quite as much latitude as does Kant: for therequirement that we become boni ratiocinatores officiorum suggests that wemust learn to perform refined calculations, and that it is not simply up to ushow they turn out. (I owe this observation to Eric Brown.)See Nussbaum 1995.Given that elsewhere Cicero prefers a position that ascribes a tiny bit of valueto externals, though the preponderant amount to virtue, he may waver inthis work between that position (which would make it easier to justify dutiesof material aid to our fellow citizens) and the stricter Stoic position.Seneca, Moral Epistle 47.See Treggiari 1991. The relevant law is the famous (or infamous) Lex Iuliade Adulteriis, passed by Augustus in the first century in an alleged attempt torestore the pristine mores of former times although, as Treggiari persuasively argues, it is actually much more severe than either legal or social normsthat prevailed during Ciceros lifetime. Even this severe law did not restrictsexual access of male owners to their slaves and, as Musonius comments,public norms generally endorsed such conduct. Adultery was conceived of asa property offense against the husband or father of the woman in question. 248 23. See my Musonius Rufus: Platonist, Stoic, and Roman, in Nussbaum andSihvola 2002. I argue there that Musonius position is actually more conservative than Senecas: it does not claim that the slave has any right to respectfultreatment; it treats the sex act as a problem of overindulgence for the freeowner, rather than a problem of disrespect for the slave.24. See Griffin 1992: ch. 8, 25685.25. A significant attempt to break down the distinction, in connection with thinking about which duties to others are most urgent, is Shue 1980. See alsoGewirth 1996 for argument that what should be considered in both cases isthe prerequisites of human agency, and that both the duties of justice andthe duties of material aid involve important prerequisites of agency. AlsoGewirth 1985.26. See Holmes and Sunstein 1999.27. Recall George Bernard Shaws similar remark to a rich society woman on thetopic of prostitution. As legend has it, he asked her whether she would marryhim if he had a million pounds: amused, she said yes. He then asked whethershe would sleep with him for five pounds. She exclaimed, Mr. Shaw, whatkind of a woman do you think I am? He replied, We have already establishedthat: now were just haggling about the price.28. See Shue 1980: 1079, citing Wassily Leontiefs claims about the relativelylow cost of providing basic material support.29. See the interesting discussion of this part of Ciceros view in Shklar 1990.30. See Seneca, De Ira 1.12, where the interlocutor objects that the nonangryperson will not be able to avenge the murder of a father or the rape of amother, and Seneca hastens to reassure him that these central moral actscan all be done without anger.31. Compare Senecas De Otio, where he argues that the philosopher who doesnot enter public life may be able to serve the public better through philosophical insights: We definitely hold that Zeno and Chrysippus did greaterdeeds than if they had led armies, won honors, and written laws: they wrotelaws not for one nation but for the whole human race (6). But Senecasposition is much more retirement-friendly than Ciceros.32. See Landis 1998. Landis argues that Americans have always been reluctant togive relief unless they believe the person to have been the victim of somethinglike a natural disaster, that comes on them from outside; in a dissertation inprogress, she argues that Roosevelt understood this, and used the rhetoricof natural disaster to mobilize aid during the Depression. Even the termThe Depression positioned an economic catastrophe as a quasi-flood orhurricane.33. For a related myth used to exemplify this point about the bodily appetites,see Platos account of the Danaids who had to carry water in a sieve, in Gorgias494.34. 249 12Stoic EmotionLawrence C. Becker A successful rehabilitation of Stoic ethics will have to defeat the idea thatthere is something deeply wrong, and perhaps even psychologically impossible, about the kind of emotional life that Stoics recommend. Theimage of the austere, dispassionate, detached, tranquil, and virtually affectless sage an image destined to be self-refuting has become a stapleof anti-Stoic philosophy, literature, and popular culture. It has been constructed from incautious use of the ancient texts and is remarkably resistant to correction. Reminders that the ancient Stoics insisted that thereare good emotions are typically brushed aside by asserting that the ancient catalog of such emotions is peculiar;1 that the emotions in even thatpeculiar catalog are not accorded much significance by Stoics; and thatthe ruthless emotional therapy practiced by Epictetus is a reliable guideto the sort of emotional life Stoics want all of us to cultivate namely, alife of desiccated affect and discardable attachments.Both Stoics and anti-Stoics alike have developed an unwholesome fascination with a picture of the Stoic sage drawn for extreme circumstances.We persist, in high art and low journalism, in telling and retelling storiesof good people who resolutely endure horrors injustice, torture, disease, disability, and suffering. Those of us who are attracted to StoicismThis is a revised version of a paper presented at the Second Leroy E. Loemker Conference,Stoicism: Traditions and Transformations, 31 March2 April 2000, at Emory University.I am grateful to the participants at the conference for their helpful discussion. Special acknowledgment goes to Tony Long, Brad Inwood, and Richard Sorabji. A much earlier version of the paper was presented at a Stoicism conference at the University of London, in May1999. I am grateful to the justifiably more skeptical audience at that occasion, and particularly to my commentator, Anthony Price, as well as to Richard Sorabji and Gisela Striker. Stoic Emotion 251 often find such stories inspiring, and even anti-Stoics give them grudgingadmiration.2 But our fascination with them can be seriously misleading.It can cause us to treat the emotional remoteness and austerity exhibitedby their heroes as central to the Stoic theory of good emotion, as opposedto something central merely to its traditional therapies for people inextremis. This is a mistake.Rather, as I argue here, Stoic ethical theory entails only that we make ouremotions appropriate, by making sure that the beliefs implicit in themare true, and by making them good for, or at least consistent with, thedevelopment and exercise of virtue that is, with the perfection of theactivity of rational agency. At this very abstract level, a Stoic theory ofemotion is similar to an Aristotelian one. But we should not be misledby this high-altitude similarity. Stoic theories of value and virtue are verydifferent from their Aristotelian counterparts, so it will turn out that whatcounts as an appropriate Stoic emotion in a given case is often strikinglydifferent from what counts as an appropriate Aristotelian one. But thecentral, high-altitude theoretical point is nonetheless important. Robustpsychological health of the sort necessary for appropriate rational activityis a constitutive element of virtuoso rational agency a constitutive element of Stoic virtue. It thus follows that, insofar as emotion is a necessaryelement of this aspect of psychological health, it is necessary for virtue.It may be true that some ancient Stoics (notably Chrysippus) underestimated the extent to which emotion was a necessary component ofpsychological health and thus of virtue. But that is a matter of getting thefacts straight, and surely all Stoics are committed to getting an adequate,accurate psychology as a basis for their normative account of good emotion. The things that Chrysippus said about the heart being the seat ofconsciousness things ridiculed centuries later by Galen3 are surely errors that Chrysippus himself would have wanted corrected. Not ridiculed,but corrected. And if such errors informed his normative judgments,surely he would not only have corrected his errors about physiology butalso have made the necessary adjustments in his normative views.The obvious way to develop a contemporary version of Stoicism with respect to the emotions is therefore to fasten on what the theory requires that is, on the conceptual relation between virtue and emotion in humanbeings and on what the best contemporary psychology says about howsuch matters work out in practice. That is what I will do here, first by looking at some relevant features of empirical psychology, then by consideringthe value of emotions in human life, and finally by examining the natureof sagelike tranquillity and Stoic love. 252 Lawrence C. Becker 253 254 along several dimensions, we can for convenience imagine them as arranged along a line that forms a nearly closed circle, beginning and ending with more or less pure affect. At one end are moods or affectivetones of various types (fleeting or prolonged, volatile or stable, discreteor diffuse, mild or intense), which begin at a point just discernibly different from no affect at all a point at which, for example, a subject willreport that consciousness is simply tinted or tinged with affect that doesnot seem to have a causal connection to either cognition or action, orto be related to any special physical sensation or somatic phenomena, orto be focused on anything in particular. Nonetheless, even the mildest,most fleeting moods can often be described in terms of quite complexsubjective experience (anxious, secure, erotic, energized, serene, etc.),and neurological substrates for many of them can be identified and manipulated with drugs. Passions are at the other end of the line, endingin an extreme at which affect virtually obliterates cognition and agency an extreme in which, for example, people are so overwhelmed with whatbegan as anxiety or rage or fear or lust that they are out of their minds,or dont know the time of day, and, if they can make reports at all, canreport only a one-dimensional, ferociously focused affect. Passions canbe much milder than this, of course, but we will use the term to apply toaffect that is focused enough and strong enough to interrupt (as opposedto color, focus, direct, or otherwise shape) deliberation and choice.Between these extremes lie feelings and emotions. Feelings, we willsay, are distinct from moods primarily by virtue of the subjects awarenessof various sorts of physical sensations and somatic phenomena associated with the affect, as well as some causal implications for cognition andaction awareness that focuses and thus intensifies the affective experience, making it seem localized and often giving it an object. (Full-fledgedsexual arousal is a feeling in this sense, whereas low-level erotic affect isa mood.) And let us then say that emotions are distinct from other affects primarily by virtue of the subjects awareness and appraisal of thecognitive components of an affect the beliefs about the world that areimplicated in the affect, awareness that complicates and further focuses,reinforces, or intensifies the feelings. Worry is an example; so is objectspecific, manageable fear. 255 of moods, feelings, emotions, and passions. As far as I can tell, empirical psychology has so far settled one dispute within ancient Stoicism,has strengthened a few philosophical criticisms of the ancient Stoic account, has raised new problems about the unity of rational agency, buthas also confirmed much of the ancient Stoic doctrine on these matters.Contemporary Stoics will have to make some adjustments to the ancientdoctrines, but nothing, I think, that will undermine their claim to beingStoics.The Persistence of Affective Impulse. The ancient dispute that modern psychology seems to have settled is one between Chrysippus and Posidonius,as reported by Galen.11 If it is true that Chrysippus believed Stoic moraltraining could effectively remove excessive emotions at their source, byremoving the erroneous beliefs involved in them, and that this training could be so effective and so thorough that excessive emotion wouldnever arise in the sage, then Chrysippus was wrong. Instead, Posidoniushad it right when he argued that primal affect was a permanent featureof human life that sages, like the rest of us, would always have to copewith.The modern evidence for this comes from two sources: neurophysiology and pharmacology. Neurophysiologists have identified at least fouranatomically distinct structures in the ancient or subcortical portion ofthe human brain that generate affective states roughly fear, rage, panic,and goal-oriented desire.12 These structures are directly responsive toboth external stimuli and internal changes in brain chemistry prior tosignificant cognitive processing. There is, for example, a naturally occurring hormone called cholecystokinin, which regulates secretions of thepancreas and gallbladder. When this hormone is introduced directly intothe bloodstream (a natural, but not normal occurrence in human physiology) it generates an anxiety response unconnected to any external orinternal threat.13 Similar stimulants exist for other affective structures inthe amygdala, and there are blocking agents as well pharmacologicalagents that cause those affective structures to quiet down temporarily, tocease generating affect. This does not mean that subsequent cognitive responses are ineffective in controlling such affect. It only means that thissort of affective arousal and its immediate emotional or passional consequences cannot be eliminated by cognitive (Stoic) training, any morethan Stoic training can eliminate perspiration. Stoics with bad gallbladders will just have to cope with anxiety, whether they are sages or not;similarly for people who have brain injuries, or brain tumors, that excite 256 257 258 the general effectiveness of Stoic training. We will acknowledge a wide variety of cases in which the human body can be overwhelmed by unhealthyaffect, just as it can be overwhelmed by unhealthy microbes or viruses.This is not, it seems to me, an admission that compromises anythingfundamental in Stoicism. All sages are ultimately overcome by diseaseor injury. Their bodies are mortal. And because Stoics are materialists,we have always acknowledged that our minds and emotions too, like everything else about us, are physical entities subject to disease and injury.Ancient Stoics, confronted by the modern evidence, would surely have nodifficulty adjusting their ideas about the root physical causes and appropriate physical remedies for such affective neuropsychological diseasesand injuries, even for sages.The necessity for such adjustment is an example of the way in whichmodern empirical psychology strengthens some of the traditional criticisms of the Stoic psychology of emotion. There are several others, mostof which have to do with the relation between psychological health (whichStoics recognize as a necessary condition for the development of virtue)and the amount and variety of affect in ones life (which Stoics have perhaps traditionally underestimated). I deal with those matters in most ofwhat follows. But I want to conclude this section by noting that contemporary Stoics will have to pay somewhat closer attention to moods andthreshold affective states than the ancient texts do. Here is the problem. 259 260 about states of affairs; and beliefs about the appropriate response to thosestates of affairs.Is this a significant change in Stoic
https://de.scribd.com/document/330817551/Steven-K-Strange-Jack-Zupko-Stoicism-Traditions-and-Transformations-Cambridge-University-Press-2004-pdf
CC-MAIN-2021-17
refinedweb
58,295
50.87
a question recently about ClojureScript and how easy it would be to use the DevExtreme React Grid with that language and environment. ClojureScript is a functional language, which is a concept that works very well with React. The CLJSJS repository provides an easy way for ClojureScript developers to depend on JavaScript libraries (in their own words), but JavaScript libraries need to be packaged and maintained in a specific format for this to work. I’m leaving this approach for later, since right now the React Grid has not reached a full release yet. Meanwhile I found a nice recipe in this post by Tomer Weller, on the topic of including arbitrary JavaScript libraries in a ClojureScript project. I created a demo project, which is available here: In case you have never heard about Clojure or ClojureScript, I recommend reading Danial Higginbotham’s book Clojure for the Brave and True and this very nice ClojureScript tutorial. You will need both a working Node environment and an installation of Leiningen. Follow the two links in the previous sentence for installation files and instructions – both are quick and very straight-forward and you shouldn’t run into trouble. Of course you can skip this part and just get the whole thing from github – in that case scroll down to find the instructions for running the demo. If you would like to do it yourself, read on. The steps I followed are based on the post linked above, but I will also describe the demo code itself. 1 — Set up the project Use this command to create a new project (feel free to modify the name of course): lein new reagent-frontend demo-dx-react-grid-clojurescript Now initialize the project with a Node package.json file (it’s okay to accept all the defaults): package.json npm init Install the required JavaScript packages: npm install --save-dev bootstrap webpack react react-dom react-bootstrap @devexpress/dx-react-core @devexpress/dx-react-grid @devexpress/dx-react-grid-bootstrap3 2 — Create the JavaScript bundle Create the file webpack.config.js with the following content (there should be no need to modify anything in here): webpack.config.js const webpack = require('webpack'); const path = require('path'); const BUILD_DIR = path.resolve(__dirname, 'public', 'js'); const APP_DIR = path.resolve(__dirname, 'src', 'js'); const config = { entry: `${APP_DIR}/main.js`, output: { path: BUILD_DIR, filename: 'bundle.js' } }; module.exports = config; Create the file src/js/main.js with the content below. This is the main file used by webpack, and it configures all the JavaScript dependencies to be made available in the window context. It also means that webpack finds the actual requirements and includes the correct packages in the bundle. src/js/main.js window window.deps = { react: require('react'), 'react-dom': require('react-dom'), 'dx-react-core': require('@devexpress/dx-react-core'), 'dx-react-grid': require('@devexpress/dx-react-grid'), 'dx-react-grid-bootstrap3': require('@devexpress/dx-react-grid-bootstrap3') }; window.React = window.deps['react']; window.ReactDOM = window.deps['react-dom']; For simplicity, add a build entry to the scripts section of package.json and remove the test entry. Your scripts section should look like this: build scripts test ... "scripts": { "build": "webpack -p" }, ... Now run the build script to generate the bundle: npm run build 3 — Modify project.clj for the bundle Edit the file project.clj. First, change the dependencies block (within the first few lines) to the following to make sure that Reagent uses React and ReactDOM from the bundle in place of its own versions. project.clj dependencies :dependencies [[org.clojure/clojure "1.8.0" :scope "provided"] [org.clojure/clojurescript "1.9.908" :scope "provided"] [reagent "0.7.0" :exclusions [cljsjs/react cljsjs/react-dom]]] Second, insert the following block right behind both lines starting with :optimizations. This changes both compiler profiles to include the bundle instead of the standard CLJSJS libraries for React and ReactDOM. (Note that it is correct for the ReactDOM name to read cljsjs.react.dom, i.e. with a . (dot) instead of a dash.) :optimizations cljsjs.react.dom :foreign-libs [{:file "public/js/bundle.js" :provides ["cljsjs.react" "cljsjs.react.dom" "webpack.bundle"]}] If you have trouble finding the right place to insert the last block, please double-check with the complete version from my demo. 4 — Copy stylesheets and fonts The React Grid requires some Bootstrap stylesheets and font files to work correctly. It should be possible to include these in the bundle, but I went the easier way here to include them separately, not least because the Grid also supports Material UI as an alternative UI platform. Copy the stylesheets and font files to the public directory (use Windows copy andmd commands if you are indeed on Windows): public copy md cp node_modules/bootstrap/dist/css/bootstrap*min.css public/css mkdir public/fonts cp node_modules/bootstrap/dist/fonts/* public/fonts Edit public/index.html and add the lines to load the Bootstrap stylesheets. Your head section should look like this: public/index.html head <head> <meta charset="utf-8"> <meta content="width=device-width, initial-scale=1" name="viewport"> <link href="css/bootstrap.min.css" rel="stylesheet" type="text/css"> <link href="css/bootstrap-theme.min.css" rel="stylesheet" type="text/css"> <link href="css/site.css" rel="stylesheet" type="text/css"> </head> What remains is to implement the file src/demo-dx-react-grid-clojurescript/core.cljs to render a React Grid. First, add the bundle to the :require part of the namespace declaration: src/demo-dx-react-grid-clojurescript/core.cljs :require (ns demo-dx-react-grid-clojurescript.core (:require [reagent.core :as r] [webpack.bundle])) Here is my complete function home-page: (defn home-page [] (let [g (aget js/window "deps" "dx-react-grid") sorting-state (aget g "SortingState") local-sorting (aget g "LocalSorting") bs3 (aget js/window "deps" "dx-react-grid-bootstrap3") grid (aget bs3 "Grid") table-view (aget bs3 "TableView") table-header-row (aget bs3 "TableHeaderRow") columns [{:name "name" :title "Name"} {:name "age" :title "Age"}] rows [{:name "Oliver" :age 37} {:name "Bert" :age 52} {:name "Jill" :age 31}]] [:div [:h2 "DevExtreme React Grid"] [:> grid {:columns columns :rows rows } [:> sorting-state {:onSortingChange (fn [sorting] (.log js/console "sorting changed" sorting))}] [:> local-sorting] [:> table-view ] [:> table-header-row {:allowSorting true}]]])) The first block of code in the function defines a few local values using let. This call retrieves the object deps.dx-react-grid from the window context (where the webpack main.js put it previously): let deps.dx-react-grid main.js (aget js/window "deps" "dx-react-grid") The same thing is done with the line that assigns the bs3 value. From these two top level JavaScript objects, the SortingState, LocalSorting, Grid, TableView and TableHeaderRow objects can be retrieved, and they are stored in Clojure values. bs3 SortingState LocalSorting Grid TableView TableHeaderRow As the final part of the let instruction, columns and rows are populated for demo purposes. columns rows The home-page function renders a result using the slightly extended Hiccup syntax implemented by the Reagent interface library for React. For example, this line begins rendering the Grid component, passing the previously arranged values as columns and rows React properties: [:> grid {:columns columns :rows rows} ... The nested elements are rendered using similar syntax and follow the normal structure of the React Grid. I have included an event handler on the SortingState (or sorting-state) for illustration purposes. sorting-state If you cloned my project, you will need to build the JavaScript bundle before running the demo. If you were following the steps to do it yourself, you should have already done this above. These are the required commands: npm install npm run build With the bundle available, you can then run the demo by executing this: lein figwheel The command instructs Leiningen to run the project using the figwheel environment. On the console, you will see a few downloads of required components, a compilation step, hopefully no error messages (please check, especially in case things go wrong!) and finally this line: Prompt will show when Figwheel connects to your application At the same time, Figwheel tries to open the main page of the application in the browser. This works fine for me, but just in case you don’t see a browser page coming up, you can open it manually by connecting to or opening the file public/index.html. When the browser page loads, Figwheel changes its prompt to read app:cljs.user=>, which means that the built-in REPL is ready for interaction. app:cljs.user=> The browser should now show the working React Grid with three rows of demo data. Functionality is limited because I included only a few basic plugins - feel free to play around and extend, and let me know if you encounter any issues! What would be the reason for someone to use this instead of normal ES6/JSX? Mohamed, Reasons for using ClojureScript certainly vary, but the main point about it is that functional programming techniques are very common in the JavaScript world these days, from basic patterns all the way to functionally oriented libraries like React, and that JavaScript itself doesn't make it as easy as you'd wish to adhere to those principles. Here is a blog post that takes the position that functional programming in JavaScript is actually an antipattern: hackernoon.com/functional-programming-in-javascript-is-an-antipattern-58526819f21e This is a post about choosing ClojureScript over BLOCKED SCRIPT m.oursky.com/why-i-chose-clojure-over-javascript-24f045daab7e I like ClojureScript myself, for the reasons I've already mentioned. It facilitates the programming style I like to use anyway (that's FP) and that works great with my favorite JavaScript platforms today (that's React, Redux etc), and by enforcing this style it helps me write more maintainable code than I would probably achieve by trying to follow FP approaches using JavaScript alone. Hope this helps! Please or to post comments.
https://community.devexpress.com/blogs/oliver/archive/2017/08/22/clojurescript-for-the-devextreme-react-grid.aspx
CC-MAIN-2019-39
refinedweb
1,662
54.93
Securing web services in private networks comes with different challenges. In this blog post I would like to show how you can easily monitor your private web services located in a VPC with public uptime monitoring services such as Datadog, New Relic, Statuscake, Pingdom etc. Internal services are closed to public communication. Monitoring the health status of these services, allows us to see if there is a problem, and interfere at the right time. At this point, an idea appeared in my mind. Why wouldn’t I proxy only health check requests from a specific requester with a small lambda function? So I decided to develop a Lambda function in Python and create and deploy a REST API with API Gateway in front of it to give this control. As a first step, I will create a Lambda function via the AWS Console, Lambda service. I set a name for the function and runtime that will be used. I chose Python 3.6 because I will develop a function with Python. The next step is where we need to pay attention. Since the web services are located in private subnets, placing the lambda function in the same VPC is recommended otherwise the function cannot access these services. It’s also possible to interconnect VPC’s together with VPC Peering or Transit Gateway but I want to keep things simple OK? :) Let’s place our function in the same VPC with the web services that we would like to monitor. The urllib3 library is sufficient for our task to create the lambda function. I will map different services by different unique variables. The function will send a GET request to the desired service’s health check URL depending on the variable value. Returning the response of this health check URL will tell us if the services are down or up. botocore.vendored is primarily used when including libraries for the Lambda function. import json from botocore.vendored import urllib3 http = urllib3.PoolManager() def lambda_handler(event, context): # 1. Parse out query string params url = event['queryStringParameters']['url'] r = http.request('GET', url) # 2. Construct the body of the response object transaction_response = { 'url': url, 'message': r.status } # 3. Construct http response object response_object = { 'headers': { 'Content-Type': 'application/json' }, 'statusCode': r.status, 'body': json.dumps(transaction_response) } # 4. Return the response object return response_object You can find the full code on GitHub! Simple Lambda Python function to check internal service status.github.com/sufleio/aws-internal-healthcheck Now, it's time to create the API Gateway! I’ve created the API Gateway by following the wizard. Just choose REST as API type, name it, put some description in case of for a future reminder, and set endpoint type to Regional. Now, we will create a resource and a method (GET) in this resource. Proceed by setting a name to the resource and you can fill in the explanations below if you wish. After the resource is created, the Create Method is selected from the Actions section and the Method creation section is started. When you choose the GET method and proceed, the wizard screen will appear, which you will actually match the method with your Lambda function. The installation will be completed by choosing the region and function where the Lambda is located. You've come to the end! Following the completion of installations, you can deploy the application from the Actions section. Select [New Stage] on your first deployment and give it a name as I did, like “test” or “production”. For next deployments for the same stage, you should select the existing one that you want from the dropdown menu. Then deploy your healthchecker API! If you click on the stage name from the left sidebar, the url of the API will appear on “Invoke URL”. Copy and paste this url to your browser and add the internal (or external) site that you want to check at the end of the endpoint as a get parameter, like: https://<my-api-gateway-invoke-url>?url=<internal-application-url> To make sure that only health check requests are routed to your application, you can create a Web Application Firewall rule according to your health check endpoint by only allowing that and denying the rest of requests. You can also add an IP restriction to your allow rule if your uptime monitoring service is providing you list of IP addresses that health checks will come to your end: For more information, you can check following documentation: A cloud and platform engineer, Kerem, is dedicated to DevOps and cloud technologies. He is a technology enthusiast who is constantly eager to discover and learn about what technology has.
https://www.sufle.io/blog/how-to-check-the-status-of-internal-services
CC-MAIN-2022-40
refinedweb
780
64.2
HTML Templates In my previous essay I showed how to generate a web page using Python statement to print text to a file. It was rather messy because it mixed Python statements and HTML text; neither one nor the other. It's hard to edit and maintain because usually two languages don't mix well. The two languages here do different things. Python is used to make content and HTML is used present content. The standard solution is to separate the two almost entirely, most frequently as an HTML template. There are many different approaches. The one I prefer is through Zope's Page Template language. The template language is HTML with a few new attributes. Because it's HTML you can look at and edit the template file using standard HTML tools. The new attributes are commands for things like "insert text here" and "for each item in ...". The source of the data comes from Python code outside of the template. Zope is a large package for doing web-based application development in Python. Parts of Zope, like its page template language and its object database, are available outside of Zope. simpleTAL is another implementation of the the page template language for Python. I've found the two to be comparable and tend to use ZPT. The template can contain variable names and expressions based on the names. When ZPT evaluates the template it needs data for those variables. This is called the context and is a dictionary built by Python code. The variables can be string and numbers, containers like a list or dictionary, or more complex data structures. Page template expressions describe how to get to the needed data in the variable. For details you should read the documentation. Looking at the existing scatter plot creation script shows which data is needed: - The size of the scatter plot (width and height) - The size of the 2D depiction (width and height) - The location of each point (x and y, in Javascript image coordinates) - The name and depiction URL for each point The image sizes are numbers. To show how the template generation works I'll make a simple context with only those four elements: context = {"img_width": 30, "img_height": 35, "cmpd_width": 50, "cmpd_height": 75}and define a Python string containing a template which demonstrates how to use those values in different ways.> """While I define the template as a string here it's usually best to store the template in an HTML file and load it when needed. This lets you write the template using an HTML-aware editor and by people who don't need to know Python. This small template show a few features of the template language. From the top, the first is how to define the width and height attributes of the img tag. There can only be one tal:attributes term in an element so the ";" is used to join two definitions into one single string. The next two examples show the tal:replace command which removes the element with its contents and replaces it with the value of the expression. Even though they will be removed I put example numbers (the 123 and 456) in the element's content to make it easier to understand the template. The tal:content statement is similar to the tal:replace statement. It removes the content of the element but leaves the start and end tags unchanged. In this case I'm replacing the text with the result of a string expression. There are several ways to get data from the context; a path expression (the default), a string template, or even by evaluating a bit of Python code. The ZPT implementation of the Zope template language requires an unfortunate bit of complication, recommended by ZPT home page. It defines a new method to call the page template like a function. Strictly speaking it isn't needed because the underlying pt_render() method can be called directly. But I'll use it because that's what they suggest. import sys from ZopePageTemplates import PageTemplate # From the ZPT home page class PageTemplate(PageTemplate): def __call__(self, context={}, *args): if not context.has_key('args'): context['args'] = args return self.pt_render(extra_context=context) pt = PageTemplate() pt.write(template) sys.stdout.write(pt(context=context))I wrote the results of template evaluation to stdout. It could as easily been a file. Here's the output. <html><body> This image <a href="" width="30" height="35" /> has width 30 and height 35.<br /> The compound images will be <b>50x75</b>. </body></html> I chose to give each property its own unique name in the context. I didn't need to do that. I could have had one entry store the two scatter plot image properties and another store the two compound image properties. Here I'll use a dictionary but I could have used one of several approaches. context = {"img": {"width": 30, "height": 35}, "cmpd": {"width": 50, "height": 75}}The template must be changed to handle this new data structure. Instead of using just the name I need to use a path expression to get the correct value. For example, img/width refers to the width term of the img dictionary, and is 30.> """When evaluated, this produces output identical to the previous example. I'll stick with the old way of defining each image size with its own variable name. I showed that alternative to make it easier to understand how to deal with the scatter plot point data. Each point has a name, location, and URL. In my previous essay I stored the point properties in four lists, one for each property. In the Zope template language it's easier to iterate over one list so my example will take that approach. context = {"points": [ {"name": "A", "x": 1.2, "y": 2.3, "url": "imgs/A.gif"}, {"name": "B", "x": 9.8, "y": 8.7, "url": "imgs/B.gif"}, ]}As you can see, points refers to a list of dictionaries. The tal:repeat statement is the template equivalent of a for-loop. It repeats the given element once for each element in the list, and defines a new local variable name used to refer to the current element. Here's a template using the previously defined context: <a tal:</a> is at position <span tal: </li> </body></html> """and here's the output: <html><body> <ul> <li> <a href="imgs/A.gif">A</a> is at position 1.2, 2.3 </li> <li> <a href="imgs/B.gif">B</a> is at position 9.8, 8.7 </li> </ul></body></html> Only the tal:repeat command is new here so the rest should be pretty understandable. One word of caution. The ZPT parser is a stickler for balanced tags and if it doesn't find one it expects then it gives the opaque error message ZopePageTemplates.PTRuntimeError: Page Template (unknown) has errors.I got this message in the previous template because I forgot to close the li tag. While correctly required, most browsers don't need it so I forget to write it. If you get that error message there are a few solutions: - Delete parts of the template until it works again - Use an HTML validator like Tidy on the template, which might give some clues amidst all the warnings about unknown attributes starting with tal: - Use SimpleTAL, which does a better job of reporting which element was unclosed. Getting back to the goal, which is to modify the scatter plot HTML page generation to use a template. The template needs a context so I'll start with that. This code will replace the current template generation code at the end of the main() function. At this point the image sizes are available in the variables cmpd_{width,height} and img_{width,height}, the x coordinates are in the list x, y coordinates in y, compound identifiers in cids, and the URLs in the list imgnames. I'll turn the 4 parallel lists into a list of dictionaries, one dictionary per)], }This used the relatively new keyword constructor for dictionaries. Here's an example of it: >>> dict(x=1.23, y=2.34, cid="ABC00001", url="imgs/ABC00001.gif") {'url': 'imgs/ABC00001.gif', 'y': 2.3399999999999999, 'cid': 'ABC00001', 'x': 1.23} >>> Next is the template. It needs to insert the correct image attributes and the <AREA> elements for each point. I'll save this in the file named scatter_template.py. <HTML><HEAD> <TITLE>MW vs. XLogP</TITLE> </HEAD> <BODY> <SCRIPT> function mouseover(name) { var cid = document.getElementById("cid"); cid.innerHTML = name; } function show(filename) { var cmpd_img = document.getElementById("cmpd_img"); cmpd_img.src = filename; } </SCRIPT> Mouse is over: <SPAN id="cid"></SPAN><BR> Pick a point to see the depiction<BR> <IMG SRC="mw_v_xlogp.png" ismap <IMG ID="cmpd_img" tal: <MAP name="points"> <AREA shape="circle" tal: </MAP> </BODY> </HTML> The rest of the code is the mechanics of opening files and calling the page template. I've gone over the details already so I'll end with the newest version of the scatter plot generation code, with the new parts in bold. import subprocess, os from itertools import * from openeye.oechem import * from matplotlib.figure import Figure from matplotlib.patches import Polygon from matplotlib.backends.backend_agg import FigureCanvasAgg import matplotlib.numerix as nx from ZopePageTemplates import PageTemplate class PageTemplate(PageTemplate): def __call__(self, context={}, *args): if not context.has_key('args'): context['args'] = args return self.pt_render(extra_context=context) def make_gif(smiles, filename, width = 200, height = 200): p = subprocess.Popen([os.environ["OE_DIR"] + "/bin/mol2gif", "-width", str(width), "-height", str(height), "-gif", "-", filename], stdin = subprocess.PIPE, stderr = subprocess.PIPE) p.stdin.write(smiles + "\n") p.stdin.close() errmsg = p.stderr.read() errcode = p.wait() if errcode: raise AssertionError("Could not save %r as an image to %r:\n%s" % (smiles, filename, errmsg)) # True only for those molecules with an XLOGP field def has_xlogp(mol): return OEHasSDData(mol, "PUBCHEM_CACTVS_XLOGP") def get_data(mol):) ) return cid, float(weight), float(xlogp) def main(): filename = "/Users/dalke/databases/compounds_500001_510000.sdf.gz" ifs = oemolistream(filename) imgdir = "imgs" if not os.path.isdir(imgdir): os.mkdir(imgdir) # Width and height for each compound image, in pixels cmpd_width = cmpd_height = 320 cids = [] weights = [] xlogps = [] imgnames = [] # Get the first 100 compounds that have an XLogP field for mol in islice(ifilter(has_xlogp, ifs.GetOEGraphMols()), 0, 100): cid, weight, xlogp = get_data(mol) imgname = os.path.join(imgdir, "%s.gif" % (cid,)) make_gif(OECreateCanSmiString(mol), imgname, cmpd_width, cmpd_height) cids.append(cid) weights.append(weight) xlogps.append(xlogp) imgnames.append(imgname) fig = Figure(figsize=(4,4)) ax = fig.add_subplot(111) sc = ax.scatter(weights, xlogps) ax.set_xlabel("Atomic weight") ax.set_ylabel("CACTVS XLogP") # Make the PNG and get the scatter plot image size canvas = FigureCanvasAgg(fig) canvas.print_figure("mw_v_xlogp.png", dpi=80) img_width = fig.get_figwidth() * 80 img_height = fig.get_figheight() * 80 # Convert the data set points into screen space coordinates trans = sc.get_transform() xcoords, ycoords = trans.seq_x_y(weights, xlog)], } pt = PageTemplate() template = open("scatter_template.html").read() pt.write(template) f = open("mw_v_xlogp.html", "w") f.write(pt(context=context)) if __name__ == "__main__": OEThrow.SetLevel(OEErrorLevel_Error) main() There are many HTML template languages even when limited to those available for Python. Other popular ones include CherryTemplate, Quixote, Nevow, and I'll probably get emails from people reminding me of the dozen more that exist. All use different approachs to the same goal - simplfy making web pages. The differences are mostly in whose life is made simpler; the programmer, the web page designer, or someone who does both? I've found it best to make a clear distinction between content and presentation and have a template language which can be handled by normal HTML tools. That's why I prefer Zope's template language. You may have your own requirements which suggest using another system. Andrew Dalke is an independent consultant focusing on software development for computational chemistry and biology. Need contract programming, help, or training? Contact me
http://www.dalkescientific.com/writings/diary/archive/2005/04/24/html_templates.html
CC-MAIN-2016-40
refinedweb
1,989
67.15
Easily Create Fixture Data from Remote Services and Refresh Mock Data Oops Codeship and Laravel is here Easily Create Fixture Data from Remote Services and Refresh Mock Data We have integration tests that hit remote apis like Github, S3, DynomoDB, our own APIs etc and we do not want to hit those during out tests but we also want to make sure we have the real data. So when one of those APIs change then our mock data can be refreshed to see if our systems really work with it. Using Laravel's new integration tests, though this works with any framework, we will swap out these Service classes with a Wrapper class ONLY if we have a matching file. This allows us to delete those files and get another one on the fly. Lets start with the Controller This simple Controller will talk to a Repo. Imagine the repo talking to Dynamodb or GithubApi, database etc. <?php namespace App\Http\Controllers; use App\ExampleRepo; use Illuminate\Support\Facades\Response; class ExampleController extends Controller { public function mocking(ExampleRepo $exampleRepo) { return Response::json("You are here " . $exampleRepo->get()); } } So thanks to the dependency injection system; and the use of the Reflection Class, ExampleRepo get constructed as well. The Tests First lets look at a normal test no mock <"); } } Pretty simple. But not lets... Swap Things Out Here we add an example of replacing the default instance App would make with our own Wrapper <"); } public function testMocking() { $mock = m::mock('App\ExampleRepo'); $mock->shouldReceive('get')->once()->andReturn('bar'); App::instance('App\ExampleRepo', $mock); $this->get('/mocking')->see("You are here bar"); } } testMocking will now return bar! Making Fixtures on the Fly Same results BUT we hit my wrapper not the real services. But here is where I think it gets even better. I can return fixture data BUT at the same time I can not worry about returning stale fixture data eg the apis have changed but my fixtures have not. All of this without having my test code wrapped into the app code. This will look for the output of a route. That Controller and Repo we will show in a moment public function testMakeFixture() { $wrapper = App::make('App\ExampleRepoWrapper'); App::instance('App\ExampleRepo', $wrapper); $this->get('/mocking')->see("You are here foo"); } This test has a wrapper which extends the repo <?php namespace App; use Illuminate\Support\Facades\File; class ExampleRepoWrapper extends ExampleRepo { public function get() { if(File::exists(base_path('tests/fixtures/foo.json'))) { $content = File::get(base_path('tests/fixtures/foo.json')); return json_decode($content, true); } $results = parent::get(); if(!File::exists(base_path('tests/fixtures/foo.json'))) { $content = json_encode($results, JSON_PRETTY_PRINT); File::put(base_path('tests/fixtures/foo.json'), $content); } return $results; } } So now the Controller will talk to the Wrapper instead which will look for a file (NOTE: You can easily pass in $id or $name to make the fixtures unique) So now when the Controller hits our Wrapper it goes right to the real ExampleRepo (seen below) if there is no fixture file and then the Wrapper kicks in to make the file (as seen in the above class). <?php namespace App; class ExampleRepo { protected $results; public function get() { $this->results = 'foo'; return $this->results; } /** * @return mixed */ public function getResults() { return $this->results; } } That is it you can do integration testing on your APIs and not hit external services or even databases. Force Full Integration Sometimes you want to hit the external resources. This can be part of a weekly or daily test to make sure you app is working with all the external APIs. You can do this by deleting all the fixtures before running that test. So you can setup a provider like this class ExampleProvider extends ServiceProvider { public function register() { if(App::environment() == 'testing' and env('FULL_INTEGRATION') != 'true') { $this->app->bind('App\ExampleRepo', 'App\ExampleRepoWrapper'); } else { $this->app->bind('App\ExampleRepo', 'App\ExampleRepo'); } } } ** UPDATE ** Another good idea, by [Nathan Kirschbaum](), is to set the `FULL_INTEGRATION` setting by the user that is logged in. Cons One is UI testing. Prior to this I would make wrappers as needed to then take over if say APP_MOCK=true. Then I could mock even on Behat testing or the UI. But that meant a lot of Providers and alot of mixing of testing and code. But it worked and ran well on services like CodeShip and locally. If you Behat/Acceptance tests are hitting the API or UI it would be nice to fake all the external responses. Though now with the above the API testing is easy. The UI (when there is javascript) not so easy :( Since we are using App::instance we did not need to register a Provider Class. But to make the UI con a non issue you can go that far to register a ServiceProvider class ExampleProvider extends ServiceProvider { public function register() { if(App::environment() == 'testing' { $this->app->bind('App\ExampleRepo', 'App\ExampleRepoWrapper'); } else { $this->app->bind('App\ExampleRepo', 'App\ExampleRepo'); } } } Then register as normal in your config/app.php . This can be kinda tedious but would produce the same results. Great book on the topic Laravel Testing Decoded
https://alfrednutile.info/posts/145
CC-MAIN-2018-47
refinedweb
854
51.89
Hi All, I have a query regarding Adaptive Forms in AEM. I have created an AF with some fields(text box, date picker, drop-down and file attachment). I tried to submit the form without attaching any file, then it worked as per me. But when there is some file attachment it throws '500 error' as attached. Please help me to resolve this. Regards, Lakshmi. Hi experience-manager-forum I suspect that the submit action on your form is the default (before installing forms add-on) of "Store Content"? If that is correct: The store content option is just for testing see the note at the top of this page, but perhaps it doesn't work with attachments. Store content isn't recommended for production so I would suggest trying this by POSTing to a REST endpoint. I have done this successfully on AEM 6.3 forms: Adobe Experience Manager Help | Configuring the Submit action If my assumption isn't correct then let me know which submit action you are using Thanks, James Hi James, I have tried with the submit action "Submit to REST endpoint" and ends up with another error says: .......26.02.2018 02:05:05.239 *ERROR* [0:0:0:0:0:0:0:1 [1519628705201] POST /content/forms/af/sample-form/jcr:content/guideContainer.af.submit.jsp HTTP/1.1] com.adobe.aemds.guide.service.impl.RESTSubmitActionService Couldn't post data to 26.02.2018 02:05:05.291 *ERROR* [0:0:0:0:0:0:0:1 [1519628705201] POST /content/forms/af/sample-form/jcr:content/guideContainer.af.submit.jsp HTTP/1.1] com.adobe.aemds.guide.servlet.GuideSubmitServlet Could not complete Submit Action due to Error during form submission com.adobe.aemds.guide.service.GuideException: Error during form submission.......... I went through the link: [AF] [AEM-Af-901-004] error , it didn't helped me much. What all things should I fill in the AF dialog box? If there is any prerequisites , could you please suggest. Regards, Lakshmi. Views Replies Total Likes Hi experience-manager-forum An example of submit to REST endpoint below. Create a new servlet and update the package name: package com.<name of your package>; import org.apache.sling.api.SlingHttpServletRequest; import org.apache.sling.api.SlingHttpServletResponse; import org.apache.sling.api.servlets.SlingAllMethodsServlet; import org.osgi.service.component.annotations.Component; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import javax.servlet.Servlet; import javax.servlet.ServletException; import java.io.IOException; @Component( service = Servlet.class, property = { "sling.servlet.paths=/bin/submitServlet", "sling.auth.requirements=-/bin/submitServlet" } ) public class SubmitServlet extends SlingAllMethodsServlet { private final Logger logger = LoggerFactory.getLogger(this.getClass()); @Override protected void doPost(SlingHttpServletRequest request, SlingHttpServletResponse response) throws IOException, ServletException { String dataXML = request.getParameter("dataXml"); logger.info("Data submitted " + dataXML); } } Set the properties on your adaptive form container as follows, make sure Enable Post is ticked, and make sure the servlet path matches what you defined in your servlet. Hit the submit button and you should see that the "dataXml" parameter contains your submitted form data and the "attachments" parameter contains your attached file: If this doesn't work, do you have the aem forms add on packages installed? You should install that and it's dependencies via package share if not. experience-manager-forum : If you can tell the version of AEM & AEM Forms which you are using, we can help you better, I suppose. To start with, it may be that one of the Forms Servlet might be missing/unsatisfied, due to which the call is being forwarded to default SlingPostServlet which is unable to handle the request format. You can also post the request headers for the failing call. Cheers, Jagjeet Singh Hi Jagjeeth, I'm using AEM 6.3 version. And my adaptive form works on the submit action "Store Content" without file attachment. Regards, Lakshmi. Views Replies Total Likes Hi All, We are also facing the same issue after upgrade from 6.1 to 6.3. Kindly let me know if you have any other solution apart from creating any service. Thanks, Swapna Views Replies Total Likes What is the submit action configured on the form? What is the result of submission with and without any attachment? Please attach the error log snippet as well. Views Replies Total Likes Hi Mayank, Issue is fixed after installing cumulative fix (AEM-CFP-6.3.2.2-2.0.zip) and adaptive form package (adobe-aemfd-linux-pkg-4.1.50.zip) in publish instance. There is an issue while trying to hit from dispatcher which currently we are looking into. Thanks, Swapna Views Replies Total Likes Was this issue resolved in dispatcher? We are having the same issue, getting [AF] [AEM-AF-901-004]: Encountered an internal error while submitting the form. Views Replies Total Likes
https://experienceleaguecommunities.adobe.com/t5/adobe-experience-manager-forms/adaptive-form-file-attachment-component-leads-to-500-error-on/td-p/279827
CC-MAIN-2022-33
refinedweb
800
50.84
-receive-pack <directory> Invoked by git send-pack and updates the repository with the information fed from the remote end. This command is usually not invoked directly by the end user. The UI for the protocol is on the git send-pack side, and the program pair is meant to be used to push updates to remote repository. For pull operations, see git-fetch-pack(1). The command allows for creation and fast-forwarding of sha1 refs (heads/tags) on the remote end (strictly speaking, it is the local end git-receive-pack runs, but to the user who is sitting at the send-pack end, it is updating the remote. Confused?) There are other real-world examples of using update and post-update hooks found in the Documentation/howto directory. git-receive-pack honours the receive.denyNonFastForwards config option, which tells it if updates to a ref should be denied if they are not fast-forwards. <directory> Before any ref is updated, if $GIT_DIR/hooks/pre-receive file exists and is executable, it will be invoked once with no parameters. The standard input of the hook will be one line per ref to be updated: sha1-old SP sha1-new SP refname LF The refname value is relative to $GIT_DIR; e.g. for the master head this is "refs/heads/master". The two sha1 values before each refname are the object names for the refname before and after the update. Refs to be created will have sha1-old equal to 0{40}, while refs to be deleted will have sha1-new equal to 0{40}, otherwise sha1-old and sha1-new should be valid objects in the repository. When accepting a signed push (see git-push(1)), the signed push certificate is stored in a blob and an environment variable GIT_PUSH_CERT can be consulted for its object name. See the description of post-receive hook for an example. In addition, the certificate is verified using GPG and the result is exported with the following environment variables: GIT_PUSH_CERT_SIGNER GIT_PUSH_CERT_KEY GIT_PUSH_CERT_STATUS GIT_PUSH_CERT_NONCE GIT_PUSH_CERT_NONCE_STATUS UNSOLICITED MISSING BAD OK SLOP GIT_PUSH_CERT_NONCE_SLOP This hook is called before any refname is updated and before any fast-forward checks are performed. If the pre-receive hook exits with a non-zero exit status no updates will be performed, and the update, post-receive and post-update hooks will not be invoked either. This can be useful to quickly bail out if the update is not to be supported. Before each ref is updated, if $GIT_DIR/hooks/update file exists and is executable, it is invoked once per ref, with three parameters: $GIT_DIR/hooks/update refname sha1-old sha1-new The refname parameter is relative to $GIT_DIR; e.g. for the master head this is "refs/heads/master". The two sha1 arguments. The hook should exit with non-zero status if it wants to disallow updating the named ref. Otherwise it should exit with zero. Successful execution (a zero exit status) of this hook does not ensure the ref will actually be updated, it is only a prerequisite. As such it is not a good idea to send notices (e.g. email) from this hook. Consider using the post-receive hook instead. After all refs were updated (or attempted to be updated), if any ref update was successful, and if $GIT_DIR/hooks/post-receive file exists and is executable, it will be invoked once with no parameters. The standard input of the hook will be one line for each successfully updated ref: The refname value is relative to $GIT_DIR; e.g. for the master head this is "refs/heads/master". The two sha1 values before each refname are the object names for the refname before and after the update. Refs that were created will have sha1-old equal to 0{40}, while refs that were deleted will have sha1-new equal to 0{40}, otherwise sha1-old and sha1-new should be valid objects in the repository. The GIT_PUSH_CERT* environment variables can be inspected, just as in pre-receive hook, after accepting a signed push. Using this hook, it is easy to generate mails describing the updates to the repository. This example script sends one mail message per ref listing the commits pushed to the repository, and logs the push certificates of signed pushes with good signatures to a logger service: #!/bin/sh # mail out commit update information. while read oval nval ref do if expr "$oval" : '0*$' >/dev/null then echo "Created a new ref, with the following commits:" git rev-list --pretty "$nval" else echo "New commits:" git rev-list --pretty "$nval" "^$oval" fi | mail -s "Changes to ref $ref" commit-list@mydomain done # log signed push certificate, if any if test -n "${GIT_PUSH_CERT-}" && test ${GIT_PUSH_CERT_STATUS} = G then ( echo expected nonce is ${GIT_PUSH_NONCE} git cat-file blob ${GIT_PUSH_CERT} ) | mail -s "push certificate from $GIT_PUSH_CERT_SIGNER" push-log@mydomain fi exit 0 The exit code from this hook invocation is ignored, however a non-zero exit code will generate an error message. Note that it is possible for refname to not have sha1-new when this hook runs. This can easily occur if another user modifies the ref after it was updated by git-receive-pack, but before the hook was able to evaluate it. It is recommended that hooks rely on sha1-new rather than the current value of refname. After all other processing, if at least one ref was updated, and if $GIT_DIR/hooks/post-update file exists and is executable, then post-update will be called with the list of refs that have been updated. This can be used to implement any repository wide cleanup tasks. The exit code from this hook invocation is ignored; the only thing left for git-receive-pack to do at that point is to exit itself anyway. This hook can be used, for example, to run git update-server-info if the repository is packed and is served via a dumb transport. #!/bin/sh exec git update-server-info git-send-pack(1), gitnamespaces(7) Part of the git(1) suite
http://linuxhowtos.org/manpages/1/git-receive-pack.htm
CC-MAIN-2017-34
refinedweb
1,013
59.23
Hi Community, I have trained a Custom Trained Pytorch Mask-RCNN network which takes image as an input and gives outputs the bounding box, masks with class and class labels. I have used Mask-RCNN model directly for the torchvision v0.4.0. The training and data preprocessing code is similar to and to get mask-rcnn model I have just used from torchvision.models.detection import MaskRCNN with no changes and trained it for 2 classes. I tried to test this trained model on Jetson Nano without any use of ONNX/ DeepStream/TensorRT conversion and the swap memory(4GB) and the main memory(4GB) got filled up just while loading the model. The model weights .pth file is of size 241MB(just for ref). I just installed the pytorch 1.2.0 and torchvision 0.4.0 as per the ref: PyTorch for Jetson - version 1.6.0 now available Can someone tell me what I should do or how I should optimize this model to run on Jetson Nano. Is it possible to run this model in DeepStream or TensorRT? How can I convert the model to run in such system? I’m new on this hardware, so in need of some guidance. Thanks in advance. Regards,
https://forums.developer.nvidia.com/t/re-trained-pytorch-mask-rcnn-inferencing-in-jetson-nano/154461
CC-MAIN-2020-50
refinedweb
209
75.5
13-1. How do I secure my server? This question is asked by administrators, and I'm sure no hackers will read this info and learn what you admins might do to thwart hack attacks ;-) One thing to keep in mind, most compromises of data occur from an employee of the company, not an outside element. They may wish to access sensitive personnel files, copy and sell company secrets, be disgruntled and wish to cause harm, or break in for kicks or bragging rights. So trust no one. Physically Secure The Server - ------------------------------ This is the simplest one. Keep the server under lock and key. If the server is at a site where there is a data center (mainframes, midranges, etc) put it in the same room and treat it like the big boxes. Access to the server's room should be controlled minimally by key access, preferably by some type of key card access which can be tracked. In large shops, a man trap (humanoid that guards the room) should be in place. If the server has a door with a lock, lock it (some larger servers have this) and limit access to the key. This will secure the floppy drive. One paranoid site I know of keeps the monitor and CPU behind glass, so that the keyboard and floppy drive cannot be accessed by the same person at the same time. If you only load NLMs from the SYS:SYSTEM directory, use the SECURE CONSOLE command to prevent NLMs being loaded from the floppy or other location. A hacker could load a floppy into the drive and run one of several utility files to gain access to the server. Or they could steal a backup tape or just power off the server! By physically securing the server, you can control who has access to the server room, who has access to the floppy drive, backup tapes, and the System Console. This step alone will eliminate 75% of attack potential. Secure Important Files - ------------------------ These should be stored offline. You should make copies of the STARTUP.NCF and AUTOEXEC.NCF files. The bindery or NDS files should be backed up and stored offsite. All System Login Scripts, Container Scripts, and any robotic or non-human personal Login Scripts should be copied offline. A robotic or non-human account would be an account used by an email gateway, backup machine, etc. Compile a list of NLMs and their version numbers, and a list of files from the SYS:LOGIN, SYS:PUBLIC, and SYS:SYSTEM directories. You should periodically check these files against the originals to ensure none have been altered. Replacing the files with different ones (like using itsme's LOGIN.EXE instead of Novell's) will give the hacker access to the entire server. It is also possible that the hacker will alter .NCF or Login Scripts to bypass security or to open holes for later attacks. Make a list of Users and their accesses - ----------------------------------------- Use a tool like Bindview or GRPLIST.EXE from the JRB Utilities to get a list of users and groups (including group membership). Once again, keep this updated and check it frequently against the actual list. Also run Security (from the SYS:SYSTEM directory) or GETEQUIV.EXE from the JRB Utilities to determine who has Supervisor access. Look for odd accounts with Supervisor access like GUEST or PRINTER. It is also a good idea to look at Trustee Assignments and make sure access is at a minimum. Check your run from Security to see if access is too great in any areas, or run TRSTLIST from the JRB Utilities. Security will turn up some odd errors if SUPER.EXE has been run. If you are not using SUPER.EXE, delete and rebuild any odd accounts with odd errors related to the Bindery, particularly if BINDFIX doesn't fix them yet the account seems to work okay. If a hacker put in a backdoor using SUPER.EXE, they could get in and perhaps leave other ways in. Monitor the Console - --------------------- Use the CONLOG.NLM to track the server console activity. This is an excellent diagnostic tool since error messages tend to roll off the screen. It will not track what was typed in at the console, but the system's responses will be put in SYS:ETC\CONSOLE.LOG. When checking the console, hit the up arrow to show what commands were last typed in. While this won't work in large shops or shops with forgetful users, consider using the SECUREFX.NLM (or SECUREFX.VAP for 2.x). This sometimes annoying utility displays the following message on the console and to all the users after a security breach: "Security breach against station DETECTED." This will also be written to an error log. The following message is also written the the log and to the console: "Connection TERMINATED to prevent security compromise" Turn on Accounting - -------------------- Once Accounting is turned on, you can track every login and logout to the server, including failed attempts. Don't Use the Supervisor Account - ---------------------------------- Leaving the Supervisor logged in is an invitation to disaster. If packet signature is not being used, someone could use HACK.EXE and gain access to the server as Supervisor. HACK spoofs packets to make them look like they came from the Supervisor to add Supe equivalence to other users. Also, it implies a machine is logged in somewhere as Supervisor, if it has been logged in for more than 8 hours chances are it may be unattended. Use Packet Signature - ---------------------- To prevent packet spoofing (i.e. HACK.EXE) enforce packet signature. Add the following line to your AUTOEXEC.NCF - SET NCP PACKET SIGNATURE OPTION=3 This forces packet signature to be used. Clients that do not support packet signature will not be able to access, so they will need to be upgraded if you have any of these clients. Use RCONSOLE Sparingly (or not at all) - ---------------------------------------- When using RCONSOLE you are subject to a packet sniffer getting the packets and getting the password. While this is normally above the average user's expertise, DOS-based programs that put the network interface card into promiscuous mode and capture every packet on the wire are readily available on the Internet. The encryption method is not foolproof. Remember you cannot "detect" a sniffer in use on the wire. Do NOT use a switch to limit the RCONSOLE password to just the Supervisor password. All you have done is set the password equal to the switch. If you use the line "LOAD REMOTE /P=", Supervisor's password will get in (it ALWAYS does) and the RCONSOLE password is now "/P=". Since the RCONSOLE password will be in plain text in the AUTOEXEC.NCF file, to help secure it try adding a non-printing character or a space to the end of the password. And while you can use the encryption techniques outlined in 02-8, your server is still vulnerable to sniffing the password. Move all .NCF files to a more secure location (3.x and above) - --------------------------------------------------------------- Put your AUTOEXEC.NCF file in the same location as the SERVER.EXE file. If a server is compromised in that access to the SYS:SYSTEM directory is available to an unauthorized user, you will at least have protected the AUTOEXEC.NCF file. A simple trick you can do is "bait" a potential hacker by keeping a false AUTOEXEC.NCF file in the SYS:SYSTEM with a false RCONSOLE password (among other things). All other .NCF files should be moved to the C: drive as well. Remember, the .NCF file runs as if the commands it contains are typed from the console, making their security most important. Use the Lock File Server Console option in Monitor (3.x and above) - -------------------------------------------------------------------- Even if the RCONSOLE password is discovered, the Supe password is discovered, or physical access is gained, a hard to guess password on the console will stop someone from accessing the console. Add EXIT to the end of the System Login Script - ------------------------------------------------ By adding the EXIT command as the last line in the System Login Script, you can control to a degree what the user is doing. This eliminates the potential for personal Login Script attacks, as described in section 03-6. Upgrade to Netware 4.11 - ------------------------- Besides making a ton of Novell sales and marketing people very happy, you will defeat most of the techniques described in this faq. Most well-known hacks are for 3.11. If you don't want to make the leap to NDS and 4.x, at least get current and go to 3.12. Check the location of RCONSOLE.EXE - ------------------------------------ In 3.11, RCONSOLE.EXE is located in SYS:SYSTEM by default. In 3.12 and 4.1 it is in SYS:SYSTEM and SYS:PUBLIC. You may wish to remove RCONSOLE.EXE from SYS:PUBLIC, as by default everyone will have access to it. Remove [Public] from [Root] in 4.1's NDS - ------------------------------------------ Get the [Public] Trustee out of the [Root] object's list of Trustees. Anyone, even those not logged in, can see virtually all objects in the tree, giving an intruder a complete list of valid account names to try. Don't use Novell's FTP NLM until it is fixed, or use NFS namespace - -------------------------------------------------------------------- Since Novell's FTP NLM has some problems, only use the FTP NLM if you can use NFS namespace. For the extra paranoid, use a third party NLM. Novell's is the only one I've found with this problem. -------------------------------------------------------------------------------- 13-2. I'm an idiot. Exactly how do hackers get in? We will use this section as an illustrated example of how these techniques can be used in concert to gain Supe access on the target server. These techniques show the other thing that really helps in Netware hacking - a little social engineering. Exploitation #1 --------------- Assume tech support people are dialing in for after hours support. Call up and pose as a vendor of security products and ask for tech support person. Called this person posing as a local company looking for references, ask about remote dial-in products. Call operator of company and ask for help desk number. Call help desk after hours and ask for dial-in number, posing as the tech support person. Explain home machine has crashed and you've lost number. Dial in using the proper remote software and try simple logins and passwords for dial-in software if required. If you can't get in call help desk especially if others such as end users use dial-in. Upload alternate LOGIN.EXE and PROP.EXE, and edit AUTOEXEC.BAT to run the alternate LOGIN.EXE locally. Rename PROP.EXE to IBMNBIO.COM and make it hidden. Before editing AUTOEXEC.BAT change the date and time of the PC so that the date/time stamp reflects the original before the edit. Dial back in later, rename PROP.EXE and run it to get Accounts and passwords. Summary - Any keystroke capture program could produce the same results as the alternate LOGIN.EXE and PROP.EXE, but you end up with a Supe equivalent account. Exploitation #2 --------------- Load a DOS-based packet sniffer, call the sys admin and report a FATAL DIRECTORY ERROR when trying to access the server. He predictively will use RCONSOLE to look at the server and his packet conversation can be captured. He will find nothing wrong (of course). Study the capture and use the RCON.FAQ to obtain the RCONSOLE password. Log in as GUEST, create a SYSTEM subdirectory in the home directory (or any directory on SYS:). Root map a drive to the new SYSTEM, copy RCONSOLE.* to it, and run RCONSOLE. Once in try to unload CONLOG and upload BURGLAR.NLM to the real SYS:SYSTEM. Created a Supe user (i.e. NEWUSER) and then typed CLS to clear the server console screen. Log in as NEWUSER. Erase BURGLAR.NLM, new SYSTEM directory and its contents. Run PURGE in those directories. Turn off Accounting if on. Give GUEST Supe rights. Set toggle with SUPER.EXE for NEWUSER. Run FILER and note SYS:ETC\CONSOLE.LOG (if CONLOG was loaded) owner and create date, as well as SYS:SYSTEM\SYS$ERR.LOG owner and create date. Edit SYS:ETC\CONSOLE.LOG and remove BURGLAR.NLM activity, including RCONSOLE activity. Edit and remove RCONSOLE activity from SYS:SYSTEM\SYS$ERR.LOG as well. After saving files, run FILER and restore owner and dates if needed. Run PURGE in their directories. Logout and login as GUEST and set SUPER.EXE toggle. Remove NEWUSER Supe rights and logout. Login as NEWUSER with SUPER.EXE and remove GUEST Supe rights. Finally logout and login as GUEST with SUPER.EXE and turn on Accounting if it was on. Summary - You have created a backdoor into the system that will not show up as somthing unusual in the Accounting log. Login as GUEST using SUPER.EXE and turn off Accounting. Logout and back in as NEWUSER with SUPER.EXE, do what you need to do (covering file alterations with Filer), and logout. Log back in as GUEST and turn on Accounting. The NET$ACCT.DAT file shows only GUEST logging in followed by GUEST logging out. Exploitation #3 --------------- You do a web search and find DSMAINT.NLM on a Novell web site. You download it. Using Fetch, you access's Novell InterNetware server. You discover that you can gain access to everything on the SYS volume. You upload DSMAINT.NLM to SYS:SYSTEM, and upload an edited LDREMOTE.NCF file. This NCF file unloads CONLOG.NLM, unloads and reloads the REMOTE.NLM with a password of YOUR choosing, and loads XCONSOLE.NLM. After a reboot of the server (which you assisted with a SYN flood to overload buffers), the remote console password has been reset to one of your choosing. Telnet to and use your password. If your machine supports X Windows, you could choose that, but you use VT-100 instead since it creates less network traffic. You load DSMAINT.NLM and choose the Prepare for upgrade option. It looks nasty because of the VT-100 representation of the color screen, but in a few minutes, it is complete. An offshoot of the DSMAINT process is the creation of BACKUP.DS in the SYS:SYSTEM directory. Within a few minutes, Fetch is used to retrieve BACKUP.DS. This file contains all of the account names and passwords. The passwords are in encrypted form, but this is enough to log in. So you start writing an exploit to do just that, plus a brute force attack to get ALL of the passwords, including Admin's.... --------------------------------------------------------------------------------
http://www.antionline.com/printthread.php?t=236532&pp=10&page=1
CC-MAIN-2018-17
refinedweb
2,450
67.76
NAME SSL_pending, SSL_has_pending - check for readable bytes buffered in an SSL object SYNOPSIS #include <openssl/ssl.h> int SSL_pending(const SSL *ssl); int SSL_has_pending(const SSL *s); DESCRIPTION Data is received in whole blocks known as records from the peer. A whole record is processed (e.g. decrypted) in one go and is buffered by OpenSSL until it is read by the application via a call to SSL_read_ex(3) or SSL_read(3).). RETURN VALUES SSL_pending() returns the number of buffered and processed application data bytes that are pending and are available for immediate read. SSL_has_pending() returns 1 if there is buffered record data in the SSL object and 0 otherwise. SEE ALSO SSL_read_ex(3), SSL_read(3), SSL_CTX_set_read_ahead(3), SSL_CTX_set_split_send_fragment(3), ssl(7) HISTORY The SSL_has_pending() function was added in OpenSSL 1.1.0. Licensed under the Apache License 2.0 (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at.
https://www.openssl.org/docs/manmaster/man3/SSL_pending.html
CC-MAIN-2019-22
refinedweb
169
57.27
spellsDispelAoE(object, object, int) Handles dispel magic of area of effect spells. void spellsDispelAoE( object oTargetAoE, object oCaster, int nCasterLevel ); Parameters oTargetAoE The area of effect object you wish to examine. oCaster The object that cast the spell. nCasterLevel oCaster's spell class level to use. Description Handles dispel magic of area of effect spells. Before adding this AoE's got automatically destroyed. Since NWN does not give the required information to do proper dispelling on AoEs, we do some simulated stuff here: - Base chances to dispel are 25 (Lesser Dispel), 50 (Dispel Magic), 75 (Greater Dispelling) and 100% (Mordenkainen's Disjunction). - Chance is modified positive by the caster level of the spellcaster as well. - As the relevant ability score. - Chance is modified negative by the highest spellcasting class level of the AoE creator and the relevant ability score. It is bad, but it is not worse than just dispelling the AoE as the game did until now. Requirements #include "x0_i0_spells" Version ??? See Also author: Mistress, editor: Kolyana
http://palmergames.com/Lexicon/Lexicon_1_69/function.spellsDispelAoE.html
CC-MAIN-2016-40
refinedweb
168
58.69
Synchronising eventlets and threads Eventlet is an asynchronous network I/O framework which combines an event loop with greenlet-based coroutines to provide a familiar blocking-like API to the developer. One of the reasons I like eventlet a lot is that the technology it builds on allows it's event loop to run inside a thread or even run multiple event loops in different threads. This makes it a lot more amenable to slowly evolving existing applications to be more asynchronous then solutions like e.g. gevent which only allow one event loop per process. Eventlet isn't the most mature of tools however and it's API shows signs of being developed as needs arose. Not that this is necessarily a bad thing, APIs do need to grow from being used, but don't be surprised if you need to dig down into some parts and discover rough edges (hi IPv6!). The API does have a decent collection of tools you'll be familiar with however: greenlet-local storage, semaphores, events (though not quite the event you're used too) and even some extra goodies like pools, WSGI servers, DBAPI2 connection pools and ZeroMQ support. But you'll notice that all these goodies are designed to work in a greenlet-only world. And the one place where threads are acknowledged, a global threadpool of workers as a last resort to make things behave asynchronous, looks very messy and entirely not reusable (it's full of module globals for one). So if you're introducing eventlet into an existing application and you need to communicate data and events between threads and eventlets you'll find a void.() method: it needs to block until notified. But blocking is significantly different when you're running in a thread then when running in an eventlet. The basics are that in a thread you want to really block using the locking primitives provided by the OS (exposed to Python in the. You could call these directly obviously, but as you can see abstracting them away is not that hard. You can easily detect if you're in an eventlet by checking if the hub (which is thread-local) is actually running. The price you pay for this is that this will create a hub instance in each thread, even if it is not used. But the worst this does is waste some memory. (This could be avoided by checking for the list, which is a thread-safe operation. Now there is another catch, remember that the hub is basically an event loop? Well if no events happen then it will not be going round it loop! And appening something to a list is not creating an event. So what you need to do is use them: def gwait(self, timeout=None): def twait(self, timeout=None): None in the set of waiters. Now lets have a look at what notify looks like. We've already discussed what needs to happen here: to notify an eventlet we need schedule a call with it's hub to switch to it (no point special casing when we're already running in that same hub). In case we notified it from a different thread we also need to signal the hub using the pipe set up by .wait() so it will actually start to go round it's loop and execute this just scheduled call. Notifying a thread is even easier: just unlock the lock it's trying to acquire. def notify(self):)() left. This should now be obvious: look up the hub in the hubcache and write some data to the writing end of the pipe. The only gotcha here is that while we might be called from another thread, this does not mean the calling thread itself can't be part of an eventlet mainloop. So in that case make sure not to do a blocking write (as unlikely as that might be). Notice here how in the async case we remove the listener which was used to wake the hub up right from the callback itself. If we didn't do this then the hub would most likely find the writing end of the pipe writable again on the next loop which would trigger another notification of the other hub, not what we want! def _kick_hub(self,') That's cute, but what now? We've now got a great way of notifying other threads and eventlets at will. But this is an entirely non-standard tool! Using this is strange, unfamiliar and unwieldy. This isn't one of the synchronisation primitives we know and wanted to class Lock(object): def __init__(self, hubcache=GLOBAL_HUBCACHE): self.hubcache = hubcache self._notif = Notifier(hubcache) self._lock = threading.Lock() self.owner = None def acquire(self, blocking=True, timeout=None): gotit = self._lock.acquire(False) if gotit or not blocking: if gotit: self.owner = eventlet.getcurrent() return gotit if timeout is None: while not gotit: self._notif.wait() gotit = self._lock.acquire(False) if gotit: self.owner = eventlet.getcurrent() return True else: if timeout < 0: raise RuntimeError('timeout must be greater or equal then 0') now = time.time() end = now + timeout while not gotit and (now < end): self._notif.wait(end - now) gotit = self._lock.acquire(False) now = time.time() if gotit: self.owner = eventlet.getcurrent() return gotit __enter__ = acquire def release(self): self._lock.release() self.owner = None self._notif.notify() def __exit__(self, exc_type, exc_value, traceback): self.release() def __repr__(self): return ('<gsync.Lock object at 0x%x (%r, %r)>' % (id(self), self.owner, self._not. class Condition(object): def __init__(self, lock=None, hubcache=GLOBAL_HUBCACHE): if lock is None: self._lock = Lock(hubcache=hubcache) else: self._lock = lock self._notif = Notifier(hubcache=hubcache) # Export the lock methods self.acquire = self._lock.acquire self.release = self._lock.release self.__enter__ = self._lock.__enter__ self.__exit__ = self._lock.__exit__ def __repr__(self): return '<gsync.Condition (%r, %r)>' % (self._lock, self._notif) def wait(self, timeout=None): if self._lock.owner is not eventlet.getcurrent(): raise RuntimeError('Can not wait on un-acquired lock') self._lock.release() try: self._notif.wait(timeout) finally: self._lock.acquire() def notify(self): if self._lock.owner is not eventlet.getcurrent(): raise RuntimeError('Can not notify on un-acquired lock') self._notif.notify() def notify_all(self): if self._lock.owner is not eventlet.getcurrent(): raise RuntimeError('Can not notify on un-acquired lock') self._notif.notify_all() Now once we have a condition we can finally get to the real prise: a queue to move data freely between threads and eventlets. Great, only had to provide a new import Queue as queue class BaseQueue(object): def __init__(self, maxsize=0, hubcache=GLOBAL_HUBCACHE): self.hubcache = hubcache self.maxsize = maxsize self._init(maxsize) self.mutex = Lock(hubcache=hubcache) self.not_empty = Condition(self.mutex, hubcache=hubcache) self.not_full = Condition(self.mutex, hubcache=hubcache) self.all_tasks_done = Condition(self.mutex, hubcache=hubcache) self.unfinished_tasks = 0 class Queue(BaseQueue, queue.Queue): pass ._(), it checked if the hub was running to detect whether it was being called from inside an eventlet or not. So if you didn't manage to start the hub before calling this method the whole thread will block! Hence the minor hack to start the hub manually beforehand. All the code Here all the code for the notifier in one piece, including the docstrings and comments. This also includes the global hubcache with a tiny bit of extra magic to be able to clear the cache if you so desire. Having this as a parameter to pass in allows you to use different hub caches if you have a reason to do so. import os import thread import threading import time import eventlet class HubCache(dict): """Cache used by Notifier instances This is a dict-subclass to overwrite the .clear() method. It's keys are hubs and values are (rfd, wfd, listener). Using this means you can clear the cache in a way which will unregister the listeners from the hubs and close all filedescriptors. XXX This is hugely incomplete, only remove items from this cache using the .clear() method as the other ways of removing items will not release resources properly. """ def clear(self): while self: hub, (rfd, wfd, listener) = self.popitem() hub.remove(listener) os.close(rfd) os.close(wfd) def __del__(self): self.clear() """The global hubcache This is the default hubcache used by Notifier instances. """ GLOBAL_HUBCACHE = HubCache() class Notifier(object): """Notify one or more waiters This is essentially a condition without the lock. It can be used to signal between threads and greenlets at will. """ # This doesn't use eventlet.hubs.trampoline since that results in # a filedescriptor per waiting greenlet. Instead each eventlet # that calls .gwait() will ensure there's a filedescriptor # registered for reading for with it's hub. This filedescriptor # is then only used when another thread wants to wake up the hub # in order for a notification to be delivered to the eventlet. def __init__(self, hubcache=GLOBAL_HUBCACHE): """Initialise the notifier The hubcache is a dictionary which will keep pipes used by the notifier so that only ever one pipe gets created per hub. The default is to share this hubcache globally so all notifiers use the same pipes for intra-hub communication. """ # Each item in this set is a tuple of (waiter, hub). For an # eventlet the waiter is the greenlet while for a thread it is # a lock. For a thread the hub item is always None. self._waiters = set() self.hubcache = hubcache def wait(self, timeout=None): """Wait from a thread or eventlet This blocks the current thread/eventlet until it gets woken up by a call to .notify() or .notify_all(). This will automatically dispatch to .gwait() or .twait() as needed so that the blocking will be cooperative for greenlets. Returns True if this thread/eventlet was notified and False when a timeout occurred. """ hub = eventlet.hubs.get_hub() if hub.running: self.gwait(timeout) else: self.twait(timeout) def gwait(self, timeout=None): """Wait from an eventlet This cooperatively blocks the current eventlet by switching to the hub. The hub will switch back to this eventlet when it gets notified. Usually you can just call .wait() which will dispatch to this method if you are in an eventlet. Returns True if this thread/eventlet was notified and False when a timeout occurred. """ def twait(self, timeout=None): """Wait from an thread This blocks the current thread by using a conventional lock. Usually you can just call .wait() which will dispatch to this method if you are in an eventlet. Returns True if this thread/eventlet was notified and False when a timeout occurred. """ def notify(self): """Notify one waiter This will notify one waiter, regardless of whether it is a thread or eventlet, resulting in the waiter returning from it's .wait() call. This will never block itself so can be called from either a thread or eventlet itself and will wake up the hub of another thread if an eventlet from it is notified. """) def notify_all(self): """Notify all waiters Similar to .notify() but will notify all waiters instead of just one. """ for i in xrange(len(self._waiters)): self.notify() def _create_pipe(self, hub): """Create a pipe for a hub This creates a pipe (read and write fd) and registers it with the hub so that ._kick_hub() can use this to signal the hub. This keeps a cache of hubs on ``self.hubcache`` so that only one pipe is created per hub. Furthermore this dict is never cleared implicitly to avoid creating new sockets all the time. This method is always called from .gwait() and therefore can only run once for a given hub at the same time. Thus it is threadsave. """ if hub in self.hubcache: return def read_callback(fd): # This just reads the (bogus) data just written to empty # the os queues. The only purpose was to kick the hub # round it's loop which is now has. The notif function # scheduled by .notify() will now do it's work. os.read(fd, 512) rfd, wfd = os.pipe() listener = hub.add(eventlet.hubs.hub.READ, rfd, read_callback) self.hubcache[hub] = (rfd, wfd, listener) def _kick_hub(self, hub): """Kick the hub around it's loop Threads need to be able to kick a hub around their loop by interrupting the sleep. This is done with the help of a filedescriptor to which the thread writes a byte (using this method) which will then wake up the') def __repr__(self): return ('<gsync.Notifier object at 0x%x (%d waiters)>' % (id(self), len(self._waiters))) That's all folks So it seems that with some careful thinkering you can create all your tried and tested tools to communicate between threads and eventlets or between eventlets in different threads. This makes adopting eventlet into an existing application a whole lot more approachable. It certainly helped me!
http://blog.devork.be/2011_03_01_archive.html
CC-MAIN-2013-20
refinedweb
2,151
67.04
In Python, if you want to check if some element is in a list or array or string or anything, and see if it’s inside another list or array or string or etc, all you really have to do is write for character in word: if character in seen: # do something This would let you find repeated letters by making a new empty list called something like “Seen”, and loop over your Word (list of Characters). If that Character is in Seen already, then it’s not an isogram. If it isn’t in Seen, then you add it to Seen and move onto the next Character. But in C++ writing something like that is not so simple haha. But after a lot of Googling I ended up making a couple of functions and writing this to try to get closer to Python. First we make a function/method to do the equivalent of “if character in seen”: #include <algorithm> #include <list> using namespace std; // Using this is to cut down on having to write std:: everywhere. /// See if Element is in SearchList. /// bool UBullCowCartridge::Contains(const TCHAR Element, list<TCHAR> SearchList) { // Python says "if Element in SearchList". C++ says: return find(SearchList.begin(), SearchList.end(), Element) != SearchList.end(); } This searches for the Element provided, from the beginning of SearchList to the end of Searchlist, and checks that it doesn’t equal the end of the Searchlist. This is the equivalent of writing “If Element in SearchList”, giving us true or false. Then we can implement it like this: /// Check if Word is an isogram (does not contain more than 1 of each letter). bool UBullCowCartridge::IsIsogram(FString Word) { // Make empty list "Seen" for keeping track of our letters we're checking. list<TCHAR> Seen; // For Character in Word: for (TCHAR Character: Word) { // if Character in Seen: if (Contains(Character, Seen)) { return false; } // Otherwise append Character to the end of Seen list. Seen.push_back(Character); } // Making it here means we went through every Character in Word, and nothing repeated in Seen. return true; } In the end I was happier with reading that, but it did require additional includes, and making a dedicated method “Contains” to make it more readable when it was used in the IsIsogram method. This might also be slower / more inefficient than the one in the course, I’m not sure to be honest. But working with such tiny words and data, I don’t think it matters. I’m not sure how much these sorts of changes might affect compile time as well. Either way, I just wanted something more familiar and thus more readable to me coming from Python, and it was good practice figuring it out.
https://community.gamedev.tv/t/coming-from-python-i-ended-up-writing-the-isisogram-loop-a-bit-differently/170548
CC-MAIN-2021-25
refinedweb
453
66.47
GDAL/OGR 1.5.0 GDAL/OGR 1.5.0 - General Changes Build: - CFG environment variable now ignored. Instead set CFLAGS and CXXFLAGS environment variables to desired compilation options, or use --enable-debug for a debug build. Default is "-g -O2" like most other packages. - Added --with-hide-internal-symbols to restrict exported API from .so files to be the GDAL public API (as marked with CPL_DLL). Other: - OGR and GDAL C APIs now generally check for NULL objects and recover with an error report instead of crashing. GDAL 1.5.0 - Overview of Changes Core: - Enable Persistent Auxilary Metadata (.aux.xml) by default. - Support for "pam proxies" for files in read-only locations. - Create and CreateCopy pre-Delete output existing dataset. - Added Identify() method on drivers (per RFC 11: Fast Format Identify) - Implement GetFileList() on datasets (per RFC 12). - Implement Delete(), Rename(), Copy() based on GetFileList() (per RFC 12). - vrtdataset.h, memdataset.h and rawdataset.h are now considered part of the public GDAL API, and will be installed along with gdal.h, etc. - Support nodata/validity masks per RFC 14: Band Masks. - Plugin drivers test for ABI compatibility at load time. - Creation flags can now be validated (this is used by gdal_translate) - Default block cache size changed to 40MB from 10MB. Algorithms / Utilities: - gdal_grid: New utility to interpolate point data to a grid. - gdal2tiles.py is new for 1.5.0. - gdaltransform: stdin/stdout point transformer similar to PROJ.4 cs2cs. - gdalwarp: Several fixes related to destination "nodata" handling and nodata mixing in resampling kernels. - gdalwarp: Added Lanczos Windows Sinc resampling. - gdal_rasterize: added -i flag to rasterize all areas outside geometry. - gdalenhance: new utility for applying histogram equalization enhancements. - gdalmanage: Utility for managing datasets (identify, delete, copy, rename) - nearblack: Utility for fixing lossily compressed nodata collars. Intergraph Raster Driver: - New for 1.5.0. COSAR (TerraSAR-X) Driver: - New for 1.5.0. - SAR Format. COASP Driver: - New for 1.5.0 - SAR format produced by DRDC CASP SAR Processor. GFF Driver: - New for 1.5.0 GENBIN (Generic Binary) Driver: - New for 1.5.0. ISIS3 Driver: - New for 1.5.0. - Also PDS and ISIS2 driver improved substantially and all moved to frmts/pds WMS Driver: - New for 1.5.0. SDE Raster Driver: - New for 1.5.0. SRTMHGT Driver: - New for 1.5.0. PALSAR Driver: - New for 1.5.0. - SAR format. ERS Driver: - New for 1.5.0. - ERMapper ASCII Header HTTP Driver: - New for 1.5.0. - Fetches file by http and then GDALOpen()s. GSG Driver: - New for 1.5.0. - Golden Software Surfer Grid. GS7 Driver: - New for 1.5.0. - Golden Software Surfer 7 Binary Grid. Spot DIMAP Driver: - New for 1.5.0. RPFTOC Driver: - New for 1.5.0. ADRG Driver: - New for 1.5.0. NITF Driver: - Added support for writing JPEG compressed (IC=C3). - Added support for reading text segments and TREs as metadata. - Added support for 1bit images. - Added support for GeoSDE TRE for georeferencing. - Support PAM for subdatasets. - Improved NSIF support. - Support C1 (FAX3) compression. - Improved CADRG support (#913, #1750, #1751, #1754) ENVI Driver: - Many improvements, particularly to coordinate system handling and metadata. JP2KAK (Kakadu JPEG2000) Driver: - Now builds with libtool enabled. GTIFF (GeoTIFF) Driver: - Now supports BigTIFF (read and write) with libtiff4 (internal copy ok). - Upgraded to include libtiff 4.0 (alpha2) as the internal option. - Support AVERAGE_BIT2GRAYSCALE overviews. - Produce pixel interleaved files instead of band interleaved by default. - Support TIFF files with odd numbers of bits (1-8, 11, etc). - Add ZLEVEL creation option to specify level of compression for DEFLATE method GIF Driver: - Nodata/transparency support added. JPEG Driver: - Support in-file masks. AIGrid Driver: - Supports reading associated info table as a Raster Attribute Table. HFA Driver: - Support MapInformation?/xform nodes for read and write. - Support AVERAGE_BIT2GRAYSCALE overviews. - Support Signed Byte pixel type. - Support 1/2/4 bit pixel types. - Support PE_STRING coordinate system definitions. - Support nodata values (#1567) WCS Driver: - Support WCS 1.1.0 DTED Driver: - Can now perform checksum verification. - Better datum detection. HDF4 Driver: - Support PAM for subdatasets. Leveller Driver: - Added write support. - Added v7 (Leveller 2.6) support. OGR 1.5.0 - Overview of Changes General: - Plugin drivers test for ABI compatability at load time. - SFCOM/OLEDB stuff all removed (moved to /spike in subversion). - Various thread safety improvements made. - Added PointOnSurface? implementation for OGRPolygon. - Added C API interface to OGR Feature Style classes (RFC 18). Utilities: - All moved to gdal/apps. OGRSpatialReference: - Supports URL SRS type. - Upgraded to EPSG 6.13. - Operating much better in odd numeric locales. BNA Driver: - New for 1.5.0. GPX Driver: - New for 1.5.0. GeoJSON Driver: - New for 1.5.0. GMT ASCII Driver: - New for 1.5.0. KML Driver: - Preliminary read support added. DXF / DWG Driver: - Removed due to licensing issues with some of the source code. Still available in subversion from under /spike if needed. PG (Postgres/PostGIS) Driver: - Added support for recognising primary keys other than OGR_FID to use as FID. - Improved schema support. - Performance improvements related to enabling SEQSCAN and large cursor pages Shapefile Driver: - Do not keep .shx open in read only mode (better file handle management). - Use GEOS to classify rings into polygons with holes and multipolygons if it is available. - Support dbf files larger than 2GB. MySQL Driver: - Added support for BLOB fields. - Upgraded to MITAB 1.6.4. Interlis Drivers: - Support datasources without imported Interlis TID - Remove ili2c.jar (available from - Support for inner rings in Surface geometries. - Support spatial and attribute filters. SWIG Language Bindings - The "Next Generation" Python SWIG bindings are now the default. - Python utility and sample scripts migrated to swig/python/scripts and swig/python/samples. - Added Raster Attribute Tables to swig bindings. - Added Geometry.ExportToKML - Added CreateGeometryFromKML - Added CreateGeometryFromJson - Added Geometry.ExportToJson SWIG C# related changes: - Support for the enumerated types of the C# interface - C# namespace names and module names follows the .NET framework naming guidelines - Changed the names of the Windows builds for a better match with the GNU/Linux/OSX builds - The gdalconst assembly is now deprecated - GDAL C# libtool build support - CreateFromWkb support - Dataset.ReadRaster, Dataset.WriteRaster support - Added support for Dataset.BuildOverviews - More examples added SWIG Python related changes: - Progress function callback support added. You can use a Python function, or the standard GDALTermProgress variant - Sugar, sweet, sweet sugar. - ogr.Feature.geometry() - ogr.Feature.items() - ogr.Feature.keys() - doxygen-generated docstrings for ogr.py - geometry pickling - setuptools support - PyPi - setup.cfg for configuring major significant items (libs, includes, location of gdal-config0 - support building the bindings from *outside* the GDAL source tree SWIG Java: - SWIG Java bindings are orphaned and believed to be broken at this time.
http://trac.osgeo.org/gdal/wiki/Release/1.5.0-News
crawl-002
refinedweb
1,121
54.18
Microsoft RIA Services are a new kind of services introduce with Silveright . So question here is what is role of RIA Services in Siverlight Applications and WCF Services was already there so how it is diffrent from Normal WCF. What is the advantages of RIA Services? Silverlight RIA Services allow serializing LINQ queries between the client and server. So the client can create the LINQ query and make it run on the server side and get back the results. So the client has greater flexibility on this side.. Silver. The RIA Services can either Sit on top of WCF (essentially wrapping the WCF services for the Client App to consume) or Replace the WCF layer with RIA Services using alternatice data source (eg. ADO.NET Entity Data Model/Entity Framework. RIA Services pattern, it can make it very easy to build out a n-tier silverlight application.. Terminology - RIA Services (A.K.A WCF RIA Services) - RIA stands for Rich Internet Applications - RIA is a global term that applies to different technologies, one of them being Microsoft technologies. - WCF stands for Windows Communication Foundation. - WCF RIA Services is a framework and code generation tool built to ease the data communication development between the Silverlight and ASP.NET application clients and the server. WCF RIA Services is geared to serve Silverlight client model andASP.NET client model (other clients models have other choices such as WCF Data Services). -. (Ref #1). Problem (Why we need it?) - Silverlight versions 2.0 and up caught the eyes of developers working on web application development using the .NET platform who wanted to escape the browser compatibility and script coding. - Silverlight can't talk to databases behind firewalls for security reasons, so it had to use WCF. - Using WCF and Silverlight may requires considerable coding for large application - Coding across tiers requires sharing business classes between tiers. - Manually coding the shared classes between tiers would obviously lead to problems in large projects. Solution - The framework part of the RIA Services technology provides advanced data management, authentication/authorization management, and querying features; the code generation part is able to generate code in a new project based on the code in the existing project, so that hand coding the classes across tiers may be eliminated. - WCF RIA Services technology is concerned with the middle tier of the application. You are not forced to use specific data access technology. How is RIA Different from WCF - RIA Web Services uses WCF whereas WCF does not use RIA Web Services. WCF is concerned with technical aspects of communication between servers across tiers and message data management whereas RIA Web Services delegates this functionality to WCF. WCF does not generate source code to address class business logic such as validation that may need to be shared across tiers like RIA Web Services does. WCF is a generic service that could be used in Windows Forms for example whereas RIA Web Services is focused on Silverlight and ASP.NET clients References The above information reflects my current understanding from serveal readings and mostly from the following sources - If the above is inaccurate, I blame myself not the references! 1- What is .NET RIA Services? 2-Pro Business Applications with Silverlight 4 - By Chris Anderson WCF RIA, introduced in .NET Framework 4.0, is a superset of the traditional WCF functionality included in .NET Framework 3.0 and all subsequent versions. WCF RIA helps building n-tier-applications in that you can share code ansd behavior of the data contracts between different tiers. As in traditional WCF, an automatic proxy is generated on client side when you include the WCF reference into your project. But while WCF is limited (among others) in some ways: WCF RIA overcomes these shortcomings in that you can extend the data classes with your own annotations. These annotations are not limited to the Attributes contained in the System.Data.Annotations namespace. You can even include self-written classes or partial classes that extend the functionality of the data classes. This code is then replicated to the client. You can use this functionality for replication of validation checkers to the client without writing validation code twice. You can even include additional functionality not present in the data class (e.g. automatic retrieval of dependent objects on member access) through code originally written in the server layer, but then replicated to the client layer and used before data is sent to the server. With this, you can reduce network traffic, increase the client's responsiveness to the user and improve security of your n-tier application by validating data on both sides of the communication channel in dientical ways. .NET RIA Services was created for Silverlight that runs in the browser. Silverlight is running a special version of the the .NET framework and in an N-tier application Silverlight is unable to share assemblies with the server side. By employing some clever code generation .NET RIA Services makes this gap almost invisible to the developer. Classes similar to the domain classes are code generated on the client side, and ways to move objects back and forth between client and server are also made available. We will probably be able to call into a .NET RIA Service from Windows Mobile, but I don't think it will particular easy and currently you may in fact have to reverse engineer what's sent on the wire (JSON is used). WCF on the other has a much more broad scope, but doesn't support Silverlight development in the same way that .NET RIA Services does. If you are writing a Silverlight only N-tier application .NET RIA Services are very powerful. If however Silverlight is only one of several clients WCF is probably a better choice. Our thinking on the RIA Services work really grow out of the LINQ project a few years ago.. To answer this question let's first look at MSDN. WCF RIA Services.
http://beyondrelational.com/quiz/dotnet/general/2011/questions/47/what-is-role-of-ria-services-in-silverlight-why-we-need-it-and-how-it-is-diffrent-from-the-wcf.aspx
CC-MAIN-2015-48
refinedweb
993
56.15
infiniband_hca_persistent_cache(4) pci_unitaddr_persistent(4) TrustedExtensionsPolicy(4) - utmp and wtmp database entry formats #include <utmp.h> /var/adm/utmp /var/adm/wtmp The utmp and wtmp database files are obsolete and are no longer present on the system. They have been superseded by the extended database contained in the utmpx and wtmpx database files. See utmpx(4). It is possible for /var/adm/utmp to reappear on the system. This would most likely occur if a third party application that still uses utmp recreates the file if it finds it missing. This file should not be allowed to remain on the system. The user should investigate to determine which application is recreating this file.
http://docs.oracle.com/cd/E23824_01/html/821-1473/wtmp-4.html
CC-MAIN-2015-11
refinedweb
113
56.66
I'm new to Quantopian and to Python, so am in need of assistance. I'm piecing this together from help documents and examples and am really stuck. The goal is to have a monthly rotational strategy using just the Dow 30 stocks, using momentum of 3 and 12 month returns to rank them, and owning the top group equally of that rank. Seems simple enough but can't get farther. """ 1. Use the Dow 30 stocks 2. Find the top group of stocks with positive momentum with 3 and 12 month returns 3. Every month own the top group of stocks 4. Log the positions that we need """ import pandas as pd import numpy as np def initialize(context): # Dictionary of stocks and their respective weights context.stock_weights = {} # Count of days before rebalancing context.days = 0 # Number of stocks to go long in context.stock_numb = 7 set_symbol_lookup_date('2015-01-01') # DJI 30 stocks from March 19 2015 - now context.stocks = symbols( in data: order_target_percent(stock, 0) log.info("The stocks we are ordering today are %r" % context.stocks) # Create weights for each stock weight = create_weights(context, context.stocks) # Rebalance all stocks to target weights for stock in context.stocks: if stock in data: if weight != 0: log.info("Ordering %0.0f%% percent of %s in %s" % (weight * 100, stock.symbol, ) order_target_percent(stock, weight) def before_trading_start(context): #] def create_weights(context, stocks): """ Takes in a list of securities and weights them all equally """ if len(stocks) == 0: return 0 else: weight = 1.0/len(stocks) return weight def handle_data(context, data): """ Code logic to run during the trading day. handle_data() gets called every bar. """ # track how many positions we're holding record(num_positions = len(context.portfolio.positions)) Thoughts? thanks!
https://www.quantopian.com/posts/need-help-for-basic-dow-30-stock-rotation-strategy
CC-MAIN-2018-17
refinedweb
288
59.3
hi there, i've recently started to use webware and cheetah templates in a new way. since i'm experimenting successfully with such approach i feel it's useful to illustrate the process, mostly to get feedback. we start by marking a master page with zones to be replaced. the master page will dictate the layout of the site, it can contains HTML markup, flash objects, images and so on. in the following example we just have a "title" and "content" zones that site's servlets are likely to replace with dynamic content: <!-- file: site.html --> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns=""> <head> <title>$page.title</title> </head> <body> <div id="main"> <div id="header"> <h1>$page.title</h1> </div> <div id="content"> $page.content($page) <!-- Note: pass $page to content too --> </div> </div> </body> </html> servlets of our site will, as usual, inherit from a root servlet, commonly named SitePage. a SitePage class might looks like this: class SitePage(Page): def title(self): return 'Some title' def content(self): return '[content placeholder]' def awake(self, transaction): Page.awake(self, transaction) # needed #@@ acquire pooled DB connection self.load(transaction.request()) # sub classes are free to override load def load(self, request): pass def sleep(self, transaction): Page.sleep(self, transaction) # needed #@@ release pooled DB connection def writeHTML(self): self.write(self.html(self)) # master page template html = DiskTemplate('site.html') def preAction(self, actionName): """Override default Page behavior.""" pass def postAction(self, actionName): """Render a page after action.""" self.writeHTML() a DiskTemplate object (more on that later) will be asked to load the HTML file from disk. this will occur once, when servlet will be loaded in memory by webware. internally DiskTemplate will compile the HTML file (using Cheeatah) and store it in the "html" class variabile. the writeHTML method will ask to such template to render itself as a string and then the usual self.write method will send template output to the browser. note how the servlet passes itself as a namespace to the DiskTemplate, this will allow us to access all the servlet internals via the $page template's variable. this is the code of DiskTemplate: class DiskTemplate(object): # template cache cache = {} def __init__(self, path): self.path = path def __call__(self, page): if self.path in self.cache: tmpl = self.cache[self.path] else: app = page.application() s = app.serverSidePath('www/t/%s' % self.path) # if file changes reload app (only with AutoReload = True) #@@ modloader.watchFile(s) tmpl = Template(file=s, filter=Unicode2Latin1) # store in cache self.cache[self.path] = tmpl # pass caller servlet as namespace tmpl.page = page return str(tmpl) DiskTemplate keeps a cache of the loaded templates. actually i'm not sure this is really needed in a normal scenario. once we have setup your SitePage we start to subclass it to provide dynamic contents for all *or some* of the marked zones. to bring back the original example if we miss to re-implement "title" and "content" methods python will fall-back to use SitePage corresponding implementations. one of my servlets looks like this: from base import SitePage, MemoryTemplate import catalogdb # data-access module class index(SitePage): def load(self, request): #@@ let's pretend self._cnn holds an active DB connection #@@ get POST/GET params via request('someparam', '') self._categories = catalogdb.rootCategories(self._cnn) def content(self): ''' #for $category in $page._categories <div class="entry"> <p> <a href="/cat?id=$category.node_id">$category.title</a> </p> <p class="dtm">$category.description</p> </div> #end for #unless $page._categories <h2>Oops!</h2>\n<p>Category not found.</p> #end unless ''' content = MemoryTemplate(content) the above servlet loads some catalogue categories from db and saves the list in self._categories. servlet re-implements SitePage "content" to provide some custom HTML markup bound to loaded DB records. content method uses the docstring as cheetah code to generate markup. immediately after the docstring we rebind "content" to a MemoryTemplate object. again MemoryTemplate will compile only once he Cheetah template and keep it in memory. this is the code of MemoryTemplate: class MemoryTemplate(object): def __init__(self, fn): self.tmpl = Template(fn.__doc__, filter=Unicode2Latin1) def __call__(self, page): # pass caller servlet as namespace self.tmpl.page = page return str(self.tmpl) the code is similar in purpose to the DiskTemplate one. if we need to load from disk the Cheetah code we just write: content = DiskTemplate('content-for-index.html') the reason to keep the templates on disk or inside servlets it's really a matter of how much logic VS markup holds our templates. if we need to allow our peers to keep tweaking the markup so let's keep that on disk, otherwise keep 'em within the servlet. so everything is fine and dandy, but what if you have one or more servlets that need to have a different master page (an home page is a typical use-case). well, we would write: class home(SitePage): # master template html = DiskTemplate('home.html') def content(self): ''' ...your custom cheetah page... ''' content = MemoryTemplate(content) the above code replaces both "html" and "content" templates. there's a little gotcha to make everything to work correctly: in the master template you have to pass $page to the marked zone in order to be able to access $page again in sub-classes. that's why in master page i've written: <div id="content"> $page.content($page) </div> right now i'm not able to elimite this quirk. maybe with some sys._getframe(1) hacking i could grab the caller instance but it's risky to rely on sys._getframe. note that zones are really only required to return strings, nothing prevent us to just return strings for simpler task--just check SitePage.title implementation. finally, with the new function decorators planned for python 2.4 authors will be able to collapse content / content = (Disk|Memory)Template(content) on a single line: def content(self) [MemoryTemplate]: but don't trust me on the syntax sugar to do that, there's still some debate in the air. hope this helps. cheers, deelan ---
http://sourceforge.net/p/webware/mailman/message/13603236/
CC-MAIN-2014-41
refinedweb
1,018
58.69
Jeremy Quinn said the following on 9/1/07 13:12: > The next level of complexity, is all of the groups and layout stuff in > the cforms xslt. > In hindsight, it seems this could have been done cleaner in a separate > namespace, but we do not have that option now, unless we want to force > everyone to completely re-work their form templates etc. > > However, I do find lots of inconsistencies with the layouting code ..... > for instance, I have not found a way to combine the layouting tags with > stuff like repeaters and as you say, there is possibly too much usage of > tables. > > I would love to see more of the layout structure use divs and css, but > this was not done originally I suspect as these types of layouts are > more difficult to achieve. I once started (way back when 2.1.7 or 2.1.8 was released) to convert the XSLT to divs and CSS, replacing the tables. I ran into problems with the "autoformat" settings like "columns" and "rows", because there is no way to get that flexibly working without tables when you have no clue what type of widget it is and how long the label is. Also: fieldset is rendered differently in different browsers, that would mean you have to apply CSS hacks for all possible browsers thus moving the complication from XSLT to CSS. Just my 2ct. Although I'd love to tinker with this some more. Bye, Helma
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200701.mbox/%3C45A3C1F1.6070206@gmail.com%3E
CC-MAIN-2014-15
refinedweb
247
68.3
In computing, a Hashtable is defined as a data structure that stores data represented as key-value pairs. Compared to a map, it is more efficient for large data sets. This Java programming tutorial discusses Hashtable and HashMap data structures, their features and benefits, and how to work with them in Java. Read: Best Tools for Remote Software Developers What is a Hashtable in Java? A Hashtable is a data structure used to preserve data represented as key-value pairs. Despite its similarities with a map, it is more efficient when dealing with huge data sets. This explains why a Hashtable is a great choice in applications where performance is important. Hashtable is used for fast lookup of data, storing data in dynamic ways, or just storing it in compact ways, which makes it more efficient compared to other solutions, such as arrays or linked lists. A Hashtable is often used in programming languages, such as Java, to store data in a way that is easy to retrieve. A Hashtable can store large amounts of data quickly and easily, making it ideal for use in applications where speed is important. A Hashtable works by storing data in a table, with each piece of data having a unique key. You can retrieve data from a Hashtable using its key. Once you provide the key, you can get the corresponding value. The code snippet that follows shows how you can create an empty Hashtable instance: Hashtable<K, V> hashTable = new Hashtable<K, V>(); How Does Hashtable Work in Java? Hashtable is an abstract class and has two subclasses that extend this class — HashMap and LinkedHashMap. The HashMap provides the set of elements stored in it, while the LinkedHashMap allows insertion of new items at any position. What are the Benefits of Using Hashtable in Java? Hashtable is one of the most efficient of all data structures as far as performance is concerned. You can take advantage of Hashtable for fast data storage and retrieval. Hashtable is also thread-safe making it an excellent choice for multithreaded applications where concurrency is essential. When Should You Use Hashtable in Java? A Hashtable is a data structure that stores information in key-value pairs. The key is required when retrieving items from a Hashtable. This can be advantageous if you have a lot of data and need to be able to quickly find specific items. However, Hashtables are not well suited for storing data that needs to be sorted in any particular order. Additionally, because keys in a Hashtable must be unique, it is not possible to store duplicate keys in a Hashtable. Overall, Hashtables are a good option for storing data when you need quick access to specific items and don’t mind if the data is unordered. You can learn more about Hashing by reading our tutorial: Introduction to Hashing in Java. How to Program Hashtable in Java To create a Hashtable, programmers need to import the java.util.Hashtable package. Then, you can create a Hashtable object like this: Hashtable hashTable = new Hashtable(); You can now add data represented as key-value pairs to the Hashtable instance. To do so, you will use the put() method, like this: hashTable.put("key1", "value1"); hashTable.put("key2", "value2"); You can retrieve values from the Hashtable using the get() method, like so: String str1 = hashTable.get("key1"); String str2 = hashTable.get("key2"); If you want to check if a key exists in the Hashtable, you can use the containsKey() method: boolean containsKey = hashTable.containsKey("key1"); Finally, if you want to get a list of all the keys or values in the Hashtable, you can use the keySet() and values() methods: Set keys = hashTable.keySet(); Collection values = hashTable.values(); How to Improve the Performance of Hashtable in Java? Developers can use a different hashing algorithm to improve Hashtable’s performance. The default hashing algorithm used by Hashtable is known as Adler-32. However, there are other algorithms available that can be faster, such as Murmur3. To change the hashing algorithm used by Hashtable, you can use the setHashingAlgorithm() method. Increasing the internal array size is another way to improve Hashtable’s performance. By default, Hashtable uses an array with a size of 16. The setSize() method allows programmers to increase this size. Performance will be improved because collisions will be fewer when the array is larger. Finally, you can also consider using a different data structure altogether if performance is a major concern for you. For example, you could use a tree-based data structure, such as a red-black tree, instead of a Hashtable. Tree-based data structures tend to be much faster than Hashtables when it comes to lookup operations. What is a HashMap in Java? How does it work? Hashmap is a linked-list implementation of Map, where each element is assigned a unique integer hash code. An instance of HashMap contains a set of key-value pairs where the keys are instances of String and the values are instances of any Java serializable data type. The default storage mechanism used by HashMap is basically an array which is resized when the number of stored entries reaches a specified limit. Since a hash code is used to store the key-value pair in the map using their hash codes, it means that two keys with the same hash code will end up in the same bucket and this can result in collisions. When there is a collision, HashMap uses its secondary storage to store the conflicting key-value pairs. The code snippet that follows shows how you can create an empty HashMap instance in Java: HashMap<K, V> hashMap = new HashMap<K, V>(); How to Program HashMap in Java Refer to the code snippet shown below that shows how you can create an empty instance of a HashMap, insert data as key-value pairs to it and then display the data at the console window. import java.io.*; import java.util.*; public class MyHashMapHashtableExample { public static void main(String args[]) { Map<Integer, String> hashMap = new HashMap<>(); hashMap.put(1, "A"); hashMap.put(2, "B"); hashMap.put(3, "C"); hashMap.put(4, "D"); hashMap.put(5, "E"); Hashtable<Integer, String> hashTable = new Hashtable<Integer, String>(hashMap); System.out.println(hashTable); } } While the put method of the HashMap class can be used to insert items, the remove method can be used to delete items from the collection. For example, the code snippet given below can be used to remove the item having the key as 3. hashMap.remove(3); Final Thoughts on Hashtable and HashMap in Java A Hashtable can store large amounts of data quickly and easily, making it ideal for use in applications where performance is important. A collision occurs when two objects pertaining to the same Hashtable have the same hash code. A Hashtable is adept at avoiding such collisions using an array of lists. Read more Java programming tutorials and software development guides.
https://www.developer.com/java/hashtable-hashmap-java/
CC-MAIN-2022-33
refinedweb
1,162
63.7
Given two binary max heaps as arrays, merge the given heaps. Examples : Input : a = {10, 5, 6, 2}, b = {12, 7, 9} Output : {12, 10, 9, 2, 5, 7, 6} The idea is simple. We create an array to store result. We copy both given arrays one by one to result. Once we have copied all elements, we call standard build heap to construct full merged max heap. C++ Java Python3 # Python3 program to merge two Max heaps. # Standard heapify function to heapify a # subtree rooted under idx. It assumes that # subtrees of node are already heapified. def MaxHeapify(arr, n, idx): # Find largest of node and # its children if idx >= n: return l = 2 * idx + 1 r = 2 * idx + 2 Max = 0 if l < n and arr[l] > arr[idx]: Max = l else: Max = idx if r < n and arr[r] > arr[Max]: Max = r # Put Maximum value at root and # recur for the child with the # Maximum value if Max != idx: arr[Max], arr[idx] = arr[idx], arr[Max] MaxHeapify(arr, n, Max) # Builds a Max heap of given arr[0..n-1] def buildMaxHeap(arr, n): # building the heap from first non-leaf # node by calling Max heapify function for i in range(int(n / 2) – 1, -1, -1): MaxHeapify(arr, n, i) # Merges Max heaps a[] and b[] into merged[] def mergeHeaps(merged, a, b, n, m): # Copy elements of a[] and b[] one # by one to merged[] for i in range(n): merged[i] = a[i] for i in range(m): merged[n + i] = b[i] # build heap for the modified # array of size n+m buildMaxHeap(merged, n + m) # Driver code if __name__ == ‘__main__’: a = [10, 5, 6, 2] b = [12, 7, 9] n = len(a) m = len(b) merged = [0] * (m + n) mergeHeaps(merged, a, b, n, m) for i in range(n + m): print(merged[i], end = ” “) # This code is contributed by PranchalK C# Output: 12 10 9 2 5 7 6 Since time complexity for building the heap from array of n elements is O(n). The complexity of merging the heaps is equal to O(n + m). Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
https://tutorialspoint.dev/data-structure/heap-data-structure/merge-two-binary-max-heaps
CC-MAIN-2021-17
refinedweb
377
70.16
I have a list containing integers, I would like to create a copy of it such that duplicate elements are at least some distance apart. I am aware that it would be necessary to have "enough" different elements and a sufficiently "long" starting list but I would like to create that copy or return a message that it is not possible. Here is a python "possible" implementation but sometimes this program creates an infinite loop. import random out = [] pbs = [1, 2, 3, 1, 2, 3, 5, 8] l = len(pbs) step = 3 while l > 0: pb = random.choice(pbs) if pb in out: lastindex = out[::-1].index(pb) if (len(out) - lastindex) < step: continue pbs.remove(pb) out.append(pb) l += -1 print(out) Thank you for your help.
https://proxieslive.com/create-a-list-copy-having-distanced-duplicate-elements/
CC-MAIN-2020-40
refinedweb
129
62.98
I was assigned a homework project that's starting to get annoying. I can't figure out what's going wrong with it. Here's the question: The number Pi may be calculated using the following infinite series: Pi = 4(1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + ... ) How many terms of this series you need to use before you get the approximation p of Pi, where: a)p = 3.0 b)p = 3.1 c)p = 3.14 d)p = 3.141 e)p = 3.1415 f)p = 3.14159 Write a C++ program to answer this question. My answer is always 4 and my term starts at 0 and stays at 1. Here's what I have so far: #include <iostream> #include <cmath> using namespace std; double pif(double); //Fucntion to determine value of pi int term; //Variable to count terms int main() { cout << "Value of Pi" << " " << "Number of terms" << endl; cout << pif(3.0) << " " << term << endl; cout << pif(3.1) << " " << term << endl; cout << pif(3.14) << " " << term << endl; cout << pif(3.141) << " " << term << endl; cout << pif(3.1415) << " " << term << endl; cout << pif(3.14159) << " " << term << endl; return 0; } double pif(double n) { double pi = 0.0; //Variable to store value of pi int sign = 1; //Variable to store sign bool check = false; //Variable to check value of pi term = 0; while (!check) { if (pi * 4.0 >= n) //If value of pi is greater than or equal to approx of pi check = true; //Then exit the loop else { pi *= sign * (1.0 / (1.0 + term * 2.0)); //Otherwise calculate value of pi sign *= -1; //Change sign ++term; //And increment term } } pi *= 4.0; //Perform final pi calculation after fractional sums have been determined return pi; } Thanks for the help!
https://www.daniweb.com/programming/software-development/threads/152754/how-many-terms-to-find-p-of-pi
CC-MAIN-2017-26
refinedweb
294
86.2
On Mon, Jun 17, 2002 at 04:37:11PM +0200, Christof Petig wrote: > Christof Petig wrote: > > The following code does not compile with g++-3.0 and g++-3.1, but it > > does with g++-2.95.4. What is wrong (std:: is not missing!)? > > Oh, std:: was missing - in a way ... > > std::find only looks in namespace std:: for an operator==, if I specify > it inside namespace std { ... } , it works. Welcome to Koenig lookup. > Gives me a strange feeling though to declare user supplied operators in std. It's explicitly permitted to declare certain things in std, as long as a) you are specializing an existing template, not adding new names, and b) the specialization involves at least one of your own types > Perhaps the namepace of the first argument determines the namespace > searched for the operator ? No, not exactly, but namespaces containing the declaration of the argument type(s) are searched. I recommend finding a good C++ text and reading about "argument-dependant name lookup," aka Koenig lookup. -- To UNSUBSCRIBE, email to debian-gcc-request@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
https://lists.debian.org/debian-gcc/2002/06/msg00145.html
CC-MAIN-2017-30
refinedweb
191
66.64
I have been spending so much time on this. Also just tried clicking run several times so that it would give me the code but it does not. Here are the steps, and below that is the code. The basic idea of this exerciser to to pass promt from component to component. So from Greeting.js you create a component. Then export it, and in App,js you use that component inside a render function in App.js. The correct output renders on the display screen as well. But I can't get past the last step. Please help. 1.Your mission is to pass a prop to a <Greeting /> component instance, from an <App /> component instance. <Greeting /> <App /> If <App /> is going to pass a prop to <Greeting />, then it follows that <App /> is going to render <Greeting />. Since <Greeting /> is going to be rendered by another component, that means that <Greeting /> needs to use module.exports. In Greeting.js, delete this statement from line 2: var ReactDOM = require('react-dom');At the bottom of Greeting.js, remove the entire call to ReactDOM.render, and replace it with this: module.exports = Greeting;2.<App /> can't pass a prop to <Greeting /> until App.js imports the variable Greeting! Until then, the characters <Greeting /> in App.js might as well be nonsense. Select App.js. Create a new line underneath the line var ReactDOM = require('react');. On your new line, require the Greeting component class. Save the result in a variable named Greeting.3.In App.js, add a <Greeting /> instance to App's render function, immediately underneath the <h1></h1>. <h1></h1> Give <Greeting /> an attribute with a name of "name." The attribute's value can be whatever you'd like. When you click Run, <App /> will render <Greeting />, and pass it a prop! // App,js var React = require('react'); var ReactDOM = require('react-dom'); var Greeting = require('./Greeting'); var App = React.createClass({ render: function () { return ( <div> <h1> <Greeting name="name"/> </h1> <article> Latest newzz: where is my phone? </article> </div> ); } }); ReactDOM.render( <App />, document.getElementById('app') ); // Greeting.js var React = require('react'); var Greeting = React.createClass({ render: function () { return <h1>Hi there, {this.props.name}!</h1>; } }); // ReactDOM.render goes here: module.exports = Greeting; In App.js, add a <Greeting /> instance to App's render function, immediately underneath the <h1></h1>. App.js Here is the text that was in the H1: <h1> Hullo and, "Welcome to The Newzz," "On Line!" </h1> Below this, insert your instance component... <Greeting name="Wee Gillis" /> Okay beneath <h1> tags not between. Thanks. <h1> This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.
https://discuss.codecademy.com/t/swear-i-am-doing-this-right/56422
CC-MAIN-2018-05
refinedweb
448
61.63
Along package samples; import javax.servlet.http.annotation.*; @Servlet(urlMappings={"/foo"}) public class SimpleSample { }. Posted by: greeneyed on May 01, 2008 at 10:44 AM Posted by: chris_e_brown on May 02, 2008 at 12:48 AM... Posted by: ronaldtm on May 02, 2008 at 06:11 PM Posted by: mode on May 13, 2008 at 11:53 AM Posted by: mode on May 13, 2008 at 11:57 AM Posted by: mode on May 13, 2008 at 12:00 PM Thanks, Sahoo Posted by: ss141213 on May 15, 2008 at 12:48 AM Posted by: ss141213 on May 15, 2008 at 12:53 AM Posted by: mode on May 15, 2008 at 03:24 PM Posted by: mode on May 15, 2008 at 03:25 PM I know generating equivalent DD from annotations at deployment time is not mandated by the spec. I mentioned about that alternative only to counter the argument that annotation processing can slow things down. The implementations that care too much about speed can choose an alternative like that. Posted by: ss141213 on May 15, 2008 at 07:46 PM
http://weblogs.java.net/blog/mode/archive/2008/04/servlet_30_jsr_1.html
crawl-001
refinedweb
184
72.5
Published 1 year ago by madsynn Can anyone tell me why these will not work? I have tried them all and to no avail. I am trying to only show code if the view is part of the blog. @if(Request::is() === '/en/resources/blog/') <h1>this is a blog article</h1> @else // duh not working @endif @if(Request::is() === '/en/resources/blog/') @include('frontend.article.partials.temp') @else // duh not working @endif @if(Request::path() == '/en/resources/blog/') <h1>article for the blog</h1> @else // duh not working @endif @if(Request::path() === '/en/resources/blog/home-use') // code @else // duh not working @endif You need to pass the path as an argument to the is method: @if(Request::is('something')) ... @else ... @endif And it seems the path must be without a leading slash - Please sign in or create an account to participate in this conversation.
https://laracasts.com/discuss/channels/laravel/blade-conditional-help
CC-MAIN-2018-43
refinedweb
146
60.14
The objective of this post is to explain how to create a simple “Hello world” application with Python and Flask. Introduction Flask is a web micro framework for Python [1] which allows us to create and deploy simple web applications very easily. The installation of Flask is very simple if we use pip. We just need to type the following: pip install Flask The hello world application The code for the “Hello world” application is very straightforward. First of all, we need to import the Flask class from the flask module, so all the functionality we need becomes available. from flask import Flask You can read more about the Flask class here. Once the class is imported, we will create an instance of it. The first parameter of the constructor should be the name of the package or module of our application. Since we are using a single module, we can use __name__, which is a global variable that holds the current module’s name as a string [2]. app = Flask(__name__) Now, we will define a route. A route is basically a decorator that allows to specify an URL associated with a Python function. When a request is done to that URL, the corresponding function is executed. For our simple example, we will specify an URL called “/hello” that will trigger a function that returns a greeting message. The code is shown bellow. @app.route('/hello') def helloWorldHandler(): return 'Hello World from Flask!' As seen, in the route decorator we pass the URL we want (“/hello”) and then we define a function that will handle the HTTP request on that URL. We can name the function whatever we want. Then, we just specify our greeting sentence as the return value of the function. To tell our application to run, we just call the run method on the Flask object we instantiated before. You can read more about the run method here. We can specify as arguments of the run method the host and the port where the server will be listening. The host defaults to 127.0.0.1, which is the loopback address, and the port to 5000 [4]. Nevertheless, if we use those default settings, our application is only available on our machine. In this case, we will specify other values, so it will be available to other machines on the same network. To to so, we specify the host IP as 0.0.0.0, so it is externally available in the network. Although we could have kept the default port, we will change it to 8090, just to exemplify how to do it. app.run(host='0.0.0.0', port= 8090) The complete code for this tutorial is shown bellow. from flask import Flask app = Flask(__name__) @app.route('/hello') def helloWorldHandler(): return 'Hello World from Flask!' app.run(host='0.0.0.0', port= 8090) Testing the code To test the application, just run it, for example, on IDLE, the Python IDE. A message similar to the one shown in figure 1 should be printed to the command line, indicating that the server is listening to incoming HTTP requests. Figure 1 – Running the Flask application from IDLE. Now, to test it, just open your web browser of choice and type the following on the address bar: You should get the greeting message that we defined before, as shown in figure 2. Figure 2 – Output of the hello world application. If we check the command line again, a debugging message has been received, indicating the server received a HTTP request, as shown in figure 3. Figure 3 – Output of the command line after receiving a request. We can see the GET request on the “/hello” route, with the success HTTP code (200). Additionally, in my case, there is a request for a favicon.ico route, which is a default request from the browser to get the small icon that it presents on the left side of the website tab [5]. Naturally, since we didn’t define a handler to this route, it returns a 404 HTTP code, which corresponds to the “Not Found” code. You may not get this call, depending on your browser. In this case, we used the loopback IP address just for the sake of testing the response without any external variables that may lead to problems, since this request is performed internally on the same machine. You should try doing a request from other machine in the same network to check if the server is available. First, you need to discover your machine’s IP address on the network. In windows, you can do it from the command line using the ipconfig command. On Linux, you can use the ifconfig command. So, from the other machine, just open a web browser and type the same address as before, but now with the IP found with the previous commands. You can also test this from the web browser of a smartphone or tablet, as long as they are connected to the same network of the machine that is running the Flask application. You should get the same output message. If you are not able to access the Flask app, then you may have used the wrong IP or the firewall of your computer may be blocking the connection. Also, just keep in mind that. Final notes As seen through this tutorial, deploying a web application with Flask is really simple. Personally, I’ve been using Flask and other micro frameworks when I need to do some IoT quick proof of concepts, as the temperature logger described in a previous post. For those who like to work with IoT devices, such as the ESP8266 or the LinkIt Smart, Flask offers the possibility to quickly deploy a custom web server, giving us much more freedom that when we use a IoT server with predefined rules. Related Posts References [1] [2] [3] [4] [5] Technical details - Python version:3.4.2 - Flask library: 0.10.1 Reblogged this on TechCentral and commented: It works on raspberry pi too! LikeLiked by 2 people Cool. I will try it on the Raspberry Pi 😎 LikeLiked by 1 person Cool. I use Flask sometimes too 😎 LikeLiked by 1 person Pingback: Flask: Controlling HTTP methods allowed | techtutorialsx Pingback: techtutorialsx Pingback: LinkIt Smart Duo: Running a Flask server | techtutorialsx Pingback: Python anywhere: Deploying a Flask server on the cloud | techtutorialsx Pingback: Flask: Parsing JSON data | techtutorialsx
https://techtutorialsx.com/2016/12/10/flask-hello-world/
CC-MAIN-2017-26
refinedweb
1,076
71.34
Unformatted text preview: orrelation. With a correlation closer to positive one I could be at ease that my portfolio would perform at least as well as the index. This is not the case for my current portfolio however. My portfolio currently has a correlation of -0.6251 indicating that it is traveling in the opposite direction of the index, which explains why my arithmetic return (-2.5915%) and geometric return (-2.6593%) are so far off from the arithmetic and geometric return of the index that are 0.7872% and 0.7841% respectively. Throughout this project the most important thing I have learned would be the time and effort it takes to acquire a strong portfolio that will project a h... View Full Document - Spring '14 - Chang - 0.2001%, 0.6203%, 0.8661%, 5.9361% Click to edit the document details
https://www.coursehero.com/file/8934730/With-a-correlation-closer-to-positive-one-I-could-be-at-ease/
CC-MAIN-2017-51
refinedweb
141
67.76
Groovy is a very successful and powerful dynamic language for the Java Virtual Machine that provides seamless integration with Java, and has its roots firmly planted in Java itself for the syntax and APIs and other languages such as Smalltalk, Python or Ruby for its dynamic capabilities. theEasyperformance" If the list on the right-hand side contains more elements than the number of variables on the left-hand side, only the first elements will be assigned in order into the variables. Also, when there are less elements than variables, the extra variables will be assigned null. So for the case with more variables than list elements, here, c will be null: the singleton is pattern or an anti-pattern, there are still some cases where we need to create singletons. We're used to create a private constructor, a getInstance() method for a static field or even an initialized public static final field. So instead of writing code like this in Java: final class Coordinates { Double latitude, longitude } def c1 = new Coordinates(latitude: 48.824068, longitude: 2.531733) def c2 = new Coordinates(48.824068, 2.531733) assert c1 == c2. @Lazy Another transformation is @Lazy. Sometimes, you want to handle the initialization of a field of your clas lazily, so that its value is computed only on first use, often because it may be time-consuming or memory-expensive to create. The usual approach is to customize the getter of said field, so that it takes care of the initialization when the getter is called the first time. But in Groovy 1.6, you can now use the @Lazy annotation for that purpose: class Person { @Lazy pets = ['Cat', 'Dog', 'Bird'] } def p = new Person() assert !(p.dump().contains('Cat')) assert p.pets.size() == 3 assert p.dump().contains('Cat') In the case of complex computation for initializing the field, you may need to call some method for doing the work, instead of a value like our pets list. This is then possible to have the lazy evaluation being done by a closure call, as the following example shows:). class Event extends Date { @Delegate Date when String title, url }.* class LockableList { @Delegate private List list = [] @Delegate private Lock lock = new ReentrantLock() } def list = new LockableList() list.lock() try { list << 'Groovy' list << 'Grails' list << 'Griffon' } finally { list.unlock() } assert list.size() == 3 assert list instanceof Lock assert list instanceof List = [] )) } You'll also notice that we just allowed Tree and Leaf to be newified. By default, under the scope which is annotated, all instantiations arenew: @Immutable final class Coordinates { Double latitude, longitude } @Immutable final class Path { Coordinates[] coordinates } @Newify([Coordinates, Path]) def build() { Path( Coordinates(48.824068, 2.531733), Coordinates(48.857840, 2.347212), Coordinates(48.858429, 2.342622) ) } assert build().coordinates.size() ==grab. Grape can also be used as a method call instead of as an annotation. You can also install, list, resolve dependencies from the command-line using the grape command. For more information on Grape, please refer to the documentation. Swing builder improvements To wrap up our overview of AST transformations, let's finish by speaking about two transformations very useful to Swing developers: @Bindable and @Vetoable. When creating Swing UIs, you're often interested in monitoring the changes of value of certain UI elements. For this purpose, the usual approach is to use JavaBeans PropertyChangeListeners to be notified when the value of a class field changes. You then end up writing this very common boiler-plate code in your Java beans: import java.beans.PropertyChangeSupport; import java.beans.PropertyChangeListener; public class MyBean { private String prop; PropertyChangeSupport pcs = new PropertyChangeSupport(this); public void addPropertyChangeListener(PropertyChangeListener l) { pcs.add(l); } public void removePropertyChangeListener(PropertyChangeListener l) { pcs.remove(l); } public String getProp() { return prop; } public void setProp(String prop) { pcs.firePropertyChanged("prop", this.prop, this.prop = prop); } } Fortunately, with Groovy and the @Bindable annotation, this code can be greatly simplified: class MyBean { @Bindable String prop } Now pair that with Groovy's Swing builder new bind() method, define a text field and bind its value to a property of your data model: textField text: bind(source: myBeanInstance, sourceProperty: 'prop') Or even: textField text: bind { myBeanInstance.prop } The binding also works with simple expressions in the closure, for instance something like this is possible too: bean location: bind { pos.x + ', ' + pos.y } You may also be interested in having a look at ObservableMap and ObservableList, for a similar mechanism on maps and lists. Along with @Bindable, there's also a @Vetoable transformation for when you need to be able to veto some property change. Let's consider a Trompetist class, where the performer's name is not allowed to contain the letter 'z': import java.beans.* import groovy.beans.Vetoable class Trumpetist { @Vetoable String name } def me = new Trumpetist() me.vetoableChange = { PropertyChangeEvent pce -> if (pce.newValue.contains('z')) throw new PropertyVetoException("The letter 'z' is not allowed in a name", pce) } me.name = "Louis Armstrong" try { me.name = "Dizzy Gillespie" assert false: "You should not be able to set a name with letter 'z' in it." } catch (PropertyVetoException pve) { assert true } Looking at a more thorough Swing builder example with binding: import groovy.swing.SwingBuilder import groovy.beans.Bindable import static javax.swing.JFrame.EXIT_ON_CLOSE class TextModel { @Bindable String text } def textModel = new TextModel() SwingBuilder.build { frame( title: 'Binding Example (Groovy)', size: [240,100], show: true, locationRelativeTo: null, defaultCloseOperation: EXIT_ON_CLOSE ) { gridLayout cols: 1, rows: 2 textField id: 'textField' bean textModel, text: bind{ textField.text } label text: bind{ textModel.text } } } Running this script shows up the frame below with a text field and a lable below, and the label's text is bound on the text field's content. SwingBuilder has evolved so nicely in the past year that the Groovy Swing team decided to launch a new project based on it, and on the Grails foundations: project Griffon was born. Griffon proposes to bring the Convention over Configuration paradigm of Grails, as well as all its project structure, plugin system, gant scripting capabilities, etc. If you are developing Swing rich clients, make sure to have a look at Griffon.() } }: // add a fqn() method to Class to get the fully // qualified name of the class (ie. simply Class#getName) Class.metaClass.static.fqn = {.
http://docs.codehaus.org/display/GROOVY/Groovy+1.6+release+notes
CC-MAIN-2014-15
refinedweb
1,038
54.93
Introduction This article teaches how to create a strongly-typed dataset class library in your C# database applications. Our objectives are as follows: - Learn what a strongly-Typed DataSet is - Let Visual Studio Create an ST Data Set - (Semi)Manually create an ST DataSet using XSD (XML Schema Definition) For this article, you will need to use Visual Studio .NET and a relational database. SQL Server is required for the automatic generation of ST DataSets, and I will use Sybase ASE for the manual ST DataSet. Sybase ASE is a cross-platform database server with Java-based administration tools similar to MS SQL Server tools. Sybase ASE is available in a free developer version at. (Tip: If you install Sybase ASE, be sure to install Jisql which is under the “Free Utilities” option. This utility lets you type in SQL statements to execute.) strong-Typing C# itself is a strongly-typed language. Variables must be assigned a type, in contrast to languages like JavaScript or VBScript where a variable can hold any data type. The same is true with your database server, whether it is Oracle, Sybase, or Microsoft SQL Server. Database columns must be assigned a type: whether it is Text, Char, VarChar, Numeric, Int, or Binary. Using ADO.NET, you can choose to use a generic (untyped) Data Set, or you can create a strongly-typed DataSet by creating an XSD. When you use a dataset to assign values to instance members in your business objects, the data set type must match your business object member or you must convert the type from object to the type. Creating a strongly-typed dataset also ensures that a dataset does not change (in the case of a modified stored procedure) and enables intellisense rather than dictionary-based access. By default a data set is not strongly-typed; to create a strongly-typed dataset you must first create it as a DataSet Class. Thankfully, Visual Studio .NET automates much of this task. Creating a strongly-Typed DataSet from SQL Server While you can create an ST Data Set anywhere, I strongly recommend creating a data transport DLL. In distributed applications, this DLL may be placed both on multiple servers, and you do not want the entire data access DLL on each server. This will be a lightweight DLL with nothing but DataSet classes and will typically be referenced by the Data Access DLL and the Business Logic DLL. The first step is to create a DS Common project and give it a proper namespace. I’ll give mine the namespace Demo.DSCommon and I’ll have a good way of referencing it. The next step is to create a DataSet class. In the Visual Studio Solution Explorer, Right click your project and select “Add New Item”. Click DataSet and name it. We’ll call it “SalesByCategory.xsd” as we’ll be using that procedure in the Pubs database. We’ll actually be creating an XSD (XML Schema definition) that Visual Studio will use to generate the class with. We’ll later look at writing these XSDs ourselves, which is handy for “unsupported” databases. Open the Server Explorer pane and add a new data connection to the Pubs database on your DB server. It’s fine to use the “SA” account here as we won’t be using this connection in our application, only to administer the database. Next, select the “SalesByCategory” Stored Procedure and drag it to the design surface. You’ll see that it creates an XML “table” for us, and generates the required XML code. Change the table name to Sales (from SalesByCategory) as it cannot share the same table name as the class. Listing 17-3. Autogenerated XSD Code <?xml version="1.0" encoding="utf-8" ?> > To use this dataset, simply compile this class library and create a method in a data access class that will return it. To see the files it created, select “view all files” in the solution explorer. It is only the SalesByCategory class we will use, but since we will only use the DLL, there is no need to be concerned about extra XSD files. We will now create a Data Access project and give it the default namespace of Demo.Data. In your data access project, be sure to include a reference to System.Data and the DSCommon dll we just created. Create a “Sales” class in Data. Listing 17-4 Data Access Code using System; using System.Data; using System.Data.SqlClient; using Demo.DSCommon; using System.Diagnostics; namespace Demo.Data { public class Sales { public static DSCommon.SalesByCategory ByCategory(string cat, string year){ //Create the dataset DSCommon.SalesByCategory ds = new DSCommon.SalesByCategory(); //Store the connection string (preferably not here!!!) string conn = "server=dbserver;database=Northwind;uid=sa;pwd=secure"; //Create a connection System.Data.SqlClient.SqlConnection connect = new SqlConnection(conn); //Create the proc string string proc = "dbo.SalesByCategory"; //Create the command System.Data.SqlClient.SqlCommand command = new System.Data.SqlClient.SqlCommand(proc,connect); command.CommandType = CommandType.StoredProcedure; //Create the params (create 1 and reuse it) System.Data.SqlClient.SqlParameter param; param = command.Parameters.Add("@CategoryName", SqlDbType.VarChar, 15); param.Direction= System.Data.ParameterDirection.Input; param.Value = cat; //param = command.Parameters.Add("@OrdYear", SqlDbType.NVarChar, 4); //param.Direction= System.Data.ParameterDirection.Input; //param.Value = year; connect.Open(); //Create a SQL Adapter and fill the ds System.Data.SqlClient.SqlDataAdapter da = new SqlDataAdapter(command); //Add the table mapping(s) da.tableMappings.Add("table", "Sales"); //--- Syntax for multiple DS tables: //da.tableMappings.Add("table1", "SalesPerson"); //da.tableMappings.Add("table2", "SalesTeam"); da.Fill(ds); connect.Close(); connect.Dispose(); return ds; } } } Listing 17-4 illustrates the data access class. The elusive key of getting Data Sets to fill properly is to add the table Mappings. If your DataSets aren’t getting filled, this is probably your mistake. You won’t get a compile or run-time error either making it an extremely annoying bug in your code! All that is left now is implementing a client application. Our client application will simply call the static method of Demo.Data.Sales and will receive the typed dataset. Typically this will be done in a middle-tier business object, but we will just write the data to the console. Listing 17-5. Implementing the Client: Class1.cs using System; namespace Demo.Client { class Class1 { [STAThread] static void Main(string[] args) { Console.WriteLine("Dataset Demo"); Demo.DSCommon.SalesByCategory ds = new Demo.DSCommon.SalesByCategory(); ds = Demo.Data.Sales.ByCategory("Seafood","1997"); foreach (DSCommon.SalesByCategory.SalesRow row in ds.Sales) { Console.WriteLine(row.ProductName + " " + row.TotalPurchase.ToString()); } //Keep it open long enough to read. string foo = Console.ReadLine(); } } } strongly Typed DataSets with Sybase ASE To access ODBC data sources such as Sybase ASE from .NET, you will need to install the ODBC .NET Data Provider. You can download it from. On my Sybase ASE Server, I have an Accounts table in the Utility database. I have a simple stored procedure named UserList that I want to create a dataset from. Since I don’t have a native provider to Visual Studio I’ll have to create the XSD manually this time. We’ll use the same application as before to do this. Open the DS Common project and add a new blank DataSet. (Hint: to get the schema definition correct, refer to an existing DataSet XSD! You can also refer to Wrox press Professional ADO.NET for full details.) We’ll call it UserList.xsd, and it will get a list of names and phone numbers of our account users and a list of companies. The Data tables we want are Users and Companies. Listing 17-6. Empty XSD Shell <:CHOICE> </xs:complexType> </xs:element> </xs:schema> Starting with the empty XSD shell, we will only need to add two Data tables to the XSD. While you can do this visually, we will look at the XSD method. Choose the XML view tab in Visual Studio. You may even notice that the XSD is initially not well formed- it is missing a closing tag on xs:choice! To add an Data table, we’ll add the following element and attributes. The element is the Data table, and the attributes are its columns. To add multiple data tables, just create multiple elements. Listing 17-7. UserList.xsd <:CHOICE> </xs:complexType> </xs:element> </xs:schema> After compiling the DSCommon dll, we will now return to the Demo.Data project and create a new class (.cs) file. This time we’ll call it User.cs. The code will be similar to the SQLClient code in Sales.cs, but this time we’ll use OleDB. Listing 17-8 User.cs using System; using System.Data; using Microsoft.Data.Odbc; namespace Demo.Data { public class User { public static DSCommon.UserList UserData() { //Create the dataset DSCommon.UserList ds = new DSCommon.UserList(); //Store the connection string (preferably not here!!!) string conn = "DSN=nathanUtility;NA=192.168.0.14,2048;DB=Utility;UID= sa;PWD="; //Create a connection Microsoft.Data.Odbc.OdbcConnection connect = new OdbcConnection(conn); //Create the proc string string proc = "dbo.UserList"; //Create the command Microsoft.Data.Odbc.OdbcCommand command = new OdbcCommand(proc,connect); command.CommandType = System.Data.CommandType.StoredProcedure; connect.Open(); //Create an Adapter and fill the ds Microsoft.Data.Odbc.OdbcDataAdapter da = new Microsoft.Data.Odbc.OdbcDataAdapter(command); //Add the table mapping(s) da.tableMappings.Add("table", "Users"); da.tableMappings.Add("table1", "Companies"); da.Fill(ds); connect.Close(); connect.Dispose(); return ds; } } } We will now modify the Console Application code to use the Sybase data. Below is the code listing for the Main method, which simply receives the DataSet and writes it to the Console. Listing 17-8 Updated User.cs Main method static void Main(string[] args) { Console.WriteLine("Dataset Demo"); Demo.DSCommon.UserList ds = Demo.Data.User.UserData(); foreach (DSCommon.UserList.UsersRow row in ds.Users) { Console.WriteLine(row.UserName + " " + row.Phone); } foreach (DSCommon.UserList.CompaniesRow row in ds.Companies) { Console.WriteLine(row.CompanyName); } //Keep it open long enough to read. string foo = Console.ReadLine(); } As you saw, creating strongly-Typed Data Sets manually is not difficult, and we have seen the plumbing of the DataSet in the process. There are additional attributes you can add to the XSD to further modify the generated DataSet classes, and they make a nice data transport mechanism. There are several cons to creating datasets, however. They are more expensive than the lightweight DataReader and do have slight performance hits in their creation, however, this is negligible considering the data transport mechanism they provide. You may, however, opt to consider other data transport mechanisms depending on your project, and how much development resources you want to put into this tier. There are also multiple benefits of using datasets, especially in using ADO.NET’s “out-of-the-box” functionality. You can also use a DataSet to transport data regardless of its provider. You can fill a DataSet programmatically from business objects, XML data, text files, database sources, and just about anything else. Data Sets enable us to loosely integrate multiple data sources, and make it easier to swap data sources. For further exploration, I recommend the Wrox press book “Professional ADO.NET Programming”, an in-depth study of the ADO.NET data tier including advanced topics such as creating a custom .NET data provider. Summary This has been an introduction to strongly-Typed Data Sets. By now, you should have a good understanding of what a strongly-Typed DataSet is and how to create them in your projects. You should be able to create them from a managed data provider (SQL Server or Oracle) or an ODBC data source such as Sybase ASE. Please feel free to contact me with any questions or comments you may have about this lesson. About Daniel Larson Daniel Larson is a Microsoft Certified Solutions Developer (MCSD) and independent .NET software development consultant, specializing in SQL Server-based .NET applications.
http://csharp-station.com/Article/Index/StronglyTypedDatasets
CC-MAIN-2021-10
refinedweb
1,988
60.61
You can subscribe to this list here. Showing 9 results of 9 On Mar 30, 2009, at 12:03, Benjamin Franksen wrote: >: > Ups, yes. It's in the main branch now. Axel.: aragon: .../ghc-6.10.1/gtk2hs > darcs pull --dry-run Would pull from "";... No remote changes to pull in! aragon: .../ghc-6.10.1/gtk2hs > darcs pull No remote changes to pull in! aragon: .../ghc-6.10.1/gtk2hs > darcs changes --from-tag=0.10.0 Tue Feb 17 19:09:09 CET 2009 Axel.Simon@... * Fix documentation. Sat Feb 7 06:48:15 CET 2009 Peter Gavin <pgavin@...> tagged 0.10.0 Cheers Ben Fri Mar 27 12:06:03 CET. > > > Oh boy.. we do this pretty much everywhere, and it's going to be hard to fix, because for most cases I think we only do it because the C API leaves no other way to do it. So, in other words, it'll take a while to fix it. Pete I admit I forgot this, but I've already applied a similar patch from somebody else! It's in darcs already. Sorry for the inconvenience and thanks anyway, A. On Mar 14, 2009, at 16:27, ben.franksen@... wrote: > Fri Mar 13 18:32:12 CET 2009 ben.franksen@... > * Add function eventClick to module EventM. > > New patches: > > [Add function eventClick to module EventM. > ben.franksen@...**20090313173212 > Ignore-this: cfb94ce8b9aee04ec1cd05ea4d8baa4d > ] { > hunk ./gtk/Graphics/UI/Gtk/Gdk/EventM.hsc 130 > eventKeyboardGroup, > MouseButton(..), > eventButton, > + Click(..), > + eventClick, > ScrollDirection(..), > eventScrollDirection, > eventIsHint, > hunk ./gtk/Graphics/UI/Gtk/Gdk/EventM.hsc 450 > return (fromIntegral time) > else error ("eventModifiers: none for event type "++show ty) > > +-- | Query the click type of a mouse button event. > +eventClick :: EventM EButton Click > +eventClick = do > + ptr <- ask > + liftIO $ do > + (ty :: #{gtk2hs_type GdkEventType}) <- peek (castPtr ptr) > + if ty == #{const GDK_BUTTON_PRESS} then > + return SingleClick > + else if ty == #{const GDK_2BUTTON_PRESS} then > + return DoubleClick > + else if ty == #{const GDK_3BUTTON_PRESS} then > + return TripleClick > + else if ty == #{const GDK_BUTTON_RELEASE} then > + return ReleaseClick > + else error ("eventClick: none for event type "++show ty) > + > -- | The key value. See 'Graphics.UI.Gtk.Gdk.Keys.KeyVal'. > eventKeyVal :: EventM EKey KeyVal > eventKeyVal = ask >>= \ptr -> liftIO $ liftM fromIntegral > } > > Context: > > [Fix documentation. > Axel.Simon@...**20090217180909] > [TAG 0.10.0 > Peter Gavin <pgavin@...>**20090207054815] > [gtk: fix warning about redefinition of PANGO_CHECK_VERSION when > using -fvia-C > pgavin@...**20090206194511] > [configure.ac: protect gtk2hs-config.h from multiple inclusions > pgavin@...**20090206194439] > [Add the menu demo directory. > Axel.Simon@...**20090206093119] > [win32: name installer gtk2hs-0.10.0-win32-installer > Peter Gavin <pgavin@...>**20090205053925] > [bump version to 0.10.0 > Peter Gavin <pgavin@...>**20090114022319] > [Add a new demo on context sensitive scaling. > Axel.Simon@...**20090201222439 > This demo is due to Pawel Bulkowski. > ] > [Add another function to the simple text API for combo boxes. > Correct documentation. Due to Peter Hercek. > Axel.Simon@...**20090201215514] > [Resurrect the simple text API for ComboBox and ComboBoxEntry. > Adapt the demo. Include demo in 'make installcheck'. > Axel Simon <Axel.Simon@...>**20090201134339] > [Don't use Word64 in printf as it makes ghc 6.6 fail. > Axel Simon <Axel.Simon@...>**20090201134046] > [Make functions in Cairo runnable in both, Render and IO monad. Add > a function to set font options. > Axel Simon <Axel.Simon@...>**20090131111735] > [gio: fileMakeDirectoryWithParents only since glib 2.18.0 > pgavin@...**20090128040010] > [gio: System/GIO/File.chs -> System/GIO/File.chs.pp > pgavin@...**20090128040007] > [Makefile.am: clean up and add some dependencies > Peter Gavin <pgavin@...>**20090125205928] > [gtk: add gtk/hsgtk.h, define PANGO_CHECK_VERSION for versions that > don't define it > Peter Gavin <pgavin@...>**20090125201235] > [gstreamer: do the version check correctly in hsgstreamer.h > Peter Gavin <pgavin@...>**20090125195735] > [gtk: G.U.G.ModelView.TreeView: several functions/values only > available after 2.10.0 > Peter Gavin <pgavin@...>**20090125171909] > [Makefile.am, configure.ac: change install-exec-local to install- > exec-hook; update package.conf files after installation (not before) > Peter Gavin <pgavin@...>**20090121070321] > [win32: more changes to the build scripts > Peter Gavin <pgavin@...>**20090121053754] > [win32: update scripts > Peter Gavin <pgavin@...>**20090116045531] > [Change hierarchy.list so GtkSourceMark is a GtkTextMark > Hamish Mackenzie <hamish@...>**20090124000854] > [Docu fix. > Axel.Simon@...**20090122185551] > [Makefile.am: enable parallel builds again > Peter Gavin <pgavin@...>**20090121024326] > [mk/chsDepend.in: allow spaces and tabs inside import tags > Peter Gavin <pgavin@...>**20090121023819] > [Makefile.am: add demo/treeList/TreeSort.hs to dist > Peter Gavin <pgavin@...>**20090121022503] > [Makefile.am: make Gtk2HsStore.o depend on CustomStore.o > Peter Gavin <pgavin@...>**20090121012647 > actually it needs CustomStore_stub.h, but this works > ] > [Makefile.am: add a "dep" target that will build .dep files > Peter Gavin <pgavin@...>**20090121012200 > > this target only builds the dependencies that are needed early in the > build process (namely, tools/c2hs/c2hsLocal.deps and the files > generated by chsDepend). It is depended on by the "all" target, and > so will be processed before anything else, since all recursively runs > make all-am, which does the actual building. > > ] > [Makefile.am: make objects from each package depend on objects from > dependee packages > Peter Gavin <pgavin@...>**20090121011918] > [c2hs: make C2HSConfig.hs not be generated from configure > Peter Gavin <pgavin@...>**20090120033402 > this causes all of c2hs to be rebuilt if configure is rerun. > The only thing that was needed is the path for CPP, so just use > -cpp -DCPP=$(CPP) instead. > ] > [Makefile.am: haddock 2.0 doesn't create the doc-index-[A-Z].html > files > Peter Gavin <pgavin@...>**20090120030702] > [Makefile.am: install stock icon images for docs in the right place > Peter Gavin <pgavin@...>**20090120030625] > [Makefile.am: dont read dependency files for dist target > Peter Gavin <pgavin@...>**20090120030359] > [gtksourceview2: handle null pointers correctly in > sourceLanguageManager{Get,Guess}Language > Peter Gavin <pgavin@...>**20090120022939] > [Makefile.am: use a stamp file so docs don't get rebuilt multiple > times > Peter Gavin <pgavin@...>**20090120015924] > [Makefile.am, common.mk: use dependencies instead of recursively > calling make > Peter Gavin <pgavin@...>**20090120011544] > [gstreamer: define GST_CHECK_VERSION when gstreamer version is > before 0.10.18 > Peter Gavin <pgavin@...>**20090119234510] > [Makefile.am: add a few files missing from the dist > Peter Gavin <pgavin@...>**20090119232050] > [define *_CHECK_VERSION macros only when pre-processing Haskell > source files > Peter Gavin <pgavin@...>**20090119230516 > > These macros are only needed when preprocessing Haskell source files > (because we need to handle different versions of gtk etc., but can't > directly include the C headers in the Haskell sources). So, when > we're compiling with hsc2hs, we don't need to define them (since > #include and #ifdef is handled specially by hsc2hs) and we can still > use gtk2hs-config.h for both cases by introducing another macro > GTK2HS_HS_PREPROC when we're preprocessing Haskell source. > > ] > [glade: remove glade/Glade.chs (not sure why this file is here...) > Peter Gavin <pgavin@...>**20090114053423] > [gtksourceview2: make a few imports C2HS imports > Peter Gavin <pgavin@...>**20090114044139] > [configure.ac: add gstreamer-audio-0.10 to pkg-config for gstreamer > Peter Gavin <pgavin@...>**20090113021205] > [gtksourceview2: G.U.G.SV.SourceLanguageManager: export > sourceLanguageManagerGuessLanguage > Peter Gavin <pgavin@...>**20090113001001] > [gstreamer: make peekStructure create a copy of the structure > Peter Gavin <pgavin@...>**20090112080942 > this is probably safer because it's not always safe to make the > structure immutable > ] > [mk/link-splitobjs.sh.in: add some error checking > Peter Gavin <pgavin@...>**20090112054950] > [gstreamer: giveStructure should not free the structure > Peter Gavin <pgavin@...>**20090112051646] > [gstreamer: giveStructure should not make the copy of the structure > immutable > Peter Gavin <pgavin@...>**20090112045757] > [gstreamer: fix marshaling on Structure where GValues weren't > initialized properly > Peter Gavin <pgavin@...>**20090112044035 > reported/fixed by Oleg Belozeorov <upwawet@...> > ] > [gtksourceview2: typo: can-redo -> can-undo (for sourceBufferCanUndo) > Peter Gavin <pgavin@...>**20090112043017] > [gtksourceview2: handle NULLs returned from a few functions that > return gchar** > Peter Gavin <pgavin@...>**20090112042704] > [gtksourceview2: add sourceLanguageManagerGuessLanguage > Peter Gavin <pgavin@...>**20090112041351] > [configure.ac, mk/link-splitobjs: add xargs back, but only use it > if it's available > Peter Gavin <pgavin@...>**20090112021612 > I originally took it out for portability, but xargs speeds things > up enough to make it worth putting back > ] > [gtksourceview2: add GtkSourceMark to hierarchy.list and add > version checking code to configure.ac > Peter Gavin <pgavin@...>**20090112020150] > [configure.ac: use --enable-deprecated-packages for deprecated > packages instead of --enable-deprecated > Peter Gavin <pgavin@...>**20090112015838] > [gconf: S.G.G.GConfValue: remove -fallow-overlapping-instances > (it's handled in the makefile) > Peter Gavin <pgavin@...>**20090111160247] > [configure.ac: use AC_ARG_VAR for HADDOCK > Peter Gavin <pgavin@...>**20090111155851] > [configure.ac: use AS_HELP_STRING where appropriate > Peter Gavin <pgavin@...>**20090111155821] > [Add SourceMark to gtksourceview2 > Hamish Mackenzie <hamish@...>**20090108170324] > [mk/link-splitobjs.sh: don't use xargs > Peter Gavin <pgavin@...>**20090111153932] > [workaround several warnings caused by hsc2hs on amd64 > Peter Gavin <pgavin@...>**20090110220257] > [Makefile.am: clean up _hsc_make files > Peter Gavin <pgavin@...>**20090109035620] > [Makefile.am, others: fix various haddock problems > Peter Gavin <pgavin@...>**20090109032252] > [gio: S.G.File: don't export FileClass's nonexistent methods > Peter Gavin <pgavin@...>**20090107031407] > [gtk: make NativeWindowId a newtype; under Win32 it's a pointer, > not an integer > Peter Gavin <pgavin@...>**20090107030956] > [configure.ac: use 256 megs of ram under windows > Peter Gavin <pgavin@...>**20090107025121] > [gtk: add marshallers required for Range.{on,after}ChangeValue to > gtkmarshal.list > Peter Gavin <pgavin@...>**20090102004627] > [gio: S.G.File: only export File(..) and FileClass(..), not all of > module S.G.Types > Peter Gavin <pgavin@...>**20090101184156] > [gtk: G.U.G.Abstract.Range: add onRangeChangeValue/ > afterRangeChangeValue > Peter Gavin <pgavin@...>**20090101182517] > [gtk: Range.chs -> Range.chs.pp > Peter Gavin <pgavin@...>**20090101182445] > [add gtksourceview2 package > Peter Gavin <pgavin@...>**20081219183834] > [Makefile.am: add missing html files to htmldoc_haddock_files for > Haddock 2 > Peter Gavin <pgavin@...>**20081219183700] > [mk/common.mk: use -dep-makefile instead of -optdep-f for ghc >= 6.10 > Peter Gavin <pgavin@...>**20081219174308] > [configure.ac: deprecate sourceview and gnomevfs packages > Peter Gavin <pgavin@...>**20081219165501] > [glib: S.G.Properties: add readAttrFromBoxedOpaqueProperty > Peter Gavin <pgavin@...>**20081219165308] > [acinclude.m4: add GTKHS_PKG_CHECK_DEPRECATED: like GTKHS_PKG_CHECK > but for a deprecated package > Peter Gavin <pgavin@...>**20081219163145] > [configure.ac: move DISABLE_DEPRECATED code further up in the > script so we can deprecate entire packages > Peter Gavin <pgavin@...>**20081219161605] > [expose Types and flags constructors in System.GIO.File > Ashley Yakeley <ashley@...>**20081124001402] > [Swap Events for EventM. > Axel.Simon@...**20081202163918] > [add better support for haddock 2 in configure.ac > Peter Gavin <pgavin@...>**20081109223447] > [gio: fix some build problems > Peter Gavin <pgavin@...>**20081109223406] > [gio: add System.GIO.FileAttributes and related functions > Peter Gavin <pgavin@...>**20081030155952] > [fix compilation error on GHC 6.10.1 due to new Control.Exception > module > Ross Mellgren <rmm-haskell@...>**20081106080055] > [Stop the minute hand from floating around the center. > Axel.Simon@...**20081106203656] > [Add a demo for the combo box. > Axel.Simon@...**20081027214007] > [export widgetGetIsFocus (renamed from widgetIsFocus, which is now > an attribute) > Ross Mellgren <rmm-haskell@...>**20081104010922] > [Fix unpacking of Requisition height (was unpacking width erroneously) > Ross Mellgren <rmm-haskell@...>**20081102200803] > [Add function for implementing size requisition and allocation in > widgets. > Axel.Simon@...**20081101132943] > [Add a function to retrieve the current text. This function is not > strictly necessary. > Axel.Simon@...**20081027200037] > [Rename signals in StatusIcon since they will clash eventually. > Axel.Simon@...**20081027200005] > [Make the completion demo much more funky. > Axel.Simon@...**20081027195937] > [Forgot to add resources for DND demo. > Axel.Simon@...**20081027195141] > [Fix documentation of Pixbuf. Change the return type of pixbufSave. > Axel.Simon@...**20081026170015] > [Make IconSize accept new user-defined sizes. > Axel.Simon@...**20081026164212] > [Adapt demos to use new EventM module. > Axel.Simon@...**20081026164056] > [Revise and test the new EventM module. > Axel.Simon@...**20081026163948] > [Export on and after by default. > Axel.Simon@...**20081026163803 > Since this release will break programs, I think it is the rigth > time to > always export these two functions. There is a clash with a > function in > Data.Function. I think this is not so serious since Data.Function > does > not contain a lot of terribly useful functions and is thus not > likely to > be imported by default. > ] > [Make 'after' a separate definition so it appears in the > documentation. > Axel.Simon@...**20081026163738] > [Import Events explicitly due to the new Events module. > Axel.Simon@...**20081026100026] > [gio: don't use Control.Monad.>=> (it's not in ghc-6.6 or earlier) > Peter Gavin <pgavin@...>**20081020132048] > [Mend some build problems with GIO. > A.Simon@...**20081019132615] > [gio: S.G.File: more function definitions > Peter Gavin <pgavin@...>**20081019141246] > [gio: System.GIO.File: add more functions; make the functions that > do no IO use unsafePerformIO > Peter Gavin <pgavin@...>**20081017185905] > [Change Graphics.Rendering.Cairo.SVG.svgRender to return Bool > Bertram Felgenhauer <int-e@...>**20081017133418 > The C API has changed to return a gboolean in librsvg 2.22.3, so we > return a Bool now as well. For previous versions, always return True, > indicating success. > ] > [refactor configure.ac > Bertram Felgenhauer <int-e@...>**20081017115241 > add and use GTKHS_PACKAGE_ADD_CHECK_VERSION macro for adding > FOO_CHECK_VERSION macros to gtk2hs-config.h > ] > [gio: fix copyright header in source files > Peter Gavin <pgavin@...>**20081017172501 > the copyright was copied from another source file, and was incorrect > ] > [make gtk build when cairo is disabled > Peter Gavin <pgavin@...>**20081017165854] > [Makefile.am: add gio/System/GIO/AsyncResult.chs to libHSgio_a_SOURCES > Peter Gavin <pgavin@...>**20081017165039] > [gio: add module System.GIO > Peter Gavin <pgavin@...>**20081017164829] > [gio: S.G.File: specify module exports > Peter Gavin <pgavin@...>**20081017163901] > [gio: S.G.AsyncResult: specify module exports > Peter Gavin <pgavin@...>**20081017163845] > [acinclude.m4: make error message for missing internal dependency > better > Peter Gavin <pgavin@...>**20081017162707] > [gstreamer: M.S.G.Core.Bin: make haddock happy > Peter Gavin <pgavin@...>**20081017155250] > [gstreamer: M.S.G.Core.Iterator: export IteratorResult members > Peter Gavin <pgavin@...>**20081017155227] > [gstreamer: M.S.G.Core.Event: export SeekFlags and SeekType > Peter Gavin <pgavin@...>**20081017155153] > [Makefile.am: use -XOverlappingInstances instead of -fallow- > overlapping-instances for ghc >= 6.10 > Peter Gavin <pgavin@...>**20081017155123] > [initial import of gio binding > Peter Gavin <pgavin@...>**20081017154950] > [configure.ac, Makefile.am, acinclude.m4: allow gtk to be > conditionally built > Peter Gavin <pgavin@...>**20081017154200 > additional, add rudimentary internal dependency checking between > subpackages. > for example, if gtk is not built, then gtkglext and mozembed > shouldn't be built. > ] > [Fix autoconf check for cairo version for recent cairo > Bertram Felgenhauer <int-e@...>**20081015172138 > In cairo-1.8.0, cairo-features.h got split into cairo-features.h and > cairo-version.h. Instead of detecting this situation, this fix uses > cairo.h, which will work with both old and new cairo versions. > ] > [Always generate a _stub.o file so we don't have to link it in > conditionally in the Makefile. > A.Simon@...**20081014120131] > [Conditionally compile a function for old Cairo verions. > A.Simon@...**20081014034440] > [Forgot to add the actual EventM file. > A.Simon@...**20081014032355] > [Add new way to access information in events. > A.Simon@...**20081012222705 > > This patch makes it possible to pass around the pointer to an > event structure > without marshalling all its content. The new modules is called > EventM and > will replace the Events module. For now, I have stopped exporting > Events > from Gtk by default. It might be best to force people to > explicitly import > the events module from now on which might be ok, since it is > rather low > level stuff that is only needed when a special widget is to be > implemented. > > The advantage of the monad encapsulation is that it is now > possible to bind > functions that take a pointer to an event as parameter. There are > only a few > of these, but it is very hard to create a pointer to an event from > scratch. > > ] > [Correct spelling mistake of func def. > A.Simon@...**20081012093725] > [add {-# OPTIONS_HADDOCK hide #-} to a few more files > Peter Gavin <pgavin@...>**20081009190336] > [add {-# OPTIONS_HADDOCK hide #-} to a few files > Peter Gavin <pgavin@...>**20081009122938] > [update tools/c2hs/base/general/Binary.hs to work with GHC 6.10 > Peter Gavin <pgavin@...>**20081009000300 > the instance for Binary Integer needed to be rewritten, because > the old version used internal data for the Integer type. > The new instance only uses Bits functions, so should be portable. > ] > [add HAVE_NEW_CONTROL_EXCEPTION define to configure.ac; modify > several files to import OldException when it's defined > Peter Gavin <pgavin@...>**20081008201356 > ghc 6.10 introduces a new exception system, with several functions > that are different from the original Control.Exception module. > This new Control.Exception module is in base>=4.0. That version > moved the old exception interface to the module Control.OldException. > When we get around to fixing everything for real, grepping for > OldException should list the modules that need changing. > ] > [Makefile.am: use -XForeignFunctionInterface instead of -fffi for > GHC 6.10 and newer > Peter Gavin <pgavin@...>**20081008170731] > [configure.ac: check version of GHC and define makefile conditional > GHC_VERSION_610 for GHC >= 6.10 > Peter Gavin <pgavin@...>**20081008170701] > [use versioned dependencies everywhere (make ghc-6.10 happy) > Peter Gavin <pgavin@...>**20081008171309] > [Makefile.am, configure.ac: make haddock2 work > Peter Gavin <pgavin@...>**20080923003230 > Haddock 2.3 or better is required (which, at the time being, has > yet to be released) > ] > [gstreamer: M.S.G.Core.Format: document everything; remove > formatsContains as it's pretty redundant > Peter Gavin <pgavin@...>**20080824020113] > [gstreamer: Format should not be an enumeration. Some values have > special meanings, others will use FormatId > Peter Gavin <pgavin@...>**20080824015633] > [gstreamer: M.S.G.Core.Constants: remove export list > Peter Gavin <pgavin@...>**20080824015608] > [gstreamer: M.S.G.Core.Clock: document attributes > Peter Gavin <pgavin@...>**20080824015544] > [gstreamer: document those functions that are ony available after > some gstreamer version > Peter Gavin <pgavin@...>**20080824014548] > [Makefile.am: if WIN32 for haddock stuff should be if !WIN32 > Peter Gavin <pgavin@...>**20080820230212] > [Make happy happy. > A.Simon@...**20081005100851] > [Cairo produces a stub file since the addition of the Pixbuf as > Cairo functions. > A.Simon@...**20081005100727] > [Add TreeRowReference back to the TreeList directory. > A.Simon@...**20081005100634] > [Rename a few function in CustomStore to make the naming more > consistent. Add more comments. > A.Simon@...**20080920193454] > [Make CellLayout clever about translating iterators to child models. > A.Simon@...**20080920192810 > > This patch is a hack around a bug in Gtk+ that is at least present > in EntryCompletion. Often the CellLayout interface is only half- > heartedly implementedin the widgets. For instance the > EntryCompletion widget takes a model with the completion and wraps > a TreeModelFilter around it which is used to only admit selected > entries into the drop down list. Although EntryCompletion > implements the CellLayout interface, all the > gtk_cell_layout_set_cell_data_func does is to forward any queryies > to the contained model that the user has set. What it should do is > to translate from the filter model to the user model. In order to > work around this bug, this patch adds the functionality to > cellLayoutSetAttributeFunc that checks if the model in the call > back is the same as the model that was given as an argument. If > not, the model in the callback is checked to be a TreeModelFilter > or a TreeModelSort and the iterator is then automatically > translated. This patch therefore also makes it possible to define > how the values of cell renderers should be set even if the view > that contains the cell renderers accesses them using a > TreeModelFilter or a TreeModelSort. > > ] > [Add a few functions to MessageDialog. > A.Simon@...**20080915212455] > [Hijack the copyright of these orphaned modules. > A.Simon@...**20080915212439] > [Export missing flags. > A.Simon@...**20080915212417] > [Fix a link in the documentation to refer to object DrawWindow. > A.Simon@...**20080915212341] > [Add a treeModelValueGet function. > A.Simon@...**20080915211739 > This patch makes it possible to retrieve the content of a store > using the > TreeModel interface, that is, by using predefined columns. The > ColumnId > type had to be amended for that. I used the opportunity to move its > definition to the TreeModel module, where it belongs. As a > consequence > the two functions in CustomStore whose name started with treeModel > have been renamed to customStoreGetRow and customStoreSetColumn > since they > really only make sense on implementations of CustomStores. The > original > names are no longer documented but still exported in order not to > break > exisiting programs. > ] > [Add a few Show and Eq instances to Cairo. > A.Simon@...**20080915211708] > [Add forgotten stub files for Entry and ComboBox > A.Simon@...**20080915211357] > [Fix demo with respect to some recent name changes and export changes. > A.Simon@...**20080913205246] > [Remove the prefix New. Add automatic search. > A.Simon@...**20080913205204] > [Rename the toggle signal in CellRendererToggle since it clashed > with ToggleButton. > A.Simon@...**20080907123021] > [Fix the names of the color attributes. > A.Simon@...**20080907122938] > [Forgot to link the stub file for Clipboard. > A.Simon@...**20080907122616] > [Add the matchSelected signal to EntryCompletion. > A.Simon@...**20080907122359 > Using this signal and using the CellLayout functions to connect to > a model > is broken in Gtk+, see? > id=551202 > ] > [Make the new ModelView API the default. > A.Simon@...**20080907121958] > [Remove name of previous author since the whole content has changed. > A.Simon@...**20080906185342] > [Fix a memory management error. > A.Simon@...**20080906184451] > [Add Clipboard functionality. > A.Simon@...**20080906184126 > This patch replaces the two definitions that have been in the > Clipboard > module. I didn't bind the functions that wait for the clipboard data > to arrive since it should be easy enough to use callback functions in > Haskell. > ] > [Add function to Selection. > A.Simon@...**20080906184045] > [Rename tagNew to atomNew. > A.Simon@...**20080906183854] > [Add ability to toggle ticks in list. > A.Simon@...**20080904210158] > [Export and fix functions for querying display size. > A.Simon@...**20080902200710] > [Repair general DND functionality on lists. > A.Simon@...**20080902200127 > > This patch not only represents some bug fixes, it also adds a few > tags that > make the actual use of DND easier. Some functions such as > selectionDataGet > have changed in order to make them safer. It should be easy now to > add > Clipboards. What needs to be tested is if all functions are there to > implement DND on normal widgets. An example for this would be quite > elaborate, hence I have no example yet. > ] > [Add a demonstration of DND on lists. > A.Simon@...**20080902200031] > [Int64 GObject property getters and setters used incorrect type. > Reported by Oleg Belozeorov. > A.Simon@...**20080827172436] > [Fix bug #1138. > A.Simon@...**20080821212236 > I forgot to set the stamp in those TreeIters correctly that were > created > directly in ListStore. It was all correct in TreeStore. > > ] > [Revamp widget by adding most methods and separating signals from > events. Keep only the old style signals with the mixture of events > and signals that we had before in order not to break anything. > A.Simon@...**20080821211847] > [Add getter and setter for Maybe Objects. > A.Simon@...**20080821211658] > [Add `isA` function to GObjectClass and gType* functions for each type > hamish@...**20080713093509 > > You can use this function to test the type of an object. For > instance.. > > when (obj `isA` gTypeNotebook) > doSomething $ castToNotebook obj > > ] > [Missing window focus functions > hamish@...**20080713092851] > [Add Show instance to Event structure. > A.Simon@...**20080721204057] > [We do not need -fvia-C just because we are using -fffi > Duncan Coutts <duncan@...>**20080714150259 > It used to be recommended practise but no longer. > In addition -fvia-C causes problems on ppc32. > ] > [Add a few attributes in to Widget. > A.Simon@...**20080708200051] > [Make ComboBox entry work with the new tree stores. > A.Simon@...**20080708193256] > [Add a few attributes and signals. > A.Simon@...**20080708193102] > [Renamed the Container.chs file to Container.chs.pp > A.Simon@...**20080708192954] > [Correct Dialog syntax for Haddock tree > Marco Túlio Gontijo e Silva <marcot@...>**20080627215555] > [data IconSize instead of just Int > Marco Túlio Gontijo e Silva <marcot@...>**20080628013750] > [gtk: add conditional compilation for > Gdk.Screen.screenGetActiveWindows (>= gtk-2.10) > Peter Gavin <pgavin@...>**20080625040227] > [Some improvements in MessageDialog documentation > Malebria <malebria@...>**20080624232121] > [String -> StockId > Malebria <malebria@...>**20080530125036] > [win32 installer: remove old code, correct the ghc version in some > of the error messages > Peter Gavin <pgavin@...>**20080624191819] > [make the win32 installer not require admin rights > Peter Gavin <pgavin@...>**20080623185620] > [Add widgetSensitivity attribute > Duncan Coutts <duncan@...>**20080623160607 > Just noticed that the Real World Haskell book should > be using this attribute except that it was missing. > ] > [Add widgetGet/SetColormap and widgetGetScreen > Duncan Coutts <duncan@...>**20080622190639] > [Makefile.am: URI.chs -> URI.chs.pp > Peter Gavin <pgavin@...>**20080621221502] > [demos/fastdraw: comment out Show instance for Rectangle (as it's > been implemented in the library) > Peter Gavin <pgavin@...>**20080621221054] > [win32 build script updates > Peter Gavin <pgavin@...>**20080621212947] > [Makefile.am: add gstreamer & gnomevfs demos > Peter Gavin <pgavin@...>**20080621210833] > [gstreamer: fix demo so it works again > Peter Gavin <pgavin@...>**20080621210812] > [gnomevfs: add mtl to depends in gnomevfs.package.conf.in > Peter Gavin <pgavin@...>**20080621202831] > [gnomevfs: Types: only include S[GU]ID bits in perms if not > windows; remove the Perm*All flags > Peter Gavin <pgavin@...>**20080621170312 > the Perm*All flags should really be values, not constructors > ] > [Makefile.am: win32 version of haddock doesn't generate doc-index- > *.html, just the doc-index.html file > Peter Gavin <pgavin@...>**20080621160015] > [Makefile.am: add gstreamer/marshal.list; fix typo > Peter Gavin <pgavin@...>**20080621153244] > [configure.ac: bump version to 0.9.13 > Peter Gavin <pgavin@...>**20080621003719] > [Makefile.am: typo: I added gnomevfs/hsgnomevfs-2.16.h, not > gnomevfs/hsgnomevfs-2.14.h! > Peter Gavin <pgavin@...>**20080621003631] > [gstreamer: a few small doc fixes > Peter Gavin <pgavin@...>**20080621002315] > [Makefile.am: add gnomevfs precomp stuff > Peter Gavin <pgavin@...>**20080621002227] > [Add transparency support to the clock demo > Duncan Coutts <duncan@...>**20080620102255] > [Add Screen type > Duncan Coutts <duncan@...>**20080620101502 > This gives us the functions for getting colour maps with > alpha channels which lets us make transparent windows. > ] > [update AUTHORS file > Peter Gavin <pgavin@...>**20080530164412] > [Missing link > Malebria <malebria@...>**20080601160318] > [gstreamer: M.S.G.Core.Types: hide Foreign.Marshal.Utils.withObject > Peter Gavin <pgavin@...>**20080522002448] > [gstreamer: hide System.Glib.FFI.withObject everywhere > Peter Gavin <pgavin@...>**20080522002116] > [gstreamer: M.S.G.Core.Clock: hide Foreign.withObject > Peter Gavin <pgavin@...>**20080521234336] > [gstreamer: M.S.G.Core.Message: several functions need conditional > compilation > Peter Gavin <pgavin@...>**20080521234113] > [gstreamer: M.S.G.Core.Pad: padIsBlocking only since gstreamer >= > 0.10.11 > Peter Gavin <pgavin@...>**20080521233905] > [gstreamer: M.S.G.Core.Element: elementStateChangeReturnGetName > only since gstreamer >= 0.10.11; elementSeekSimple only since > gstreamer >= 0.10.7 > Peter Gavin <pgavin@...>**20080521233720] > [gstreamer: M.S.G.Core.Event: event{New,Parse}Latency only since > gstreamer 0.10.12 > Peter Gavin <pgavin@...>**20080521233526] > [gstreamer: M.S.G.Core.Object: hide Foreign.Marshal.Utils.withObject > Peter Gavin <pgavin@...>**20080521233247] > [gstreamer: M.S.G.Base.Adapter: don't compile ByteString stuff for > ghc < 6.6 > Peter Gavin <pgavin@...>**20080521232217] > [gstreamer: M.S.G.Core.Buffer: only compile BteString functions if > ghc >= 6.4 > Peter Gavin <pgavin@...>**20080521231527] > [gstreamer: M.S.G.Core.Constants: document a few missing enum > values; insert missing conditional compilation > Peter Gavin <pgavin@...>**20080521231145] > [gstreamer: M.S.G.DataProtocol: DPVersion and DPPacketizer only > since 0.10.7 > Peter Gavin <pgavin@...>**20080521230707] > [gstreamer: M.S.G.Base.Adapter: adapterCopy{,Into} only since > gstreamer >= 0.10.12 > Peter Gavin <pgavin@...>**20080521230043] > [gstreamer: M.S.G.Base.BaseSink: baseSink{Query,Get}Latency needs > gstreamer >= 0.10.12, baseSinkWaitPreroll needs gstreamer >= 0.10.11 > Peter Gavin <pgavin@...>**20080521225737] > [gstreamer: M.S.G.Base.BaseSrc: baseSrcWaitPlaying only since > gstreamer >= 0.10.12 > Peter Gavin <pgavin@...>**20080521225417] > [gstreamer: M.S.G.Core.TagList: tagListIsEmpty only since gstreamer > >= 0.10.11 > Peter Gavin <pgavin@...>**20080521224921] > [gstreamer: M.S.G.Core.Bus: busTimedPop only since gstreamer >= > 0.10.12 > Peter Gavin <pgavin@...>**20080521224722] > [gstreamer: M.S.G.Core.Caps: caps{Merge,Remove}Structure only since > gstreamer >= 0.10.10 > Peter Gavin <pgavin@...>**20080521223539] > [gstreamer: M.S.G.Core.GhostPad: ghostPadNew{,NoTarget}FromTemplate > only since gstreamer >= 0.10.10 > Peter Gavin <pgavin@...>**20080521223516] > [ gstreamer: M.S.G.Core.Init: updateRegistry only since gatreamer > >= 0.10.12 > Peter Gavin <pgavin@...>**20080521223329] > [gstreamer: M.S.G.Core.Init: segtrap{Is,Set}Enabled & registryFork > {Is,Set}Enabled only since gstreamer >= 0.10.10 > Peter Gavin <pgavin@...>**20080521215238] > [gstreamer: DataQueue only available for gstreamer >= 0.10.11 > Peter Gavin <pgavin@...>**20080521214132] > [gnomevfs: add conditional compilation based on version > Peter Gavin <pgavin@...>**20080521060138] > [gnomevfs: read/write not supported on ghc < 6.6; no ByteString > Peter Gavin <pgavin@...>**20080521055338 > 6.4 is much older than I care to worry about, chances are no-one > will use it anyhow > ] > [gnomevfs: require gnome-vfs-module-2.0 > Peter Gavin <pgavin@...>**20080521053735] > [gtk: check for gtk >= 2.8.0 and cairo for functions using > Cairo.Matrix > Peter Gavin <pgavin@...>**20080521030909] > [gtk: G.U.G.ModelView.IconView: gtk_target_table_new_from_list not > available until gtk-2.10.0 > Peter Gavin <pgavin@...>**20080521015314] > [gtk: G.U.G.Gdk.PixbufData: MArray.bounds(PixbufData): PixbufData > has 4 arguments > Peter Gavin <pgavin@...>**20080521014154 > that part was only compiled for < ghc-6.5, so we missed it until now > ] > [gstreamer: M.S.G.Core.ElementFactory: > gst_element_factory_has_interface not available before > gstreamer-0.10.14 > Peter Gavin <pgavin@...>**20080520020741] > [gstreamer: M.S.G.Core.ElementFactory: Control.Monad.>=> not > available before ghc 6.8 > Peter Gavin <pgavin@...>**20080520011633] > [gstreamer: M.S.G.Core.Types.chs: staticPadTemplateGet: > Control.Monad.>=> not available before ghc 6.8; don't export > StaticPadTemplate constructors > Peter Gavin <pgavin@...>**20080520010014] > [gtk: G.U.G.ModelView.ComboBox: a foreign import wrapper is > conditionally generated, so generate a dummy function (to force the > stub file) > Peter Gavin <pgavin@...>**20080519031907] > [gtk: G.U.G.ModelView.TreeView.TreeViewGridLines not available > until gtk-2.0 >= 2.10.0 > Peter Gavin <pgavin@...>**20080519030255] > [gtk: G.U.G.Pango.Types.PangoGravity{,Hint} only available in pango > >= 1.16.0 > Peter Gavin <pgavin@...>**20080519024423] > [gtk: G.U.G.Pango.Structs: PangoAttribute(AttrLetterSpacing) only > available in pango >= 1.6 > Peter Gavin <pgavin@...>**20080519023445] > [gtk: gtk_target_table_new_from_list & gtk_target_table_free only > available for gtk-2.0 >= 2.10, so disable > treeViewEnableModelDragDest & treeViewEnableModelDragSource > Peter Gavin <pgavin@...>**20080519021037] > [gnomevfs: S.G.VFS.Drive.driveGetMountedVolumes only available for > gnome-vfs-2.0 >= 2.8.0 > Peter Gavin <pgavin@...>**20080519020330] > [gnomevfs: S.G.VFS.Drive.driveGetHalUDI only available for gnome- > vfs-2.0 >= 2.8.0 > Peter Gavin <pgavin@...>**20080519020012] > [gnomevfs: S.G.VFS.Volume.volumeGetHalUDI only available for gnome- > vfs-2.0 >= 2.8.0 > Peter Gavin <pgavin@...>**20080519015626] > [gnomevfs: S.G.VFS.URI.uriResolveSymbolicLink only available in > gnome-vfs >= 2.16.0 > Peter Gavin <pgavin@...>**20080519014847] > [glib: System.Glib.MainLoop: g_source_id_destroyed is only > available since 2.12.0 > Peter Gavin <pgavin@...>**20080519014042] > [gstreamer: more docs for M.S.G.Core.Bus > Peter Gavin <pgavin@...>**20080519004951] > [gstreamer: more docs in M.S.G.Core.Caps > Peter Gavin <pgavin@...>**20080519005233] > [gstreamer: small fixes in M.S.G.Core.Event > Peter Gavin <pgavin@...>**20080519004859] > [gstreamer: M.S.G.Core.Buffer documentation edits and additions > Peter Gavin <pgavin@...>**20080517235327] > [gstreamer: document M.S.G.Core.Bin > Peter Gavin <pgavin@...>**20080517194736] > [Makefile.am: add docs/reference/haddock-util.js to htmldoc_DATA > Peter Gavin <pgavin@...>**20080517191140] > [gstreamer: document & implement missing API for > M.S.G.Core.ElementFactory > Peter Gavin <pgavin@...>**20080218011206] > [gstreamer: add StaticPadTemplate to M.S.G.Core.Types > Peter Gavin <pgavin@...>**20080218011120] > [Add a demo on how to construct a menu by hand. Submitted by J. > Romildo. > A.Simon@...**20080426135712] > [typo in Windows.Dialog > Antonio Regidor García <a_regidor@...>**20080406125153] > [Add a function to retrieve all toplevel windows. > Axel.Simon@...**20080331151712] > [Bring Pango up to date. Implement parseMarkup. > A.Simon@...**20080302222154] > [Add a function to retrieve an OS window. > A.Simon@...**20080302220305] > [Do not install non-existant HTML files. > A.Simon@...**20080302220127] > [Add Eq and Show instances for Gdk enumerations. > A.Simon@...**20080226124040] > [Use all three colour components for text attribs. > Thomas Schilling <nominolo@...>**20080217150259 > This was just a simple typo propagated by copy'n'paste. > ] > [Hide the Controller/Types.hs file in the gstreamer package. > A.Simon@...**20080212094444 > This patch repairs a conflict that occurred in a patch bundle form > Peter Gavin. > I don't quite know if these files should be excluded from haddock. > ] > [gstreamer: M.S.G.Core.Clock: a few small doc fixes > Peter Gavin <pgavin@...>**20080212041232] > [gstreamer: M.S.G.Core.Element: document everything, use new signal > types > Peter Gavin <pgavin@...>**20080212040550] > [gstreamer: use new signals instead of onSignal/afterSignal for > M.S.G.C.{Bin,Bus} > Peter Gavin <pgavin@...>**20080212010938] > [gstreamer: M.S.G.Core.Types: export mkCaps & unCaps, document > MiniObjectM > Peter Gavin <pgavin@...>**20080116191424] > [gstreamer: document M.S.G.Core.Clock > Peter Gavin <pgavin@...>**20080116191205] > [gstreamer: add -fglasgow-exts for M.S.G.Core.{Types,Caps} in > Makefile.am > Peter Gavin <pgavin@...>**20080116191123] > [gstreamer: M.S.G.Core.Caps: document everything; code cleanups, > remove capsMerge and capsAppend; add capsCopyNth > Peter Gavin <pgavin@...>**20080116040349] > [gstreamer: M.S.G.Core.Bus documentation cleanup > Peter Gavin <pgavin@...>**20080116040324] > [gstreamer: M.S.G.Core.Buffer: cleanup docs; make bufferCreateSub pure > Peter Gavin <pgavin@...>**20080116040226] > [gstreamer: documentation cleanups in M.S.G.Core.Bin > Peter Gavin <pgavin@...>**20080116040143] > [gstreamer: make MiniObjectT use a pointer instead of a wrapped object > Peter Gavin <pgavin@...>**20071111162356 > this way is better because > a) its more efficient not to unwrap it every time we're going to > use it > b) I figure it's best not to wrap the pointer in a foreign ptr until > we give it to the user > ] > [gstreamer: hide M.S.G.{Audio,Net}.Types from haddock > Peter Gavin <pgavin@...>**20071111001048] > [gstreamer: make MiniObjectM into a proper monad transformer, and > rename it to MiniObjectT > Peter Gavin <pgavin@...>**20071111001036] > [gstreamer: fix ByteString code in M.S.G.Base.Adapter > Peter Gavin <pgavin@...>**20071111000952] > [gnomevfs: fix use of ByteStrings in S.G.V.Ops; remove > Data.ByteString import from S.G.V.Types > Peter Gavin <pgavin@...>**20071111000442] > [Add a demo on using columns in tree models. > A.Simon@...**20080211101437] > [Typo. > A.Simon@...**20080211100620] > [Fix entry completion. Make CustomStore report gap columns as int > columns to avoid warnings. > A.Simon@...**20080211100514] > [Be more optimistic about things being 'BROKEN'. > A.Simon@...**20080210192638] > [Add a small text file describing the demos, how to call them, > errors I get with them, what I think (may well be incomplete) they > need as configure options or packages installed. > Paul Dufresne <dufresnep@...>**20080131080557] > [Review the support for creating columns. > A.Simon@...**20080210191616 > Column numbers must now be declared as constants. The extraction > function > has then to be set by 'treeModelSetColumn' which replaces the > function > to update a column. This makes everything more low-level and can lead > to type errors, however, the result is that Gtk will create warnings > if the requested value is of the wrong type when reading from the > store. > ] > [Fix the get side of all container child attributes. > Duncan Coutts <duncan@...>**20080128170816 > It was accidentally calling 'set' rather than 'get'. doh! :-) > ] > [import all C types in gstreamer hierarchy > A.Simon@...**20080131204505] > [Allow filenames with non-ASCII characters. > A.Simon@...**20080129192446 > Given the right locale, cpp inserts name such as 'command line' > and 'internal' in the local language. The special characters break > the scanner of c2hs which > this patch hopefully fixes. > Also correct three spelling mistakes. > ] > [Tutorial-Port Spanish Cairo (Appendix) > hthiel.char@...**20080128144137 > Spanish Translation of Drawing with Cairo: Getting Started > Added links English-Spanish and vice versa in the index files > Corrected typo and improved unclear formulation in English appendix > corrected .next' in Spanish index > ] > [Also derive Eq for Modifier > David Leuschner <david@...>**20080128121108 > As the modifier list on Linux platforms now also includes "Alt2" when > "Control" is pressed it is convenient to be able to write > "Control `elem` modifiers" instead. > ] > [Tutorial-Port Starting Cairo Drawing (Appendix 1) > hthiel.char@...**20080123150738] > [Use the new Modifer type in Events generated by c2hs. > A.Simon@...**20080123133742 > This patch removes the manually declared definition of Modifier in > Events > which was broken since it lacked the enumTo method of enum. Since > c2hs > is now fixed to always generate these methods, I've defined > Modifier in > Gdk/Enums and merely import it. Before the fix to c2hs, c2hs would > only > generated the enumTo instance if the data type was deriving Eq since > the previous implementation did an equality test. This complicated > behaviour > of c2hs probably led me back then to implement Modifier by hand in > Events since > it wouldn't work and I couldn't understand why. > > ] > [Add Show instances to Enums. > A.Simon@...**20080123125608] > [Revert the changes to the toFlags function since c2hs is now fixed. > A.Simon@...**20080123125159] > [Had the wrong type in mind when I did the last patch. > A.Simon@...**20080122105341] > [Fix the type that is read by widgetGetState. > A.Simon@...**20080122104814] > [Fix Enum instance generation so enumFrom works. > A.Simon@...**20080120231618 > This patch changes the generation of enumFrom functions in Enum > instances > produced by c2hs. Before, the enumFrom functions would only be > generated if > Eq was in the deriving clause of the hook. However, there is a way to > define the enumFrom function even without an Eq instance, which is > what > the patch implements. > ] > [Add succ and pred definitions to Modifier and fix toFlags function. > A.Simon@...**20080119201849] > [Fixed Enum class for Modifier and toModifier > David Leuschner <david@...>**20080118172605] > [Tutorial-Port: Correction of typing errors. > hthiel.char@...**20080102141654 > Eduardo Basterrechea has noted the typing errors he found > during his translation to Spanish of the Gtk2Hs tutorial > chapters 1 - 7. These and a few more are corrected here. > ] > [Tutorial-Port Spanish Chapters 5, 6 and 7 > hthiel.char@...**20080101165758 > (also corrected a previous typo) > ] > [Tutorial-Port Spanish Translation Chapter 4 (1-7) > hthiel.char@...**20071215174442] > [Do not show this internal module in the docs. > A.Simon@...**20080103210149] > [Add a demo on the TreeModelSort proxy. > A.Simon@...**20080103192429] > [Add the TreeSortable interface. > A.Simon@...**20080103190026 > > This interface allows a view to sort the underlying model. Since our > CustomStore does not implement this interface, the only object that > actually does implement it is the TreeModelSort proxy which takes > a child > model and sorts its rows on-the-fly. > ] > [Update the TreeModelSort proxy to work with the new Haskell-land > stores. > A.Simon@...**20080103185641 > > This patch makes the TreeModelSort proxy fully functional. > TreeModelSort > encapsulates any model and provides a sorted view of it. > > ] > [Add some bogus function that can be used to implement the sortable > interface in our CustomStore. > A.Simon@...**20080103185545] > [Allow a more general function to set CellRenderer attributes. > A.Simon@...**20080103185021 > > The function cellLayoutSetAttributes was rather specific in that > it only > allowed setting attributes of the given CellRenderer from the > given model. > Allow a more general variant which is necessary when using proxy > models > like TreeModelSort or TreeModelFilter. > ] > [Add Show instances to enumerations. > A.Simon@...**20080103182105] > [Make treeModelSortNewWithModel safe. > A.Simon@...**20071213142856] > [Correct CellLayout to compiler with Gtk 2.4. > A.Simon@...**20071210141435] > [Fix PixbufData's MArray instance for ghc-6.8 > Duncan Coutts <duncan@...>**20071210033636 > The MArray class gained a getNumElements method in ghc 6.8 > Also take the opportunity to tidy things up a bit. > ] > [Forgot to add the file with helper functions for tree DND. > A.Simon@...**20071209081148] > [Tutorial-Port-Spanish Chapters 1,2,3 and index > hthiel.char@...**20071208154317] > [Export DialogFlags which is used as an arg of several > MessageDialog functions > Duncan Coutts <duncan@...>**20071127021150] > [Add DND functions to IconView. > A.Simon@...**20071204213515] > [Docu fixes. > A.Simon@...**20071204213421] > [Fix stamp problems in tree implementation. > A.Simon@...**20071204212616 > > The story of TreeIter stamps is as follows: ListStore doesn't use > them since > the iterators remain valid throughout. TreeStore needs to > invalidate the > iterators each time a node is inserted since a new node might mean > that it > can't be indexed anymore with the available number of bits at that > level. > The function that inserts nodes correctly invalidated the tree > iterators. > However, this function also calls methods in TreeModel, passing in > order > to notify itself and the view. The passed in TreeIters have no > stamp set, > thus a warning is raised as soon as these tree iters get back to > the model, > usually in a form of ref'ing the node. This was slightly confusing as > setting the stamp was so far automatically done in the C interface > Gtk2HsStore.c. The stamp is now set explicity in the TreeIters > before they > are passed to the functions in the model. > > ] > [Tutorial-Port Introduction (Chapter 1) > hthiel.char@...**20071130175603] > [Tutorial-Port Popup, Radio and Toggle Actions (Chapter7.2) > hthiel.char@...**20071128165052] > [Update TreeView and TreeViewColumn with new functions. > A.Simon@...**20071202213657] > [Complete the default drag and drop interface for ListStore. > A.Simon@...**20071202211958 > > This patch completes the former patch which adds the C functions > to the C custom model. With this patch it is possible to reorder > the rows of a list store using drag and drop. The drag and drop for > tree stores doesn't quite work yet since I'm not sure what the > exact semantics are (do we move a row at a time, or a full tree? is > the last index of the drop path the position where we insert the > new row?). This patch also enables the reordering in the two > listtest and treetest demo programs. > ] > [Add drag and drop interfaces to the CustomStore. > A.Simon@...**20070720123434 > > Both, ListStore and TreeStore should implement the drag and drop > interfaces because the Gtk ListStore and TreeStore do. However, I'm > not sure yet what they exactly implement; I assume they only accept > rows from the same store. Figure out how to implement this in > Haskell and provide a way to add the possiblity of accepting other > dnd sources and destinations as well. > ] > [Tutorial-Port Menus and Toolbars (chapter 7.1) > hthiel.char@...**20071124130324] > [Tutorial Port Paned Windows and Aspect Frames (Chapt.4) > hthiel.char@...**20071119134955] > [Add to General Gdk: screen size querying, pointer grabbing, > keyboard grabbing > Bit Connor <bit@...>**20071118123425] > [Add Gdk Cursor module > Bit Connor <bit@...>**20071118123231] > [Add Gdk Event related 'currentTime' function (binding to > GDK_CURRENT_TIME) > Bit Connor <bit@...>**20071118122630] > [Tutorial_Port: The Layout Container (Chapter 6.3) > hthiel.char@...**20071109152940] > [gstreamer: M.S.G.Bus: improve API, add documentation > Peter Gavin <pgavin@...>**20071110052545] > [glib: S.G.MainLoop: remove g_ prefix in some {#call#} tags for > consistency (and to fix build error) > Peter Gavin <pgavin@...>**20071110042515] > [glib: change Word to HandlerId in a few places > Peter Gavin <pgavin@...>**20071110042128] > [glib: S.G.MainLoop: new API: sourceRemove > Peter Gavin <pgavin@...>**20071110041737] > [glib: S.G.MainLoop: new API: mainContextFindSourceById, > sourceDestroy, sourceIsDestroyed > Peter Gavin <pgavin@...>**20071110040450] > [gstreamer: M.S.G.Core.Types code cleanups > Peter Gavin <pgavin@...>**20071109001145] > [gnomevfs: use c2hs {#enum#} for FilePermissions instead of hand > coding it > Peter Gavin <pgavin@...>**20071108034949] > [gstreamer: M.S.G.Core.Buffer: add documentation, API improvements > Peter Gavin <pgavin@...>**20071103135504] > [gstreamer: document M.S.G.Core.Constants; add version checks, > remove MessageStateDirty (deprecated) > Peter Gavin <pgavin@...>**20071102003142] > [gstreamer: add docs & fix formatting in M/S/G/Core/Bin.chs; rename > to Bin.chs.pp > Peter Gavin <pgavin@...>**20071031200139] > [gstreamer: move M.S.G.Core.Buffer.bufferOffsetNone to > M.S.G.Core.Constants and use #{const ...} > Peter Gavin <pgavin@...>**20071030001137] > [configure.ac: add GSTREAMER_CHECK_VERSION macro > Peter Gavin <pgavin@...>**20071029222529] > [Makefile.am: filter CFLAGS/LDFLAGS for gstreamer & gnomevfs > Peter Gavin <pgavin@...>**20071029222356] > [gstreamer: M.S.G.Core.Buffer/M.S.G.Base.Adapter: change ghc > version check to OLD_BYTESTRING macro > Peter Gavin <pgavin@...>**20071029202836] > [gstreamer: remove M.S.G.Core.Buffer.bufferWithDataM > Peter Gavin <pgavin@...>**20071029160749 > I need to rethink this one > ] > [remove old ghc-6.2 package config files > Duncan Coutts <duncan@...>**20071108220833] > [Drop support for ghc < 6.4 and clean up > Duncan Coutts <duncan@...>**20071108212328 > There was quite a bit of configuritis needed to support ghc-6.0 > and 6.2 > ] > [Remove the variant of toFlags. > A.Simon@...**20071108085631] > [glib: replace toFlags with toFlags' > Peter Gavin <pgavin@...>**20071108001032] > [c2hs: added definitions for succ, pred, enumFrom{,To,ThenTo}; also > add support for bitwise and, or, xor, and complement > Peter Gavin <pgavin@...>**20071108000451] > [Replace toFlags with Peter Galvin's version. > A.Simon@...**20071105155303] > [glib: add toFlags': like toFlags but works doesn't fail when we > haven't defined toEnum for a bit > Peter Gavin <pgavin@...>**20071105150631 > This is needed for a few reasons, the most important of which is > for forward compatibility. > If a newer version of the upstream package defines a new flag for > a flag type, we shouldn't > fail just because we don't recognize it. This version allows > overlapping flags, for example, given: > > data OpenFlags = Read | Write | ReadWrite > instance Flags OpenFlags > instance Enum OpenFlags where ... > > If ReadWrite is equivalent to [ Read, Write ] in C land, then it > will be in Haskell land as well. > > ] > [Fix ball soe demo to work with newer soe api > Duncan Coutts <duncan@...>**20071107171349] > [Add statusicon demo to tarball > Duncan Coutts <duncan@...>**20071107170756] > [Use precise package versions when building and in package > Duncan Coutts <duncan@...>**20071107133455 > Previously we were being sloppy and allowing any version in the > so there was no guarantee that the version we built against was > the same as > the version that would get used when the package was installed. > Similarly > we could build against packages that were registered per-user > which would > then fail if we did a global install. Now it only looks for global > packages > unless you ./configure --with-user-pkgconf > This version of the patch for gtk2hs HEAD also covers gstreamer > and gnomevfs > ] > [Follow renaming of Image.chs -> Image.chs.pp in Makefile > Duncan Coutts <duncan@...>**20071107115759] > [compile clipboard functions conditionaly when gtk >= 2.2 > Duncan Coutts <duncan@...>**20071105004442 > as the Clipboard type was only introduced in gtk-2.2 > ] > [fix build with ghc 6.8.1 > Bertram Felgenhauer <int-e@...>**20071103212034] > [Make Adam's cairo patch build with cairo-1.0 > Duncan Coutts <duncan@...>**20071102234514] > [Add support for getting the stride and image data of an image surface > agl@...**20071030183825 > > This wraps the cairo functions: > cairo_image_surface_get_stride > cairo_image_surface_get_data > > Note that these functions were introduced in cairo 1.2, and so > this sets a new > high bar for gtk2hs cairo. > > ] > [SVG: provide implementation for GObjectClass > Bertram Felgenhauer <int-e@...>**20071031135016] > [GLDrawingArea: provide implementation for GObjectClass > Bertram Felgenhauer <int-e@...>**20071031134717 > Without this patch, trying to use a GLDrawingArea results in > program: gtkglext/Graphics/UI/Gtk/OpenGL/DrawingArea.chs:73:0: > No instance nor default method for class operation > System.Glib.Types.toGObject > ] > [Remove duplicate item from EXTRA_DIST > Duncan Coutts <duncan@...>**20071030085915] > [Tutorial Port Event and Button Boxes (Chapter 6.2) > hthiel.char@...**20071029173243 > > Corrected a previous patch error in index.xhtml > where .xhtml had been changed into .html. > > Corrected the chapter on font and color selection, > which came after Alex Tarkovsky's patch, for un- > necessary newlines after <pre> > > Corrected a few other missing or incorrect links in previous files > ] > [Makefile.am: in uninstall, unregister from user pkg conf when > USERPKGCONF is set > Peter Gavin <pgavin@...>**20071021192610] > [gstreamer: change M/S/G/DataProtocol/Constants.chs to > Constants.hsc in makefile > Peter Gavin <pgavin@...>**20071028214736] > [gstreamer: add M.S.G.Audio.AudioClock > Peter Gavin <pgavin@...>**20071025153404] > [gstreamer: add a format argument to elementQueryDuration/Position > Peter Gavin <pgavin@...>**20071021210213 > this goes with the previous patch. > also changes type of queryNewConvert to use Word64 rather than Int64. > ] > [gstreamer: change Int64 to Word64 in return types of > elementQueryPosition/Duration > Peter Gavin <pgavin@...>**20071021210138] > [gstreamer: improve vorbis-play demo > Peter Gavin <pgavin@...>**20071021205843] > [gstreamer: add vorbis-play demo > Peter Gavin <pgavin@...>**20071021192836] > [gstreamer: add mtl to packages dependencies > Peter Gavin <pgavin@...>**20071021192814] > [gstreamer: add MessageAsyncStart & MessageAsyncDone to MessageType > enum > Peter Gavin <pgavin@...>**20071021192726] > [gstreamer: include C objects in profile lib > Peter Gavin <pgavin@...>**20071021192523] > [gstreamer: export nsecond in M.S.G.Core.Constants > Peter Gavin <pgavin@...>**20071021180636] > [gstreamer: make Adapter and Buffer build with ghc 6.8 > Peter Gavin <pgavin@...>**20071021180323 > > the problems actually come from the changes in bytestring. not > sure if I should specifically check for that instead. > ] > [gstreamer: hopefully fix takeObject & peekObject for real this time > Peter Gavin <pgavin@...>**20071020195000 > > takeObject: to be used when a function returns an object that must > be unreffed at GC. > If the object has a floating reference, the float flag > is removed. > > peekObject: to be used when an object must not be unreffed. A ref > is added, and is > removed at GC. The floating flag is not touched. > ] > [gstreamer: fix floating reference mess > Peter Gavin <pgavin@...>**20071009023334 > > I figure, the safest and easiest thing to do is just remove the > floating flag > from any GstObject we see, without touching the refcount (except > maybe to add one). > ] > [Fix mistake over #ifdev vs #if > Duncan Coutts <duncan@...>**20071029132150] > [Fix compiling with gtk+-2.6 > Duncan Coutts <duncan@...>**20071029132112] > [gstreamer: lots of new code > Peter Gavin <pgavin@...>**20070906212129] > [gstreamer: add new modules BaseSink, BaseTransform, PushSrc, > Adapter; redid comment headers in source files > Peter Gavin <pgavin@...>**20070805140633] > [gstreamer: update heirarchy stuff to match Duncan's recent changes > Peter Gavin <pgavin@...>**20070804160945] > [gstreamer: better flag support for GstObject subtypes; replaced > from* and to* for enums with cToEnum/cFromEnum; similarly for flags > Peter Gavin <pgavin@...>**20070726202800] > [Make the StatusIcon stuff compile with gtk < 2.10 > Duncan Coutts <duncan@...>**20071029113538] > [Make GDateTime compile with old versions of glib > Duncan Coutts <duncan@...>**20071026124216] > [gnomevfs: some documentation cleanups > Peter Gavin <pgavin@...>**20071009023250] > [gstreamer: add bytestring dependency in Makefile.am if > HAVE_SPLIT_BASE is set > Peter Gavin <pgavin@...>**20071020184923] > [gtk: some source documentation cleanups > Peter Gavin <pgavin@...>**20071009023028] > [gnomevfs: add function S.G.V.Volume.volumeUnmount > Peter Gavin <pgavin@...>**20070929170420] > [gnomevfs: hide Marshal and Types from haddock > Peter Gavin <pgavin@...>**20070906205746] > [make hierarchyGenGst not use splitobjs > Peter Gavin <pgavin@...>**20070805144738] > [Move all deprecated exports to their own section in the export list > Duncan Coutts <duncan@...>**20071011130754] > [Use qualified names for new signals > Duncan Coutts <duncan@...>**20071011130612] > [Use the correct name for old signals > Duncan Coutts <duncan@...>**20071011130540] > [Use the proper deprecated doc comment rather than a generic one. > Duncan Coutts <duncan@...>**20071011125201] > [Deprecate getter/setter methods that duplicate attributes > Duncan Coutts <duncan@...>**20071011124907 > Add deprecated notes. > Also combine docs for attributes synthesised out of getter/setter > methods. > ] > [Disable generation hashes for the moment > Duncan Coutts <duncan@...>**20071011124525 > They don't quite work reliably yet for some reason. > ] > [Tutorial Port Scrolled Window (Chapter 6.1) > hthiel.char@...**20071020161937] > [Tutorial Port Notebook (Chapter 5.4) > hthiel.char@...**20071014153504] > [Tutorial_Port Font and File Selection (Chapter 5.3) > hthiel.char@...**20071009172147] > [Add a comment on the return values of events. > A.Simon@...**20071015105656] > [Gtk2Hs Tutorial: Fix vertical alignment in <pre> sections by > removing unnecessary leading \n chars > Alex Tarkovsky <alextarkovsky@...>**20071004171957] > [binding textbuffer clipboard functions (v2) > jnf@...**20071010172832 > Cut, copy and paste functions for the text buffer and a minimal > clipboard > module that just provides a type and a function to get the > standard clipboards. > ] > [Add a catch-all constructor to the set of mouse buttons. > A.Simon@...**20071004144115] > [Tutorial Port File Selection (Chapter 5-2) > hthiel.char@...**20070928162552] > [Patch Chapter 5.1 into the ToC and header/footer navigation > Alex Tarkovsky <alextarkovsky@...>**20070921021310] > [Change Source File Names, Change Window Titles and Screenshots > hthiel.char@...**20070920132032 > Source file Names now match XHTMl chapters, > Added and changed some window titles and screen shots for consistency > Removed references to chapters from source code files, except in > the very first (GtkChap2a.hs) > > ] > [Tutorial Port Calendar (chapter 13) > hthiel.char@...**20070916161414 > Changed to XHTML. Links need to be added and > there are 2 unordered lists with no <div> and id yet > ] > [Gtk2Hs Tutorial: XHTML-ize remaining files through chapter12; > rename files according to ToC > Alex Tarkovsky <alextarkovsky@...>**20070916192047] > [Make gtk2hs build with ghc 6.8 > Bertram Felgenhauer <int-e@...>**20070913160529 > - Add additional dependencies induced by the base package split. > - fix one API breakage. > ] > [GTK Tutorial Port Spin Boxes (Chapter 12) > hthiel.char@...**20070910131354 > Alex Tarkovsky will implement a change from HTML 4 Provisional to > XHTML strict > and a corresponding change in file names and extensions. New > additions by me > will match that, but until Alex's patches have been applied I'll > carry on as before. > However, this means that links will be broken. > This chapter 12 patch is standalone, for the moment. > ] > [XHTML-ize and CSS-ify ToC and chapters 3 and 4; renumber ToC > Alex Tarkovsky <alextarkovsky@...>**20070906210332] > [Remove broken links from Glade tutorial > Alex Tarkovsky <alextarkovsky@...>**20070904135020] > [Fix gconf_MOSTLYCLEANFILES typo > Duncan Coutts <duncan@...>**20070903173646] > [Tutorial_Port Text Entries and Statusbars (Chapter 11) + links update > hthiel.char@...**20070831132507] > [Dialogs StockItems Progress Bars (chapter 10) > hthiel.char@...**20070831105357 > Chapter 10 plus crrected links in Chapter 1 and Chapter 8 > ] > [Gtk2Hs tutorial: chapter 3 updates > Alex Tarkovsky <alextarkovsky@...>**20070901094853] > [Arrows and Tooltips Chapter 9 > hthiel.char@...**20070830202333] > [Undo accidentally-comitted debugging change to soe > Duncan Coutts <duncan@...>**20070830184107] > [Remove a bit of soe debug cruft. > Duncan Coutts <duncan@...>**20070830131622] > [Update Graphics.SOE.Gtk to new api and fix many bugs > Duncan Coutts <duncan@...>**20070830124427] > [Add missing _stub to list in Makefile.am > Duncan Coutts <duncan@...>**20070830124410] > [Fill in the GObjectClass methods for the ListStore and TreeStore > Duncan Coutts <duncan@...>**20070829020153 > We still seem to be getting some iter stamp errors :-( > ] > [Update Glade tutorial to focus on Glade 3 > Alex Tarkovsky <alextarkovsky@...>**20070820151705] > [Gtk+2.0 Tutorial Port Chapter 8, Tables > hthiel.char@...**20070817141356] > [Add _SPLIT variant for gstreamer to the CLEAN targets. > A.Simon@...**20070817124958] > [gstreamer: use the new hierarchyGenGst; rename Object marshallers > to better denote their semantics; start wrapping base libraries > Peter Gavin <pgavin@...>**20070723223441] > [tools: added hierarchyGenGst, a modified version of > hierarchyGenGst, needed for GStreamer > Peter Gavin <pgavin@...>**20070723223020] > [gstreamer: Core/Types.chs: added isObject (etc.) functions > Peter Gavin <pgavin@...>**20070722015917 > Also renamed newObject/newMiniObject to mkNewObject/ > mkNewMiniObject & added real newObject/newMiniObject functions. > ] > [gstreamer: add attributes & properties to GObject subclasses; > small code cleanups; add hierarchy for GStreamer.Base > Peter Gavin <pgavin@...>**20070722004623] > [gstreamer: change ClockTime/ClockTimeDiff to Word64/Int64 instead > of c2hs types > Peter Gavin <pgavin@...>**20070721214252] > [gstreamer: move all current modules from Media.Streaming.GStreamer > to Media.Streaming.GStreamer.Core > Peter Gavin <pgavin@...>**20070721185839 > I'm anticipating adding other submodules into GStreamer. > ] > [gstreamer: remove commented function messageParseAsyncStart in > Message.chs > Peter Gavin <pgavin@...>**20070720201214] > [gstreamer: fix types for event handlers in Element.chs > Peter Gavin <pgavin@...>**20070720201151] > [gstreamer: fix export list in most modules > Peter Gavin <pgavin@...>**20070720201023] > [gstreamer: add stub files to LIBADD in makefile > Peter Gavin <pgavin@...>**20070720200932] > [gstreamer: initial import > Peter Gavin <pgavin@...>**20070719060228] > [Add a demo on PangoLayout. > A.Simon@...**20070817112626] > [Fix ref counting bug in sourceBufferCreateMarker > Duncan Coutts <duncan@...>**20070814182814 > (reported by bwwx in #haskell) > The docs say that gtk_source_buffer_create_marker returns a new > GtkSourceMarker, owned by the buffer. So we need to ref the > object, so we > should use makeGObject rather than constructNewGObject. > ] > [Gtk+2.0 Tutorial Port (part 1) > hthiel.char@...**20070811141048 > > Start of a port to Gtk2Hs of the Gtk+2.0 tutorial > included in the Gtk documentation. > > Note: Chapter 2 is supposed to be an introductory overview, > as in the original, but for Gtk2Hs. I'm hoping someone > who knows more will jump in here. > > The implemented chapters are: > 1. Contents and copyright, as required by the original > authors > 2. To be written by an expert... > 3. Getting Started > 4. Packing Widgets > 5.a. Packing Demonstration Program > 5.b. Packing Using Tables > 6. The Button Widget > 7. Adjustments, Scale and Range > > The examples have been tested with Gtk2Hs-0.9.12 > (the first ones with .11) on Fedora 6. > > The html files have been taken from the original Gtk+2.0 tutorial, > processed in Open Office Writer html mode, and then cleaned up > with Screem and tidy. There are still some unreferred name tags left > from the original. > > ] > [small makefile changes to clean up documentation files that were > not being removed > Peter Gavin <pgavin@...>**20070805144521] > [gnomevfs: updated license to LGPLv3, made all source file headers > consistent > Peter Gavin <pgavin@...>**20070722024618] > [glib: add support for 64-bit int object properties > Peter Gavin <pgavin@...>**20070721212009] > [glib: export contructors for Source datatype in MainLoop.chs > Peter Gavin <pgavin@...>**20070719055702] > [Import CUInt in the Types.chs modules as GType seems to be CUInt now > Duncan Coutts <duncan@...>**20070808171654 > Reported in Fedora 8 that c2hs reports GType as CUInt whereas it > previously > was CULong, so we now import both in the Hierarchy.chs.template. > ] > [Fix doc typo, "the the" > Duncan Coutts <duncan@...>**20070802031049] > [Merge changes in cleaning split-objs with gnomefvs changes > Duncan Coutts <duncan@...>**20070728192403] > [Be more careful about cleaning split-objs and stub files > Duncan Coutts <duncan@...>**20070728173305 > And build the code gen tools without using split-objs at all. > ] > [Add clean script to win32 scripts bundle > Duncan Coutts <duncan@...>**20070728163800] > [Update win32 scripts to gtk2hs-0.9.12 + gtk+-2.12.14 > Duncan Coutts <duncan@...>**20070727132759 > And add checkbox to override dll problem to gtk2hs installer script > ] > [Follow change in name of local cairo header file > Duncan Coutts <duncan@...>**20070727131745] > [Document the 'on' function > Duncan Coutts <duncan@...>**20070727155728] > [Change the class heirarch implementation to avoid coerce > Duncan Coutts <duncan@...>**20070727155053 > GObjectClass now has two real mothods, toGObject and > unsafeCastGObject > (renamed from fromGObject to better reflect the fact that it performs > unchecked down-casts). Sub classes of GObjectClass do not have any > additional methods, so we only carry one dictionary per object. > So it should now be possible to make custom widgets in Haskell and > make > them instances of the existing widget classes etc. So this should > give > greater consistenncy and better data hiding. > ] > [Fix comboBoxGetModel, it previously did an unchecked downcast > Duncan Coutts <duncan@...>**20070727135408] > [Add authors name to author's file. > A.Simon@...**20070726130748] > [Add stock items to reference documentation. > A.Simon@...**20070726130215] > [Make treeList demos build again > Duncan Coutts <duncan@...>**20070725145318] > [Add Bulat Ziganshin to AUTHORS for dirlist contributions > Duncan Coutts <duncan@...>**20070725145245] > [Add HAVE_GTK_VERSION_2_12 makefile conditional > Duncan Coutts <duncan@...>**20070725145202] > [Change gtkmozembed provider preference to put xulrunner first > Duncan Coutts <duncan@...>**20070725144951 > Apparently xulrunner is now in vogue again so put it first > So ther order of prefernece has changed from/to: > -seamonkey, firefox, xulrunner, mozilla > +xulrunner, seamonkey, firefox, mozilla > ] > [Build the statusicon demo > Duncan Coutts <duncan@...>**20070725144924] > [StatusIcon demo > Andrea Vezzosi <sanzhiyan@...>**20070725094936] > [Add GtkStatusIcon bindings > Andrea Vezzosi <sanzhiyan@...>**20070725033433] > [Must not use _static version of pango_font_description_set_family > Duncan Coutts <duncan@...>**20070723145641 > Since the string buffer is only kept around temporarily, not > forever like > pango_font_description_set_family_static requires. > ] > [Fix bug in TreeStore on empty stores. > A.Simon@...**20070716123708 > This fixes a bug when running e.g. treeModelIterNChildren store > Nothing when > store is an empty TreeStore. > ] > [Rename the local cairo.h to something less ambiguous. > A.Simon@...**20070716123509 > I can't recall where this became a problem, but I guess it can't > hurt to have this name distinct from the real header file. > ] > [glib: import GDateTime instead of GDate > Peter Gavin <pgavin@...>**20070716104909 > > Brainfart. > > ] > [glib: add GSource support > Peter Gavin <pgavin@...>**20070713205416] > [glib: add GDate to exports in Glib.hs > Peter Gavin <pgavin@...>**20070713175625] > [glib: make module name for GDateTime the same as the filename :) > Peter Gavin <pgavin@...>**20070712013200] > [glib: added support for GDate/GTimeVal > Peter Gavin <pgavin@...>**20070711035813] > [Make the new ModelView interface self contained. > A.Simon@...**20070716102722 > This patch moves a few model functions from CustomStore to > TreeModel. Also, > it duplicates the TreeIter and TreePath definitions by copying > them into the > ModelView.Types file. The TreeModel module is now complete. > However, another > step would be to re-export all functions from CustomStore via > TreeModel and > hide CustomStore. The user would then be able to define a new > store by just > importing TreeModel. > ] > [Add missing case in Drag. Fix documentation. > A.Simon@...**20070715163552] > [Add a new function to Pixbuf. Correct documentation. > A.Simon@...**20070715163512] > [Add columns to our Haskell tree model. > A.Simon@...**20070715162232 > > This patch adds the ability to lookup values from TreeStore and > ListStore in terms of columns. Some information in e.g. ComboBox > have properties that are not shown by CellRenderers directly. These > properties can therefore not be set by using function in CellLayout > and the only way to connect them to the model is by pretending they > access a certain column in the model. Hence this patch adds the > ability to access a Haskell model using a column number. The idea > is that these column numbers are opaque to the user of Gtk2Hs. > Functions that use columns are called widgetSetBlahSource where > wiget is the widget and Blah is the property. > ] > [gnomevfs: fix demos to work again, new demo TestVolumeMonitor > pgavin@...**20070707212333] > [gnomevfs: use actual Volume type instead of VolumeClass volume > => ... for signal handler argument type > pgavin@...**20070707212228] > [gnomevfs: more documentation, type fixes, etc. in Volume/Drive/ > VolumeMonitor > pgavin@...**20070706195920] > [gnomevfs: small fixes in cabal/package.conf files > pgavin@...**20070706032048] > [glib: add support for GMainLoop/GMainContext > Peter Gavin <pgavin@...>**20070707210848] > [Use a local .h file that includes both gnomefvs headers > Duncan Coutts <duncan@...>**20070705191142 > So we don't have to locally include extra headers for > particular .chs files. > This is the same system we use for sourceview, cairo and svgcairo. > ] > [gnomevfs: initial import > pgavin@...**20070705094200] > [Use .NOTPARALLEL make directive to prevent parallel builds > Duncan Coutts <duncan@...>**20070705164541 > So we don't have to give users special instructions not to use -jN > ] > [Note that parallel make does not work. > Duncan Coutts <duncan@...>*-20070705144700] > [Update DirList demo to work on Windows and include file > modification times > Duncan Coutts <duncan@...>**20070705155824 > Thanks to Bulat Ziganshin for the patch. > I also updated it to use multiple columns with titles and to avoid > the use > of unsafePerformIO by using the :=> monadic attribute assignment > operator. > ] > [Note that parallel make does not work. > Duncan Coutts <duncan@...>**20070705144700] > [Really add the SVG backend, forgot to add the main file. > Duncan Coutts <duncan@...>**20070705012450 > I'm clearly going mad. The file got included in the tarball, > so I didn't notice it's absence from the darcs repo. > ] > [Fix typo in win32 build script > Duncan Coutts <duncan@...>**20070704040641] > [Update win32 build scripts to build sourceview but not svgcairo > Duncan Coutts <duncan@...>**20070704035213] > [Update a demo to follow change in api for pixbufNewFromFile > Duncan Coutts <duncan@...>**20070704032750] > [Add Cairo SVG backend > Duncan Coutts <duncan@...>**20070704032635 > I swear I added this before, the cairo demo certainly already > tests it. > Somehow I must have managed to not record it, and then loose the > code. > Fortunately it was only a few lines, so quick to replace. Silly me. > ] > [Reimplement the way we keep track of gthread initialisation state > Duncan Coutts <duncan@...>**20070704012435 > Sadly we cannot just call g_thread_supported() because this macro > expands > into a reference to a gloabl variable that lives in the gthread > dynamic lib. > On windows, GHCi's dynamic linker cannot cope with references to > global vars > from C dlls. So instead we keep a global var in the wrapper, so > it's in the > same package, rather than crossing a dll boundary. This allows us > to track > the state, and because the state is allocated in C land, it > survives :reload > in GHCi, which was the problem with the original scheme. Ugg. > ] > [Fix for the sed extra-ghci-libs hackery: some libs have '_' in > their names > Duncan Coutts <duncan@...>**20070704005240] > [Updated windows installer script > Duncan Coutts <duncan@...>**20070703203650 > updated for latest Gtk2Hs version > use a build for only a single ghc version (ghc 6.6.1) > we previously included builds for both ghc 6.4.2 and 6.6 > include the sourceview component > ] > [Update Win32 build scripts > Duncan Coutts <duncan@...>**20070703203305 > Both the scripts for building Gtk2Hs on Windows and the scripts > used to > build a Gtk+ SDK for Windows. Now using Gtk+ 2.10.13 for the > Windows builds. > ] > [Add a few people who contributed to this release to the AUTHORS file > Duncan Coutts <duncan@...>**20070703202905] > [sed hackery to generate extra-ghci-libraries in package files on > win32 > Duncan Coutts <duncan@...>**20070703201811 > The .lib names and .dll names on windows do not match up, so we > have to > specify different libraries for ghci than for ghc. eg for gtk-2.0.lib > the corresponding dll is libgtk-2.0-0.dll. We do this conversion > using > a sed rule plus a short list of hard-coded exceptions. > ] > [Unbreak SOE in the single threaded case again. > Duncan Coutts <duncan@...>**20070703201648 > This is getting embarassing. > ] > [Ugg, unbreak SOE in the non-threaded rts case. > Duncan Coutts <duncan@...>**20070630162214] > [Make the check for the GThread system being initialised more robust > Duncan Coutts <duncan@...>**20070630000326 > We previously used an evil top level IORef however in GHCi that does > not work because that state gets reverted when we :reload, and so we > end up calling g_thread_init again which aborts. > So now we have a little C wrapper around the g_thread_supported macro > and we use that to check if g_thread_init was already done. > ] > [More SOE fixes > Duncan Coutts <duncan@...>**20070629182128 > The main loop now uses a single call to mainGUI which makes > threading simpler. > Mouse button release events are now collected as well as button > down ones. > Don't specify position of the window for the BouncingBall demo; > having it > appear at (800,800) is not very helpful on a 1024x768 screen :-) > ] > [Bump version number to something bigger than 0.9.12 > Duncan Coutts <duncan@...>**20070628172323 > This is now the main/head branch which will lead to release > version 0.9.13 > The branch that will lead to the 0.9.12 release is at: > > ] > [Add several soe demos > Duncan Coutts <duncan@...>**20070628171553] > [Comment out debugging output in soegtk > Duncan Coutts <duncan@...>**20070628164947] > [Make soegtk work with the threaded RTS > Duncan Coutts <duncan@...>**20070628164122 > So it should work in GHCi > ] > [Fix all package files to use a dep on base and not haskell98 > Duncan Coutts <duncan@...>**20070531110545] > [The gtk package now always depends on mtl > Duncan Coutts <duncan@...>**20070531105251 > At least with ghc-6.4 and later, not that the .package.conf files are > only used for ghc-6.4 and the other form is used for ghc-6.2 and > before > so we just add it to one file and not the other, rather than needing > to add more conditional stuff to configure.ac > ] > [Change sed script to work with solaris sed > Duncan Coutts <duncan@...>**20070531105126 > The problem was with the optional "qualified" clause, but it turns > out > we don't need that anyway which allows it to work with the default > solaris sed. > ] > [The ps/pdf_surface_set_size functions appeared in cairo 1.2 > Duncan Coutts <duncan@...>**20070531104425 > I had thought they were there before but I was confusing them with > ps/pdf_surface_set_dpi which only appeared in cairo 1.0 but were > removed > before the ps & pdf backends were declared stable in cairo 1.2. > ] > [Have the gtk package dep on mtl even when not building the cairo > bindings > Duncan Coutts <duncan@...>**20070531094825 > We now always need mtl, previously it was only the cairo bits that > needed it. > ] > [Remove CVS $Revision and $Date tags > Duncan Coutts <duncan@...>**20070528172216 > darcs does not use these so they were out of date and thus useless > ] > [Don't depend on haskell98 package > Duncan Coutts <duncan@...>**20070528154423 > Except in c2hs > ] > [Tidy up imports in the gtk package > Duncan Coutts <duncan@...>**20070528150937 > drop lots of unused imports > ] > [Tidy up imports in the gtkglext package > Duncan Coutts <duncan@...>**20070528142849] > [Tidy up imports in the code produced by the type and signal code > generators > Duncan Coutts <duncan@...>**20070528142715] > [Tidy up imports in the cairo package > Duncan Coutts <duncan@...>**20070528142604] > [Tidy up imports in the glade and mozembed packages > Duncan Coutts <duncan@...>**20070528142538] > [Tidy up imports in the sourceview package > Duncan Coutts <duncan@...>**20070528142416] > [Tidy up imports in the gconf package > Duncan Coutts <duncan@...>**20070528142335] > [Tidy up imports in the glib package > Duncan Coutts <duncan@...>**20070524161155] > [Tidy up imports in TypeGenerator and HookGenerator > Duncan Coutts <duncan@...>**20070524161007] > [flags properties need the actual flag gtype code > Duncan Coutts <duncan@...>**20070528180817 > just like with enums. > ] > [Add property functions for Char type > Duncan Coutts <duncan@...>**20070528165852] > [Use the _readonly versions of a couple pango functions when available > Duncan Coutts <duncan@...>**20070528143645 > We don't provide any way of modifying the lines so it's ok for us > to always > use the optimised read-only versions of these functions. > ] > [Use the glib property implementation of > MenuComboToolbar.Toolbar.toolbarStyle > Duncan Coutts <duncan@...>**20070528143506 > rather than the getter/setter style > ] > [Use the glib property implementation of > MenuComboToolbar.Menu.menuTitle > Duncan Coutts <duncan@...>**20070528143330 > rather than the getter/setter version. > ] > [Export targetListAdd from Selection module > Duncan Coutts <duncan@...>**20070528143216 > I presume this was supposed to be exported, it's not otherwise used. > ] > [Remove unused files in signal generator > Duncan Coutts <duncan@...>**20070528102729] > [Use cpp without -P if we're building without docs > Duncan Coutts <duncan@...>**20070524160833 > So we get accurate ghc error message locations for .hs.pp > and .chs.pp files. > ] > [TAG 0.9.11.1 > Duncan Coutts <duncan@...>**20070524134504] > Patch bundle hash: > 7f1dd62c7e48f0bba693cf8cfeacfb4aa1cf81d.- > com_______________________________________________ > Gtk2hs-devel mailing list > Gtk2hs-devel@... > Fri Mar 13 18:32:12 CET 2009 ben.franksen@... * Add function eventClick to module EventM. I presume someone has noticed this? -------- Forwarded Message -------- > From: wordpress@... > To: duncan.coutts@... > Subject: [Gtk2Hs] Please moderate: "Mickinator File Manager" > Date: Mon, 9 Mar 2009 22:17:19 -0400 (EDT) > > A new comment on the post #79 "Mickinator File Manager" is waiting for your approval > > > > > > Author : Ivan Miljenovic (IP: 130.102.0.170 , proxy1.uq.edu.au) > > > URI : > > Whois : > > Comment: > > It would help if that image link existed... > > > > To approve this comment, visit: > > To delete this comment, visit: > > Currently 1 comments are waiting for approval. Please visit the moderation panel: > > > > Uh, apologies for missing this. On Mar 7, 2009, at 9:12, Hamish Mackenzie wrote: > Sat Mar 7 21:05:30 NZDT 2009 Hamish Mackenzie > <hamish@...> > * Add eventClick to EventM > Applied, thanks for the patch, Axel. Sat Mar 7 21:05:30 NZDT 2009 Hamish Mackenzie <hamish@...> * Add eventClick to EventM
http://sourceforge.net/p/gtk2hs/mailman/gtk2hs-devel/?style=flat&viewmonth=200903
CC-MAIN-2015-32
refinedweb
11,325
62.34
04 July 2012 10:14 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> All three projects are scheduled to come on stream in 2014, the source said. The company’s 500,000 cbm/day plant in Output sales have not yet been determined, the source said. “[The company has] been building LNG-refuelling stations across the country to enhance the application of LNG as vehicle fuels,” the source added. Details on the cost of construction were not immediately available. Hebei-based Huagang Gas Group, a joint venture of PetroChina’s Kunlun Energy and Huabei Oilfield, is largely engaged in the supply and distribution of pipeline
http://www.icis.com/Articles/2012/07/04/9575071/chinas-huagang-gas-to-build-three-lng-plants-in-2012.html
CC-MAIN-2014-41
refinedweb
104
63.9
Hi, When I run the exercise 23 code in Jupyter Notebook, the decoding and encoding looks funny and the bytes inbetween the b’…’ is different from Zed’s. Do you know if this might be a problem due to Jupyter? It should be able to read unicode as it is a based on a browser, but somehow it seems to go wrong. (I know I should not use Jupyter to do the exercises, but kind of forced as we use anaconda at my work. And it has actually taught me alot, since I have had to debug and rewrite code to make it work in Jupyter.) Here is my code and output: import sys sys.argv=['script', 'utf-8', 'strict'], sys.argv[1], sys.argv[2]) And then the output (I’ll just show you the top two lines). b’\xef\xbb\xbfAfrikaans’ <===> Afrikaans b’\xc3\xa1\xc5\xa0 \xc3\xa1\xcb\x86\xe2\x80\xba\xc3\xa1\xcb\x86\xc2\xad\xc3\xa1\xc5\xa0\xe2\x80\xba’<===> አማáˆáŠ› The “error” is not major, and my script runs. But the decoding/encoding becomes incorrect, and it bugs the hell out of me. Hope someone knows what went wrong, Heidi
https://forum.learncodethehardway.com/t/excercise-23-incorrect-decoding-encoding/2835
CC-MAIN-2020-24
refinedweb
202
73.17
GitHub Copilot is the newest tool developed by GitHub to autocomplete code with the help of OpenAI. Copilot generates smart code suggestions with context, such as docstrings, code comments, function names, or even file names. It then uses all this information to suggest code snippets that developers can easily accept by pressing the Tab key on a keyboard. It understands Python, JavaScript, TypeScript, Ruby, and Go, as well as dozens of other languages because it’s “trained on billions of lines of public code,” per the Copilot website. And while it’s currently still in its limited technical preview, those interested can sign up to join a waitlist to try it out. In this article, we’ll explore Copilot’s main functionalities, how to build a simple application using only Copilot, and its pros and cons. Copilot’s main features Copilot’s main feature is its autocomplete function. When typing a function description, for example, Copilot completes the whole function before a user finishes. While this autocomplete is similar to other general autocomplete functionalities, Copilot goes a step beyond. When continuing to write code and adding more comments, Copilot begins to understand the whole context of the code through its AI capabilities. With the context, it autocompletes comments mid-sentence. For example, by adding a function, it generates a whole comment and a function; in this case, it figured out the last function should be a multiply function, as seen below. Another cool feature of Copilot is the ability to see 10 full-page suggestions, instead of a single one-liner suggestion, and choosing which suits the code best. To do that, press ^ + Return on a Mac keyboard or Ctrl + Enter on Windows to open a list of suggestions, as shown below. Can you build an application with only GitHub Copilot? With Copilot’s capabilities, I wanted to challenge myself to build a small application using only Copilot. For this challenge, I wanted to create a simple random quote application that also displays the sentiment of the quote. To do this, I had a few rules to follow to see how much benefit I could receive from Copilot. First, I could not search the internet if I encountered a problem, including using Stack Overflow or documentation. This let me see whether it was possible to solely rely on Copilot’s suggestions to create working code. The second rule was that I could not write any new code myself. I could, however, write comments, variables, names, and function names to trigger Copilot’s suggestions. Similarly, I could also make small edits to the suggested code. And finally, I could trigger the list of Copilot suggestions and accept one, since this is a built-in feature. Setting up the project I chose Next.js and React to build this project since they are the tools I am most familiar with to help me better evaluate Copilot’s performance. Since React makes it fairly painless for developers to build applications, and I wanted to see how Copilot would manage React components. For Next.js, it provides a good starting point where I didn’t need to spend a lot of time setting everything up, and it has built-in back-end functions, making it useful to call different API endpoints without triggering CORS errors. While Next.js might seem too powerful for this small project, there’s no need to install other dependencies beforehand and its integrated, easy-to-use API functions make it a good choice for this challenge. Developing the API endpoints Starting with developing API endpoints, I wanted a quote generator that returns a random quote on a GET request and a sentiment analysis endpoint. The sentiment analysis endpoint needed to receive a string as a query parameter and return a sentiment. Since I didn’t know what format the return value would be, I let Copilot write it and see what it could return. /api/get_quote GET request endpoint To create two endpoints using Next.js, I created two files in the api folder: get_quote.js and get_sentiment.js. Next.js could then create these endpoints based on the file names. All that was left is to define was the handler functions inside those files, and I let Copilot do that for me. For the get_quote endpoint, I wrote a comment and chose a good suggestion: // get random quote from random API Clicking the comment, Copilot responds with a list of different options to pick from. The suggestion I selected was the following: const getQuote = async () => { const response = await fetch(' const quote = await response.json() return quote.contents.quotes[0].quote } This suggestion worked. Almost all of the other suggestions that I checked were either broken or required an API key that I didn’t have. This could be because Copilot was trained on open source GitHub code, and some endpoints might be outdated already, which can be frustrating. Also, this endpoint returns the quote of the day, which means for every call, I receive the same quote for the current day, which was not what I was expecting. Instead, for every endpoint call, I wanted to receive a different random quote. For the second part of this endpoint, I needed to create a handler for the endpoint that calls the function Copilot already generated. The handler is the function that Next.js calls when the client requests the endpoint. To do this, I declared the function with its name to see whether Copilot would suggest the correct implementation or not. And Copilot surprised me again. Everything seemed correct. First, it called the getQuote function and returned the quote received from the web service. Then, it saved the quote to the constant and returned JSON to the client with the retrieved quote. The only thing I had to add was .status(200) to send the 200 status code to the client. /api/get_sentiment GET request endpoint For the get_sentiment function, I chose Copilot’s suggestion and didn’t have to change anything. I input the following: // determine if the text is positive or negative Then, Copilot suggested the following code, which I used: async function getSentiment(text) { const response = await fetch(` { method: "POST", body: `text=${text}`, headers: { "Content-Type": "application/x-www-form-urlencoded", }, }) const json = await response.json() return json.label } However, for the endpoint part, I had to add .status(200), similar to the get_quote endpoint. Here is what I provided to Copilot: export default async function handler(req And then, Copilot suggested the following, which I selected: export default async function handler(req, res) { const sentiment = await getSentiment(req.query.text) res.json({ sentiment }) } Copilot managed to provide us with a working solution again, suggesting the right handler and calculating what the get_sentiment handle function needed to return. This suggestion looks similar to the one we had previously. Let’s see how Copilot handled React component generation next. React components For the frontend, I needed a couple of specified React components and a React controller component with a button. The two specified components needed to display text: one for quote text and another for sentiment text. Let’s see how Copilot handled creating these React components. QuoteTextBox I began with the QuoteTextBox component that would showcase a simple quote; I added the following comment to see what Copilot would suggest: // React component to show a quote inside a div const Copilot suggested a simple function component, which I did not have to change at all. const QuoteTextBox = (props) => { const { quote } = props return ( <div className="quote-text-box"> <p>{quote}</p> </div> ) } export default QuoteTextBox SentimentTextBox To build the sentiment text box, I added the following comment: // React component to show sentiment text inside a div const For this text box, it was a bit trickier to find the right suggestions, and I had to scroll through most of them until I found one that would work with my code: const SentimentTextBox = (props) => { // Get sentiment text from props const sentimentText = props.sentimentText // Return component return ( <div className="sentiment-text-box"> {sentimentText} </div> ) } // Export component export default SentimentTextBox It’s important to clearly understand what to expect from Copilot because it is only a tool, and developers can’t just rely on the first suggestion it gives. I still needed to come back to this component to map what the API returns to readable text. For example, if the web service returns neg, I needed to map it to Negative. Controller component with button I also needed a controller component to manage all the logic. The first two React components I created just received data and rendered it on-screen, but the controller component needed to manage the API calls. I started by inputting the following: // React component that uses QuiteTextBox and SentimentTextBox and APIs to show quotes and sentiment with a button to generate new quotes. const Then, Copilot brought this suggestion: const QuoteGenerator = ({quote, sentiment, onClick}) => { return ( <div> <QuoteTextBox quote={quote} sentiment={sentiment} /> <button onClick={onClick}>Generate New Quote</button> <SentimentTextBox sentiment={sentiment} /> </div> ) } export default QuoteGenerator The controller component suggested by Copilot appeared to be pretty basic; it lacked the API calls and didn’t use correct props for SentimentTextBox. However, Copilot did provide the correct structure, it just needed some tweaks to finish it. Then, I just needed to add functions to the controller component. Instead of passing quote, sentiment, and onClick, I asked Copilot to generate them. I also needed some Hooks to store the sentiment and quote data received from calling the APIs. For Hooks, Copilot figured out right away what I needed. To trigger the suggestion for the first Hook, I began typing a comment and Copilot suggested the correct Hook. However, for the second Hook, I didn’t even need to type a comment. I accepted the first Hook suggestion, moved to the next line, and Copilot suggested the second Hook immediately. While the endpoints were correct, I still needed to make some changes to make them work. I had to ask very specifically what I wanted, otherwise, Copilot started suggesting different web services. I wanted it to just call endpoints that were already created. Furthermore, I need to specifically call the getSentiment endpoint when I received a quote and map the sentiment to human-readable text. And this is my final version after some minor changes from my side: const QuoteGenerator = () => { // Hook to store text in state const [quoteText, setQuoteText] = React.useState('') const [sentimentText, setSentimentText] = React.useState('') // Function to get quotes from API /api/get-quote const getQuote = () => { fetch('/api/get-quote') .then(response => response.json()) .then(json => { setQuoteText(json.quote) getSentiment(json.quote) }) } // Function to get sentiment from API /api/get-sentiment\ const getSentiment = (text) => { fetch('/api/get-sentiment?text=' + text) .then(response => response.json()) .then(json => { setSentimentText(json.sentiment) }) } // Function to be called when user clicks on button to generate new quote const onClick = () => { getQuote() } const mapSentimentToText = { 'neg': 'Negative', 'pos': 'Positive', 'neutral': 'Neutral' } return ( <div> <QuoteTextBox quote={quoteText} /> <SentimentTextBox sentimentText={mapSentimentToText[sentimentText]} /> <button onClick={onClick}>Generate New Quote</button> </div> ) } export default QuoteGenerator The final application After experimenting with my simple quote-generating app, I found that Copilot gives enough help to create a simple application. I didn’t have high expectations and initially thought I would need to change a lot of code to make the application work. However, Copilot surprised me. In some places, it gave me nonsense suggestions, but in other places, the suggestions were so good that I can’t believe Copilot made them. Copilot pros and cons To recap my experience with Copilot, I’ve compiled the pros and cons of working with it so you can decide whether Copilot is something you can use daily or not. Copilot pros The main advantage of using Copilot is that it provides autocomplete on steroids. As an autocomplete tool, I believe it is currently the best on the market; there is nothing even close to as useful as Copilot. Copilot also shows developers multiple ways to solve different problems that may not be so obvious. When in need of a code snippet, the 10 suggestions functionality is great and can often be used in place of Stack Overflow for efficiency. In all, Copilot is fun to use. For all the tech geeks, it is something new to play around with and it makes daily work a bit more interesting. Copilot cons While its functionality provides greater efficiency, users must remember it’s a tool, not a human developer replacement. Because Copilot is not a panacea, users cannot solely rely on it for all their code. Most of its suggestions require changes to fit specific needs. And finally, I noticed Copilot was suggesting to use React classes for small components with little logic instead of functional components with Hooks. Because it was trained on openly available code on GitHub, Copilot might provide some depreciated suggestions for coding. Conclusion GitHub Copilot is not something that can just listen to a project idea and code everything for you. It is also not taking over developer jobs, either. But it is something that can make coding easier. GitHub Copilot is good with small tasks; when you start asking it something more complex, you often get nonsense. Nevertheless, it is a very good tool for both beginner and experienced developers alike. “1 week with GitHub Copilot: Building an app using…” Very insightful article, Thanks Well a lot of programming languages have had IDs that have done most of this for years. Without AI. Also the comments generated well those are the kind you probably should never ride anyways. Most, should be why and then I say I won’t be able to do that. Maybe it’ll get better and be useful.
https://blog.logrocket.com/building-github-copilot-app/
CC-MAIN-2022-21
refinedweb
2,293
52.6
table of contents NAME¶ getgroups, setgroups - get/set list of supplementary group IDs SYNOPSIS¶ #include <sys/types.h> #include <unistd.h> int getgroups(int size, gid_t list[]); #include <grp.h> int setgroups(size_t size, const gid_t *list); setgroups(): Since glibc 2.19: _DEFAULT_SOURCE Glibc 2.19 and earlier: _BSD_SOURCE DESCRIPTION¶. A process can drop all of its supplementary groups with the call: setgroups(0, NULL); RETURN VALUE¶ On success, getgroups() returns the number of supplementary group IDs. On error, -1 is returned, and errno is set appropriately. On success, setgroups() returns 0. On error, -1 is returned, and errno is set appropriately. ERRORS¶ getgroups() can additionally fail with the following error: getgroups(): SVr4, 4.3BSD, POSIX.1-2001, POSIX.1-2008. setgroups(): SVr4, 4.3BSD. Since setgroups() requires privilege, it is not covered by POSIX.1. NOTES¶ Agroups()) employ a signal-based technique to ensure that when one thread changes credentials, all of the other threads in the process also change their credentials. For details, see nptl(7). SEE ALSO¶ getgid(2), setgid(2), getgrouplist(3), group_member(3), initgroups(3), capabilities(7), credentials(7) COLOPHON¶ This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://manpages.debian.org/testing/manpages-dev/getgroups.2.en.html
CC-MAIN-2021-49
refinedweb
219
60.72
One of the questions frequently asked on the forums here on CodeProject is how does one add elements to an XML file. At first read, this seems like a trivial task, but it's really not. The quickest way to do it is to open the XML file in an XmlDocument object, add the rows, and call the Save method. But how much memory does it take to do this and how fast is it? This article explores the options available when appending to an XML file. XmlDocument Save I've recently updated this article to show what happens when using .NET 2.0. There really is a big difference in speed and memory usage between .NET 1.1 and .NET 2.0. To test, we'll need a large XML file, some idea of how much memory we're using, and a timer. The timer is pretty simple since we don't really need a low-level performance timer if our XML file is big enough. Getting a big XML file is pretty easy: XmlTextWriter xtw = new XmlTextWriter("Test.xml", System.Text.Encoding.UTF8); xtw.Formatting = Formatting.Indented; xtw.Indentation = 3; xtw.IndentChar = ' '; xtw.WriteStartDocument(true); xtw.WriteStartElement("root"); for (int i = 0; i < 500000; i++) xtw.WriteElementString("child", "This is child number " + i.ToString()); xtw.WriteEndElement(); xtw.WriteEndDocument(); xtw.Close(); Now we have a file called test.xml with 500,000 rows in it. The file will probably come out to 23 megs. The next thing we need is an idea of how much memory is being used. We can use WMI to grab the amount of heap memory being used for our process. That is accomplished with this class: public class MemorySampler { private static PerformanceCounter _Memory; static MemorySampler() { string appInstanceName = AppDomain.CurrentDomain.FriendlyName; if (appInstanceName.Length > 14) appInstanceName = appInstanceName.Substring(0, 14); _Memory = new PerformanceCounter(".NET CLR Memory", "# Total committed Bytes", appInstanceName); } public static long Sample() { long currMemUsage = _Memory.NextSample().RawValue; return currMemUsage; } } Please note that the code is written in .NET 1.1. So all you pattern-nazis out there ready to write in about not using a static class can calm down. Getting an idea of how much memory you're using in .NET is pretty difficult. The CLR has grabbed a big chunk of memory and manages internally how your program uses it. This is why we need to use a big XML document because we want to make sure that the CLR is grabbing memory. Any readings from this performance counter have to be taken with a grain of salt because there is always that buffer zone and you never know when garbage collects are happening. This makes the testing environment a bit unstable, so we can't run one test right after another without closing the application. The first approach we'll take is the simplest approach. We simply open the existing XML file in an XmlDocument object, append the row(s), and save it to the original filename. The code would look something like this: XmlDocument doc = new XmlDocument(); doc.Load("test.xml"); XmlElement el = doc.CreateElement("child"); el.InnerText = "This row is being appended to the end of the document."; doc.DocumentElement.AppendChild(el); doc.Save("test.xml"); Total committed bytes before opening XmlDocument : 663472 Total committed bytes after opening XmlDocument : 56794936 Total committed bytes after writing XmlDocument : 66146104 Time to append a row : 4.187634 The XmlDocument object is using 62 megs of memory to hold the 500,000 row document. Total committed bytes before opening XmlDocument : 1454072 Total committed bytes after opening XmlDocument : 63479800 Total committed bytes after writing XmlDocument : 63479800 Time to append a row : 2.902363 As you can see, they've improved the speed and memory usage in .Net 2.0. In this approach, we will use a MemoryStream to hold the XML document. We'll read from the current file until we get just before the end and simultaneously write into the MemoryStream. Then we add the rows, stick the end element tag for the document element on the end, rewind the stream, and write the whole thing out to the original file. The gist of it is in this code: MemoryStream FileInfo fi = new FileInfo("test.xml"); XmlTextReader xtr = new XmlTextReader(fi.OpenRead()); MemoryStream ms = new MemoryStream((int)fi.Length); XmlTextWriter xtw = new XmlTextWriter(new StreamWriter(ms)); // Copies the rows and appends a new row. Copy(xtr, xtw); ms.Seek(0L, SeekOrigin.Begin); xtr.Close(); // Writes the MemoryStream to the file. Stream s = fi.OpenWrite(); s.Write(ms.GetBuffer(), 0, (int)ms.Length); s.Close(); xtw.Close(); The Copy is the one doing all the work. This is actually why most people stick with the XmlDocument approach. Copying an XML document is not clear cut. The copy method I came up with may not be able to handle all XML documents, but it should handle a significant majority of them fairly well: Copy string docElemName = null; bool b = true; while (b) { xtr.Read(); switch (xtr.NodeType) { case XmlNodeType.Attribute: xtw.WriteAttributeString(xtr.Prefix, xtr.LocalName, xtr.NamespaceURI, xtr.Value); break; case XmlNodeType.CDATA: xtw.WriteCData(xtr.Value); break; case XmlNodeType.Comment: xtw.WriteComment(xtr.Value); break; case XmlNodeType.DocumentType: xtw.WriteDocType(xtr.Name, null, null, null); break; case XmlNodeType.Element: xtw.WriteStartElement(xtr.Prefix, xtr.LocalName, xtr.NamespaceURI); if (xtr.IsEmptyElement) xtw.WriteEndElement(); if (docElemName == null) docElemName = xtr.Name; break; case XmlNodeType.EndElement: if (docElemName == xtr.Name) b = false; else xtw.WriteEndElement(); break; case XmlNodeType.EntityReference: xtw.WriteEntityRef(xtr.Name); break; case XmlNodeType.ProcessingInstruction: xtw.WriteProcessingInstruction(xtr.Name, xtr.Value); break; case XmlNodeType.SignificantWhitespace: xtw.WriteWhitespace(xtr.Value); break; case XmlNodeType.Text: xtw.WriteString(xtr.Value); break; case XmlNodeType.Whitespace: xtw.WriteWhitespace(xtr.Value); break; } } xtw.WriteElementString("child", "This row is being" + " appended to the end of the document."); xtw.WriteEndElement(); xtw.Flush(); One thing you'll want to take note of if you're copying this code is that it looks at the name of the document element and then tries to find an end element with the same name. That is the indicator which tells the code it's close to the end of the file and needs to begin appending rows. It is entirely possible that your document element name is used for another element within your XML file. If I wasn't so lazy, I'd create a counter that would be incremented every time I see a start element with a name matching the document element and decremented in similar fashion. Total committed bytes before opening file : 663472 Total committed bytes after creating MemoryStream : 663472 Total committed bytes after writing to MemoryStream : 30433160 Total committed bytes after writing to file : 30433160 Time to append a row : 3.156351 The MemoryStream approach uses 28 megs of memory to store the entire file. Which is pretty understandable because the XML file generated is about 23 megs and the MemoryStream is simply a stream wrapped around a byte array. Total committed bytes before opening file : 1454072 Total committed bytes after creating MemoryStream : 1454072 Total committed bytes after writing to MemoryStream : 28729336 Total committed bytes after writing to file : 28729336 Time to append a row : 2.659330 Once again, a speed and memory usage improvement in .NET 2.0. Notice that the speed advantage between this approach and the XmlDocument approach has narrowed. The MemoryStream from the previous option is kind of like a temporary file, but in memory. So, if we instead just write to a temporary file, then we can get rid of the extra memory usage. We can also eliminate an extra run through the document because the temporary file can be renamed to match the original file's name. The code for this is below. Compare it to Option 2 to see the differences mentioned above: FileInfo fi = new FileInfo("test.xml"); XmlTextReader xtr = new XmlTextReader(fi.OpenRead()); XmlTextWriter xtw = new XmlTextWriter(fi.FullName + "_temp", xtr.Encoding); Copy(xtr, xtw); xtw.Close(); xtr.Close(); fi.Delete(); File.Move(fi.FullName + "_temp", fi.FullName); Total committed bytes before opening files : 663472 Total committed bytes after opening files : 663472 Total committed bytes after writing to file : 5513136 Time to append a row : 2.578208 Not only is this faster than both the previous methods, it also has an extremely small memory footprint. We're looking at less than 5 megs of committed memory. Total committed bytes before opening files : 1454072 Total committed bytes after opening files : 1454072 Total committed bytes after writing to file : 5832696 Time to append a row : 1.582042 We actually have a slightly larger memory footprint with this approach. But the speed has increased by almost a second. The downside of all this is that it can be hard to control temporary files. You want to make sure a file does not already exist with the same name and you need permissions to create, delete, and rename files in the directory you're working in. To make your appending code robust, you have to take into account all the problems that go with using a temporary file. Plus, it feels kinda kludgy. Another way to handle this is to use a custom class that serializes to XML using the XmlSerializer. This was suggested to me by CP'ian BoneSoft. To handle the records, I created a very simple custom class that looks like this: XmlSerializer using System; using System.Collections.Specialized; using System.Xml.Serialization; namespace XmlAppending { [XmlRoot("root")] public class Root { private StringCollection _ChildTexts; [XmlElement("child")] public StringCollection ChildTexts { get { return _ChildTexts; } set { _ChildTexts = value; } } public Root() {} } } While custom classes can come in all different shapes and sizes, this was pretty much the simplest way I could come up with of storing the data. The XmlSerializer class will automatically fill in the StringCollection with the inner text of each child node. Here's how the code looks for the test: StringCollection XmlSerializer xs = new XmlSerializer(typeof(Root)); FileInfo fi = new FileInfo("test.xml"); Stream inStream = fi.OpenRead(); Root r = xs.Deserialize(inStream) as Root; inStream.Close(); r.ChildTexts.Add("This row is being appended to the end of the document."); Stream outStream = fi.OpenWrite(); xs.Serialize(outStream, r); outStream.Close(); Total committed bytes before deserialization : 663472 Total committed bytes after deserialization : 54046560 Total committed bytes after writing to file : 54046560 Time to append a row : 3.500112 As you can see, this method is faster than the XmlDocument and uses less memory. However, it still does not compete with the other two options. But don't take me as saying you should not pursue something like this. Memory consumption could be a lot less as the XML gets more complicated. A custom class could have an enum or flag that converts into a much larger piece of XML or could compress child nodes into a much smaller space. The memory consumption could go below that of the MemoryStream in this case. The best way to put it: Your results may vary. Total committed bytes before deserialization : 1900536 Total committed bytes after deserialization : 41680888 Total committed bytes after writing to file : 41680888 Time to append a row : 2.039076 This shows a significant improvement in speed. This approach is now faster than all but option 3. It also uses significantly less memory than in .NET 1.1. Another thing to notice is that the initial memory usage is just a bit higher than the other .Net 2.0 tests. I tested this several times to be sure of it. We all know that the DataSet is a heavyweight object. It also poses some restrictions on the XML that it can read. But just how heavyweight is it? When doing the testing for this option I found that the typical test XML I was using with 500,000 rows was way too large for the DataSet to handle. It ended up taking on the order of hours to load up. So, I had to decrease the number of rows. The most I could get it to reasonably handle was 20,000 rows. Below are the results for using a DataSet with 20,000 rows but please, please, please don't misinterpret this as being competitive with the other options. If there's one thing you should take away from this article it's that using a DataSet in .NET 1.1 is a bad idea. DataSet Total committed bytes before reading into DataSet : 663472 Total committed bytes after reading into DataSet : 13160368 Total committed bytes after writing to file : 10149808 Time to append a row : 18.359963 These results are for 20,000 records not 500,000 like the other tests. My experiments showed me that the DataSet gets almost exponentially worse as the number of rows increases. For the 20,000 rows, it ends up using about 12 megs. Multiply that by 25 to estimate 500,000 rows and you get about 300 megs. A 300 meg DataSet compared to a 60 meg XmlDocument is definitely a heavyweight object. It's also slow and clunky. Bottom line, don't use it. Well, whatever it was in .NET 1.1 that made the DataSet unusable has most certainly been fixed in .NET 2.0. Not only is the DataSet now able to handle all 500,000 rows, it can do it in a fairly reasonable amount of time. Granted, it's still slow, but now it's at least an option. Total committed bytes before reading into DataSet : 1454072 Total committed bytes after reading into DataSet : 140300280 Total committed bytes after writing to file : 140300280 Time to append a row : 13.250085 These results are for 500,000 records. Suddenly the DataSet doesn't feel so heavy anymore. Sure, it doesn't compare speed wise with the other options, but it's an incredible improvement over .NET 1.1. It is still a memory hog, using about 140 megs to load our 500,000 row XML file, but much less so than in 1.1. This graph shows the performance monitor's take on the whole thing for .NET 1.1: The graph for .NET 2.0 is pretty similar with the big change being that the DataSet actually works now: Kudos to Microsoft for doing such an excellent job of tuning the code to get such large speed and memory usage improvements in .NET 2.0. Speed improvements on the order of seconds are very significant. They also made the DataSet a lot lighter and faster. Developers should feel less anxiety about passing DataSets around. This article was not meant to tell you the only correct way to append to a large XML file. It was meant to show you all the different options and explain the pitfalls with real data. A peer might tell you to not use an XmlDocument because it uses too much memory, but they might not know exactly how much. I wanted to know for sure. If you're ever in a forum or having an argument with a colleague about the finer points of appending to XML files, you can link them to this article. The only option here which is not viable is the DataSet in 1.1. The rest depend on your situation and needs. This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here <br /> <br /> ...<br /> <br /> XmlWriterSettings settings = new XmlWriterSettings();<br /> settings.Indent = true;<br /> settings.IndentChars = (" ");<br /> settings.Encoding = Encoding.GetEncoding("iso-8859-1");<br /> <br /> FileInfo fi = new FileInfo(@"filename.xml"); //What ever your filepath is<br /> XmlTextReader xtr = new XmlTextReader(fi.OpenRead()); //Yes it have to be a XmlTextReader or I got a exception when I wanted to delete the file later.<br /> <br /> XmlWriter xtw = XmlWriter.Create(fi.FullName + "_temp", settings); //To get the Encoding right<br /> <br /> Copy(xtr, xtw);<br /> <br /> xtw.Close();<br /> xtr.Close();<br /> <br /> fi.Delete();<br /> File.Move(fi.FullName + "_temp", fi.FullName);<br /> <br /> <br /> public void Copy(XmlReader xtr, XmlWriter xtw)<br /> {<br /> string docElemName = null;<br /> bool b = true;<br /> while (b)<br /> {<br /> xtr.Read();<br /> switch (xtr.NodeType)<br /> {<br /> case XmlNodeType.CDATA:<br /> xtw.WriteCData(xtr.Value);<br /> break;<br /> case XmlNodeType.Comment:<br /> xtw.WriteComment(xtr.Value);<br /> break;<br /> case XmlNodeType.DocumentType:<br /> xtw.WriteDocType(xtr.Name, null, null, null);<br /> break;<br /> case XmlNodeType.Element:<br /> xtw.WriteStartElement(xtr.Prefix, xtr.LocalName, xtr.NamespaceURI);<br /> if (xtr.HasAttributes)<br /> {<br /> while (xtr.MoveToNextAttribute())<br /> {<br /> xtw.WriteAttributeString(xtr.Prefix, xtr.LocalName, xtr.NamespaceURI, xtr.Value);<br /> }<br /> xtr.MoveToElement();<br /> }<br /> if (xtr.IsEmptyElement)<br /> xtw.WriteEndElement();<br /> if (docElemName == null)<br /> docElemName = xtr.Name;<br /> <br /> break;<br /> case XmlNodeType.EndElement:<br /> if (docElemName == xtr.Name)<br /> b = false;<br /> else<br /> xtw.WriteEndElement();<br /> break;<br /> case XmlNodeType.EntityReference:<br /> xtw.WriteEntityRef(xtr.Name);<br /> break;<br /> case XmlNodeType.ProcessingInstruction:<br /> xtw.WriteProcessingInstruction(xtr.Name, xtr.Value);<br /> break;<br /> case XmlNodeType.SignificantWhitespace:<br /> xtw.WriteWhitespace(xtr.Value);<br /> break;<br /> case XmlNodeType.Text:<br /> xtw.WriteString(xtr.Value);<br /> break;<br /> case XmlNodeType.Whitespace:<br /> xtw.WriteWhitespace(xtr.Value);<br /> break;<br /> }<br /> }<br /> <br /> ... What ever you want to append ...<br /> <br /> xtw.WriteEndElement();<br /> xtw.Flush();<br /> }<br /> <br /> <br /> <br /> public delegate void ToAppendDelegate(XmlWriter xtw); public class AppendLogEntryTemplate<br /> {<br /> private DateTime entryTime;<br /> private string entryTitle;<br /> private object entrySender;<br /> private string entryText;<br /> <br /> public AppendLogEntryTemplate(DateTime entryTime, string entryTitle, object entrySender, string entryText)<br /> {<br /> this.entryTime = entryTime;<br /> this.entryTitle = entryTitle;<br /> this.entrySender = entrySender;<br /> this.entryText = entryText;<br /> }<br /> <br /> <br /> public void appendLogEntry(XmlWriter xtw)<br /> {<br /> xtw.WriteWhitespace(" ");<br /> xtw.WriteStartElement("entry");<br /> xtw.WriteAttributeString("entryTitle", entryTitle);<br /> xtw.WriteAttributeString("entryTime", entryTime.ToString("F"));<br /> xtw.WriteWhitespace("\r\t");<br /> xtw.WriteElementString("entrySender", entrySender.ToString());<br /> xtw.WriteWhitespace("\r\t");<br /> xtw.WriteElementString("entryText", entryText);<br /> xtw.WriteWhitespace("\r ");<br /> xtw.WriteEndElement();<br /> xtw.WriteWhitespace("\r");<br /> xtw.WriteEndElement();<br /> }<br /> } private static void AppendXmlFragment() { using (FileStream fs = File.Open("test.xml", FileMode.Append, FileAccess.Write, FileShare.Read)) { XmlTextWriter writer = new XmlTextWriter(fs, Encoding.ASCII); writer.WriteElementString("item", "", DateTime.Now.ToString()); writer.WriteWhitespace("\n"); writer.Close(); } } XPathDocument XPathNavigator \\\|/// \\ - - // ( @ @ ) +---------------oOOo-(_)-oOOo-----------------+ | Stephan Pilz stephan.pilz@stephan-pilz.de | | <a href=></a> | | ICQ#: 127823481 | +-----------------------Oooo------------------+ oooO ( ) ( ) ) / \ ( (_/ \_) Copy() XmlReader General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://www.codeproject.com/Articles/15278/How-to-Append-to-a-Large-XML-File?msg=1952181
CC-MAIN-2018-05
refinedweb
3,116
50.63
Things You'll Need: - Money Belts - Travel Clothes - Letters Of Introduction - Local Guidebooks - Passport Services - Step 1 Research cultural mores and local laws before your departure, even if you are traveling in a country that seems similar to your own. Learn which behaviors are unacceptable, what's illegal, and the penalties if you break the law. - Step 2 Investigate whether or not bribes are a part of police culture in your destination, and if so, what constitutes an appropriate bribe. If uncertain and you encounter trouble, try asking a police officer if it is customary to pay a "fine" on the spot rather than be hauled into a police station to pay. - Step 3 Check with your country's embassy and find out when they can offer assistance for legal entanglements. Some embassies will not offer assistance in repatriating their citizens after particular kinds of crimes such as drug possession or drug dealing. - Step 4 Pack and wear clothing that is culturally appropriate to the area. Bring along at least one set of modest or conservative clothing that you can wear when interacting with local authorities. - Step 5 Carry appropriate identification and papers with you at all times in case you are stopped by a police officer. Know when your passport, visa or other permits expire and the appropriate means of renewing these before they become invalid. - Step 6 Carry a letter of introduction from a person of social position (such as a business, university or government leader) when traveling in a country where such letters are used. This may be an important tool in keeping local police from harassing you. - Step 7 Don't carry anything through a metal detector or onto a flight that may be interpreted as illegal or threatening. Tools such as sewing scissors and pocket knives are best kept in checked baggage. - Step 8 Carry your ticket or a receipt for your ticket when traveling on a train or bus. In some countries you will be asked to show your ticket on exiting a railway station or to produce your ticket randomly when the conductor has time to check it. - Step 9 Avoid confrontations or using disrespectful language when interacting with authority figures of any kind. Anonymous said on 11/22/2005 What you and I may think of as conservative may be embarrassingly trashy in another country. Definitely study the local mores. If you're a woman, bring a long skirt and a high-necked shirt (these will get you by in most countries), then shop locally for clothing that lets you blend in and deal with the local climate. Remember to declare your new purchases, even if you've worn them. Let the US Customs agents decide if you need to pay duty on it or not. You may be happily surprised upon your return (or a percentage of happy).
http://www.ehow.com/how_13137_avoid-trouble-with.html
crawl-002
refinedweb
477
59.33
The Microsoft .NET Framework consists of ADO.NET which enables developers to interact with the database. ADO.NET provides many rich features that can be used to retrieve and display data in a number of ways. Apart from the flexibility provided by ADO.NET, sometimes we find ourselves repeating the same code again and again. Consider that at some point in our application we need to pass some parameters and retrieve some information from the database. We can perform this task by writing 5-6 lines of code which is cool. But when later we need to pass.Text) ); As you see in the above code we perform the whole operation in a single line instead of writing 5-6 lines. First of all you should always add the namespace Microsoft.ApplicationBlocks.Data without using the namespace you will not be able to use the Application Block. The next interesting thing that you might note is the SqlHelper class. The SqlHelper class is developed by Microsoft developers which contains the static methods to access the database. You can view the SqlHelper class by opening it in any word editor. Microsoft.ApplicationBlocks.Data SqlHelper Let's see some more features of the Microsoft Data Access Application Block. Consider a situation that you need to retrieve multiple rows from the database. This retrieval can be for only displaying purposes and you want this task to be completed very fast. Since you only need to display the rows and you need it very fast your best bet is to use SqlDataReader since its a forward only reader. Lets see how you can use SqlDataReader to get the rows you wanted in an efficient and quick manner. SqlDataReader using Microsoft.ApplicationBlocks.Data; SqlDataReader dr = SqlHelper.ExecuteReader(connection,CommandType.StoredProcedure,"SELECT_PERSON"); STORED PROCEDURE SELECT_PERSON: SELECT * FROM tblPerson; As you can see executing the reader is pretty simple. All you have to do is pass few parameters and that's it and it will return the datareader object which you can use to bind to the datagrid. Also remember that Execute Reader method of the SqlHelper class has several overloads which you can use according to your needs. You can pass parameters or simple execute a simple procedure like I did. I have also shown the stored procedure which simple selects all the rows from the tblPerson and returns them. You can also use a dataset to retrieve multiple rows. The question that comes to your mind right now should be that when should you use DataReader and when you should use DataSet. You should use DataReader when your sole purpose is to display the data to the end user. Since datareader is a forward only reader its very fast in reading the records. SqlDataAdapter also uses SqlDataReader when reading records from the database. Sometimes you have a need to retrieve a single row instead of group of rows. Whenever you need to retrieve a single row you will have to change your stored procedure. I am not saying that this is the only way to referring to a single row since you can retrieve all the rows into a dataset and than pick the row you like. I am talking about retrieving a single row from the database. Lets see what you need to do in your stored procedure to get one row out of You can perform the same operations in a number of ways. I like it this way since its more clear. As you can see you first created an array of Parameters. After the array is created you simply assigns the value and also informs the C# compiler that which one of the parameters are OUTPUT. You can not only retrieve data from the database but also from any XML File. Lets see a small code sample which shows this operation..
http://www.codeproject.com/Tips/555870/SQL-Helper-Class-Microsoft-NET-Utility?fid=1827663&df=90&mpp=10&sort=Position&spc=Relaxed&tid=4620653&PageFlow=FixedWidth
CC-MAIN-2014-52
refinedweb
640
64.61
tgamma, tgammaf, tgammal - compute gamma() function #include <math.h> double tgamma(double x); float tgammaf(float x); long double tgammal(long double x); These functions shall compute the gamma() function Gamma( x). If x is a negative integer, a domain error shall occur, and either a NaN (if supported), or an implementation-defined value shall be returned. If the correct value would cause overflow, a range error shall occur and tgamma(), tgammaf(), and tgammal() shall return ±HUGE_VAL, ±HUGE_VALF, or ±HUGE_VALL, respectively, with the same sign as the correct value of the function. If x is NaN, a NaN shall be returned. If x is +Inf, x shall be returned. If x is ±0, a pole error shall occur, and tgamma(), tgammaf(), and tgammal() shall return ±HUGE_VAL, ±HUGE_VALF, and ±HUGE_VALL, respectively. If x is -Inf, a domain error shall occur, and either a NaN (if supported), or an implementation-defined. For IEEE Std 754-1985 double, overflow happens when 0 < x < 1/DBL_MAX, and 171.7 < x. On error, the expressions (math_errhandling & MATH_ERRNO) and (math_errhandling & MATH_ERREXCEPT) are independent of each other, but at least one of them must be non-zero. This function is named tgamma() in order to avoid conflicts with the historical gamma() and lgamma() functions. It is possible that the error response for a negative integer argument may be changed to a pole error and a return value of ±Inf. feclearexcept() , fetestexcept() , lgam .
http://manpages.sgvulcan.com/tgammaf.3p.php
CC-MAIN-2017-47
refinedweb
235
53.92
WebPageTexture Since: BlackBerry 10.0.0 #include <bb/cascades/WebPageTexture> The texture that can be used as a render target when compositing the internal scene graph of a webpage. A WebPageTexture can be used as a render target when compositing the internal scene graph of a WebPage using a WebPageCompositor. The texture can then be used in custom OpenGL rendering code to render the webpage as part of an OpenGL scene. This object can be created on any thread. However, the object has an affinity for the thread it was created on, and may not be used on any other thread. When calling the textureId() method on a WebPageTexture object, an OpenGL ES 2.0 capable EGL context must be current. Overview QML properties QML signals Properties Index Public Functions Index Public Slots Index Signals Index Properties QSize The size of the texture. BlackBerry 10.0.0 quint32 The ID of an OpenGL texture. WebPageTexture::textureId(). BlackBerry 10.0.0 Public Functions Constructs a WebPageTexture object with the requested size. BlackBerry 10.0.0 virtual Destructor. QSize Retrieves the texture size requested. The actual size of the texture may not be equal to the requested size due to GPU limitations. The size requested when constructing this WebPageTexture object. BlackBerry 10.0.0 quint32 Retrieves the ID of an OpenGL texture in the EGL context that is current on the calling thread. When calling this method, an OpenGL ES 2.0 capable EGL context must be current. GL_TEXTURE_MIN_FILTER: GL_LINEAR GL_TEXTURE_MAG_FILTER: GL_LINEAR GL_TEXTURE_WRAP_S: GL_CLAMP_TO_EDGE GL_TEXTURE_WRAP_T: GL_CLAMP_TO_EDGE // eglMakeCurrent(...); glBindTexture(GL_TEXTURE_2D, m_texture->textureId()); // ... The ID of the texture. BlackBerry 10.0.0 Public Slots void Sets the requested size of the texture. This operation can be expensive, because a new texture is allocated internally. BlackBerry 10.0.0 Signals void Emitted when the size changes. BlackBerry 10.0.0 void Emitted when the texture ID or appearance of the texture changes. This signal is typically emitted after a call to WebPageCompositor::renderToTexture(), when the asynchronous rendering completes. BlackBerry 10.0.0 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
https://developer.blackberry.com/native/reference/cascades/bb__cascades__webpagetexture.html
CC-MAIN-2016-50
refinedweb
353
51.24
Introducing Page Type Builder Independent systems architect and developer. Co-founder of, the best search solution for EPiServer. Page Type Builder is an open source project that offers page type inheritance, strongly typed property access and a more object oriented way of working with pages in EPiServer CMS. One of the most sought after features by developers in EPiServer CMS is inheritance between page types. That is, if page type B inherits from page type A and we add a new property to page type A the same property will also be added to page type B. Another feature that I've heard a lot of developers say they wish the CMS had is strongly typed access to properties. Instead of accessing property values with something like CurrentPage["MainBody"] we would often much rather be able to write CurrentPage.MainBody. In early June, heavily inspired by blog posts by Daniel Rodin, Fredrik Tjärnberg and Mikael Nordberg, I begun working on the Page Type Builder project which aims to deliver just that. Since the first release, which was purely experimental, I've continued to work on the project and I have received a lot of great feedback, contributions and ideas from the community. Especially from Daniel Rodin and Cristian Libardo. A total of seven releases have been made public and the latest one was good and complete enough to be dubbed version 1.0. The project is hosted at CodePlex and the binaries as well as the source code is available for download from the projects site there. Installation Installing Page Type Builder for use in a project is easily done in three steps: 1. Download the binaries from the projects site and unzip the downloaded zip file to a directory on your computer. 2. Make all of the unzipped assemblies (PageTypeBuilder.dll, Castle.Core.dll and Castle.DynamicProxy2.dll) available to an EPiServer CMS project by adding them to its bin folder or to the Global Assembly Cache. 3. Add a reference to PageTypeBuilder.dll to your project. No further configuration is needed so with that done you are ready to take it for a spin by creating a first page type. Hello World The Hello World of Page Type Builder is creating a new page type, taking a look at it in admin mode and then creating a new page based on that page type and display the value of one of its properties. To create a new page type you start by adding a new class to your project. You let the class inherit from TypedPageData and you add a PageType attribute to it. using PageTypeBuilder; namespace MyProject.PageTypes { [PageType] public class MyPageType : TypedPageData { } } That's all you have to do to create a new page type, but a page type without a single property is hardly useful, at least not when we want to be able to display a Hello World message, so let's add a property to it. Page type properties are created with code properties with a PageTypeProperty attribute. Let's create a property called MainBody of type XHTML string. using PageTypeBuilder; namespace EPiServer.PageTypes { [PageType] public class MyPageType : TypedPageData { [PageTypeProperty] public virtual string MainBody { get; set; } } } We also need a page that can display pages of our page type so we add a new ASPX page to the project. In the code behind file we make it inherit from TemplatePage<MyPageType>. using System; using MyProject.PageTypes; using PageTypeBuilder.UI; namespace MyProject.Templates { public partial class HelloWorld : TemplatePage<MyPageType> { protected void Page_Load(object sender, EventArgs e) { } } } Finally, in the markup file we display the value of the MainBody property. <body> <form id="form1" runat="server"> <div> <%= CurrentPage.MainBody %> </div> </form> </body> Notice how Intellisense kicks in and help us locate the property. Before we are done there's one last thing that we'll have to do and that is to associate the page type with our newly created ASPX-file (otherwise it will default to "~/default.aspx"). We do this by modifying the PageType attribute. [PageType(Filename = "~/Templates/HelloWorld.aspx")] With that done compile the project and open up the sites admin interface and click the Page Type tab and you'll find your newly created page type there. Head on over to edit mode and create a new page with your page type. Add some beautifully styled message in the WYSIWYG editor and hit the Save and Publish button and behold your Hello World page which you just created without using any "magic strings" or having to enter admin mode! Beyond Hello World By now I hope you've learnt a bit about the Page Type Builder project and gained a basic understanding of how to work with it. There are however quite a lot more you can do with it. You can actually define every aspect of a page type that you would normally do in admin mode in code. And considering that each page type can be a class and each page an instance of a class that you define its possible to work in a much more object oriented way than before with pages in EPiServer CMS. You can for instance use interfaces to define common behaviors that each page type can implement in its own way, such as an interface called IRssFeedProvider, or let all your page types inherit from an abstract base class that stipulates that all its subtypes must implement a MetaDescription property. These topics are however beyond the scope of this article. You'll find a list of blog entries about the project at the projects site and a lengthier tutorial here . Ted Nyberg has also posted a great introduction to the project on his blog. The projects site and my blog is also the place to turn to if you have any questions or other types of feedback, which I of course hope you do! Hi, I am getting this error when I created a Page Type as suggested in this blog. Method not found: 'Boolean EPiServer.DataAbstraction.PageType.op_Equality(EPiServer.DataAbstraction.PageType, EPiServer.DataAbstraction.PageType)'. To add: I am using EPi CMS 7 and Page Type Builder 2.0 Same error here.
http://world.episerver.com/en/Articles/Items/Introducing-Page-Type-Builder/
CC-MAIN-2017-09
refinedweb
1,029
61.46
Issues ZF-3512: Improper handling of unsigned int values in quote() Description The default quote() method of the parent class (Zend_Db_Adapter_Abstract) uses the following cast operations to ensure that the value is a valid 32-bit integer. case Zend_Db::INT_TYPE: // 32-bit integer return (string) intval($value); break; This works for signed integers, but for fields declared as UNSIGNED in MySQL, this turns valid values between 2147483648 and 4294967296 into '2147483647'. Posted by Maghiel Dijksman (maghiel) on 2010-02-06T18:58:48.000+0000 I think a possible solution would be to add Zend_Db::UNSIGNED_TYPE type, add case Zend_Db::UNSIGNED_TYPE: // Unsigned integer $quotedValue = sprintf('%u', $value); break; to Zend_Db_Adapter_Abstract::quote() and implement in all adapters. Ralph if you want I can write a patch, i'm almost done with it, but time for bed now. Posted by Maghiel Dijksman (maghiel) on 2010-02-09T19:10:19.000+0000 Here's a patch. Passes all current unit tests. UNSIGNED_TYPE might not be the best name for the constant, as this patch only implements integers as unsigned types. But would extending it to other datatypes be necessary? Maybe for consequence and completeness sake... Am I taking the right actions to take on bugs like this? If not, someone please slap me ;) If this is the right way and this patch is ok, I'll write tests for it tomorrow! Let me know :) Posted by Maghiel Dijksman (maghiel) on 2010-02-09T19:26:34.000+0000 I looked at the activity log and it didn't really look like anyone was working on this? So I took the liberty of assigning it to myself. Someone please review! Posted by Maghiel Dijksman (maghiel) on 2010-02-14T19:07:26.000+0000 Tests Posted by Maghiel Dijksman (maghiel) on 2010-02-14T19:08:47.000+0000 Please review Posted by Holger Schletz (hschletz) on 2010-02-25T00:32:16.000+0000 Unsigned integers are not part of the SQL standard and not available on all DBMS. How will this patch affect compatibility with DBMS that don't support it, like PostgreSQL? Is it wise to implement it in their respective adapters? Posted by Mickael Perraud (mikaelkael) on 2010-02-25T07:37:02.000+0000 Why this issue is 'Fixed' as there is no associated SVN commit? Posted by Maghiel Dijksman (maghiel) on 2010-02-25T12:09:45.000+0000 Sorry guys, I was confused with the workflow of my work when I put the status of this issue to Resolved. Posted by Maghiel Dijksman (maghiel) on 2010-02-25T12:35:40.000+0000 Not committed into repo Posted by Maghiel Dijksman (maghiel) on 2010-02-25T12:41:01.000+0000 Assigned to automatic, please review and commit attached patches Posted by Marc Bennewitz (private) (mabe) on 2011-06-08T05:27:04.000+0000 The constant Zend_Db::UNSIGNED_TYPE is very confusing because UNSIGNED is an additional flag of 'all' numeric data types. I think it would better to throw an exception if the value to quote has non numeric characters or how do you quote $db->quote('abc', Zend_Db::INT_TYPE);
http://framework.zend.com/issues/browse/ZF-3512?page=com.atlassian.jira.plugin.system.issuetabpanels:changehistory-tabpanel
CC-MAIN-2016-18
refinedweb
514
66.33
- NAME - SYNOPSIS - DESCRIPTION - Video::ZVBI::capture - Video::ZVBI::proxy - Video::ZVBI::rawdec - Video::ZVBI::dvb_mux - Video::ZVBI::dvb_demux - Video::ZVBI::idl_demux - Video::ZVBI::pfc_demux - Video::ZVBI::xds_demux - Video::ZVBI::vt - Video::ZVBI::page - Video::ZVBI::export - Video::ZVBI::search - Miscellaneous (Video::ZVBI) - EXAMPLES - AUTHORS - COPYING NAME Video::ZVBI - VBI decoding (teletext, closed caption, ...) SYNOPSIS use Video::ZVBI; # OR: to import all constants use Video::ZVBI qw(/^VBI_/); DESCRIPTION This module provides a Perl interface to libzvbi. The ZVBI library allows to access broadcast data services such as teletext or closed caption via analog video or DVB capture devices. Official library description: "The ZVBI library provides routines to access raw VBI sampling devices (currently the Linux V4L & V4L2 APIs and the FreeBSD, OpenBSD, NetBSD and BSDi bktr driver API are supported), a versatile raw VBI bit slicer, decoders for various data services and basic search, render and export functions for text pages. The library was written for the Zapping TV viewer and Zapzilla Teletext browser." The ZVBI Perl module covers all exported libzvbi functions. Most of the functions and parameters are exposed nearly identical, or with minor adaptions for the Perl idiom. Note: This manual page does not reproduce the full documentation which is available along with libzvbi. Hence it's recommended that you use the libzvbi documentation in parallel to this one. It is included in the libzvbi-dev package in the doc/html sub-directory and online at Finally note there's also another, older, module which covers VBI data capture: Video::Capture::VBI and based on that another one which covers Teletext caching: Video::TeletextDB. Check for yourself which one fits your needs better. Video::ZVBI::capture The following functions create and return capture contexts with the given parameters. Upon success, the returned context can be passed to the read, pull and other control functions. The context is automatically deleted and the device closed when the object is destroyed. The meaning of the parameters to these function is identical to the ZVBI C library. Upon failure, these functions return undef and an explanatory text in errorstr. - $cap = v4l2_new($dev, $buffers, $services, $strict, $errorstr, $trace) Initializes a device using the. $buffers is the number of device buffers for raw VBI data if the driver supports streaming. Otherwise one bounce buffer is allocated for $cap->pull() $services is a logical OR of VBI_SLICED_*symbols describing the data services to be decoded. On return the services actually decodable will be stored here. See ZVBI::raw_dec::add_services() for details. If you want to capture raw data only, set to VBI_SLICED_VBI_525, VBI_SLICED_VBI_625or both. If this parameter is undef, no services will be installed. You can do so later with $cap->update_services() (Note in this case the $reset parameter to that function will have to be set to 1.) $strict is passed internally to Video::ZVBI::raw_dec::add_services(). $errorstr is used to return an error description. $trace can be used to enable output of progress messages on stderr. - $cap = v4l_new($dev, $scanning, $services, $strict, $errorstr, $trace) Initializes a device using the Video4Linux API version 1. Should only be used after trying. $scanning can be used to specify the current TV norm for old drivers which don't support ioctls to query the current norm. Allowed values are: 625 for PAL/SECAM family; 525 for NTSC family; 0 if unknown or if you don't care about obsolete drivers. $services, $strict, $errorstr, $trace: see function v4l2_new() above. - $cap = v4l_sidecar_new($dev, $given_fd, $services, $strict, $errorstr, $trace) Same as v4l_new() however working on an already open device. Parameter $given_fd must be the numerical file handle, i.e. as returned by Perl's fileno. - $cap = bktr_new($dev, $scanning, $services, $strict, $errorstr, $trace) Initializes a video device using the BSD driver. Result and parameters are identical to function v4l2_new() - $cap = dvb_new($dev, $scanning, $services, $strict, $errorstr, $trace) Initializes a DVB video device. This function is deprecated as it has many bugs (see libzvbi documentation for details). Use dvb_new2() instead. - dvb_new2($dev, $pid, $errorstr, $trace) Initializes a DVB video device. The function returns a blessed reference to a capture context. Upon error the function returns undefas result and an error message in $errorstr Parameters: $dev is the path of the DVB device to open. $pid specifies the number (PID) of a stream which contains the data. You can pass 0 here and set or change the PID later with $cap->dvb_filter(). $errorstr is used to return an error descriptions. $trace can be used to enable output of progress messages on stderr. - $cap = $proxy->proxy_new($buffers, $scanning, $services, $strict, $errorstr) Open a new connection to a VBI proxy to open a VBI device for the given services. On side of the proxy daemon, one of the regular capture context creation functions (e.g. v4l2_new()) is invoked. If the creation succeeds, and any of the requested services are available, capturing is started and all captured data is forwarded transparently to the client. Whenever possible the proxy should be used instead of opening the device directly, since it allows the user to start multiple VBI clients in parallel. When this function fails (usually because the user hasn't started the proxy daemon) applications should automatically fall back to opening the device directly. Result: The function returns a blessed reference to a capture context. Upon error the function returns undefas result and an error message in $errorstr Parameters: $proxy is a reference to a previously created proxy client context (Video::ZVBI::proxy.) The remaining parameters have the same meaning as described above, as they are used by the daemon when opening the device. $buffers specifies the number of intermediate buffers on server side of the proxy socket connection. (Note this is not related to the device buffer count.) $scanning indicates the current norm: 625 for PAL and 525 for NTSC; set to 0 if you don't know (you should not attempt to query the device for the norm, as this parameter is only required for old v4l1 drivers which don't support video standard query ioctls.) $services is a set of VBI_SLICED_*symbols describing the data services to be decoded. On return $services contains actually decodable services. See Video::ZVBI::raw_dec::add_services() for details. If you want to capture raw data only, set to VBI_SLICED_VBI_525, VBI_SLICED_VBI_625or both. If this parameter is undef, no services will be installed. $strict has the same meaning as described in the device-specific capture context creation functions. $errorstr is used to return an error message when the function fails. The following functions are used to read raw and sliced VBI data from a previously created capture context $cap (the reference is implicitly inserted as first parameter when the functions are invoked as listed below.) All these functions return a status result code: -1 on error (and an error indicator in $!), 0 on timeout (i.e. no data arrived within $timeout_ms milliseconds) or 1 on success. There are two different types of capture functions: The functions named read... copy captured data into the given Perl scalar. In contrast the functions named pull... leave the data in internal buffers inside the capture context and just return a blessed reference to this buffer. When you need to access the captured data directly via Perl, choose the read functions. When you use functions of this module for further decoding, you should use the pull functions since these are usually more efficient. - $cap->read_raw($raw_buf, $timestamp, $timeout_ms) Read a raw VBI frame from the capture device into scalar $raw_buf. The buffer variable is automatically extended to the exact length required for the frame's data. On success, the function returns in $timestamp the capture instant in seconds and fractions since 1970-01-01 00:00 in double format. Parameter $timeout_ms gives the limit for waiting for data in milliseconds; if no data arrives within the timeout, the function returns 0. Note the function may fail if the device does not support reading data in raw format. - $cap->read_sliced($sliced_buf, $n_lines, $timestamp, $timeout_ms) Read a sliced VBI frame from the capture context into scalar $sliced_buf. The buffer is automatically extended to the length required for the sliced data. Parameter $timeout_ms specifies the limit for waiting for data (in milliseconds.) On success, the function returns in $timestamp the capture instant in seconds and fractions since 1970-01-01 00:00 in double format and in $n_lines the number of sliced lines in the buffer. Note for efficiency the buffer is an array of vbi_sliced C structures. Use get_sliced_line() to process the contents in Perl, or pass the buffer directly to class Video::ZVBI::vt or other decoder objects. Note: it's generally more efficient to use pull_sliced() instead, as that one may avoid having to copy sliced data into the given buffer (e.g. for the VBI proxy) - $cap->read($raw_buf, $sliced_buf, $n_lines, $timestamp, $timeout_ms) This function is a combination of read_raw() and read_sliced(), i.e. reads a raw VBI frame from the capture context into $raw_buf and decodes it to sliced data which is returned in $sliced_buf. For details on parameters see above. Note: Depending on the driver, captured raw data may have to be copied from the capture buffer into the given buffer (e.g. for v4l2 streams which use memory mapped buffers.) It's generally more efficient to use one of the following "pull" interfaces, especially if you don't require access to raw data at all. - $cap->pull_raw($ref, $timestamp, $timeout_ms) Read a raw VBI frame from the capture context, which is returned in $ref in form of a blessed reference to an internal buffer. The data remains valid until the next call to this or any other "pull" function. The reference can be passed to the raw decoder function. If you need to process the data in Perl, use read_raw() instead. For all other cases read_raw() is more efficient as it may avoid copying the data. On success, the function returns in $timestamp the capture instant in seconds and fractions since 1970-01-01 00:00 in double format. Parameter $timeout_ms specifies the limit for waiting for data (in milliseconds.) Note the function may fail if the device does not support reading data in raw format. - $cap->pull_sliced($ref, $n_lines, $timestamp, $timeout_ms) Read a sliced VBI frame from the capture context, which is returned in $ref in form of a blessed reference to an internal buffer. The data remains valid until the next call to this or any other "pull" function. The reference can be passed to get_sliced_line() to process the data in Perl, or it can be passed to a Video::ZVBI::vt decoder object. On success, the function returns in $timestamp the capture instant in seconds and fractions since 1970-01-01 00:00 in double format and in $n_lines the number of sliced lines in the buffer. Parameter $timeout_ms specifies the limit for waiting for data (in milliseconds.) - $cap->pull($raw_ref, $sliced_ref, $sliced_lines, $timestamp, $timeout_ms) This function is a combination of pull_raw() and pull_sliced(), i.e. returns blessed references to an internal raw data buffer in $raw_ref and to a sliced data buffer in $sliced_ref. For details on parameters see above. For reasons of efficiency the data is not immediately converted into Perl structures. Functions of the "read" variety return a single byte-string in the given scalar which contains all VBI lines. Functions of the "pull" variety just return a binary reference (i.e. a C pointer) which cannot be used by Perl for other purposes than passing it to further processing functions. To process either read or pulled data by Perl code, use the following function: - ($data, $id, $line) = $cap->get_sliced_line($buffer, $line_idx) The function takes a buffer which was filled by one of the slicer or capture & slice functions and a line index. The index must be lower than the line count returned by the slicer. The function returns a list of three elements: sliced data from the respective line in the buffer, slicer type ( VBI_SLICED_...) and physical line number. The structure of the data returned in the first element depends on the kind of data in the VBI line (e.g. for teletext it's 42 bytes, partly hamming 8/4 and parity encoded; the content in the scalar after the 42 bytes is undefined.) The following control functions work as described in the libzvbi documentation. - $cap->parameters() Returns a hash reference describing the physical parameters of the VBI source. This hash can be used to initialize the raw decoder context described below. The hash array has the following members: - scanning Either 525 (M/NTSC, M/PAL) or 625 (PAL, SECAM), describing the scan line system all line numbers refer to. - sampling_format Format of the raw VBI data. - sampling_rate Sampling rate in Hz, the number of samples or pixels captured per second. - bytes_per_line Number of samples or pixels captured per scan line, in bytes. This determines the raw VBI image width and you want it large enough to cover all data transmitted in the line (with headroom). - offset The distance from 0H (leading edge hsync, half amplitude point) to the first sample (pixel) captured, in samples (pixels). You want an offset small enough not to miss the start of the data transmitted. - start_a, start_b First scan line to be captured, first and second field respectively, according to the ITU-R line numbering scheme (see vbi_sliced). Set to zero if the exact line number isn't known. - count_a, count_b Number of scan lines captured, first and second field respectively. This can be zero if only data from one field is required. The sum count_a + count_b determines the raw VBI image height. - interlaced In the raw vbi image, normally all lines of the second field are supposed to follow all lines of the first field. When this flag is set, the scan lines of first and second field will be interleaved in memory. This implies count_a and count_b are equal. - synchronous Fields must be stored in temporal order, i. e. as the lines have been captured. It is assumed that the first field is also stored first in memory, however if the hardware cannot reliable distinguish fields this flag shall be cleared, which disables decoding of data services depending on the field number. - $services = $cap->update_services($reset, $commit, $services, $strict, $errorstr) Adds and/or removes one or more services to an already initialized capture context. Can be used to dynamically change the set of active services. Internally the function will restart parameter negotiation with the VBI device driver and then call $rd->add_services() on the internal raw decoder context. You may set $reset to rebuild your service mask from scratch. Note that the number of VBI lines may change with this call even if a negative result is returned. Result: The function returns a bitmask of supported services among those requested (not including previously added services), 0 upon errors. $reset when set, clears all previous services before adding new ones (by invoking $raw_dec->reset() at the appropriate time.) $commit when set, applies all previously added services to the device; when doing subsequent calls of this function, commit should be set only for the last call. Reading data cannot continue before changes were commited (because capturing has to be suspended to allow resizing the VBI image.) Note this flag is ignored when using the VBI proxy. $services contains a set of VBI_SLICED_*symbols describing the data services to be decoded. On return the services actually decodable will be stored here, i.e. the behaviour is identical to v4l2_new() etc. $strict and $errorstr are also same as during capture context creation. - $cap->fd() This function returns the file descriptor used to read from the capture context's device. Note when using the proxy this will not be the actual device, but a socket instead. Some devices may also return -1 if they don't have anything similar, or upon internal errors. The descriptor is intended be used in a select(2) syscall. The application especially must not read or write from it and must never close the handle (instead destroy the capture context to free the device.) In other words, the filehandle is intended to allow capturing asynchronously in the background; The handle will become readable when new data is available. - $cap->get_scanning() This function is intended to allow the application to check for asynchronous norm changes, i.e. by a different application using the same device. The function queries the capture device for the current norm and returns value 625 for PAL/SECAM norms, 525 for NTSC; 0 if unknown, -1 on error. - $cap->flush() After a channel change this function should be used to discard all VBI data in intermediate buffers which may still originate from the previous TV channel. - $cap->set_video_path($dev) The function sets the path to the video device for TV norm queries. Parameter $dev must refer to the same hardware as the VBI device which is used for capturing (e.g. /dev/video0when capturing from /dev/vbi0) Note: only useful for old video4linux drivers which don't support norm queries through VBI devices. - $cap->get_fd_flags() Returns properties of the capture context's device. The result is an OR of one or more VBI_FD_*constants: - VBI_FD_HAS_SELECT Is set when select(2) can be used on the filehandle returned by $cap->fd() to wait for new data on the capture device file handle. - VBI_FD_HAS_MMAP Is set when the capture device supports "user-space DMA". In this case it's more efficient to use one of the "pull" functions to read raw data because otherwise the data has to be copied once more into the passed buffer. - VBI_FD_IS_DEVICE Is not set when the capture device file handle is not the actual device. In this case it can only be used for select(2) and not for ioctl(2) - $cap->dvb_filter($pid) Programs the DVB device transport stream demultiplexer to filter out PES packets with the given $pid. Returns -1 on failure, 0 on success. - $cap->dvb_last_pts() Returns the presentation time stamp (33 bits) associated with the data last read from the context. The PTS refers to the first sliced VBI line, not the last packet containing data of that frame. Note timestamps returned by VBI capture read functions contain the sampling time of the data, that is the time at which the packet containing the first sliced line arrived. Video::ZVBI::proxy The following functions are used for receiving sliced or raw data from VBI proxy daemon. Using the daemon instead of capturing directly from a VBI device allows multiple applications to capture concurrently, e.g. to decode multiple data services. - $proxy = create($dev, $client_name, $flags, $errorstr, $trace) Creates and returns a new proxy context, or undefupon error. (Note in reality this call will always succeed, since a connection to the proxy daemon isn't established until you actually open a capture context via $proxy->proxy_new()) Parameters: $dev contains the name of the device to open, usually one of /dev/vbi0and up. Note: should be the same path as used by the proxy daemon, else the client may not be able to connect. $client_name names the client application, typically identical to $0 (without the path though) Can be used by the proxy daemon to fine-tune scheduling or to present the user with a list of currently connected applications. $flags can contain one or more members of VBI_PROXY_CLIENT_*flags. $errorstr is used to return an error descriptions. $trace can be used to enable output of progress messages on stderr. - $proxy->get_capture_if() This function is not supported as it does not make sense for the Perl module. In libzvbi the function returns a reference to a capture context created from the proxy context via $proxy->proxy_new(). In Perl, you must keep the reference anyway, because otherwise the capture context would be automatically closed and destroyed. So you can just use the stored reference instead of using this function. - $proxy->set_callback(\&callback [, $user_data]) Installs or removes a callback function for asynchronous messages (e.g. channel change notifications.) The callback function is typically invoked while processing a read from the capture device. Input parameters are a function reference $callback and an optional scalar $user_data which is passed through to the callback unchanged. Call without arguments to remove the callback again. The callback function will receive the event mask (i.e. one of the constants VBI_PROXY_EV_*in the following list) and, if provided, $user_data as parameters. - VBI_PROXY_EV_CHN_GRANTED The channel control token was granted, so that the client may now change the channel. Note: the client should return the token after the channel change was completed (the channel will still remain reserved for the requested time.) - VBI_PROXY_EV_CHN_CHANGED The channel (e.g. TV tuner frequency) was changed by another proxy client. - VBI_PROXY_EV_NORM_CHANGED The TV norm was changed by another client (in a way which affects VBI, e.g. changes between PAL/SECAM are ignored.) The client must update its services, else no data will be forwarded by the proxy until the norm is changed back. - VBI_PROXY_EV_CHN_RECLAIMED The proxy daemon requests to return the channel control token. The client is no longer allowed to switch the channel and must immediately reply with a channel notification with flag VBI_PROXY_CHN_TOKEN - VBI_PROXY_EV_NONE No news. - $proxy->get_driver_api() This function can be used to query which driver is behind the device which is currently opened by the VBI proxy daemon. Applications which only use libzvbi's capture API need not care about this. The information is relevant to applications which need to switch TV channels or norms. Returns an identifier describing which API is used on server side, i.e. one of the symbols VBI_API_V4L1, VBI_API_V4L2, VBI_API_BKTRor VBI_API_UNKNOWNupon error. The function will fail if the client is currently not connected to the proxy daemon, i.e. VBI capture has to be started first. - $proxy->channel_request($chn_prio [, $profile]) This function is used to request permission to switch channels or norm. Since the VBI device can be shared with other proxy clients, clients should wait for permission, so that the proxy daemon can fairly schedule channel requests. Scheduling differs at the 3 priority levels. For available priority levels for $chn_prio see constants VBI_CHN_PRIO_*. At background level channel changes are coordinated by introduction of a virtual token: only the one client which holds the token is allowed to switch channels. The daemon will wait for the token to be returned before it's granted to another client. This way conflicting channel changes are avoided. At the upper levels the latest request always wins. To avoid interference, the application still might wait until it gets indicated that the token has been returned to the daemon. The token may be granted right away or at a later time, e.g. when it has to be reclaimed from another client first, or if there are other clients with higher priority. If a callback has been registered, the respective function will be invoked when the token arrives; otherwise $proxy->has_channel_control() can be used to poll for it. To set the priority level to "background" only without requesting a channel, omit the $profile parameter. Else, this parameter must be a reference to a hash with the following members: "sub_prio", "allow_suspend", "min_duration" and "exp_duration". - $proxy->channel_notify($notify_flags [, $scanning]) Sends channel control request to proxy daemon. Parameter $notify_flags is an OR of one or more of the following constants: - VBI_PROXY_CHN_RELEASE Revoke a previous channel request and return the channel switch token to the daemon. - VBI_PROXY_CHN_TOKEN Return the channel token to the daemon without releasing the channel; This should always be done when the channel switch has been completed to allow faster scheduling in the daemon (i.e. the daemon can grant the token to a different client without having to reclaim it first.) - VBI_PROXY_CHN_FLUSH Indicate that the channel was changed and VBI buffer queue must be flushed; Should be called as fast as possible after the channel and/or norm was changed. Note this affects other clients' capturing too, so use with care. Other clients will be informed about this change by a channel change indication. - VBI_PROXY_CHN_NORM Indicate a norm change. The new norm should be supplied in the scanning parameter in case the daemon is not able to determine it from the device directly. - VBI_PROXY_CHN_FAIL Indicate that the client failed to switch the channel because the device was busy. Used to notify the channel scheduler that the current time slice cannot be used by the client. If the client isn't able to schedule periodic re-attempts it should also return the token. - $proxy->channel_suspend($cmd) Request to temporarily suspend capturing (if $cmd is VBI_PROXY_SUSPEND_START) or revoke a suspension (if $cmd equals VBI_PROXY_SUSPEND_STOP.) - $proxy->device_ioctl($request, $arg) This function allows to manipulate parameters of the underlying VBI device. Not all ioctls are allowed here. It's mainly intended to be used for channel enumeration and channel/norm changes. The request codes and parameters are the same as for the actual device. The caller has to query the driver API via $proxy->get_driver_api() first and use the respective ioctl codes, same as if the device would be used directly. Parameters and results are equivalent to the called ioctl operation, i.e. $request is an IO code and $arg is a packed binary structure. After the call $arg may be modified for operations which return data. You must make sure the result buffer is large enough for the returned data. Use Perl's pack to build the argument buffer. Example: # get current config of the selected chanel $vchan = pack("ix32iLss", $channel, 0, 0, 0, $norm); $proxy->device_ioctl(VIDIOCGCHAN, $vchan); The result is 0 upon success, else and $!set appropriately. The function also will fail with error code EBUSYif the client doesn't have permission to control the channel. - $proxy->get_channel_desc() Retrieve info sent by the proxy daemon in a channel change indication. The function returns a list with two members: scanning value (625, 525 or 0) and a boolean indicator if the change request was granted. - $proxy->has_channel_control() Returns 1 if client is currently allowed to switch channels, else 0. See examples/proxy-test.pl for examples how to use these functions. Video::ZVBI::rawdec The functions in this section allow converting raw VBI samples to bits and bytes (i.e. analog to digital conversion - even though the data in a raw VBI buffer is obviously already digital, it's just a sampled image of the analog wave line.) These functions are used internally by libzvbi if you use the slicer functions of the capture object (e.g. pull_sliced) - $rd = Video::ZVBI::rawdec::new($ref) Creates and initializes a new raw decoder context. Parameter $ref specifies the physical parameters of the raw VBI image, such as the sampling rate, number of VBI lines etc. The parameter can be either a reference to a capture context (Video::ZVBI::capture) or a reference to a hash. The contents for the hash are as returned by method $cap->parameters() on capture contexts, i.e. they describe the physical parameters of the source. - $services = Video::ZVBI::rawdec::parameters($href, $services, $scanning, $max_rate) Calculate the sampling parameters required to receive and decode the requested data services. This function can be used to initialize hardware prior to calling $rd->add_service(). The returnes sampling format is fixed to VBI_PIXFMT_YUV420, $href->{bytes_per_line}set accordingly to a reasonable minimum. Input parameters: $href must be a reference to a hash which is filled with sampling parameters on return (contents see Video::ZVBI::capture::parameters().) $services is a set of VBI_SLICED_*constants. Here (and only here) you can add VBI_SLICED_VBI_625or VBI_SLICED_VBI_525to include all VBI scan lines in the calculated sampling parameters. If $scanning is set to 525 only NTSC services are accepted; if set to 625 only PAL/SECAM services are accepted. When scanning is 0, the norm is determined from the requested services; an ambiguous set will result in undefined behaviour. The function returns a set of VBI_SLICED_*constants describing the data services covered by the calculated sampling parameters returned in $href. This excludes services the libzvbi raw decoder cannot decode assuming the specified physical parameters. On return parameter $max_rate is set to the highest data bit rate in Hz of all services requested (The sampling rate should be at least twice as high; $href-{sampling_rate}> will be set by libzvbi to a more reasonable value of 27 MHz derived from ITU-R Rec. 601.) - $rd->reset() Reset a raw decoder context. This removes all previously added services to be decoded (if any) but does not touch the sampling parameters. You are free to change the sampling parameters after calling this. - $services = $rd->add_services($services, $strict) After you initialized the sampling parameters in raw decoder context (according to the abilities of your VBI device), this function adds one or more data services to be decoded. The libzvbi raw VBI decoder can decode up to eight data services in parallel. You can call this function while already decoding, it does not change sampling parameters and you must not change them either after calling this. Input parameters: $services is a set of VBI_SLICED_*constants (see also description of the parameters function above.) $strict is value of 0, 1 or 2 and requests loose, reliable or strict matching of sampling parameters respectively. For example if the data service requires knowledge of line numbers while they are not known, value 0 will accept the service (which may work if the scan lines are populated in a non-confusing way) but values 1 or 2 will not. If the data service may use more lines than are sampled, value 1 will still accept but value 2 will not. If unsure, set to 1. Returns a set of VBI_SLICED_*constants describing the data services that actually can be decoded. This excludes those services not decodable given sampling parameters of the raw decoder context. - $services = $rd->check_services($services, $strict) Check and return which of the given services can be decoded with current physical parameters at a given strictness level. See add_services for details on parameter semantics. - $services = $rd->remove_services($services) Removes one or more data services given in input parameter $services to be decoded from the raw decoder context. This function can be called at any time and does not touch sampling parameters stored in the context. Returns a set of VBI_SLICED_*constants describing the remaining data services that will be decoded. - $rd->resize($start_a, $count_a, $start_b, $count_b) Grows or shrinks the internal state arrays for VBI geometry changes. Returns undef. - $n_lines = $rd->decode($ref, $buf) This is the main service offered by the raw decoder: Decodes a raw VBI image given in $ref, consisting of several scan lines of raw VBI data, into sliced VBI lines in $buf. The output is sorted by line number. The input $ref can either be a scalar filled by one of the "read" kind of capture functions (or any scalar filled with a byte string with the correct number of samples for the current geometry), or a blessed reference to an internal capture buffer as returned by the "pull" kind of capture functions. The format of the output buffer is the same as described for $cap->read_sliced(). Return value is the number of lines decoded. Note this function attempts to learn which lines carry which data service, or none, to speed up decoding. Hence you must use different raw decoder contexts for different devices. Video::ZVBI::dvb_mux These functions convert raw and/or sliced VBI data to a DVB Packetized Elementary Stream or Transport Stream as defined in EN 300 472 "Digital Video Broadcasting (DVB); Specification for conveying ITU-R System B Teletext in DVB bitstreams" and EN 301 775 "Digital Video Broadcasting (DVB); Specification for the carriage of Vertical Blanking Information (VBI) data in DVB bitstreams". Note EN 300 468 "Digital Video Broadcasting (DVB); Specification for Service Information (SI) in DVB systems" defines another method to transmit VPS data in DVB streams. Libzvbi does not provide functions to generate SI tables but the encode_dvb_pdc_descriptor() function is available to convert a VPS PIL to a PDC descriptor (since version 0.3.0) Available: All of the functions in this sction are available only since libzvbi version 0.2.26 - $mx = pes_new( [$callback, $user_data] ) Creates a new DVB VBI multiplexer converting raw and/or sliced VBI data to MPEG-2 Packetized Elementary Stream (PES) packets as defined in the standards EN 300 472 and EN 301 775. Returns undefupon error. Parameter $callback specifies a handler which is called by $mx->feed() when a new packet is available. must be omitted if $mx->cor() is used. The $user_data is passed through to the handler. For further callback parameters see the description of the feed function. - $mx = ts_new($pid [, $callback, $user_data] ) Allocates a new DVB VBI multiplexer converting raw and/or sliced VBI data to MPEG-2 Transport Stream (TS) packets as defined in the standards EN 300 472 and EN 301 775. Returns undefupon error. Parameter $pid is a program ID that will be stored in the header of the generated TS packets. The value must be in range 0x0010 to 0x1FFE inclusive. Parameter $callback specifies a handler which is called by $mx->feed() when a new packet is available. Must be omitted if $mx->cor() is used. The $user_data is passed through to the handler. For further callback parameters see the description of the feed function. - $mx->mux_reset() This function clears the internal buffers of the DVB VBI multiplexer. After a reset call the $mx->cor() function will encode a new PES packet, discarding any data of the previous packet which has not been consumed by the application. - $mx->cor($buf, $buffer_left, $sliced, $sliced_left, $service_mask, $pts [, $raw, $sp]) This function converts raw and/or sliced VBI data to one DVB VBI PES packet or one or more TS packets as defined in EN 300 472 and EN 301 775, and stores them in the output buffer. If the returned $buffer_left value is zero and the returned $sliced_left value is greater than zero another call will be necessary to convert the remaining data. After a reset() call the cor() function will encode a new PES packet, discarding any data of the previous packet which has not been consumed by the application. Parameters: buffer will be used as output buffer for converted data. This scalar may be undefined; else it should have the length given in $buffer_left. $buffer_left the number of bytes available in $buffer, and will be decremented by number of bytes stored there. _left must contain the number of sliced VBI lines in the input buffer $sliced. It will be decremented by the number of successfully converted structures. On failure it will point at the offending line index (relative to the end of the sliced array.) $service_mask Only data services in this set will be encoded. Other data services in the sliced input buffer will be discarded without further checks. Create a set by ORing VBI_SLICED_*constants. $pts containts the presentation time stamp which constraints described for feed() below. The function returns 0 on failures, which may occur under the following curcumstances: * The maximum PES packet size, or the value selected with $mx->set_pes_packet_size(), is too small to contain all the sliced and raw VBI data. * The sliced array is not sorted by ascending line number, except for elements with line number 0 (undefined). * Only the following data services can be encoded: (1) VBI_SLICED_TELETEXT_Bon lines 7 to 22 and 320 to 335 inclusive, or with line number 0 (undefined). All Teletext lines will be encoded with data_unit_id 0x02 ("EBU Teletext non-subtitle data"). (2) VBI_SLICED_VPSon line 16. (3) VBI_SLICED_CAPTION_625on line 22. (4) VBI_SLICED_WSS_625on line 23. (5) Raw VBI data with id VBI_SLICED_VBI_625can be encoded on lines 7 to 23 and 320 to 336 inclusive. Note for compliance with the Teletext buffer model defined in EN 300 472, EN 301 775 recommends to encode at most one raw and one sliced, or two raw VBI lines per frame. * A vbi_sliced structure contains a line number outside the valid range specified above. * parameter $raw is undefined although the sliced array contains a structure with id VBI_SLICED_VBI_625. * One or more members of the $sp structure are invalid. * A vbi_sliced structure with id VBI_SLICED_VBI_625contains a line number outside the ranges defined by $sp. On all errors $sliced_left will refer to the offending sliced line in the index buffer (i.e. relative to the end of the buffer) and the output buffer remains unchanged. - $mx->feed($sliced, $sliced_lines, $service_mask, $pts [, $raw, $sp]) This function converts raw and/or sliced VBI data to one DVB VBI PES packet or one or more TS packets as defined in EN 300 472 and EN 301 775. To deliver output, the callback function passed to pes_new() or ts_new() is called once for each PES or TS packet. Parameters: _lines number of valid lines in the $sliced input buffer. $service_mask Only data services in this set will be encoded. Other data services in the sliced buffer will be discarded without further checks. Create a set by ORing VBI_SLICED_*constants. $pts This Presentation Time Stamp following additional constraints: * videostd_set must contain one or more bits from the VBI_VIDEOSTD_SET_625_50. * scanning must be 625 (libzvbi 0.2.x only) * sampling_format must be VBI_PIXFMT_Y8or VBI_PIXFMT_YUV420. Chrominance samples are ignored. * sampling_rate must be 13500000. * offset must be >= 132. * samples_per_line (in libzvbi 0.2.x bytes_per_line) must be >= 1. * offset + samples_per_line must be <= 132 + 720. * synchronous must be set. The function returns 0 on failures. For a description of failure conditions see cor() above. - $mx->get_data_identifier() Returns the data_identifier the multiplexer encodes into PES packets. - $ok = $mx->set_data_identifier($data_identifier) This function can be used to determine the $data_identifier byte to be stored in PES packets. For compatibility with decoders compliant to EN 300 472 this should be a value in the range 0x10 to 0x1F inclusive. The values 0x99 to 0x9B inclusive as defined in EN 301 775 are also permitted. The default data_identifier is 0x10. Returns 0 if $data_identifier is outside the valid range. - $size = $mx->get_min_pes_packet_size() Returns the maximum size of PES packets the multiplexer generates. - $size = $mx->get_max_pes_packet_size() Returns the minimum size of PES packets the multiplexer generates. - $ok = $mx->set_pes_packet_size($min_size, $max_size) Determines the minimum and maximum total size of PES packets generated by the multiplexer, including all header bytes. When the data to be stored in a packet is smaller than the minimum size, the multiplexer will fill the packet up with stuffing bytes. When the data is larger than the maximum size the feed() and cor() functions will fail. The PES packet size must be a multiple of 184 bytes, in the range 184 to 65504 bytes inclusive, and this function will round $min_size up and $max_size down accordingly. If after rounding the maximum size is lower than the minimum, it will be set to the same value as the minimum size. The default minimum size is 184, the default maximum 65504 bytes. For compatibility with decoders compliant to the Teletext buffer model defined in EN 300 472 the maximum should not exceed 1472 bytes. Returns 0 on failure (out of memory) The next functions provide similar functionality as described above, but are special as they work without a dvb_mux object. Meaning and use of parameters is the same as described above. - Video::ZVBI::dvb_multiplex_sliced($buf, $buffer_left, $sliced, $sliced_left, $service_mask, $data_identifier, $stuffing) Converts the sliced VBI data in the $sliced buffer to VBI data units as defined in EN 300 472 and EN 301 775 and stores them in $buf as output buffer. - Video::ZVBI::dvb_multiplex_raw($buf, $buffer_left, $raw, $raw_left, $data_identifier, $videostd_set, $line, $first_pixel_position, $n_pixels_total, $stuffing) Converts one line of raw VBI samples in $raw to one or more "monochrome 4:2:2 samples" data units as defined in EN 301 775, and stores them in the $buf output buffer. Parameters: $line The ITU-R line number to be encoded in the data units. It must not change until all samples have been encoded. $first_pixel_position The horizontal offset where decoders shall insert the first sample in the VBI, counting samples from the start of the digital active line as defined in ITU-R BT.601. Usually this value is zero and $n_pixels_total is 720. $first_pixel_position + $n_pixels_total must not be greater than 720. This parameter must not change until all samples have been encoded. $n_pixels_total Total size of the raw input buffer in bytes, and the total number of samples to be encoded. Initially this value must be equal to $raw_left, and it must not change until all samples have been encoded. Remaining parameters are the same as described above. Note: According to EN 301 775 all lines stored in one PES packet must belong to the same video frame (but the data of one frame may be transmitted in several successive PES packets). They must be encoded in the same order as they would be transmitted in the VBI, no line more than once. Samples may have to be split into multiple segments and they must be contiguously encoded into adjacent data units. The function cannot enforce this if multiple calls are necessary to encode all samples. Video::ZVBI::dvb_demux Separating VBI data from a DVB PES stream (EN 301 472, EN 301 775). - $dvb = Video::ZVBI::dvb_demux::pes_new( [$callback [, $user_data]] ) Creates a new DVB VBI demultiplexer context taking a PES stream as input. Returns a reference to the newly allocated DVB demux context. The optional callback parameters should only be present if decoding will occur via the $dvb>feed() method. The function referenced by $callback will be called inside of $dvb->feed() whenever new sliced data is available. Optional parameter $user_data is appended to the callback parameters. See $dvb>feed() for additional details. - $dvb->reset() Resets the DVB demux to the initial state as after creation. Intended to be used after channel changes. - $n_lines = $dvb->cor($sliced, $sliced_lines, $pts, $buf, $buf_left) This function takes an arbitrary number of DVB PES data bytes in $buf, filters out PRIVATE_STREAM_1 packets, filters out valid VBI data units, converts them to sliced buffer format and stores the data at $sliced. Usually the function will be called in a loop: $left = length($buffer); while ($left > 0) { $n_lines = $dvb->cor ($sliced, 64, $pts, $buffer, $left); if ($n_lines > 0) { $vt->decode($sliced, $n_lines, pts_conv($pts)); } } Input parameters: $buf contains data read from a DVB device (needs not align with packet boundaries.) Note you must not modify the buffer until all data is processed as indicated by $buf_left being zero (unless you remove processed data and reset the left count to zero.) $buffer_left specifies the number of unprocessed bytes (at the end of the buffer.) This value is decremented in each call by the number of processed bytes. Note the packet filter works faster with larger buffers. $sliced_lines specifies the maximum number of sliced lines expected as result. Returns the number of sliced lines stored in $sliced. May be zero if more data is needed or the data contains errors. Demultiplexed sliced data is stored in $sliced. You must not change the contents until a frame is complete (i.e. the function returns a non-value.) $pts returns the Presentation Time Stamp associated with the first line of the demultiplexed frame. Note: Demultiplexing of raw VBI data is not supported yet, raw data will be discarded. - $ok = $dvb->feed($buf) This function takes an arbitrary number of DVB PES data bytes in $buf, filters out PRIVATE_STREAM_1 packets, filters out valid VBI data units, converts them to vbi_sliced format and calls the callback function given during creation of the context. Returns 0 if the data contained errors. The function is similar to $dvb->cor(), but uses an internal buffer for sliced data. Since this function does not return sliced data, it's only useful if you have installed a handler. Do not mix calls to this function with $dvb->cor(). The callback function is called with the following parameters: $ok = &$callback($sliced_buf, $n_lines, $pts, $user_data); $sliced is a reference to a buffer holding sliced data; the reference has the same type as returned by capture functions. $n_lines specifies the number of valid lines in the buffer. $pts is the timestamp. The last parameter is $user_data, if given during creation. The handler should return 1 on success, 0 on failure. Note: Demultiplexing of raw VBI data is not supported yet, raw data will be discarded. - $dvb->set_log_fn($mask [, $log_fn [, $user_data]]) The DVB demultiplexer supports the logging of errors in the PES stream and information useful to debug the demultiplexer. With this function you can redirect log messages generated by this module from general log function Video::ZVBI::set_log_fn() to a different function or enable logging only in the DVB demultiplexer. The callback can be removed by omitting the handler name. Input parameters: $mask specifies which kind of information to log; may be zero. $log_fn is a reference to the handler function. Optional $user_data is passed through to the handler. The handler is called with the following parameters: level, $context, $message and, if given, $user_data. Note: Kind and contents of log messages may change in the future. Video::ZVBI::idl_demux The functions in since section decode data transmissions in Teletext Independent Data Line packets (EN 300 708 section 6), i.e. data transmissions based on packet 8/30. - $idl = Video::ZVBI::idl_demux::new($channel, $address [, $callback, $user_data] ) Creates and returns a new Independent Data Line format A (EN 300 708 section 6.5) demultiplexer. $channel filter out packets of this channel. $address filter out packets with this service data address. Optional: $callback is a hanlder to be called by $idl->feed() when new data is available. If present, $user_data is passed through to the handler function. - $idl->reset(dx) Resets the IDL demux context, useful for example after a channel change. - $ok = $idl->feed($buf) This function takes a stream of Teletext packets, filters out packets of the desired data channel and address and calls the handler given context creation when new user data is available. Parameter $buf is a scalar containing a teletext packet's data (at last 42 bytes, i. e. without clock run-in and framing code), as returned by the slicer functions. The function returns 0 if the packet contained incorrectable errors. Parameters to the handler are: $buffer, $flags, $user_data. - $ok = $idl->feed_frame($sliced_buf, $n_lines) This function works like $idl-::pfc_demux Separating data transmitted in Page Function Clear Teletext packets (ETS 300 708 section 4), i.e. using regular packets on a dedicated teletext page. - $pfc = Video::ZVBI::pfc_demux::new($pgno, $stream [, $callback, $user_data] ) Creates and returns a new demultiplexer context. Parameters: $page specifies the teletext page on which the data is transmitted. $stream is the stream number to be demultiplexed. Optional parameter $callback is a reference to a handler to be called by $pfc->feed() when a new data block is available. Is present, $user_data is passed through to the handler. - $pfc->reset() Resets the PFC demux context, useful for example after a channel change. - $pfc->feed($buf) This function takes a raw stream of Teletext packets, filters out the requested page and stream and assembles the data transmitted in this page in an internal buffer. When a data block is complete it calls the handler given during creation. The handler is called with the following parameters: $pgno is the page number given during creation; $stream is the stream in which the block was received; $application_id is the application ID of the block; $block is a scalar holding the block's data; optional $user_data is passed through from the creation. - $ok = $pfc->feed_frame($sliced_buf, $n_lines) This function works like $pfc-::xds_demux Separating XDS data from a Closed Caption stream (EIA 608). - $xds = Video::ZVBI::xds_demux::new( [$callback, $user_data] ) Creates and returns a new Extended Data Service (EIA 608) demultiplexer. The optional parameters $callback and $user_data specify a handler and passed-through parameter which is called when a new packet is available. - $xds->reset() Resets the XDS demux, useful for example after a channel change. - $xds->feed($buf) This function takes two successive bytes of a raw Closed Caption stream, filters out XDS data and calls the handler function given during context creation when a new packet is complete. Parameter $buf is a scalar holding data from NTSC line 284 (as returned by the slicer functions.) Only the first two bytes in the buffer hold valid data. Returns 0 if the buffer contained parity errors. The handler is called with the following parameters: $xds_class is the XDS packet class, i.e. one of the VBI_XDS_CLASS_*constants. $xds_subclass holds the subclass; meaning depends on the main class. $buffer is a scalar holding the packet data (already parity decoded.) optional $user_data is passed through from the creation. - $ok = $xds->feed_frame($sliced_buf, $n_lines) This function works like $xds-::vt This section describes high level decoding functions. Input to the decoder functions in this section is sliced data, as returned from capture objects (Video::ZVBI::capture) or the raw decoder (Video::ZVBI::rawdec) - $vt = Video::ZVBI::vt::decoder_new() Creates and returnes a new data service decoder instance. Note the type of data services to be decoded is determined by the type of installed callbacks. Hence you must install at least one callback using $vt->event_handler_register(). - $vt->decode($buf, $n_lines, $timestamp) This is the main service offered by the data service decoder: Decodes zero or more lines of sliced VBI data from the same video frame, updates the decoder state and invokes callback functions for registered events. The function always returns undef. Input parameters: $buf is either a blessed reference to a slicer buffer, or a scalar with a byte string consisting of sliced data. $n_lines gives the number of valid lines in the sliced data buffer and should be exactly the value returned by the slicer function. $timestamp specifies the capture instant of the input data in seconds and fractions since 1970-01-01 00:00 in double format. The timestamps are expected to advance by 1/30 to 1/25 seconds for each call to this function. Different steps will be interpreted as dropped frames, which starts a resynchronization cycle, eventually a channel switch may be assumed which resets even more decoder state. So this function must be called even if a frame did not contain any useful data (with parameter $n_lines = 0) - $vt->channel_switched( [$nuid] ) Call this after switching away from the channel (RF channel, video input line, ... - i.e. after switching the network) from which this context used to receive VBI data, to reset the decoding context accordingly. This includes deletion of all cached Teletext and Closed Caption pages from the cache. Optional parameter $nuid is currently unused by libzvbi and dfaults to zero. The decoder attempts to detect channel switches automatically, but this does not work reliably, especially when not receiving and decoding Teletext or VPS (since only these usually transmit network identifiers frequently enough.) Note the reset is not executed until the next frame is about to be decoded, so you may still receive "old" events after calling this. You may also receive blank events (e. g. unknown network, unknown aspect ratio) revoking a previously sent event, until new information becomes available. - ($type, $subno, $lang) = $vt->classify_page($pgno) This function queries information about the named page. The return value is a list consisting of three scalars. Their content depends on the data service to which the given page belongs: For Closed Caption pages ($pgno value in range 1 ... 8) $subno will always be zero, $language set or an empty string. $type will be VBI_SUBTITLE_PAGEfor page 1 ... 4 (Closed Caption channel 1 ... 4), VBI_NORMAL_PAGEfor page 5 ... 8 (Text channel 1 ... 4), or VBI_NO_PAGEif no data is currently transmitted on the channel. For Teletext pages ($pgno in range hex 0x100 ... 0x8FF) $subno returns the highest subpage number used. Note this number can be larger (but not smaller) than the number of sub-pages actually received and cached. Still there is no guarantee the advertised sub-pages will ever appear or stay in cache. Special value 0 means the given page is a "single page" without alternate sub-pages. (Hence value 1 will never be used.) $language currently returns the language of subtitle pages, or an empty string if unknown or the page is not classified as VBI_SUBTITLE_PAGE. Note: The information returned by this function is volatile: When more information becomes available, or when pages are modified (e. g. activation of subtitles, news updates, program related pages) subpage numbers can increase or page types and languages can change. - $vt->set_brightness($brightness) Change brightness of text pages, this affects the color palette of pages fetched with $vt->fetch_vt_page() and $vt->fetch_cc_page(). Parameter $brightness is in range 0 ... 255, where 0 is darkest, 255 brightest. Brightness value 128 is default. - $vt->set_contrast($contrast) Change contrast of text pages, this affects the color palette of pages fetched with $vt->fetch_vt_page() and $vt->fetch_cc_page(). Parameter $contrast is in range -128 to 127, where -128 is inverse, 127 maximum. Contrast value 64 is default. - $vt->teletext_set_default_region($default_region) The original Teletext specification distinguished between eight national character sets. When more countries started to broadcast Teletext the three bit character set id was locally redefined and later extended to seven bits grouping the regional variants. Since some stations still transmit only the legacy three bit id and we don't ship regional variants of this decoder as TV manufacturers do, this function can be used to set a default for the extended bits. The "factory default" is 16. Parameter $default_region is a value between 0 ... 80, index into the Teletext character set table according to ETS 300 706, Section 15 (or libzvbi source file lang.c). The three last significant bits will be replaced. Fetches a Teletext page designated by parameters $pgno and $subno from the cache, formats and returns it as a blessed reference to a page object of type Video::ZVBI::page. The reference can then be passed to the various libzvbi methods working on page objects, such as the export functions. The function returns undefif the page is not cached or could not be formatted for other reasons, for instance is a data page not intended for display. Level 2.5/3.5 pages which could not be formatted e. g. due to referencing data pages not in cache are formatted at a lower level. Further input parameters: If $subno is VBI_ANY_SUBNOthen the newest sub-page of the given page is returned. $max_level is one of the VBI_WST_LEVEL_*constants and specifies the Teletext implementation level to use for formatting. $display_rows limits rendering to the given number of rows (i.e. row 0 ... $display_rows - 1) In practice, useful values are 1 (format the page header row only) or 25 (complete page). Boolean parameter $navigation can be used to skip parsing the page for navigation links to save formatting time. The last three parameters are optional and default to VBI_WST_LEVEL_3p5, 25 and 1 respectively. Although safe to do, this function is not supposed to be called from an event handler since rendering may block decoding for extended periods of time. The returned reference must be destroyed to release resources which are locked internally in the library during the fetch. The destruction is done automatically when a local variable falls out of scope, or it can be forced by use of Perl's undef operator. - $pg = $vt->fetch_cc_page($pgno, $reset) Fetches a Closed Caption page designated by $pgno from the cache, formats and returns it and as a blessed reference to a page object of type Video::ZVBI::page. Returns undefupon errors. Closed Caption pages are transmitted basically in two modes: at once and character by character ("roll-up" mode). Either way you get a snapshot of the page as it should appear on screen at the present time. With $vt->event_handler_register() you can request a VBI_EVENT_CAPTIONevent to be notified about pending changes (in case of "roll-up" mode that is with each new word received) and the vbi_page->dirty fields will mark the lines actually in need of updates, to speed up rendering. If the $reset parameter is set to 1, the page dirty flags in the cached paged are reset after fetching. Pass 0 only if you plan to call this function again to update other displays. If omitted, the parameter defaults to 1. Although safe to do, this function is not supposed to be called from an event handler, since rendering may block decoding for extended periods of time. - $yes_no = $vt->is_cached($pgno, $subno) This function queries if the page specified by parameters $pgno and $subno is currently available in the cache. The result is 1 if yes, else 0. This function is deprecated for reasons of forwards compatibility: At the moment pages can only be added to the cache but not removed unless the decoder is reset. That will change, making the result volatile in a multithreaded environment. - $subno = $vt->cache_hi_subno($pgno) This function queries the highest cached subpage of the page page specified by parameter $pgno. This function is deprecated for the same reason as $vt->is_cached() - $title = $vt->page_title($pgno, $subno) The function makes an effort to deduce a page title to be used in bookmarks or similar purposes for the page specified by parameters $pgno and $subno. The title is mainly derived from navigation data on the given page. The function returns the title or undefupon error. Typically the transmission of VBI data elements like a Teletext or Closed Caption page spans several VBI lines or even video frames. So internally the data service decoder maintains caches accumulating data. When a page or other object is complete it calls the respective event handler to notify the application. Clients can register any number of handlers needed, also different handlers for the same event. They will be called by the $vt->decode() function in the order in which they were registered. Since decoding is stopped while in the callback, the handlers should return as soon as possible. The handler function receives two parameters: First is the event type (i.e. one of the VBI_EVENT_* constants), second a hash reference describing the event. See libzvbi for a definition of contents. - $vt->event_handler_register($event_mask, $handler [, $user_data]) Registers a new event handler. $event_mask can be any 'or' of VBI_EVENT_*constants, -1 for all events and 0 for none. When the $handler function with $user_data is already registered, its event_mask will be changed. Any number of handlers can be registered, also different handlers for the same event which will be called in registration order. Apart of adding handlers this function also enables and disables decoding of data services depending on the presence of at least one handler for the respective data. A VBI_EVENT_TTX_PAGEhandler for example enables Teletext decoding. This function can be safely called at any time, even from inside of a handler. Note only 10 event callback functions can be registered in a script at the same time. Callbacks are automatically unregistered when the decoder object is destroyed. - $vt->event_handler_unregister($handler [, $user_data]) Unregisters the event handler $handler with parameter $user_data, if such a handler was previously registered. Apart of removing a handler this function also disables decoding of data services when no handler is registered to consume the respective data. Removing the last VBI_EVENT_TTX_PAGEhandler for example disables Teletext decoding. This function can be safely called at any time, even from inside of a handler removing itself or another handler, and regardless if the handler has been successfully registered. - $vt->event_handler_add($event_mask, $handler [, $user_data]) Depreceated: Installs $handler as event callback for the given events. When using this function you can install only a single event handler per decoder (note this is a stronger limitation than the one in libzvbi for this function.) For this reason the function is depreceated; use event_handler_register() in new code. The function returns boolean FALSE on failure, else TRUE. Parameters: $event_mask is one of the VBI_EVENT*constants and specifies the events the handler is waiting for. $handler is a reference to a handler function. The optional $user_data is stored internally and passed through in calls to the event handler function. - $vt->event_handler_remove($handler) Depreceated: This function removes an event handler function (if any) which was previously installed via $vt>event_handler_add(). Parameter $handler is a reference to the event handler which is to be removed (currently ignored as only one handler can be installed.) Use event_handler_register() and event_handler_unregister() in new code instead. The following event types are defined: - VBI_EVENT_NONE No event. - VBI_EVENT_CLOSE The vbi decoding context is about to be closed. This event is sent when the decoder object is destroyed and can be used to clean up event handlers. - VBI_EVENT_TTX_PAGE The vbi decoder received and cached another Teletext page designated by $ev->{pgno} and $ev->{subno}. $ev->{roll_header} flags the page header as suitable for rolling page numbers, e. g. excluding pages transmitted out of order. The $ev->{header_update} flag is set when the header, excluding the page number and real time clock, changed since the last VBI_EVENT_TTX_PAGE. Note this may happen at midnight when the date string changes. The $ev->{clock_update} flag is set when the real time clock changed since the last VBI_EVENT_TTX_PAGE(that is at most once per second). They are both set at the first VBI_EVENT_TTX_PAGEsent and unset while the received header or clock field is corrupted. If any of the roll_header, header_update or clock_update flags are set $ev->{raw_header} is a pointer to the raw header data (40 bytes), which remains valid until the event handler returns. $ev->{pn_offset} will be the offset (0 ... 37) of the three digit page number in the raw or formatted header. Allways call $vt-<Egtfetch_vt_page()> for proper translation of national characters and character attributes, the raw header is only provided here as a means to quickly detect changes. - VBI_EVENT_CAPTION A Closed Caption page has changed and needs visual update. The page or "CC channel" is designated by $ev->{pgno}. When the client is monitoring this page, the expected action is to call $vt->fetch_cc_page(). To speed up rendering, more detailed update information can be queried via $pg->get_page_dirty_range(). (Note the vbi_page will be a snapshot of the status at fetch time and not event time, i.e. the "dirty" flags accumulate all changes since the last fetch.) - VBI_EVENT_NETWORK Some station/network identifier has been received or is no longer transmitted (in the latter case all values are zero, eg. after a channel switch). The event will not repeat until a different identifier has been received and confirmed. (Note: VPS/TTX and XDS will not combine in real life, feeding the decoder with artificial data can confuse the logic.) The referenced hash contains the following elements: nuid, name, call, tape_delay, cni_vps, cni_8301, cni_8302, cycle. Minimum time to identify network, when data service is transmitted: VPS (DE/AT/CH only): 0.08 seconds; Teletext PDC or 8/30: 2 seconds; XDS (US only): unknown, between 0.1x to 10x seconds. - VBI_EVENT_TRIGGER Triggers are sent by broadcasters to start some action on the user interface of modern TVs. Until libzvbi implements all of WebTV and SuperTeletext the information available are program related (or unrelated) URLs, short messages and Teletext page links. This event is sent when a trigger has fired. The hash parameter contains the following elements: type, eacem, name, url, script, nuid, pgno, subno, expires, itv_type, priority, autoload. - VBI_EVENT_ASPECT The vbi decoder received new information (potentially from PAL WSS, NTSC XDS or EIA-J CPR-1204) about the program aspect ratio. The hash parameter contains the following elements: first_line, last_line, ratio, film_mode, open_subtitles. - VBI_EVENT_PROG_INFO We have new information about the current or next program. (Note this event is preliminary as info from Teletext is not implemented yet.) The referenced has contains the programme description including a lot of parameters. See libzvbi documentation for details. - VBI_EVENT_NETWORK_ID Like VBI_EVENT_NETWORK, but this event will also be sent when the decoder cannot determine a network name. Available: since libzvbi version 0.2.20 Video::ZVBI::page These are functions to render Teletext and Closed Caption pages directly into memory, essentially a more direct interface to the functions of some important export modules described in Video::ZVBI::export. All of the functions in this section work on page objects as returned by the page cache's "fetch" functions (see Video::ZVBI::vt) or the page search function (see Video::ZVBI::search) - $canvas = $pg->draw_vt_page($fmt=VBI_PIXFMT_RGBA32_LE, $reveal=0, $flash_on=0) Draw a complete Teletext page. Each teletext character occupies 12 x 10 pixels (i.e. a character is 12 pixels wide and each line is 10 pixels high. Note that this aspect ratio is not optimal for display, so pixel lines should be doubled. This is done automatically by the XPM conversion functions.) The image is returned in a scalar which contains a byte string. When using format VBI_PIXFMT_RGBA32_LE, each pixel consists of 4 subsequent bytes in the string (RGBA). Hence the string is 4 * 12 * $pg_columns * 10 * $pg_rowsbytes long, where $pg_columnsand $pg_rowsare the page width and height in teletext characters respectively. When using format VBI_PIXFMT_PAL8(only available with libzvbi version 0.2.26 or later) each pixel uses one byte. In this case each pixel value is an index into the color palette as delivered by $pg->get_page_color_map(). Note this function is just a convienence interface to $pg->draw_vt_page_region() which automatically inserts the page column, row, width and height parameters by querying page dimensions. The image width is set to the full page width (i.e. same as when passing value -1 for $img_pix_width) See the following function for descriptions of the remaining parameters. - $pg->draw_vt_page_region($fmt, $canvas, $img_pix_width, $col_pix_off, $row_pix_off, $column, $row, $width, $height, $reveal=0, $flash_on=0) Draw a sub-section of a Teletext page. Each character occupies 12 x 10 pixels (i.e. a character is 12 pixels wide and each line is 10 pixels high.) The image is written into $canvas. If the scalar is undefined or not large enough to hold the output image, the canvas is initialized as black. Else it's left as is. This allows to call the draw functions multiple times to assemble an image. In this case $img_pix_width must have the same value in all rendering calls. See also $pg->draw_blank(). The image is returned in a scalar which contains a byte string. With format VBI_PIXFMT_RGBA32_LEeach pixel uses 4 subsequent bytes in the string (RGBA). With format VBI_PIXFMT_PAL8(only available with libzvbi version 0.2.26 or later) each pixel uses one byte (reference into the color palette.) Input parameters: $fmt is the target format. Currently only VBI_PIXFMT_RGBA32_LEis supported (i.e. each pixel uses 4 subsequent bytes for R,G,B,A.) $canvas is a scalar into which the image is written. $img_pix_width is the distance between canvas pixel lines in pixels. When set to -1, the image width is automatically set to the width of the selected region (i.e. $pg_columns * 12bytes.) $col_pix_off and $row_pix_off are offsets to the upper left corner in pixels and define where in the canvas to draw the page section. $column is the first source column (range 0 ... pg->columns - 1); $row is the first source row (range 0 ... pg->rows - 1); $width is the number of columns to draw, 1 ... pg->columns; $height is the number of rows to draw, 1 ... pg->rows; Note all four values are given as numbers of teletext characters (not pixels.) Example to draw two pages stacked into one canvas: my $fmt = Video::ZVBI::VBI_PIXFMT_RGBA32_LE; my $canvas = $pg->draw_blank($fmt, 10 * 25 * 2); $pg_1->draw_vt_page_region($fmt, $canvas, -1, 0, 0, 0, 0, 40, 25); $pg_2->draw_vt_page_region($fmt, $canvas, -1, 0, 10 * 25, 0, 0, 40, 25); Optional parameter $reveal can be set to 1 to draw characters flagged as "concealed" as space (U+0020). Optional parameter $flash_on can be set to 1 to draw characters flagged "blink" (see vbi_char) as space (U+0020). To implement blinking you'll have to draw the page repeatedly with this parameter alternating between 0 and 1. - $canvas = $pg->draw_cc_page($fmt=VBI_PIXFMT_RGBA32_LE) Draw a complete Closed Caption page. Each character occupies 16 x 26 pixels (i.e. a character is 16 pixels wide and each line is 26 pixels high.) The image is returned in a scalar which contains a byte string. Each pixel uses 4 subsequent bytes in the string (RGBA). Hence the string is 4 * 16 * $pg_columns * 26 * $pg_rowsbytes long, where $pg_columnsand $pg_rowsare the page width and height in Closed Caption characters respectively. Note this function is just a convienence interface to $pg->draw_cc_page_region() which automatically inserts the page column, row, width and height parameters by querying page dimensions. The image width is set to the page width (i.e. same as when passing value -1 for $img_pix_width) - $pg->draw_cc_page_region($fmt, $canvas, $img_pix_width, $column, $row, $width, $height) Draw a sub-section of a Closed Caption page. Please refer to $pg->draw_cc_page() and $pg->draw_vt_page_region() for details on parameters and the format of the returned byte string. - $canvas = $pg->draw_blank($fmt, $pix_height, $img_pix_width) This function can be used to create a blank canvas onto which several Teletext or Closed Caption regions can be drawn later. All input parameters are optional: $fmt is the target format. Currently only VBI_PIXFMT_RGBA32_LEis supported (i.e. each pixel uses 4 subsequent bytes for R,G,B,A.) $img_pix_width is the distance between canvas pixel lines in pixels. $pix_height is the height of the canvas in pixels (note each Teletext line has 10 pixels and each Closed Caption line 26 pixels when using the above drawing functions.) When omitted, the previous two parameters are derived from the referenced page object. - $xpm = $pg->canvas_to_xpm($canvas [, $fmt, $aspect, $img_pix_width]) This is a helper function which converts the image given in $canvas from a raw byte string into XPM format. Due to the way XPM is specified, the output is a regular text string. (The result is suitable as input to Tk::Pixmap but can also be written into a file for passing the image to external applications.) Optional boolean parameter $aspect when set to 0, disables the aspect ration correction (i.e. on teletext pages all lines are doubled by default; closed caption output ration is already correct.) Optional parameter $img_pix_width if present, must have the same value as used when drawing the image. If this parameter is omitted or set to -1, the referenced page's full width is assumed (which is suitable for converting images generated by draw_vt_page() or draw_cc_page().) Note: Since libzvbi 0.2.26, you can also obtain XPM snapshots via the Video::ZVBI::export class. - $txt = $pg->print_page($table=0, $rtl=0) Print and return the referenced Teletext or Closed Caption page in text form, with rows separated by linefeeds ("\n".) All character attributes and colors will be lost. Graphics characters, DRCS and all characters not representable in UTF-8 will be replaced by spaces. When optional parameter $table is set to 1, the page is scanned in table mode, printing all characters within the source rectangle including runs of spaces at the start and end of rows. Else, sequences of spaces at the start and end of rows are collapsed into single spaces and blank lines are suppressed. Optional parameter $rtl is currently ignored; defaults to 0. - $txt = $pg->print_page_region($table, $rtl, $column, $row, $width, $height) Print and return a sub-section of the referenced Teletext or Closed Caption page in text form, with rows separated by linefeeds ("\n") All character attributes and colors will be lost. Graphics characters, DRCS and all characters not representable in UTF-8 will be replaced by spaces. $table Scan page in table mode, printing all characters within the source rectangle including runs of spaces at the start and end of rows. When 0, scan all characters from position {$column, $row}to {$column + $width - 1, $row + $height - 1}and all intermediate rows to the page's full columns width. In this mode runs of spaces at the start and end of rows are collapsed into single spaces, blank lines are suppressed. Parameter $rtl is currently ignored and should be set to 0. The next four parameters specify the page region: $column: first source column; $row: first source row; $width: number of columns to print; $height: number of rows to print. You can use $pg->get_page_size() below to determine the allowed ranges for these values, or use $pg->print_page() to print the complete page. - ($pgno, $subno) = $pg->get_page_no() This function returns a list of two scalars which contain the page and sub-page number of the referenced page object. Teletext page numbers are hexadecimal numbers in the range 0x100 .. 0x8FF, Closed Caption page numbers are in the range 1 .. 8. Sub-page numbers are used for teletext only. These are hexadecimal numbers in range 0x0001 .. 0x3F7F, i.e. the 2nd and 4th digit count from 0..F, the 1st and 3rd only from 0..3 and 0..7 respectively. A sub-page number zero means the page has no sub-pages. - ($rows, $columns) = $pg->get_page_size() This function returns a list of two scalars which contain the dimensions (i.e. row and column count) of the referenced page object. - ($y0, $y1, $roll) = $pg->get_page_dirty_range() To speed up rendering these variables mark the rows which actually changed since the page has been last fetched from cache. $y0 ... $y1 are the first to last row changed, inclusive. $roll indicates the page has been vertically scrolled this number of rows, negative numbers up (towards lower row numbers), positive numbers down. For example -1 means row y0 + 1 ... y1moved to y0 ... y1 - 1, erasing row $y1 to all spaces. Practically this is only used in Closed Caption roll-up mode, otherwise all rows are always marked dirty. Clients are free to ignore this information. - $av = $pg->get_page_color_map() The function returns a reference to an array with 40 entries which contains the page color palette. Each array entry is a 24-bit RGB value (i.e. three 8-bit values for red, green, blue, with red in the lowest bits) To convert this into the usual #RRGGBB syntax use: sprintf "#%02X%02X%02X", $rgb&0xFF, ($rgb>>8)&0xFF, ($rgb>>16)&0xFF - $av = $pg->get_page_text_properties() The function returns a reference to an array which contains the properties of all characters on the given page. Each element in the array is a bitfield. The members are (in ascending order, width in bits given behind the colon): foreground:8, background:8, opacity:4, size:4, underline:1, bold:1, italic:1, flash:1, conceal:1, proportional:1, link:1. - $txt = $pg->get_page_text( [$all_chars] ) The function returns the complete page text in form of an UTF-8 string. This function is very similar to $pg->print_page(), but does not insert or remove any characters so that it's guaranteed that characters in the returned string correlate exactly with the array returned by $pg->get_page_text_properties(). Note since UTF-8 is a multi-byte encoding, the length of the string in bytes may be different from the length in characters. Hence you should access the variable with string manipulation functions only (e.g. substr()) When the optional parameter $all_chars is set to 1, even characters on the private Unicode code pages are included. Otherwise these are replaced with blanks. Note use of these characters will cause warnings when passing the string to transcoder functions (such as Perl's encode() or print.) - $href = $pg->vbi_resolve_link($column, $row) The referenced page $pg (in practice only Teletext pages) may contain hyperlinks such as HTTP URLs, e-mail addresses or links to other pages. Characters being part of a hyperlink have their "link" flag set in the character properties (see $pg->get_page_text_properties()), this function returns a reference to a hash with a more verbose description of the link. The returned hash contains the following elements (depending on the type of the link not all elements may be present): type, eacem, name, url, script, nuid, pgno, subno, expires, itv_type, priority, autoload. - $href = $pg->vbi_resolve_home() All Teletext pages have a built-in home link, by default page 100, but can also be the magazine intro page or another page selected by the editor. This function returns a hash reference with the same elements as $pg->vbi_resolve_link(). - $pg->unref_page() This function can be use to de-reference the given page (see also $vt->fetch_vt_page() and $vt->fetch_cc_page()) The call is equivalent to using Perl's undef operator on the page reference (i.e. undef $pg;) Note use of this operator is deprecated in Perl. It's recommended to instead assign page references to local variables (i.e. declared with my) so that the page is automatically destroyed when the function or block which works on the reference is left. Video::ZVBI::export Once libzvbi received, decoded and formatted a Teletext or Closed Caption page you will want to render it on screen, print it as text or store it in various formats. libzvbi provides export modules converting a page object into the desired format or rendering directly into an image. - $exp = Video::ZVBI::export::new($keyword, $errstr) Creates a new export module object to export a VBI page object in the respective module format. As a special service you can initialize options by appending to the $keyword parameter like this: $keyword = "keyword; quality=75.5, comment=\"example text\""; Note: A quick overview of all export formats and options can be ptained by running the demo script examples/explist.pl in the ZVBI package. - $href = Video::ZVBI::export::info_enum($index) Enumerates all available export modules. You should start with $index 0, incrementing until the function returns undef. Some modules may depend on machine features or the presence of certain libraries, thus the list can vary from session to session. On success the function returns a reference to an hash with the following elements: keyword, label, tooltip, mime_type, extension. - $href = Video::ZVBI::export::info_keyword($keyword) Similar to the above function info_enum(), this function returns info about available modules, although this one searches for an export module which matches $keyword. If no match is found the function returns undef, else a hash reference as described above. - $href = $exp->info_export() Returns the export module info for the export object referenced by $exp. On success a hash reference as described for the previous two functions is returned. - $href = $exp->option_info_enum($index) Enumerates the options available for the referenced export module. You should start at $index 0, incrementing until the function returns undef. On success, the function returns a reference to a hash with the following elements: type, keyword, label, min, max, step, def, menu, tooltip. The content format of min, max, step and def depends on the type, i.e. it may be an integer, double or string - but usually you don't have to worry about that in Perl. If present, menu is an array reference. Elements in the array are of the same type as min, max, etc. If no label or tooltip are available for the option, these elements are undefined. - $href = $exp->option_info_keyword($keyword) Similar to the above function $exp->option_info_enum() this function returns info about available options, although this one identifies options based on the given $keyword. - $exp->option_set($keyword, $opt) Sets the value of the option named by $keword to $opt. Returns 0 on failure, 1 on success. Example: $exp->option_set('quality', 75.5); Note the expected type of the option value depends on the keyword. The ZVBI interface module automatically converts the option into type expected by the libzvbi library. Mind that options of type VBI_OPTION_MENUmust be set by menu entry number (integer), all other options by value. If necessary it will be replaced by the closest value possible. Use function $exp->option_menu_set() to set options with menu by menu entry. - $opt = $exp->option_get($keyword) This function queries and returns the current value of the option named by $keyword. Returns undefupon error. Similar to $exp->option_set() this function sets the value of the option named by $keyword to $entry, however it does so by number of the corresponding menu entry. Naturally this must be an option with menu. Similar to $exp->option_get() this function queries the current value of the option named by $keyword, but returns this value as number of the corresponding menu entry. Naturally this must be an option with menu. - $exp->stdio($io, $pg) This function writes contents of the page given in $pg, converted to the respective export module format, to the stream $io. The caller is responsible for opening and closing the stream, don't forget to check for I/O errors after closing. Note this function may write incomplete files when an error occurs. The function returns 1 on success, else 0. You can call this function as many times as you want, it does not change state of the export or page objects. - $exp->file($name, $pg) This function writes contents of the page given in $pg, converted to the respective export module format, into a new file specified by $name. When an error occurs the file will be deleted. The function returns 1 on success, else 0. You can call this function as many times as you want, it does not change state of the export or page objects. - $data = $exp->alloc($pg) This functions renders the page $pg and returns it as a (byte-)string. Returns undef if the function fails. Available: since libzvbi version 0.2.26 - $size = $exp->mem($data, $pg) This functions renders the page $pg into scalar $data. The size of the scalar must be large enough to hold all of the data. The result is -1 upon internal errors, or the size of the output. You must check if the size is larger then the length of $data: my $sz = $ex->mem($img, $page); die "Export failed: ". $exp->errstr() ."\n" if $s < 0; die "Export failed: Buffer too small.\n" if $sz > length($img); Usually you should get the same performance from the $exp->alloc() variant, which has much simpler semantics. Note you can also use $exp->alloc() to determine the size. For image formats without compression the output size will usually be the same for all pages with the same dimensions. Available: since libzvbi version 0.2.26 - $text = $exp->errstr() When an export function failed, this function returns a string with a more detailed error description. Video::ZVBI::search The functions in this section allow to search across one or more Teletext pages in the cache for a given sub-string or a regular expression. - $search = Video::ZVBI::new($vt, $pgno, $subno, $pattern, $casefold=0, $regexp=0, $progress=NULL, $user_data=NULL) Create a search context and prepare for searching the Teletext page cache with the given expression. Regular expression searching supports the standard set of operators and constants, with these extensions: Input Parameters: $pgno and $subno specify the number of the first (forward) or last (backward) page to visit. Optionally VBI_ANY_SUBNOcan be used for $subno. $pattern contains the search pattern (encoded in UTF-8, but usually you won't have to worry about that when using Perl; use Perl's Encode module to search for characters which are not supported in your current locale.) Boolean $casefold can be set to 1 to make the search case insensitive; default is 0. Boolean $regexp must be set to 1 when the search pattern is a regular expression; default is 0. If present, $progress can be used to pass a reference to a function which will be called for each scanned page. When the function returns 0, the search is aborted. The callback function receives as only parameter a reference to the search page. Use $pg->get_page_no() to query the page number for a progress display. Note due to internal limitations only 10 search callback functions can be registered in a script at the same time. Callbacks are automatically unregistered when the search object is destroyed. Note: The referenced page is only valid while inside of the callback function (i.e. you must not assign the reference to a variable outside if the scope of the handler function.) \x....or \X.... Hexadecimal number of up to 4 digits \u....or \U.... Hexadecimal number of up to 4 digits :title: Unicode specific character class :gfx: Teletext G1 or G3 graphic :drcs: Teletext DRCS \pN1,N2,...,Nn Character properties class \PN1,N2,...,Nn Negated character properties class Property definitions: alphanumeric alpha control digit graphical lowercase printable punctuation space uppercase hex digit title defined wide nonspacing Teletext G1 or G3 graphics Teletext DRCS Character classes can contain literals, constants, and character property classes. Example: [abc\U10A\p1,3,4]. Note double height and size characters will match twice, on the upper and lower row, and double width and size characters count as one (reducing the line width) so one can find combinations of normal and enlarged characters. Note: In a multithreaded application the data service decoder may receive and cache new pages during a search session. When these page numbers have been visited already the pages are not searched. At a channel switch (and in future at any time) pages can be removed from cache. All this has yet to be addressed. - $status = $search->next($pgref, $dir) The function starts the search on a previously created search context. Parameter $dir specifies the direction: 1 for forwards, or -1 for backwards search. The function returns a status code which is one of the following constants: - VBI_SEARCH_ERROR Pattern not found, pg is invalid. Another vbi_search_next() will restart from the original starting point. - VBI_SEARCH_CACHE_EMPTY No pages in the cache, pg is invalid. - VBI_SEARCH_CANCELED The search has been canceled by the progress function. $pg points to the current page as in success case, except for the highlighting. Another $search->next() continues from this page. - VBI_SEARCH_NOT_FOUND Some error occured, condition unclear. - VBI_SEARCH_SUCCESS Pattern found. $pgref points to the page ready for display with the pattern highlighted. If and only if the function returns VBI_SEARCH_SUCCESS, $pgref is set to a reference to the matching page. Miscellaneous (Video::ZVBI) - lib_version() Returns the version of the ZVBI library. - set_log_fn($mask [, $log_fn [, $user_data ]] ) Various functions can print warnings, errors and information useful to debug the library. With this function you can enable these messages and determine a function to print them. (Note: The kind and contents of messages logged by particular functions may change in the future.) Parameters: $mask specifies which kind of information to log. It's a bit-wise OR of zero or more of the constants VBI_LOG_ERROR, VBI_LOG_WARNING, VBI_LOG_NOTICE, VBI_LOG_INFO, VBI_LOG_DEBUG, VBI_LOG_DRIVER, VBI_LOG_DEBUG2, VBI_LOG_DEBUG3. $log_fn is a reference to a function to be called with log messages. Omit this parameter to disable logging. The log handler is called with the following parameters: level is one of the VBI_LOG_*constants; $context which is a text string describing the module where the event occured; $message the actual error message; finally, if passed during callback definition, a $user_data parameter. Note only 10 event log functions can be registered in a script at the same time. Available: since libzvbi version 0.2.22 - set_log_on_stderr($mask) This function enables error logging just like set_log_fn(), but uses the library's internal log function which prints all messages to stderr, i.e. on the terminal. $mask is a bit-wise OR of zero or more of the VBI_LOG_*constants. The mask specifies which kind of information to log. To disable logging call set_log_fn(0), i.e. without passing a callback function reference. Available: since libzvbi version 0.2.22 - par8(val) This function encodes the given 7-bit value with Parity. The result is an 8-bit value in the range 0..255. - unpar8(val) This function decodes the given Parity encoded 8-bit value. The result is a 7-bit value in the range 0...127 or a negative value when a parity error is detected. (Note: to decode parity while ignoring errors, simply mask out the highest bit, i.e. $val &= 0x7F) - par_str(data) This function encodes a string with parity in place, i.e. the given string contains the result after the call. - unpar_str(data) This function decodes a Parity encoded string in place, i.e. the parity bit is removed from all characters in the given string. The result is negative when a decoding error is detected, else the result is positive or zero. - rev8(val) This function reverses the order of all bits of the given 8-bit value and returns the result. This conversion is required for decoding certain teletext elements which are transmitted MSB first instead of the usual LSB first (the teletext VBI slicer already inverts the bit order so that LSB are in bit #0) - rev16(val) This function reverses the order of all bits of the given 16-bit value and returns the result. - rev16p(data, offset=0) This function reverses 2 bytes from the string representation of the given scalar at the given offset and returns them as a numerical value. - ham8(val) This function encodes the given 4-bit value (i.e. range 0..15) with Hamming-8/4. The result is an 8-bit value in the range 0..255. - unham8(val) This function decodes the given Hamming-8/4 encoded value. The result is a 4-bit value, or -1 when there are uncorrectable errors. - unham16p(data, offset=0) This function decodes 2 Hamming-8/4 encoded bytes (taken from the string in parameter "data" at the given offset) The result is an 8-bit value, or -1 when there are uncorrectable errors. - unham24p(data, offset=0) This function decodes 3 Hamming-24/18 encoded bytes (taken from the string in parameter "data" at the given offset) The result is an 8-bit value, or -1 when there are uncorrectable errors. - dec2bcd($dec) Converts a two's complement binary in range 0 ... 999 into a packed BCD number in range 0x000 ... 0x999. Extra digits in the input are discarded. - $dec = bcd2dec($bcd) Converts a packed BCD number in range 0x000 ... 0xFFF into a two's complement binary in range 0 ... 999. Extra digits in the input will be discarded. - add_bcd($bcd1, $bcd2) Adds two packed BCD numbers, returning a packed BCD sum. Arguments and result are in range 0xF0000000 ... 0x09999999, that is -10**7 ... +10**7 - 1 in decimal notation. To subtract you can add the 10's complement, e. g. -1 = 0xF9999999. The return value is a packed BCD number. The result is undefined when any of the arguments contain hex digits 0xA ... 0xF. - is_bcd($bcd) Tests if $bcd forms a valid BCD number. The argument must be in range 0x00000000 ... 0x09999999. Return value is 0 if $bcd contains hex digits 0xA ... 0xF. - vbi_decode_vps_cni(data) This function receives a sliced VPS line and returns a 16-bit CNI value, or undef in case of errors. Available: since libzvbi version 0.2.22 - vbi_encode_vps_cni(cni) This function receives a 16-bit CNI value and returns a VPS line, or undef in case of errors. Available: since libzvbi version 0.2.22 - rating_string($auth, $id) Translate a program rating code given by $auth and $id into a Latin-1 string, native language. Returns undefif this code is undefined. The input parameters will usually originate from $ev->{rating_auth} and $ev->{rating_id} in an event struct passed for a data service decoder event of type VBI_EVENT_PROG_INFO. - prog_type_string($classf, $id) Translate a vbi_program_info program type code into a Latin-1 string, currently English only. Returns undefif this code is undefined. The input parameters will usually originate from $ev->{type_classf} and array members @$ev->{type_id} in an event struct passed for a data service decoder event of type VBI_EVENT_PROG_INFO. - $str = iconv_caption($src [, $repl_char] ) Converts a string of EIA 608 Closed Caption characters to UTF-8. The function ignores parity bits and the bytes 0x00 ... 0x1F, except for two-byte special and extended characters (e.g. music note 0x11 0x37) See also caption_unicode(). Returns the converted string $src, or undefwhen the source buffer contains invalid two byte characters, or when the conversion fails, when it runs out of memory. Optional parameter $repl_char when present specifies an UCS-2 replacement for characters which are not representable in UTF-8 (i.e. a 16-bit value - use Perl's ord() to obtain a character's code value.) When omitted or zero, the function will fail if the source buffer contains unrepresentable characters. Available: since libzvbi version 0.2.23 - $str = caption_unicode($c [, $to_upper] ) Converts a single Closed Caption character code into an UTF-8 string. Codes in range 0x1130 to 0x1B3F are special and extended characters (e.g. caption command 11 37). Input character codes in $c are in range 0x0020 ... 0x007F, 0x1130 ... 0x113F, 0x1930 ... 0x193F, 0x1220 ... 0x123F, 0x1A20 ... 0x1A3F, 0x1320 ... 0x133F, 0x1B20 ... 0x1B3F. When the optional $to_upper is set to 1, the character is converted into upper case. (Often programs are captioned in all upper case, but except for one character the basic and special CC character sets contain only lower case accented characters.) Available: since libzvbi version 0.2.23 EXAMPLES The examples sub-directory in the Video::ZVBI package contains a number of scripts used to test the various interface functions. You can also use them as examples for your code: - capture.pl This is a translation of test/capture.cin the libzvbi package. The script captures sliced VBI data from a device. Output can be written to a file or passed via stdout into one of the following example scripts. Call with option --helpfor a list of options. - decode.pl This is a direct translation of test/decode.cin the libzvbi package. Decodes sliced VBI data on stdin, e. g. ./capture --sliced | ./decode --ttx Call with option --helpfor a list of options. - caption.pl This is a translation of test/caption.cin the libzvbi package, albeit based on Perl::Tk here. When called without an input stream, the application displays some sample messages (character sets etc.) for debugging the decoder. When the input stream is the output of capture.pl --sliced(see above), the applications displays the live CC stream received from a VBI device. The buttons on top switch between Closed Caption channels 1-4 and Text channels 1-4. - export.pl This is a direct translation of test/export.cin the libzvbi package. The script captures from /dev/vbi0until the page specified on the command line is found and then exports the page in a requested format. - explist.pl This is a direct translation of test/explist.cin the libzvbi package. Test of page export options and menu interfaces. The script lists all available export modules (i.e. formats) and options. - hamm.pl This is a direct translation of test/hamm.cin the libzvbi package. Automated test of the odd parity and Hamming encoder and decoder functions. - network.pl This is a direct translation of examples/network.cin the libzvbi package. The script captures from /dev/vbi0until the currently tuned channel is identified by means of VPS, PDC et.al. - proxy-test.pl This is a direct translation of test/proxy-test.cin the libzvbi package. The script can capture either from a proxy daemon or a local device and dumps captured data on the terminal. Also allows changing services and channels during capturing (e.g. by entering "+ttx" or "-ttx" on stdin.) Start with option -helpfor a list of supported command line options. - test-vps.pl This is a direct translation of test/test-vps.cin the libzvbi package. It contains tests for encoding and decoding the VPS data service on randomly generated data. - search-ttx.pl The script is used to test search on teletext pages. The script captures from /dev/vbi0until the RETURN key is pressed, then prompts for a search string. The content of matching pages is printed on the terminal and capturing continues until a new search text is entered. - browse-ttx.pl The script captures from /dev/vbi0and displays teletext pages in a small GUI using Perl::Tk. - osc.pl This script is loosely based on test/osc.cin the libzvbi package. The script captures raw VBI data from a device and displays the data as an animated gray-scale image. One selected line is plotted and the decoded teletext or VPS Data of that line is shown. - dvb-mux.pl This script is a small example for use of the DVD multiplexer functions (available since libzvbi 0.2.26) The scripts captures teletext from an analog VBI device and generates a PES or TS stream on STDOUT. AUTHORS The ZVBI Perl interface module was written by Tom Zoerner <tomzo@sourceforge.net> starting March 2006 for the Teletext EPG grabber accompanying nxtvepg The module is based on the libzvbi library, mainly written and maintained by Michael H. Schimek (2000-2007) and Iñaki García Etxebarria (2000-2001), which in turn is based on AleVT 1.5.1 by Edgar Toernig (1998-1999). See also COPYING Parts of the descriptions in this man page are copied from the "libzvbi" documentation, licensed under the GNU General Public License version 2 or later, Copyright (C) 2000-2007 Michael H. Schimek, Copyright (C) 2000-2001 Iñaki García Etxebarria, Copyright (C) 2003-2004 Tom Zo <>.
https://metacpan.org/pod/release/TOMZO/Video-ZVBI-0.2.5/ZVBI.pm
CC-MAIN-2019-47
refinedweb
16,144
55.84
_beauw_ wrote:Anyone who abuses the memory of someone who just died should be banned. Quote:One person overreacted Nagy Vilmos wrote:I find abusive posts from Geordies abusiveincomprehensible _beauw_ wrote:unintelligible nonsense (e.g. about some Spy magazine) _beauw_ wrote:I'm not sure how that works, _beauw_ wrote:lack of a downvote option _beauw_ wrote:I've already removed all of my content from this site _beauw_ wrote:or morally bankrupt garbage _beauw_ wrote:Sayonara, chumps. Mike Hankey wrote:Oh wait a minute maybe I am...never mind! Marco Bertschi wrote:You are still here on CP... public class Naerling : Lazy<Person>{ public void DoWork(){ throw new NotImplementedException(); } } _Josh_ wrote:I think it's still a crime in a lot of places _Josh_ wrote: I agree about the Mindy McCready post, not one single person mentioned the two kids she left behind. Suicide has a massive stigma, I think it's still a crime in a lot of places. You are overreacting though. General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Lounge.aspx?fid=1159&df=10000&mpp=10&sort=Position&spc=None&tid=4498678
CC-MAIN-2016-44
refinedweb
195
60.85
Configuration Settings and Compiling Modes¶ Configuration¶ The config module contains several attributes that modify Theano’s behavior. Many of these attributes are examined during the import of the theano module and several are assumed to be read-only. As a rule, the attributes in the config module should not be modified inside the user code. Theano’s code comes with default values for these attributes, but you can override them from your .theanorc file, and override those values in turn by the THEANO_FLAGS environment variable. The order of precedence is: - an assignment to theano.config.<property> - an assignment in THEANO_FLAGS - an assignment in the .theanorc file (or the file indicated in THEANORC) You can display the current/effective configuration at any time by printing theano.config. For example, to see a list of all active configuration variables, type this from the command-line: python -c 'import theano; print(theano.config)' | less For more detail, see Configuration in the library. Exercise¶ Consider the logistic regression: import numpy import theano import theano.tensor as T rng = numpy.random N = 400 feats = 784 D = (rng.randn(N, feats).astype(theano.config.floatX), rng.randint(size=N,low=0, high=2).astype(theano.config.floatX)) training_steps = 10000 # Declare Theano symbolic variables x = T.matrix("x") y = T.vector("y") w = theano.shared(rng.randn(feats).astype(theano.config.floatX), name="w") b = theano.shared(numpy.asarray(0., dtype=theano.config.floatX), name="b") x.tag.test_value = D[0] y.tag.test_value = D[1] # Construct Theano expression graph p_1 = 1 / (1 + T.exp(-T.dot(x, w)-b)) # Probability of having a one prediction = p_1 > 0.5 # The prediction that is done: 0 or 1 xent = -y*T.log(p_1) - (1-y)*T.log(1-p_1) # Cross-entropy cost = xent.mean() + 0.01*(w**2).sum() # The cost to optimize gw,gb = T.grad(cost, [w,b]) # Compile expressions to functions train = theano.function( inputs=[x,y], outputs=[prediction, xent], updates=[(w, w-0.01*gw), (b, b-0.01*gb)], name = "train") predict = theano.function(inputs=[x], outputs=prediction, name = "predict") if any([x.op.__class__.__name__ in ['Gemv', 'CGemv', 'Gemm', 'CGemm'] for x in train.maker.fgraph.toposort()]): print('Used the cpu') elif any([x.op.__class__.__name__ in ['GpuGemm', 'GpuGemv'] for x in train.maker.fgraph.toposort()]): print('Used the gpu') else: print('ERROR, not able to tell if theano used the cpu or the gpu') print(train.maker.fgraph.toposort()) for i in range(training_steps): pred, err = train(D[0], D[1]) print("target values for D") print(D[1]) print("prediction on D") print(predict(D[0])) Modify and execute this example to run on CPU (the default) with floatX=float32 and time the execution using the command line time python file.py. Save your code as it will be useful later on. Note - Apply the Theano flag floatX=float32(through theano.config.floatX) in your code. - Cast inputs before storing them into a shared variable. - Circumvent the automatic cast of int32 with float32 to float64: - Insert manual cast in your code or use [u]int{8,16}. - Insert manual cast around the mean operator (this involves division by length, which is an int64). - Note that a new casting mechanism is being developed. Mode¶ Every time theano.function is called, the symbolic relationships between the input and output Theano variables are optimized and compiled. The way this compilation occurs is controlled by the value of the mode parameter. Theano defines the following modes by name: 'FAST_COMPILE': Apply just a few graph optimizations and only use Python implementations. So GPU is disabled. 'FAST_RUN': Apply all optimizations and use C implementations where possible. 'DebugMode': Verify the correctness of all optimizations, and compare C and Python implementations. This mode can take much longer than the other modes, but can identify several kinds of problems. 'NanGuardMode': Same optimization as FAST_RUN, but check if a node generate nans. The default mode is typically FAST_RUN, but it can be controlled via the configuration variable config.mode, which can be overridden by passing the keyword argument to theano.function. Note For debugging purpose, there also exists a MonitorMode (which has no short name). It can be used to step through the execution of a function: see the debugging FAQ for details. Linkers¶ A mode is composed of 2 things: an optimizer and a linker. Some modes, like NanGuardMode and DebugMode, add logic around the optimizer and linker. DebugMode uses its own linker. You can select which linker to use with the Theano flag config.linker. Here is a table to compare the different linkers. For more detail, see Mode in the library. Optimizers¶ Theano allows compilations with a number of predefined optimizers. An optimizer consists of a particular set of optimizations, that speed up execution of Theano programs. The optimizers Theano provides are summarized below to indicate the trade-offs one might make between compilation time and execution time. These optimizers can be enabled globally with the Theano flag: optimizer=name or per call to theano functions with theano.function(...mode=theano.Mode(optimizer="name")). For a detailed list of the specific optimizations applied for each of these optimizers, see Optimizations. Also, see Unsafe optimization and Faster Theano Function Compilation for other trade-off. Using DebugMode¶ While normally you should use the FAST_RUN or FAST_COMPILE mode, it is useful at first (especially when you are defining new kinds of expressions or new optimizations) to run your code using the DebugMode (available via mode='DebugMode). The DebugMode is designed to run several self-checks and assertions that can help diagnose possible programming errors leading to incorrect output. Note that DebugMode is much slower than FAST_RUN or FAST_COMPILE so use it only during development (not when you launch 1000 processes on a cluster!). DebugMode is used as follows: x = T.dvector('x') f = theano.function([x], 10 * x, mode='DebugMode') f([5]) f([0]) f([7]) (see DebugMode) rather than the keyword DebugMode you can configure its behaviour via constructor arguments. The keyword version of DebugMode (which you get by using mode='DebugMode') is quite strict. For more detail, see DebugMode in the library.
http://deeplearning.net/software/theano/tutorial/modes.html
CC-MAIN-2019-09
refinedweb
1,025
51.14
We continually strive to make the JavaScript editing experience better, part of this is providing support for popular libraries and patterns used by developers. AngularJS is one of the most popular JavaScript libraries and you’ve asked for even better support for it in Visual Studio. This post illustrates how to improve your experience in Visual Studio 2013 when working with AngularJS; if this framework is new to you, take a look at the tutorial on the AngularJS website.\JavaScript\References. Thanks for writing this up Jordan. In the interest of full disclosure, the seed for this whole project came from here: stackoverflow.com/…/22256208 This truly was a community effort! Great job, John. Will definitely give it a try. Nice to see more support for JS in Visual Studio. Its an good approach to create it as extension to be downloaded from extension and upgrades from VS13 and allows developer to upgrade it when angular 2.0 arrives. Good tips. I'd also recommend using the SideWaffle and Web Essentials extensions. SideWaffle includes item templates for controllers, directives, etc. Web Essentials really is essential for editing CSS / HTML / JavaScript in Visual Studio 2013. It includes modern intellisense files for Angular and installs them directly rather than making you copy a javascript file to that folder path I can never remember. Info here: madskristensen.net/…/improved-javascript-intellisense-in-visual-studio Great. Please try AngularGo template as well. Thanks, Jon. This extension for Angular IntelliSense is the same one originally used in Web Essentials, but the Web Essentials version doesn't work for all project types, has some performance issues, and it can behave inconsistently due to the order in which it's loaded. I'd recommend using this new approach with the special References folder instead, it guarantees the load order of the library and will work more consistently. I'm working with Mads to see how we could ease the installation of this extension in the future. Thanks Jordan. How does this compare to the R# AngularJS extension? Hi, I just installed Visual Studio 2012 Not Responding on a brand new laptop and — hey, presto! — IT'S NOT RESPONDING! Getting on for a minute now! Do you have any idea how tiresome that is? I don't even have a solution loaded. I"m in Source Control Explorer trying to figure out how to map a TFS project directory to a different local directory (which is a whole other triumph of software engineering) and the entire IDE locked up for a full minute. Does anybody care about this stuff or are you… Not Responding? @Not Responding. Jordan would know for certain, but I don't think VS 2012 has the infrastructure to support this. He is addressing VS 2013. @Not Responding – Can you follow up with me directly and email jomatthi -at- Microsoft com? I'd like to find out more about the issues you're running into and ask some more questions that are better discussed outside of comments. Thanks! Good job! Very nice extension. I am sorry, I am trying this on Visual Studio 2013 Update 5 CTP and it's not working. Great job! Everything that lights up IntelliSense is a great addition to productivity as there is no need to switch contexts from VS to documentation. The only thing missing now is some plugin similar to GhostDoc to press CTRL+SHIFT+D that can add JSDoc comments in *.ts and *.js files, or maybe there is one you can recommend? Also produces really good documentation typedoc.io/…/td.html. @Jaliya Udagedara Can you submit a new Issue to the project site for this extension, with some details about your environment? I can help you troubleshoot it some more there: github.com/…/angularjs-visualstudio-intellisense @Matija For Visual Studio 2015 we've added in native JS editor support for JSDoc comments, you can see this working with AngularJS in Visual Studio 2015 CTP5 or later. It would be great to hear how well it works for you. LightSwitch ?? social.msdn.microsoft.com/…/can-we-have-another-town-hall-and-detailed-product-road-map-please Grt job Hi Jordan, Having followed your instructions, I still get no intellisense with injected parameters. I am using VS 2013 Community edition – could this be the reason? Regards @Benjamin Jones – This will work fine with the Community Edition, so there must be something else going on. It may be related to a coding pattern you're using or how references are setup in your project. Would you please report the issue to the project site (github.com/…/angularjs-visualstudio-intellisense) with some more details and I can help troubleshoot there? I'm unable to download this in either Chrome or IE. "angular.intellisense.js contained a virus and was deleted" (IE) "Virus scan failed" (Chrome). Machine is Win 8.1 I was able to left-click on the file to get it to display in my browser, then copy and paste to Notepad. It only failed if I tried to directly download. Outstanding! It looks like angular.js is becoming a first-class citizen in the Microsoft stack. I can't get this to work. I followed the instructions here, no luck. I'm using VS2013 update 4 Pro. Please help… Hi, Jordan, I'm using VS 2015 CTP 5. I am not very happy that my HTML intellisense is all cluttered up by Angular. Firstly, why on earth is this not optional? Secondly, how do I get rid of it? Many thanks. @Shawn Clabough – I'm sorry you had issues when trying to download the file. I'm glad the right-click solution worked. Please let me know if it was incorrectly detected as a virus when you added the file to your file system, though. @StefanDD – Can you report your issue to the project site, where I can help troubleshoot this further? github.com/…/angularjs-visualstudio-intellisense Sorry, I just realised that this is probably the wrong place to complain about the HTML intellisense. @Noel Abrahams – Thanks for the feedback about Angular IntelliSense in the HTML editor; I did pass it along to the team working on that. I got it to work: I was missing the _references.js file in my project. I figured that out by reading more about the project at github.com/…/angularjs-visualstudio-intellisense Thanks! a NuGet package has been added to support this…/AngularJS.Intellisense Really very helpful, I'm really excited about this extension. For anyone else having problems getting this to work, you do indeed need a _references.js file. Thanks to StephanDD for the pointer. More background on the _references.js file can be found at the following: – madskristensen.net/…/the-story-behind-_referencesjs – gurustop.net/…/javascript-js-intellisense-auto_complete-in-visual-studio-11-beta-the-web-_references-js-file Great post! This does not work for Cordova Multi-Device projects though. Re: var app = angular.module("project", ["ngRoute"]); app.controller("listController", ["$scope", function ($scope){}]); This introduces a global and simply is never necessary. I really wish blog writers could pick up on this so that people new to Angular would stop picking up this bad habit. Instead: angular.module("project", ["ngRoute"]); angular.module("project").controller("listController", ["$scope", function ($scope){}]); Hi Jordan – great with IntelliSense, but I seem to have a problem with IntelliSense inside controller functions (see example below) – am I doing something wrong? app.controller('testIt', ['$scope', 'srvDate', function ($scope, srvDate) { srvDate.WORKING (intellisense works here½) $scope.TestFunc = function () { srvDate.NOTWORKING (here there's no intellisense) } }]); @Jeff Dunlop – great point, I absolutely agree that you don't want to pollute the global namespace. In this article I only I chose to assign the module to a variable to make it easier to read samples throughout the post. How on earth do I find the screen in the article referred to as the 'Nuget Package Manager', where the package 'AngularJS Core' is being installed? I'm using VS2013 Pro. I can find the Extensions and Updates window but not the version seen above. I have the Nuget Package Manager installed but from the Tools menu I only have options to open the Settings and Console. I ask because I cannot see things like 'Nuget.org' listed under the 'Online' menu, and I cannot find an Extension/Package called "AngularJS". This is very confusing. @Jeremy W – You can open the NuGet Package Manager dialog by right-clicking on your project in the Visual Studio Solution Explorer and choosing the Manage NuGet Packages… menu. Hi Jordan, I tried to use this on VS 2015, but so far it is not working for me, I followed your instructions and added it to the global javascript references and it didn't work, I also tried adding it directly to the angular file location but got the same results. I created a ticket with a more detailed explanation of my case: github.com/…/25 Thanks, Nice and Helped @JordanMatthiesen, one year on and there doesn’t seem to be a way to turn off AngularJS intellisense for HTML tags. Do you know if this is possible? It is really useful article for the developers like us. How I can add angularjs 2 to my old projects asp.net mvc 4 and asp.net mvc 5? I want to migrate asp.net mvc to angular 2. Thank you for sharing this great post. Check this one as well… “AngularJS – When JavaScript Met MVC”
https://blogs.msdn.microsoft.com/visualstudio/2015/02/05/using-angularjs-in-visual-studio-2013/
CC-MAIN-2017-39
refinedweb
1,579
57.98
Answered by: How to update VSTO ClickOnce after modified data files in the package We have an issue of update the VSTO ClickOnce package after modify the data files. I tried to use mage.exe to do that but all have different issues. The updated ClickOnce will not run because either files not match etc. I have run the mage.exe to update both the application and deploy manifest files. I also resigned both files.So the question is: 1. is it possible to use mage.exe to update a VSTO ClickOnce package? If yes, how? 2. if not, any other way we can use to deploy VSTO package after user modify data files(xml configuration files) We are using Office 2007 and VS 2008. Thanks, Question Answers also, you may have to update the manifest to include your data file. see this post: m. All replies hello johnmiddled, this appears to be related to your other post at because you have changed the data files, the file hash no longer matches the values stored in the manifests. you can update your manifests manually using the mage information in Walkthrough: Integrating ClickOnce for a Managed Object Model or Walkthrough: Manually Deploying a ClickOnce Application. first create the application manifest. there is also content in the <vsta:v2> namespace that you will have to copy from the original application manifest to your new application manifest. then sign the application manifest before creating and signing the deployment manifest. you do not need to change the deployment manifest before signing it. the VSTO deployment manifests are ClickOnce manifests. to learn more about VSTO application manifests (especially the elements in the <vsta:v2> node), seeApplication Manifests for Office Solutions (2007 System) and Deployment Manifests for Office Solutions (2007 System).m. also, you may have to update the manifest to include your data file. see this post: m.
https://social.msdn.microsoft.com/Forums/vstudio/en-US/e4b9cb8f-2639-41f0-84cd-4874d81fe573/how-to-update-vsto-clickonce-after-modified-data-files-in-the-package?forum=vsto
CC-MAIN-2015-22
refinedweb
313
58.58
New update method in beta - just curious @omz, this is not important. I am just curious why the code below does not work. Maybe I missed something. I know it was designed to work with custom ui.Views, but it seems like this would work anyway. I don't mean to waste your time. Only answer if you have a spare min. import ui def myupdate(): print('in Update') v = ui.load_view() v.update_interval = 1 v.update = myupdate v.present('sheet') print(v.update) @Phuket2 I don't have the code handy to look at right now, but I think the reason this doesn't work is that ui.Viewchecks whether it has an update method when it's created, and not just before presentation. @Phuket2 There is a similar mechanism with the drawmethod. You essentially get a slightly different kind of view object, depending on whether these methods are present. If you want to add a method dynamically, you can do it as shown below. Method assignment works differently (see the code). I feel that checking for update method is a bug. Omz should provide a dummy update method. import ui import types class CustomView(ui.View): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def update(self): pass def myupdate(self): print('in Update') def button_action(sender): s = sender.superview s['view1'].update_interval = 1 s['view1'].update = types.MethodType(myupdate, s['view1']) #print(s['view1'].update) v = ui.View(frame=(0,0,600,600)) v.add_subview(ui.Button(title='update', action=button_action)) v1 = CustomView(frame=(0,300,300,300), name='view1') v.add_subview(v1) v.present('sheet') @omz , ok. Understand. Now you mention it, I remember the issue with the draw method. I guess you have a good reason to do it like this. @enceladus I don't really consider this a bug. Your program shows intent by having or not having a drawor updatemethod. @omz , as a side note I think you also changed the draw method at one time. I think there was a type of a bug where a bunch of structures/buffers were being setup in ui.View for use with the draw method even if it wasn't defended. I think you changed the behaviour so a ui.view would be lighter weight if draw method is not defined. @omz, sorry to bring this up again. I can image it drives you crazy sometimes :). But anyway..... I wanted to ask is there any real reason that any ui.View cant have its own update function. I sort of forgot about this thread and started out to make a still dismissing btn today. The crude attempt is below. When I could see it was not working the way I expected, I remembered this thread and found it. I tried to follow @enceladus example, but realised he was adding a update method to a custom class. In the below, I want my created button to look after itself. In this case just thinking about a modal case where if there is no user interaction you want the process to continue after a set timeout. Why is it important? At first I thought, hmmm so many other ways to do this. But I think is not so clean if you would like to make a reusable ui Item for others. Then I started to think about other use cases. Eg. a textfield updating itself using requests to get a exchange rate. But the beauty would be in if it could be coded in one function using closures. Then the consumer of the "control" only needs to know some pertinent params. The control could be presented without a host container (superview) and still work. Anyway, I am not trying to be stupid for stupid sake! I can just see some cool possibilities for ppl to create some interesting reusable ui controls that are self contained. After all that, I have no idea if its a feasible idea or not. I mean the ability to have all your controls to be able to have its own update method (callback). Personally I would say it is feasible after the tests I did with a lot of customviews in a table all having their own update methods. Anyway, it's just an idea. I can just see the merit in it. import ui import types def auto_button(time_out_secs=1, *args, **kwargs): ''' Incomplete attempt at getting something other than a Custom View to have its own update event ''' btn = None def myupdate(): ''' here we might change the btns title in x secs. Could then call the btn's action. eg, might close a view that is blocked by ui.wait_modal. ''' print('In Update') btn = ui.Button(**kwargs) btn.update = types.MethodType(myupdate, btn) btn.update_interval = 1 return btn class MyClass(ui.View): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.make_view() def make_view(self): btn = auto_button(title='Continue', width=100, height=32, bg_color='white', corner_radius=6, ) btn.center = self.center self.add_subview(btn) if __name__ == '__main__': f = (0, 0, 300, 400) v = MyClass(frame=f, bg_color='teal') v.present(style='sheet') If not running the Pythonista 3 beta, you can get periodic updates with the code below. This is not my idea, I just simplified an idea in the TimedRefreshView.py example program, written by cclauss, that I found at: My program, currently running. I like having this functionality built into the ui.View class much more than having to implement it, so I look forward to the beta becoming the released product. import ui import threading import speech from random import randint class TimedUpdateView(ui.View): """ This class contains a method named updatex, which is periodically called. """ def __init__(self): self.updatex_interval = 5 self.update_after_delay() def updatex(self): """ Say a random digit from 0 to 9 every 5 seconds. """ speech.say('%s' % (randint(0, 9))) def update_after_delay(self): """ This method calls the updatex method periodically """ self.updatex() update_thread = threading.Timer(self.updatex_interval, self.update_after_delay).run() if __name__ == "__main__": v = TimedUpdateView() v.present('sheet')
https://forum.omz-software.com/topic/4199/new-update-method-in-beta-just-curious
CC-MAIN-2018-47
refinedweb
1,006
69.07
> It could also be done (though not as cleanly) by making macros act as > import hooks. > > import defmacro # Stop processing until defmacro is loaded. > # All future lines will be preprocessed by the > # hook collection > ... > from defmacro import foo # installs a foo hook, good for the rest of the file Brrr. What about imports that aren't at the top level (e.g. inside a function)? > Why not just introduce macros? Because I've been using Python for 15 years without needing them? Sorry, but "why not add feature X" is exactly what we're trying to AVOID here. You've got to come up with some really good use cases before we add new features. "I want macros" just doesn't cut it. > If the answer is "We should, it is just > hard to code", then use a good syntax for macros. If the answer is > "We don't want > > xx sss (S\<! 2k3 ] > > to ever be meaningful", then we need to figure out exactly what to > prohibit. Lisp macros are (generally, excluding read macros) limited > to taking and generating complete S-expressions. If that isn't enough > to enforce readability, then limiting blocks to expressions (or even > statements) probably isn't enough in python. I suspect you've derailed here. Or perhaps you should use a better example; I don't understand what the point is of using an example like "xx sss (S\<! 2k3 ]". > Do we want to limit the changing part (the "anonymous block") to > only a single suite? That does work well with the "yield" syntax, but it > seems like an arbitrary restriction unless *all* we want are resource > wrappers. Or loops, of course. Pehaps you've missed some context here? Nobody seems to be able to come up with other use cases, that's why "yield" is so attractive. > Or do we really just want a way to say that a function should share its > local namespace with it's caller or callee? In that case, maybe the answer > is a "lexical" or "same_namespace" keyword. Or maybe just a recipe to make > exec or eval do the right thing. > > def myresource(rcname, callback, *args): > rc=open(rcname) > same_namespace callback(*args) > close(rc) > > def process(*args): > ... But should the same_namespace modifier be part of the call site or part of the callee? You seem to be tossing examples around a little easily here. -- --Guido van Rossum (home page:)
https://mail.python.org/pipermail/python-dev/2005-April/052922.html
CC-MAIN-2016-36
refinedweb
402
75.5
Title: Efficient character escapes decoding Submitter: Wai Yip Tung (other recipes) Last Updated: 2006/01/14 Version no: 1.1 Category: Text Not Rated yet Description: You have some string input with some specical characters escaped using syntax rules resemble Python's. For example the 2 characters '\n' stands for the control character LF. You need to decode these control characters efficiently. Use Python's builtin codecs to decode them efficiently. Source: Text Source >>> len('a\\nb') 4 >>> len('a\\nb'.decode('string_escape')) 3 >>> Or for unicode strings >>> len(u'\N{euro sign}\\nB') 4 >>> len(u'\N{euro sign}\\nB'.encode('utf-8').decode('string_escape').decode('utf-8')) 3 This compares to naive approach to decode character escape by writing your own scanner in pure Python. For example: def decode(s): output = [] iterator = iter(s) for c in iterator: if c == '\\': ...enter your state machine and decode... else: output.append(c) return ''.join(output) or def decode(s): return s\ .replace('\\n','\n')\ .replace('\\t','\t')\ ...and so on for the few escapes supported... The navie approaches are expected to be much slower. Discussion: Python's builtin codecs not only decode various character encoding to unicode, it also has a number of codecs that does useful transformation, such as base64 encoding. In this case the 'string_escape' codecs would decode string literal as in Python source code. These builtin codecs are presumably highly optimized. They should be lot more efficient compares to looking them up character by character with pure Python code. In case of unicode string, there is a seemingly parallel 'unicode_string' codecs. However when we apply this to non-ASCII string it runs into problem. >>> len(u'\N{euro sign}\\nB'.decode('unicode_escape')) Traceback (most recent call last): File "", line 1, in ? UnicodeEncodeError: 'latin-1' codec can't encode character u'\u20ac' in position 0: ordinal not in range(256) The issue is unlike 'string_escape', which convert from byte string to byte string, 'unicode_escape' decode byte string into unicode. If the operand is unicode, Python tries to convert it using system encoding first. This causes failures in many cases. Steven Bethard has proposed a 3 steps decoding on the comp.lang.python newsgroup. It resolves this problem by first encode the unicode string using UTF-8, then applies string_escape, and finally decode it back using UTF-8. This procedure is shown in the second algorithm of the recipe. ----------------------------------------------------------------- Due diligence on UTF-8 encoding Before we call the problem solved, we should examine the possibility that the UTF-8 encoding might introduces bytes sequences that match Python string escape accidentally, and thus corrupts the output. A careful look in the mechanism of UTF-8 encoding assures us this will not happen. UTF-8 - Wikipedia A byte sequence for one character never occurs as part of a longer sequence for another character. For instance, US-ASCII octet values do not appear otherwise in a UTF-8 encoded character stream. From Python documentation, all string escapes are defined by ASCII characters. Since non-ASCII character would not appears in UTF-8 encoded stream as ASCII octet, it would not introduce extra Python escapes by accident. For comparison, notice how a valid unicode character \u5c6e would be mistaken as Python escape \n in UTF-16 encoding. >>> u'\u5c6e'.encode('utf-16be') '\\n' >>> u'\u5c6e'.encode('utf-16be').decode('string_escape') '\n'
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/466293
crawl-001
refinedweb
559
56.96
C++ Interview Questions and Answers Ques 131. Is C an object-oriented language? Ans. C is not an object-oriented language, but limited object-oriented programming can be done in C. Is it helpful? Add Comment View Comments Ques 132. Name some major differences between C++ and Java.Ans. C++ has pointers; Java does not. Java is platform-independent; C++ is not. Java has garbage collection; C++ does not. Java does have pointers. In fact all variables in Java are pointers. The difference is that Java does not allow you to manipulate the addresses of the pointer Is it helpful? Add Comment View Comments Ques 133. What is the difference between Stack and Queue?Ans. Stack is a Last In First Out (LIFO) data structure. Queue is a First In First Out (FIFO) data structure Is it helpful? Add Comment View Comments Ques 134. Write a fucntion that will reverse a string.Ans. char *strrev(char *s) { int i = 0, len = strlen(s); char *str; if ((str = (char *)malloc(len+1)) == NULL) /*cannot allocate memory */ err_num = 2; return (str); } while(len) str[i++]=s[?len]; str[i] = NULL; return (str); } Is it helpful? Add Comment View Comments Ques 135. What is the software Life-Cycle?Ans. The software Life-Cycle are 1) Analysis and specification of the task 2) Design of the algorithms and data structures 3) Implementation (coding) 4) Testing 5) Maintenance and evolution of the system 6) Obsolescence Is it helpful? Add Comment View Comments Most helpful rated by users:
http://www.withoutbook.com/Technology.php?tech=12&page=27&subject=Interview%20Questions%20and%20Answers
CC-MAIN-2020-16
refinedweb
252
70.09
$25 Reward! It has come to our notice that oome irresponsible person or persons have been tamperbin with the meters, eooal and wires in some of the places we are furnishing light and power to in Allgers, McDoenghville and Gretna. Notice is hereby given that this is strictly against the law and all ouch persons that may be found guilty of doing or allowing same to be done with a view of defrauding the company will be vigorously prosecuted. No one has any right to tamper with your wires or meter installed in your premises anless they show a badge of the company. We are now making a careful investigatlen of all meters and wires and hereby offer a reward of twenty-five dollars ($25.00) for evidence laooding to the conviction of the guilty party or partieo. Algiers Railway & Lighting Co., 222 ElmIra Avenue. ALGIERS SAZERAC 1K.; T. dMII'II IIET SA LOON JI. SrCIEl.E, Pp. Oyster Loaves and Sandwiches Sandwiches of All Kihds Day and Night E. J. MOTHE UNDERTAKER AND EMBALMER Phone, Algiers 29. No. 222 Morgan Street f ESTABLISHIED 1853. John C. Meyer & Son. JEWELERS WATCHES, DIAMONDS, JEWELRY, SILVER AND PLATED WARE. 133 S eestr st., i Na U. S. i. t, a New Orlens, Leuilsne. ,. Abacl & Bro., Ltd. Dealers In Groceries I Wster Paglae, PELICAN AVE., Cor. Verret St. ALGIERS, LA. Sierra Bros., -DEALtRS IN GROCERIES. IMPORTED WINES, LIQUORS, CIGARS, TOBACCO, ETC. - kievlle St. £ Opeloeusas Ave ALaEtS, LA. , AAMMMMMM MUMM MUUMMMMMMMMM MARTIN S. MAHONEY, ATTORNEY-AT-LAW, NOTARY PUBLIC. -Oase: 1!1 Claendelt Street, iSM Pellean Ave AMENDMENT TO THE CHARTER OF THE PLANTERS' COMPANY. UNITED STATES OF AIMDRI(A, STATE OF LOUISIANA, PARISH OF-ORLEANS, CITY OF NEW ORLEANS. Be It hmewn, that on this the 5th day of the month of May, i the year of our Lord, Se thesan mime hundred and eleven, and t t ixdhe oedefs o the United States of America, te e hundred and thirty-fifth. btefore me, Alexis ri, a Notary Public, duly cemmiselemed and qualified within sad for the Parish of Orleans, State of Louisi ama, and to the presence of the witnesses hereinafter named and undersigned, person ally came and appeared, Mr. W.S. Penick, Jr., and Mr. C. S. Beard, both residents of the Ctty et New Orleans, herein appearing and acting in their capackles as pesident and secretary repectlvely of the :s ters' Cni i, a Louisiana corporatloo orp.g Ied under the laws of the State of LouIil sa, heretofore dmiciled toIn the city of Shreveport, La., Incorporated by act pased before Allen RAndall, Notary Public of the Parish of Caddo, on the nineteenth day of March, 101t, duly recorded In the Mortgage Oeeo a the Parish of Caddo, Who declared, that at a general meeting at the stockholders of said corporation bheld in the City of Shreveport, Louisiana, on the seventh day of January, 1911, said meet itg having been called for the specific pur pose hereinafter set forth, the following amendments of the charter of said corpor atlon were adopted by unanimous vote of the owners of the outstanding stock of said sempay, to-wit : Article II of the charter of said corpor tiom was amended so as to read as follows : "LARTICLE II. The domicile of this corporation shall be to the City of New Orleans, La.. and all cl tation and other legal process shall be served oa the president, or In his absenc-, the sam shall be served on the Vice-Pre. .4t." Article IV of the charter of said corpor atioe was amended so as to read as follows : "ARTICAL IV. "The capital stock of this corporation shall he Ave hundred thousand ($500,000.00) dol lar, divided nto and represented by Ave thousand (5,000) shares of the per value of one hundred ($100) dollars each. All oe Is to be aMld for at the tim of sub. Toe ,mpnn4.r may begin operation as sen as one thousand two hundred and twenty (1.220) shares are subscribed for. 8ix hundred and ten (610) of said shares shell be classlSed as Series A and six hun dred and ten (610) of said shares shall be duel fled as Series B, all of which will be represented y cash or property, rights and edit aetnally received by. said corporation and no stock shall be ssed until the con seMeratten therefor has been received by the corporatioa. "Br a two-thirds (23) vote of the stock holdse a this corporatlon the books may e opened for the su bsription and issue of the remaining three thousand seven h ANe and eibhty (.780) share of stock or ea loan thereof, bet when so subscribed Imssed halt of the stock subscribed and l issued shall be clnasled as Serie A and the ether half as Series B. Whem the books e opned for the sbrpto of the re- I thne thousand seven hunaded an tit. lato Wi n a er A "ni I Series B as shown by the books of the com pany. shall have the first privilege of sub. scribing for sname in proportion to their re spectlve holdings and in their resp-ective Series. ".No transfer of stock shall affect this corporation unless the said transfer shall be made In the books of the said company at its offce In the ('ity of New Orleans, where said corporation shall he domiciled, and only on surrender of the certificate therefor." Article V of the Charter of said corpor ation was amended so as to read as fol lows : "ARTICLE V. "The corporate powers of this corpora tion shall be vested In and represented by a Board of Directors composed of six stock holders. The following persons shall con stitute the first Board of Directors: Messrs. W. 8. Penick. Jr., Jas. P. Ford. N. T. Pe aick, C. 8. Beard, M. 8. Standifer, August Soniat. with W. 8. Penlck, Jr., as president, . Jas. P. Ford as vice-president, and C. 8. Beard secretary. The officers and directors shall hold their said offices until the first Tuesday in January, 1912, and until their a successors shall have been duly elected and a ualified. On the said first Tuesday in 4 January, 1912, and every five years there-I after, a Boaru of Directors shall be elected I unless said date shall be a legal holiday, I and in that case the said election shall be 4 held the day following said holiday, and I notice of said election shall be given by ten days notice published in one of the 1 newspapers published in the Parish of Or. leans. The Board of Directors shall ap point one or more stockholders to preside at such election. Any failure from any cause to hold said meetlnr or to elect said board on the day named for that purpose shall not dissolve the corporation but the directors then nla office shall hold over until their successors are chosen and quallfied. After such election the Board of Directors shall elect from their number the offers, but the absove named officers as resident vice-president and secretary shal remain Soffice as above stated until the first elec tion fixed herein on the first Tuesday in January, 1912; and until their successors are elected and qualified. Any vacancies oc curring among the officers of president, vice president or secretary shall be filled by the Board of Directors but any vacancy in the board shall first be filled before any va cancy among said officers shall be filled. "Ia case any vacancy prior to the expira tioa of their term shall exist or occur among the first three directors named it shall be filled by a selection made by the owners of stock of Series A and in case of any va eaney among the last three directors named it shall be illed by a selectloa made by the owners of aeries B. "At every election and stockholders' meet lag each stockholder shall be entitled to one vote for each share of stock represented on the books of the company In his same ; votes may be cast- In persoa or by proxy autborised In writing, but at eache eleotiom of directors (by the stockholders), the next oceurring the first Tuesday nla January, 1912, nad every fith year thereafter as above provided, the holders of stock of Bries A shall elect and designate three directors and the holders of stock of Series B shall elect and designate three directors. After each election of directors as above pro vided, the next being the first Tuesday In January. 1912 and qultenlally thereafter, the board shall elect from their number the officers, to-wit: President, vicepresident and secretary. The president, in addition to his other duties, shall be general manager, and in his absence the vice-president, In ad dition to his other duties, shall be general manager. No by-laws of this corporation shall be amended or repealed except by a vote of the stockholders owning three-fourths (3-4) of the capital stock." The foregoing amendments, and the au thority of the appearers herein will more fully appear by reference to a duly certl fied copy of the minutes of said stockhold ers' meetin, which is hereto attached and msde part hereof. Bald Penick and Beard in their respective capacities further declared, that they now, pursuant to the direction of said stockhold ers' meeting, request me, Notary, to receive said amendments In the form of this public act, in order that the same may be promul- i gaeed, published and recorded and thus be come part of the original charter, with which request 1, Notar, do hereby comply. Thus done and passed, in my oete, at the City of New Orleans, on the day and date herein first above written, in the presence of G. W. Schweitter and E. L. msabary, com-i petent witnesses, who hereunto sign their names with said appearers and me, Notary, after due reading of the whole. W. S. PVNICK. JR., President C. . Bl iTRD, Secretary. GRQ. W. 8CIWEITZER, . Lr 8ZABARY. ALEXIS BRIAN, Notary Public. -I 1, the undersigned Deputy ecorder of Mortgages, hereby ertify that the fore-. going act of amendment of the charter of 4 the ters' Company was this day duly recorded in my office in book 101S folio -. New Orleans, May 6, 1911. (Signed) EMILE LEONARD, Deputy Recorder of Mortgages. I hereby certify the foreging to be a true copy of the original act of smemst of the charter aof the Pitater" Co pwmy, and of the certiate oe the Deputy e corder of M t 4hereto attached, which t s-on fie in y e New Ow Ma- i.1L May 11 16 25Jams 10 1- 18.11 CHARTER TIHE LOUISIANA COMPANY, LIMIT®ED. UNITED STATES OF AMERICA. STATE OF LOUISLIANA, P'ARISII OF OR LEANS, CITY OF NEW ORLEANS. Be it known, that on this 20th day of t April, in the year of our Lord, one thousand nine hundred and eleven, before me. Gustaf itR. Westfeldt, Jr., a notary public, duly commissioned and qualified, In and for the pariash of Orleans, and In the presence of the witnesses hereinafter named and under signed, personally came and appeared the several persons whose names are hereunto sutbcribed, who severally declared that, availing themselves of the provisions of the laws of this state relative to the organiza tion of corporations, they have contracted and agreed, and do by these present con tract, agree and bind and obligate them selves, as well as all such other persons as may become associated with them, to form themselves into and constitute a corporation and body politic in law, for the objects and purposes and under the conditions and stip ulations of the articles following, to-wit: AR"TICLE I. 'The name and title of the said corpora tion shall he TillE I'ISIANA c'OMP'ANY, I.IM I'PElt, and under its said corporate name it shall have power and authority and shall enjoy succession for the full term and period of ninety nine years from and after the elate hereof ; to contract, sue and Iw - sued, to make and use a corporate seal, and same to break and alter at pleasure; to pur -Ihase. rece-lve, lease. hold and convey, ias well as mortgage and hypothecate undef its corporate name, property both real and per sonal: to n:ame and appoint such managers :an. directors. of-icers and agents as the in terest and convenience of said corporation ntm:tV require. and to make and establish, as well as alter and amend at pleasure. such by laws. rules and regulations for the pro per management and regulation of the af falrs of said ectrloration, as may 1w nteces s;ry :andl proler. ARTICIE II. lThe donmitle of the said crporation shallt tie at New )Orleans, parlshl of Orleans. state of ltouislana. and all citations or other le - gal process shall be served upon the presl dent of sail cotrlporatlon. or in case of his absence. upIln the vice-president, or in the abseince of lhth of these officers, upon tile secretary of the corporation. ARTICLE Ill. The objects and purposes for which this corporatlion is established and the nature of the business to le carried on by it are de cl:tred anl specifled to be to purchase, own. I:tld, -el.,n mortgage, lease, alienate, receive and otherwise acquire and ditpose of real estate,. oil and mineral land and oil and mineal rights and any other Interest in landis, both In Louisiana and elsewhere. To bul, seIIl, own lease or otherwise acquire. and to loperate and maintain as principal or as agent In this state, or elsewhere, drilling outtits. machinery, appliances, apparatus and buildings of every description used or suitable for use in the extraction of oil or any other minerals from the earth, and for t the housing, preservation, handling, refin ing and transportation of the same when so extracted, and particularly storage tanks and lpipe lines, all solely for the uses of its own business and without power of eminent domain: to by,. sell or otherwise acquire and alienate, and to establlish, export or import oil. minerals or mineral products of all kinds and in connection with said ot jects to lease, buy, sell, build, or otherwise to acquire. and to locate and maintain ware houses, sheds, storage tanks, tank cars. i dwellings, storehouses and repair shops; to sell merchandise and carry on a general a- store and commissary, and to conduct ho b tels, boarding houses and lodging houses for e- its agents and employes, and generally to re hold and exercise all such incidental powers and privileges not expressly excluded, as re 1 late to the objects hereinbefore set forth. t ARTICLE IV. The capital stock of the said corporation is hereby fixed at twenty thousand dollars ($20.000.00). divided Into and represented by two thousand (2,000) shares of ten dol lars ($10.00) each. The whole of the said stock or any part thereof may be issued and - delivered to any person, frm or corporation ty for the acquirement of rights, privileges, t- franchises and property purchased and ac ' quired by this corporation, also In payment. - settlement and adjustment of the costs, fees, P- charges and expenses and commission In it curred for services rendered In the formation t. and organisation of this corporation, tad in S. acquiring and bringing about the purchase ra of the property, rights and franchises afore st said; also for cash or in installments of Ir such amounts as the board of directors may id determine; also for merchandise received In or services actually rendered to this com e- panv. The board of directors as hereinaf hd t created is especially authorised to ds y, pose of the stock of this company In whole be or in part, for any and all of the purposes id above stated, as in its jqdgment may seem )y fair and proper. This company shall begin e business as soon as three thousand dollars ($3,000) of Its stock shall have been sub scribed. SARTICL V. be All the powers of this corporation shall be Iii vested In and exeried by the board of dl . rectors, composed of seven stockholders, any tr four of whom shall constitute a quorum for a, the transaction of business. The said board t. of dlrectors shall be elected annually on the in last Wednesday in January of each year. - The first election to be held In the year ia 1912. rs All electioans shall be by ballot at the c- oclce of the corporation under the super e- vision of three commissioners to be appoint he ed by the board of directors, and in the ab se sace of any commissoner the presldent a- shall have the power to ill the place by appolatment, and of all such elections as a- well uas meettings of sotekholder, exept for g the purpose of liqldation or dissolution, or e as otherwise requlred by law, ten days' no tl tice shall e given by malling to each stock . holder who appears as such upon the books ed of the company, at his last desgnated ad e dress or to the General Delivery at New Orleans, it he huas not designateda a d ,d,~ dres, a anoncement stangthe time to and the place of the meeting. Each share ed holder shall be entitled to one vote for each Sshare of stock standing In his name on the books of the company, east in person 6r by proxy, and the majr of the votes east t halleset. The elt of directors shall 2 have the power to Ill all vaceanele that may oenr on the board. lFlure to elect direc A tors In the day above spefed shall not dis solve the oraorst, but the diarectors then Sain oBee rhall rema in dece until their eccessors are elected, and quallked. Duo ntice of another election shall foeth in with he gn as above provided, inch no r,~ tiof elotln shall be continued to he ved antil u eleetioe is held. The board d a director at the Irst meetn followling toan uanual election shall elect from their n umber a president, a vlcepresident, a see Srotor and a trasurer, all of whom shall be stocknolders, and such other ecers as the board shall deem necessary. The board shall have the power ln tts dlscretion to ha unkte two or more of the above or other oAdices and the same to confer upon one person, and shall have the power to Ix the u salkries of all ocers and of such other ol cer a they deem cessary. The bore of directors shall have the power to make and Sestablish as well as alter and amend all by d laws, rules and regulations necessary and proper for the support and manaement of e the busiM s ud afir of this corporatlon, * not nlaconsistent with Its charter. The said board shall also have full power Ic and authority to borrow money throua the I presmldent or some other duly athorled Sagent or agents; to execute mortgages and b isue notes, bonds or other obligatlons in such a manner and on such terms as In their judgment may be advantageous, and generally to do all the thingsp necessary for the proper carrylng on of the business of the saild corporatlo; as also to issue and de liver full paid shares of stocks and bonds or Ir other obligations of the said corporation In y, payment of the mosey borrowed or money, I Ic, sersic , property or rights actually received by the suald eorporattlon, as hereto fore et forth. At any meethg of the board of directors, any director ahbent from the meeting may be represented by any other altreactor, who may cast the vote of said ab sent director, aecordlng to the written tin treutions of the said abent dnirector. The board of directors shalil have the power, by I afn vote of enot les than fOear dnectenrs, to e.sell, eas mortgage (y bod mortgag or oherwise), or to pldlg any or ail of the proprt, onvahe sand lmmovable belonlgi to tke corporatel or to reclive In ex ehan emformasen J r stuoes or handa witheot rein to the shar the power to d . Amnd they may aso eIrchas for stac In o su8 thi e a r- o iged p flo r1 wl5htlit autLh e f the coo. I UM1til the at meetin to he held nader Sthis charter, er mtil their duly quolid ocass~u m eletd sad the I Iard e esteesa all ahe ed e:i: I_, . J. & ebeuck, with Geo. H. Smith as presi- t dent, H. B. learn as vice-president, W. I Brewer as secretary and E. Miller as treas- g urer. t ARTICLE VI. r Whenever this corporation shall be dis solved, either by limitation or from any other cause, its affairs shall be liquidated Iby three stockholders to be appointed at a gen eral meeting of the stockholders convened I for the purpose of liquidation, as hereinafter provided. each share being entitled to one vote to be east by the holder either in per son or by proxy. Said comnlmissioners snall remain In offire" until the affairs of said cor poration shall have twen fully settled and liquidated, and they shall have full power and authority to transfer and give title to all the assets and property of the corpora ttion and to dlstrllbute the proceeds. In case of death or disability or resignation of one or nmore comnmissiouers, the vacancy shall be tilled by the surviving commlssioncr or comn missioners. ARTICL(E VII. This act of incrporation may be modit ied. changed,. or altered or the said corpo ration may It dissolved with the assent of three-fourths of the capital stock repre senlted at any general mee-ting of stockhold -rs convented for such purpose. after prevl otis notice shall have been given in one or more dally newspapers puhlisled in the par ish of Orleans. state of Louisiana, once a week during the thirty days next preceding a such nmeeting ani ul)pon the date of s(uchI meeting, and by notice mailed at least forty days prior to such meetln, to eac. stock holer who appewars as such on thie books of the c- nplany, to the post offtce address des Ignateld by himu. and in case of failure to designate an address to tile General Ieliv cry at New Orleans. Any change which may is- proposed or made with reference to the capital stock of said corporation shall be made in accordance with tihe laws of the state of Louisiana on the sllbject of altering tile amnoulnt of c-aid tal stock of corporations, and it may be in creals,,d or diminished upon a compliance therewlth. upon tile affirmative vote of two thirds ,of tlhe stock of the corporation. No stockhohlter shall ever te, held liable or re lsponsllle for the c-antgr:acats or faults of this cocrlporatlon. in any fulrtiher slim than tile lunlpaid halance dllcc on the shares of stock owned by him n, nor shall any mere Informal ity in o'rganizatio n have tile effect of ren d.iring thils charter nllll and voht or of ext poIeing any stc-khlolder to any liability Is vylendl the ancmount dule on his stock. In torder that this chartelr may also serve I as the orictinal sulscrlltion list, the subil scrtllel-s helrel-to hlave set opposite tic their I natet., the unlnear of shares of stock sub scrtll.l for Iby etIach of t!ihen. TIhus doine and passledl, In my octlch- In the ' eity of New c(rleanls. In the plresence f .1 I Itlanc Moenroee and John laluten. Jr., con, ietent witnesses, of lawful age, who have signed thceir tanames with tile said appearers 1 and mIe. notary. on the day and date afore said. after reading the whole. cOriginal signed) : J. R. Bannon. 45 shares: A. J. cihapman, 45 shares : E. A. Kelley. 45 shares: A. W. Wenham. 43 shares: I'dolpho Wolfe-. 45 shares: J. E. Schenk. 43 shares: ieo. W. Smith. 43 shares: It. J. Anderson. 45 shares: Ernest Miller. 45 shares : W. T. Brewer, 45 shares. tWitnesses- : J. Blanc Monroe, John kal men, Jr. m ('sTArI R. VENTIF-I.DT. JR., Notaryl Public. I, the undersigned, recorder of mortgages In and for the parish of Orleans. state of L,ousiana, do hereby certify that the above and foregoing act of in'orporation of The l.ouslana Company. Limited. was this day duly recorded in my office, in book 1018. folio s:ts. New Orleans, April 20th. 1911. (Original signed) EMILE LaONARI, (Se-al) . R. A true copy: G4csTAr R. WESTFELDT, JR., Notary Public. apl 27 may 4 11 18 25 Jun 1 1911 CHARTER OF "JOSEI'lP P. SIMONE COMPANY, LIMITED." UNITED STATES OF AMERICA, STATE I OF LOUISIANA. PARISH OF ORLEANS, CITY OF NEW ORLEANS. Be it known, that on this thirtieth day of the month of March, In the year of our Lord one thousand nine- hundred and eleven and of the independence of the United States of America the one hundred and thirty-fifth, before me. Clifford M. Enastls, a notary public, duly commissioned and sworn, In and for this city and Parish of Orleans, therein residing, and in the pres ence of the witnesses hereinafter named and mndersigaed, personally came and ap peared : the several persons whose names are horeunto subscribed, all above the age of majority and residents of this city, par ish and state, who severally declared that availing themselves of the provisions of the laws of this state relative to the or ganization of corporations in general, and especially of the provisions of Act 78 of 1904, of the General Assembly of Louisi ana, they have covenanted and agreed and by these presents do covenant and agree, bind, form and constitute themselves as well as much persona as may hereafter join or necome associated with them, In a uor poration and body politic In law for the objects and purposes, and under the agree ments and stlpolations followlng to-wit: ARTICLE I. The name and title of this corporatlon shall be "Joseph P. Bimone Company, Ilm Ited." and its domicile shall be In the City of New Orlesns, Parish of Orleans, 8tate of alouisan. Under this corporate name the said corporation shall have power and authorlty to exist and to enjoy secerssion for the full term of nalnety-nine years from the date hereof, unless sooner dissolved; to contract, sue and be sued; to make and use a corporate seal; to break and alter the same at pleasure, to have, hold, purchase, own, recelve, lese, sell, convey, mortgage and hypothecate or pledges roperty, real, rrsona I and mixed; to borrow money and n lead ay portion of its t leaome proceed ig from Its capital stock or otherwise and to give ead receive securitles therefor; to mae advances on reat ad pereonal sec- . ritlees; to elect and appoint much ocers, directors, managers, agents and employees as the iantermats and convenience of the corporation may irel; to mae ead talish seh by-lns rule ad regution for the proper management of the a or the corporation as may be eaessary and proper, and the samen to change, alter, amend or abrogate at pleasure; ed geier ally to do an perform all acts, matters and things as may be Incidental to the corporation or requlsites ,nd necessary to carry out the objects and purposes or the same. ARTICEW II. All citation and legal process eshall be erved on the president of said eorporation, or, In case of his absene or disability, on the vice president. and In the absence or 1 disability of both the president and vice a pesaidet, upoan the secretary-treansurer of a tae sid orporatio. ARTICLE III. The objects and purposesa for whichb this cororation is orngaised and the busineas to be condeted by it are hereby declared to be as follows: To establish and conduet a leneral ship pltng and paekig bsies to deal in pro duce, vetables, gardea seeds and general merchandise; to boy, sell, exchange, barter or trade in any wares or material nlacl dental thereto; to acquire by lease, pur chase, or otherwie any equipment, con satitng of movable or Immovable property, aeeasary for the conduet of said busi aess, and generally to invest the funds of this company in such manner as may be found desirable or necesary. ARTICLE IV. The cagpital stock of this corporation is hereby xed at the sam of tea thousand I dollars ($10,000.00), to be divided nlato anad represented by one hndred (100) shares of the par value of one hundred dollar 1 (P100.00) each. The capiotal stock may bek inereased or dilnlahed ina eompliace with the laws of the btate of louisiana. All I stock shell be plaid for ln cash, Uas calledt: for b the Bard of Directors, or may be ased as fll paid stotk In payment or and In payment of arI rendered, and all stock hea tell paid ad on-assess, No share shall be treasfearred except on the hoss of e corportio or ntil the rtllcatss ev be shre or shares of stok to be LshaU lhave been n delivered to the erpeeto ad daly an 1 ailed, ad the urpoto shall have the right to refts to ma es tr pbr upn Ia boees as long as the owner ota stock Isi ndebtd in_ any bo the eorpeentie a No stokelder shalWlh the righ tol dipse of his steek or b l saum Imtl I he has ret made a writien sar e e -emt thee eratis the Ise sad I * 5 sadstm, sarl ow ee hare a thepu ber enge b es th e dataet oI the receipt of such offer to purchase such share or shares for the benefit of the cor poration at their book value, as shown by the inventory last made. Tne Board of Directors shall have the right to call in and purchase at their book value, as shown by the Inventory last made, such share or shares of stock as may be Inherited from any of the stockloldlers who may die. All retired stock may Is' relasued by said IHoard of Directors, at not less than their book value. There shall lee printed or engraved across each certificate of st'ck the following "Thiese sh'tres are Issued and shall ee held atluject to, the charter and by-laws of this corporation." This corporation shall begin busln.'ss as soon as three thousand dollars ($3',t00.Ot) of the capital stock is sub scribed for. ARTICLE V. All of the corporate powers of this cor poration, and the management aad control of its tbusiness shall Ice invested in and exercised by a Boarfld of Directors to be composed of three members, any two of whom shall constitute a quorum for the transaction of all business, and their de cisions shall be valid. corporate acts. The following persotns shall constitute the first itoard of Diirectors, viz.: John Mallhes. Jo seph I'. Simone. Salvadore l'ellegrinnl. Each Board of Iirectors shall elect from among their number its own officers, as follows : A president. a vice president and a secretary, who shall also be treasurer. The Board of Dlirectors shall have the pI)wer to determine the manner and tihe cau'ses for which vacancies may le de clared on suid Idcard. No pe'rson shall I'w eleitglle as a memler of the Board of Iirect'ors unless a stock holder of this e'orporatlln. Said Board of lirectors shall continue In office until the first Wednesckty in April, !912. on which day, and annually there after a Board of Directors shall be elected bhy Iallot for the term of one year. Such election shall ie held under sntch rules and regulations as may tee provided by the Board of Ilirectors. Any failure from any cause whatever to elect a Hoard of ItIree tors on the iday named shall not dissolve the corporation, but thie Itoard of Direc tors then in office shall hoel over until their successors are chosen. At every else tien and mneeting of the stockholders. each stockhlolder shall lt' entitled to onie vote only. regardless of Ithe' numler of shares registered In his name. and may vote in person only, and not Iby proxy. Any va cancy occurring on said lboard shall be filled by the remaining directors by the ap pointmentt of sonie qualified stocklhobl er who shall serve until his successor I elect eel at the annual election. The Board of Dire-ctors shall have thle right to appolnt or disc.harge such cl'erks. agents, man:lagers. and other employees as may be deemed nec essary or expedilent, and to make. chlange and amend all by-laws,. rules and regunll ttions fIor thie proper condnuct ind manage ment of the affairs, business and concerns of said corporation, not in conflict with this charter or the laws of thlis state: pro vided that the' president may suels'end any employees in his discretion. pending the action of tite Board of Directors. All warrants, checks, drafts, notes or other obligations of this corporation shall lie signed by the secretary-treasurer. Any officer or director may be removed by a najorlty vote of the stockholdlers. When all the directors consent thereto. the notices otherwise required for holding meetings may be dispensed with. ARTICLE VI. Meetings may ice called at the pleasure of the president or upon written request of a majority of the stockholders. All meetings of the stockholders, whether general or special, shall he held only after ten days' written notice to each stock holder. ARTICLE VII. ,No stockholder shall te held lilale or responsible for the contract or faults of said corporation in any further snum than the unpalid balance due the corporation on the shares owned by him, nor shall any mere ltformality in organization have the effect of rendering this charter null, or ex posing a shareholder to any liability he yond the amount remaining unpaid on his subscription to the stock. kRTICLE VIII. This act of incorporation may be amend ed. altered or modified, or said corporation may be dissolved by a vote of three-fourths of the stockholders present at a general meeting of the stockholders convened for that purpose. ARTICLE IX. Whenever this corporation is dissolved, either by limitation or otherwise. Its af- b fairs shall be liquidated under the super glslon of three liquidators to be appointed from amongst the stockholders at a gen eral meeting of the stockholders convened after due notice, as required by Article VI of this charter. Thus done and passed, In my oie, at New Orleans, aforesaid, in the presence t of Philip Shielda and Edmund S. Ogden, domiciled in this city, who sign these pres ents. together with the partles and me, no tary, the day and date first aforesaid. Original signed: John Malibes, J. I'. Si mone. Salvadore Pellegrlnanl. WItaesses : Edmund 8. Ogden, P. F. Shields. C. M. EUSTIS, Not. .Pub. I, the undersigned Recorder of Mortgages I In and for the Parish of Orleans, State of I Loulaiana, do hereby certify that the above and foregoling act of Incorporation of the "this P. BImono Company, Limited," was < this day duly recorded In my odee in book 1018, folio 501. New Orleans. L., April 11, 1911. (Signed) EMILE LEONARD, D. R. . hereby certify the above'and foregoing Sto be a true and correct cpy of the aet Sof incorporation of the "Joeph P. Simone SCompany, Limited," with th exception of I the number of ashares aubscrlbed for by the c raltors, on file and of record in my C. . EUSTIS, Not. Puab. I apl 2 may 4 11 18 25 Jun 1 1911 CHARTER I OP TII SOUTHERN RICE SbALES COMPANY. a UNITED 8TITUS OP AMERICA, sTAT I OP OUI)uIANA, PARISH OP ORLEANS, CITY OF NEW ORLEAN. 1 Se It known, that on this 18th day of Sthe moeth of Apr in the year of our Iord, one thousnd nine hundred and eleven, and of the tadependanee of the Unifted States of America, the one hundred and thlrty4tth, before me, W. Morgan Gur le. a Notary Public, duly commissioned and qualified in and for the Parish of Or I leana, 8tate of Louisiana, sad in the pros sase of the witnesses hereinafter named aad undersigned, personally ame and appeared a the persona whose names are hereunto sub scrlbed, who declared that, availg them. selves of the laws of this state renlative to the organilation of coorations, and the provisioss of the cotatution of this said state, have covenanted and agreed, and do, by these presents, covenant and aree, obli I gate and bind themselves, as well s uch I perseos as may herester become assoec I ted with the, to form and constitute a body politic in law, under the follolwing ar t ticles which they adopt as their charter, to wit: ARTICLE I. The name of th'is corporation shall he the SOUTERRN RICE 8aLE8 COMPANY. with its domielle in the City of New Or leanas, tate of Louilslana, and It shall ea joy mcessimon for a period of ninety-nine S(99) years from the date hereof. It shall have and exereee, for the purpose of Its I bIusles, all the power coaferred by law on similar corporations, and It shall be ao thorised to do and perform any act and thing, and to conduct any businea not I peeilly rohIbited by law to cor rations. It sha have power and autorilty to receive, bay, own, hold, purchase, alienate, leaste, rent, convey, mortgage, Lypothecate, pledge, or otherwise encumber or dispose Iof property, reat personal and mied, to I ue bond, certel ates ad notes or other I evidences of indebtedness: to borrow mo I y, to name and anpoint its anages anad employees, ad fix their cempeasaton, and dlscharge them at pleasure It shall have the power to elect oeers, director ad agents; to establish rse by laws, roles and regulations as may be nee eMeary and proper-~ the conduet and Sm anaeet of Its sead the same d Ito alter ad amed or aboli at pleasure. It shall have the right to ncrease ora I diminsth its csattal ite with no ether - formality than is hereiate provided or. I SIt shall have the power and athority to 1 Scoantret, sue ad be sued a Itscrporate ame ad generally to do ad erform all .ech ets as may he necessar ad propr to excute ad sarrl out the arpmes of Sthis erporattn. It shadll have owe e ad authority to er tor and s _to_ say I e ain t emere l b iss or kq In befis, t . pent or** edh Do You Know That in the average three-minute telephone ",nversation t least 300 words are spoken? That, unlike the telegram, a telephone talk is a mecq sent and answer received? That this is accomplished at one and the same time forkt same price? What would the cost be if you sent by telegraph the sae number of words spoken in the ordinary telephone conveyt tion? Our splendid facilities go everywhere. The rates are reasonable. Save time and money by patronizing us. We transmit money by telephone on reasorable terms. Cumberland Telephone & Telegraph Co,, In Improve Your Parks and Giardens Hinderer's Iron Works 1112-1118 Camp Street - - - - New Orleans, La. Iron Fences Cheaper Than Woo Iren Chain, TaMe., Settee, Flower Boxes, Hanging Pote, A Arches, Vae.*, Feuntein. and Bench.. for Public Parks. Office Steble Fixture., Hitching Poete, Carriges Step., Malleable and Grey! Castingl, Water reughs, Fence Material, Hygienic Drinking Fe Cemetery Fences and Memorial Cro FOR YOUR-- Comfort and Convenien OUR ELEGANT AND COMPLETE LINE OF CABINET, ELEVA OVEN AND STANDARD RANGES NOW ON DISPLAY AT SALESROOM. INQUIRE ABOUT OUR NEW CIRCULATING WA HEATERS. N.O.Gas Light Compan Sid. . swa Plumbing and S. C. Oswad, Sewerin Work OUR SPECIALTY PROMPT ESTIMATES. 401 OPELOUSAS AVE. PHONE ALGIORIEU 2 er enterprise, and to establish such agencies as it may choose for the conduct of its business here or elsewhere. It shall also have the right to act as the agent or proxy for any individuals or corporations. ARTICLE II. All citation and other legal process shall be served on the president of this corpora tion and in the event of his absence or inabllit- to act, then upon the vice-presi dent, at the domicile of this corporation. ARTICLE Ill. The purpose for which this corporation Is organised and the purposes of the busi aess to be carried on by It are hereby de. clared to be: To do a general business in rice, and all of its by-products ; to buy and sell at whole sale or retail, on commission or otherwise; to do a general commercial buiness In any product of any kind; to operate factories, plants or other mills of any kind in con. nection with its said business, both here and elsewhere, and generally to do what ever may be Incidental to a general mer cantile or commission business. ARTICLE IV. The capital stock of this corporation is hereby declared to be: One Hundred Thou sand 'Dollars ($100,000.00), divided nlato one thousand shares (1000) of one hundred dollars ($100.00) each, which said stock shall be paid for in cash or its equivalent, whether paid directly or indirectly, ln such amounts and on such terms and conditions and In such manner as the Board of Direc tors may determine, or it may be issued full paid or non-assessable for property actually purchased. or labor actually per formed or rendered to this corporation. This corporation shall be authorised to commence bustiness as soon as twenty-five thousand dollars ($25,000.00) of its stock shall be subscribed fdr. ARTICLE V. All the corporate powers of this corpor ation shall be vested in and exercised by a Board of Directors of not less than three (3) and not more than seven (7) directors. who must be stockholders and who shall be elected annually on the second Monday in April of each year, by the stockholders commencing on the second Monday In April, 1912. at which time the ofeers of this cor poration shall also be elected by the board Rom amo their number or as soon there after as possible and the directors and o@ cer shall hold their respective ocees uatil their successors are duly elected and qual led. The said ocers are declared to -be a president, a vice-president and a treasurer. They shall also elect a secretary, who may or may not be a member of the board or a stockholder, at the discretion of the board, and the omes of secretary anad treaaurer may be filled by one and the same person as the board may determne. The first Board of Directors is hereby declared to be: .Mr. lordon 6. Orme. Mr. Julius R. Boss and Mr. James I. Pitot, with the aid Gordon 8. Orme as presi dent, the said James L. 1Ptot as vice-pres ident and the said Julius R. Boss as sec tary-treasurer. Which said Board of Directors sand o era shall retain their respective oiees un til the second Monday in April 1912, or until their successors are duly elected and qualified. Any vaeancles occurring la the .oard of Directors before the expiration of their term shall be filled by the stoekhold ers, after ten (10) full days' notice shaball' have been sent to each stockholder by mail to his last known address. Two (2) mem bers of said board shall constitute a quo rm for the transaction of all businea. Eald Board of Directors shall have the power to make such by-laws, rules and reulations as it deems proper, and to alter, amend, break and revoke same at pleasure. No director shall be allowed to sell his stock without first offerin the same to the Board of Directors of this corporation. at its then book-value. and the said offer must be made to the board nla writing and if same is not taken advantage of within thirty days. the said stockholders shall have the right to sell his stock to whom they please. Any director shall have the right to be represented by written proxy given to any stockholder to resenat him at any direc tors meeting, and any director may wavlw any or all notices of meetings by written notle to the seeretary and each member of the board as a also the stockholders shall leave their last address with the seeretary of this corporation and their falling to do so shall be construed as a waiver of all notices. All notices of steckholders meetings s- -se herenl therwlise provided for shall be by tea (10) days written notice in the maaner hereain sed. This nperts n shall retala frnt lien on any stock of say stockholder for any Indebtedeass due this eororation by him. and all trasers of stock shall be made subjeet to this elause t ... e aeye os inSth I d m l-ee ARTICLE VI. The failure to hold an electlg - elect either directors or officers at poration on the day fixed, shall ns this charter or affect this co any way, but the then existiag oficers, or both, shall retain their lye ofices until a meeting can be until such board or omcers are l ARTICLE VII. This charter may be amended, modified, or the capital stock dimlnlished, or the corporatis by a vote of three-fourths ot b issued at any meeting called fikr pose, after forty (401 days wreit thereof shall have been sent by th tarto each stockholder at his haIm .At the dissolution othis either by limitation or otbUrwi, fairs shall be liquidated by the the of Drecto r ouch of thm main 1m oflce, who shall be full power to liquidate and No stockholder shall ever be hl for the contracts or faults of this ation, nor shall any mere the organlsation hereof have the renderlag this charter null or et any stockholder to any liability Thrs dome and passed in my City of New Orleams, on the dW and year heren first above wltI presence of Pierre A. Leloeg, Jr. reace M. Janin, competest hereunto sign their names with appearers and me, notary, after tag of the whole. (Original signed) : GORDON I. JULIUS R. JAMeS L Witnesses : P. A. LELO)NG, JJR. LAWRENCE M. JANIN. W. MORGAN GURIE, .-m I. the undersigned Recordrr o la and for the Parish orf Oig' Louisana, do hereby certi mt and foregolig act of "Southern Rice Sales da duly recorded in my folio 535. New Orleaso, Ia., - (Signed) EMI LE LUONi ,- I certify the above and a true and correct copy th . corporation of the "Southeke Company," together with the the Recorder of Mortgages ed. on fle and of record Is oSe in the ('lt of New W. MORc.AN GURL, Apr 27 may 4 I 18 25 Juse 1 La Fona 111 kuC Utrt ftu Ice Cream Cakes delivered to place in t at thet wanted. Buffet L 11:30 a. m. to 3:00 SOLE AGENTS Park & T' Candi xml | txt
http://chroniclingamerica.loc.gov/lccn/sn88064020/1911-05-18/ed-1/seq-2/ocr/
CC-MAIN-2017-04
refinedweb
7,952
64.85
Chapter 13: SharePoint Client Object Model and jQuery (Professional SharePoint Branding and User Interface Design) Summary: This chapter discusses objects that are supported by the Client Object Model, how to update a site title by using the Client Object Model, common list operations, and using jQuery with SharePoint 2010. Applies to: Business Connectivity Services | SharePoint Foundation 2010 | SharePoint Server 2010 | Visual Studio This article is an excerpt from Professional SharePoint Branding and User Interface Design by Randy Drisgill, John Ross, Jacob J. Sanford, and Paul Stubb, published by Wrox Press (ISBN 978-0-470-58463-7, copyright © 2010 by Wrox, Understanding the Client Object Model Updating the Site Title Using the Client Object Model Common List Operations Using jQuery with SharePoint 2010 Summary About the Authors Introduction There are two basic approaches to accessing data programmatically in Microsoft SharePoint 2010. The first approach is to use the SharePoint API on the server. When you run code directly on the SharePoint server, the SharePoint API gives you complete control over all aspects of SharePoint 2010 and the data. If your application is not running on the server and needs to access the SharePoint data, you need to use the SharePoint web services. The web services offer similar functionality compared to the SharePoint API, although not every function is covered. In SharePoint 2010, you have another option when programming against SharePoint data: the Client Object Model. The Client Object Model is a new approach to remotely programming against SharePoint data. Although using web services gives you broad coverage to SharePoint features, using the programming model and API is very different from using the server API. This makes it difficult for developers, as they need to learn two completely different programming models. Also, calling web services from JavaScript clients is complicated and requires a lot of manual XML creation and manipulation. The Client Object Model solves all these issues, making client-side programming easy and straightforward. The Client Object Model is really three separate object models, one for the .NET CLR, one for Silverlight, and one for JavaScript. The .NET CLR version is used to create applications such as WinForms, Windows Presentation Foundation (WPF), and console applications, as well as PowerShell scripts. The Silverlight version works with both in-browser and out-of-browser Silverlight applications. The JavaScript version enables your Ajax and jQuery code to call back to SharePoint. You will read more about the Silverlight version in Chapter 14: Silverlight and SharePoint Integration (Professional SharePoint Branding and User Interface Design). We are not going to go into the .NET CLR version in this chapter, but as you will see in Chapter 14, once you learn how the Client Object Model works, you can easily program against any version. In this chapter, you will see how the Client Object Model works with JavaScript and jQuery to perform some common development tasks. Understanding the Client Object Model Understanding how the Client Object Model works will help you be more effective across all three versions. As shown in Figure 13-1, the Client Object Model fundamentally is a new Windows Communication Foundation (WCF) SharePoint service called Client.svc and three proxies: .NET CLR, Silverlight, and JavaScript. You program against the client proxies just as you do with any service. Calls to and from the server are sent in batches to increase network efficiency. Figure 1. Client Object Model Architecture Objects supported by the Client Object Model One of the first questions that comes up when talking about the Client Object Model is, "What can you do with it?" Again, understanding how the Client Object Model works will help you appreciate its capabilities and limitations. The Client Object Model proxies are based on the Microsoft.SharePoint.dll. Right away this establishes what you can do to this API. The Microsoft.SharePoint API supports the most common objects, such as sites, webs, content types, lists, folders, navigations, and workflows. Obviously, this is not an exhaustive list. So, how do you find out if a particular object is supported in the Client Object Model? Fortunately, the help documentation for SharePoint 2010 is very good, and you can find the complete list by looking at the Microsoft.SharePoint.Client Namespace on MSDN. Another way to find out if a particular object is supported is to look at the attributes on the Microsoft.SharePoint API. Remember that the client proxies are created from the server API (Microsoft.SharePoint Namespace). The tool knows what to put in the client proxies by looking for the attribute named ClientCallableTypeAttribute. The Client Object Model does not require a reference to the server assemblies. The ClientCallableTypeAttribute also specifies the name to call the object in the proxy. This brings up another important point about the Client Object Model. The names in the Client Object Model have been cleaned up a little by removing the 'sp' from the beginning of most of the objects. But knowing how the Client Object Model is implemented enables you to look in the help documentation and see what the server object is and what the client object is called. For example, open the help page for the SPSite class. In the Syntax section at the top of the help topic, you can see the ClientCallableTypeAttribute and the name that it will be called in the proxy classes. [ClientCallableTypeAttribute( Name = "Site", ServerTypeId = "{E1BB82E8-0D1E-4e52-B90C-684802AB4EF6}")] [SubsetCallableTypeAttribute] public class SPSite : IDisposable At the heart of the Client Object Model is the ClientContext class, the object used to access the objects returned from the server and the queries sent to the server. The first step to programming the Client Object Model is to always get a reference to the current ClientContext using the static method on the ClientContext class, SP.ClientContext.get_current(). You will see how to use the ClientContext class in each of the examples throughout this chapter. Although it is not possible to go into great depth about the entire Client Object Model API here, you should now have an understanding of how the Client Object Model works, and how to figure out what's in the client API, and how it maps to the server API. This was one of the goals of the SharePoint team: to make it easy for someone who already knows the server API to be productive quickly on the client API. Let's take a look at how to program with the Client Object Model in practice by walking through a few common scenarios. The two main scenarios you will see are how to use the Client Object Model to read and write properties of the SharePoint site, and how to read and write data from the SharePoint List. Updating the Site Title Using the Client Object Model One of the first examples to work with is to retrieve and update the site title. This is a very simple example, but it shows some of the basic patterns of using the Client Object Model. To retrieve and update a site title Open Visual Studio and create a new empty SharePoint project called ClientOMProject. Add a Visual Web Part project item to the project and name it ChangeTitleWebPart. Using a Visual Web Part will require this to be a farm solution. Open the ChangeTitleWebPart.ascx page. Click the Design button at the bottom of the page to switch between Source view and Design view. Add an HTML textbox and a button. Once in Design view, drag an Input (Text) control and an Input (Button) control from the HTML section of the Toolbox pane on the left side. Add the button click handler. Double-click the button to create the JavaScript click event handler. Visual Studio will generate the following code for you. <script language="javascript" type="text/javascript"> // <![CDATA[ function Button1_onclick() { } // ]]> </script> <p> <input id="Text1" style="width: 229px" type="text" /> <input id="Button1" type="button" value="button" onclick="return Button1_onclick()" /></p> Call the method to retrieve the site's title property. You need to use the Client Object Model to retrieve the site's title property, but you need to make sure the Client Object Model is loaded first. Add the following code inside your script tag to ensure the Client Object Model is ready before you use it. ExecuteOrDelayUntilScriptLoaded(GetTitle, "sp.js"); Create the GetTitle method to retrieve the site's title property. In the previous step, you added code to call the GetTitle method when the Client Object Model was ready. Now you will implement the GetTitle method using the Client Object Model. Add the following code to retrieve the site's title property. var site; var context; function GetTitle() { //Get the current client context context = SP.ClientContext.get_current(); //Add the site to query queue site = context.get_web(); context.load(site); //Run the query on the server context.executeQueryAsync(onQuerySucceeded, onQueryFailed); } First you define the site and context variable to hold a reference for later. In the GetTitle function, you first get a reference to the current client context. This is the context that the page is running in, which in this example is intranet.contoso.com. The second step is to define which objects you want returned and load them onto the context's query queue using the load method. The last step is to execute the calls on the server. In Silverlight and JavaScript this is an asynchronous call, so you will need to define a succeeded and failed callback handler. Implement the failed callback handler. If your call to the server fails, the failed callback handler will fi re. The callback provides some information to help you debug the problem such as the error message and call stack information. Add the following code to implement a failed callback handler that will display an alert box with the error message. function onQueryFailed(sender, args) { alert('request failed ' + args.get_message() + '\n' + args.get_stackTrace()); } Implement the succeeded callback handler. Add the following code to set the value of the textbox to the site's title property. function onQuerySucceeded(sender, args) { document.getElementById("Text1").value = site.get_title(); } Update the site's title on the button click event. Up to this point you have retrieved the Site's title, now you will add the ability to update the title. Add the following code to the button click handler. function Button1_onclick() { //Update the Site's title property site.set_title(document.getElementById("Text1").value); site.update(); //Add the site to query queue context.load(site); //Run the query on the server context. executeQueryAsync(onTitleUpdate, onQueryFailed); } function onTitleUpdate(sender, args) { } Press F5 to run the application. Once the site opens, you will need to add the ChangeTitleWebPart to the page. You should see something similar to Figure 2. Figure 2. Example site page Refresh the page. You may have noticed that the title does not appear to update when you click the button. In fact, it does update but you will need to refresh the page to see the changes. Add the following code to refresh the page after the title is updated. function onTitleUpdate(sender, args) { SP.UI.ModalDialog.RefreshPage(SP.UI.DialogResult.OK); } This code actually uses the new SharePoint dialog framework APIs to perform the page refresh. Running the application now will display the site's title in the text box and then set the site's title when you click the button. After the title is updated, the page refreshes to show the new site title. This is a very simple example, but it clearly demonstrates a code pattern for using the Client Object Model: Define the objects to retrieve, load them on the query queue, execute them on the server, and handle the callback. As you look at more complex examples in the next section, you will use this pattern over and over again. Common List Operations Now that you have a good understanding of the fundamentals, let's look at a few of the most common tasks you will use the Client Object Model for: reading and writing SharePoint list data. In this sample, you will add a Visual Web Part to the solution and add buttons to call the various operations. You also will see how to use a separate JavaScript fi le to contain most of the Client Object Model code to call SharePoint. To read and write SharePoint list data Add the following code to your Visual Web Part. This code will add five buttons for the different operations and some boilerplate code to call the functions. It also contains a reference to the external ListOperations.js file. <script src="/SiteAssets/ListOperations.js" type="text/javascript"></script> <script language="javascript" type="text/javascript"> // <![CDATA[ function CreateButton_onclick() { CreateList(); } function AddButton_onclick() { AddListItem(); } function ReadButton_onclick() { ReadListItem(); } function UpdateButton_onclick() { UpdateListItem(); } function DeleteButton_onclick() { DeleteListItems(); } // ]]> </script> <p><input id="CreateButton" type="button" value="Create List" style="width: 150px" onclick="CreateButton_onclick()" /></p> <p><input id="AddButton" type="button" value="Add List Item" style="width: 150px" onclick="AddButton_onclick()" /></p> <p><input id="ReadButton" type="button" value="Read List Item" style="width: 150px" onclick="return ReadButton_onclick()" /></p> <p><input id="UpdateButton" type="button" value="Update List Item" style="width: 150px" onclick="return UpdateButton_onclick()" /></p> <p><input id="DeleteButton" type="button" value="Delete List Item" style="width: 150px" onclick="return DeleteButton_onclick()" /></p> Add a JavaScript file called ListOperations.js file to your Visual Web Part node. Set the deployment type property to Element File. And add a Module node to the elements file to deploy the ListOperations.js file to the Assets library. Deploying this to the Assets library will allow you to deploy this as a sandboxed solution to both on-premise SharePoint sites and to SharePoint online sites. Add the following text to the Elements.xml file. You should need to add only the second Module node, as the first one was added by Visual Studio. <?xml version="1.0" encoding="utf-8"?> <Elements xmlns="" > <Module Name="ListOperationsVisualWebPart" List="113" Url="_catalogs/wp"> <File Path="ListOperationsVisualWebPart\ListOperationsVisualWebPart.webpart" Url="ListOperationsVisualWebPart.webpart" Type="GhostableInLibrary"> <Property Name="Group" Value="Custom" /> </File> </Module> <Module Name="jsFiles"> <File Path="ListOperationsVisualWebPart\ListOperations.js" Url="SiteAssets/ListOperations.js" /> </Module> </Elements> Now that you have the Visual Web Part complete, you should deploy it and verify that you can add the Web Part to the site and that the ListOperations.js file is in the Assets library. You should see something similar to Figure 3. Now you are ready to focus on the actual JavaScript. Figure 3. Customized site page Creating lists There are many ways to create lists in SharePoint. But sometimes you may need to create them dynamically from the client. Using the Client Object Model makes this very easy and straightforward. As you saw earlier, you need to get a reference to the ClientContext first. Next, you create a ListCreationInfo object to define the list and add it to the collection of lists in SharePoint. Finally, you define the actual field definitions for the columns of the list. The most difficult part about creating a custom list is defining the fields. The fields are defined using Collaborative Markup Language (CAML). To see all the values, view Field Element on MSDN. Add the following code to your ListOperations.js file. This code will create a list called Airports that has two fields, AirportName and AirportCode. var site; var context; function CreateList() { var listTitle = "Airports"; var listDescription = "List of Airports"; //Get the current client context context = SP.ClientContext.get_current(); var site = context.get_web(); //Create a new list var listCreationInfo = new SP.ListCreationInformation(); listCreationInfo.set_templateType(SP.ListTemplateType.genericList); listCreationInfo.set_title(listTitle); listCreationInfo.set_description(listDescription); listCreationInfo.set_quickLaunchOption(SP.QuickLaunchOptions.on); var airportList = site.get_lists().add(listCreationInfo); //Create the fields var airportNameField = airportList.get_fields().addFieldAsXml( '<Field DisplayName=\'AirportName\' Type=\'Text\' />', true, SP.AddFieldOptions.defaultValue); var airportCodeField = airportList.get_fields().addFieldAsXml( '<Field DisplayName=\'AirportCode\' Type=\'Text\' />', true, SP.AddFieldOptions.defaultValue); context.load(airportNameField); context.load(airportCodeField); context.executeQueryAsync(CreateListSucceeded, CreateListFailed); } function CreateListSucceeded() { alert('List created.'); } function CreateListFailed(sender, args) { alert('Request failed. ' + args.get_message() + '\n' + args.get_stackTrace()); } You can also create predefined lists, such as an announcements list, without the need to define any fields. You can also define other field types, such as numbers, date and time, and Boolean. To see all the values, view Field Element on MSDN. Adding list items Now that you have created a custom list, the next thing to do is to add new records to the list. To create a list item, first create a new list item using the ListItemCreationInformation object, and then add it to the list using the addItem method. Next, you set the values for all the fields in the list item using the set_item method. Finally, you call the update method on the list item to flag it for updating before calling executeQueryAsync to commit the changes on the server. Add the following code to the ListOperations.js file. function AddListItem() { var listTitle = "Airports"; //Get the current client context context = SP.ClientContext.get_current(); var airportList = context.get_web().get_lists().getByTitle(listTitle); //Create a new record var listItemCreationInformation = new SP.ListItemCreationInformation(); var listItem = airportList.addItem(listItemCreationInformation); //Set the values listItem.set_item('AirportName', 'Seattle/Tacoma'); listItem.set_item('AirportCode', 'SEA'); listItem.set_item('Title', 'SEATAC'); listItem.update(); context.load(listItem); context.executeQueryAsync(AddListItemSucceeded, AddListItemFailed); } function AddListItemSucceeded() { alert('List Item Added.'); } function AddListItemFailed(sender, args) { alert('Request failed. ' + args.get_message() + '\n' + args.get_stackTrace()); } Reading list items Reading lists follows the same pattern as other Client Object Model calls. First you get a reference to the client context and a reference to the list you want to read items from. In this case, you will read items from the Airports list that you created in the previous section. Next, you need to create a CAML query to specify which items you want returned. Tip You can learn more about Introduction to Collaborative Application Markup Language (CAML) on MSDN. Building CAML queries can be fairly complicated. It is recommended that you use a CAML query builder to help you build the correct query. A number of community CAML query tools are available on CodePlex. You can find the one that works best for you by searching CodePlex. SharePoint provides a method, createAllItemsQuery, which returns a valid CAML string that returns all items. If you look at the createAllItemsQuery method in SP.Debug.js or debug the code, you will see the following CAML string that is returned. <View Scope=\"RecursiveAll\"> <Query></Query> </View> After you specify the CAML query, you load the list items query on the context, and then execute the query on the server asynchronously by calling executeQueryAsync. Once the query returns from the server, in this case the ReadListItemsSucceeded method is called. Iterate over the returned items by getting a reference to the enumerator and calling the moveNext method in a while loop. On each iteration of the loop, you can get the current list item using the get_current method of the item. Finally, once you have a reference to the item, you can call the get_item method, passing in the string name of the field you want. Add the following code to the ListOperations.js fi le to read the list items._id().toString() + '\n'; } alert(itemsString); } function ReadListItemFailed(sender, args) { alert('Request failed. ' + args.get_message() + '\n' + args.get_stackTrace()); } Updating list items Updating list items is very straightforward. First, get a reference to the current context, and then get a reference to the list item that you want to update. This can be done in a number of ways. For example, you could use a CAML query to find the correct record or to iterate over a number of records. This example gets a reference to the record by using the record ID. Once you have the item, you can call the set_item method, passing in the name of the field and the updated value. You need to call the update method on the item to flag that it is to be updated. Finally, call the executeQueryAsync method to perform the update on the server. Add the following code to the ListOperations.js fi le to update the first record in the Airports list. function UpdateListItem() { var listTitle = "Airports"; //Get the current client context context = SP.ClientContext.get_current(); var airportList = context.get_web().get_lists().getByTitle(listTitle); //Get the list item to update var listItem = airportList.getItemById(2); //Set the new property value listItem.set_item('AirportName', 'Seattle Tacoma Airport'); listItem.update(); context.executeQueryAsync(UpdateItemSucceeded, UpdateItemFailed); } function UpdateItemSucceeded() { alert('List Item Updated.'); } function UpdateItemFailed(sender, args) { alert('Request failed. ' + args.get_message() + '\n' + args.get_stackTrace()); } As a brief aside, one of the problems is verifying that you have the correct item ID. In this case I have only one record but I used the ID of 2. This is because I previously created a record and deleted it. Then when I created the record again, SharePoint assigned it the next available ID, which was 2. This makes it difficult to get the correct ID during development. One easy technique is to use the List Service to view the list data as a REST request. SharePoint will return the data as an OData Atom feed in which you can verify the fields and the records, including the item IDs. For example, if you browse to the Airports list using the List Service path at\_vti\_bin/listdata.svc/Airpor. You will see the following Atom feed. <?xml version="1.0" encoding="utf-8" standalone="yes"?> <feed xml: <title type="text">Airports</title> <id></id> <updated>2010-07-14T16:52:35Z</updated> <link rel="self" title="Airports" href="Airports" /> <entry m: <id></id> <title type="text">SEATAC</title> <updated>2010-07-14T09:52:18-07:00</updated> <author> <name /> </author> <link rel="edit" title="AirportsItem" href="Airports(2)" /> <link rel="" type="application/atom+xml;type=entry" title="CreatedBy" href="Airports(2)/CreatedBy" /> <link rel="" type="application/atom+xml;type=entry" title="ModifiedBy" href="Airports(2)/ModifiedBy" /> <link rel="" type="application/atom+xml;type=feed" title="Attachments" href="Airports(2)/Attachments" /> <category term="Microsoft.SharePoint.DataService.AirportsItem" scheme="" /> <content type="application/xml"> <m:properties> <d:ContentTypeID>0x010041919BD85A48CA4B95F735848786C29C</d:ContentTypeID> <d:Title>SEATAC</d:Title> <d:AirportName>Seattle Tacoma Airport</d:AirportName> <d:AirportCode>SEA</d:AirportCode> <d:Id m:2</d:Id> <d:ContentType>Item</d:ContentType> <d:Modified m:2010-07-14T09:52:18</d:Modified> <d:Created m:2010-07-11T22:09:54</d:Created> <d:CreatedById m:16</d:CreatedById> <d:ModifiedById m:16</d:ModifiedById> <d:Owshiddenversion m:2</d:Owshiddenversion> <d:Version>1.0</d:Version> <d:Path>/Lists/Airports</d:Path> </m:properties> </content> </entry> </feed> You can see the ID in a couple of places, first in the href attribute in the link node and again in the Id field under the properties node. This query actually returns all the records in the list. (In this case, there is only one record.) To drill down on to a specific record, you could use a path with the item ID in parentheses at the end,\_vti\_bin/listdata.svc/Airports(. Deleting list items Deleting records using the Client Object Model is very similar to updating records. First, get a reference to the current context. Next, get a reference to the list item that you want to update. Again, in this example I will get a reference to the record by using the record ID. Once you have a reference to the list item, call the deleteObject method to mark the record for deletion. Then call executeQueryAsync to perform the deletion of the record on the server. Add the following code to the ListOperation.js file. function DeleteListItems() { var listTitle = "Airports"; //get the current client context context = SP.ClientContext.get_current(); var airportList = context.get_web().get_lists().getByTitle(listTitle); //get the list item to delete var listItem = airportList.getItemById(2); //delete the list item listItem.deleteObject(); context.executeQueryAsync(DeleteItemSucceeded, DeleteItemFailed); } function DeleteItemSucceeded() { alert('List Item Deleted.'); } function DeleteItemFailed(sender, args) { alert('Request failed. ' + args.get_message() + '\n' + args.get_stackTrace()); } You have seen some of the most common methods for operating with list data in SharePoint. Although this doesn't cover everything that you can do with the Client Object Model, you should have enough information to understand the basic pattern that is used in all the operations. With this basic information, you will be able to understand the reference documentation and samples in the SharePoint SDK and on MSDN. The next section looks briefly at how you can combine the power of the Client Object Model with the flexibility of jQuery. Using jQuery with SharePoint 2010 jQuery is an open source JavaScript library that helps you build rich, dynamic, client-side applications. The power in jQuery comes from its simplicity and powerful query syntax. One of jQuery’s most powerful abilities is to quickly select various HTML DOM elements. Once you find the element or collection of elements, jQuery makes it easy to modify attributes and CSS for those elements. jQuery also supports extensibility through a rich plug-in model. In fact, a huge community of jQuery plug-ins is available. It is actually a core design point of jQuery to keep the core library small and provide most of the rich functionality via plug-ins. Although it is not possible to cover all aspects of jQuery in this chapter, there is one very important jQuery API with which SharePoint developers and designers should become familiar: the Ajax library. You learned about calling SharePoint from the client using the Client Object Model earlier in this chapter, but the Client Object Model doesn’t cover all SharePoint functionality. For example, Search is not covered by the Client Object Model and many others. The Client Object Model covers only APIs in the Microsoft.SharePoint.dll. This is where the jQuery Ajax library comes into play. Fortunately, SharePoint covers almost all its functionality with SOAP-based .asmx web services. The Ajax library makes it relatively easy to call these web services using jQuery from the client. In this section, you will see how to call SharePoint web services using jQuery and dynamically display the results in a Content Editor Web Part (CEWP), without writing any server code. You can download the jQuery library from the jQuery homepage. The current version as of thiswriting is 1.4.2. The jQuery library is a single fi le called jquery-1.4.2.js. There are actually two versions of this file. jquery-1.4.2.js — A human-readable source version. jquery-1.4.2.min.js — A minified and condensed version I recommend using the source version for development and the minified version in production. Download the jquery-1.4.2.js file and put it in somewhere on your SharePoint site. Create a Scripts folder under the SiteAssets library to hold your JavaScript files. The path would be something similar to. To add the jQuery library, use the following script tag on your page. <script src="/SiteAssets/Scripts/jquery-1.4.2.js" type="text/javascript"></script> Another option is to use the jQuery library hosted on Microsoft’s content delivery network (CDN). The CDN geographically distributes the fi le around the world, making it faster for clients to download the file. With SharePoint on-premise installations, such as your intranet, this is not as important, but with SharePoint Online or SharePoint-based Internet sites, this will increase the perceived performance of your site. Add the following script tag to your page to use the Microsoft CDN to load the jQuery library. <script src=""type="text/javascript"></script> Ajax script loader One thing that you need to be concerned with when using jQuery is that the jQuery library is loaded only once. There are a number of ways that you could do this, but this section mentions three ways and the various caveats associated with each method. The first method is to just include the script tags, like you saw previously, directly to the page or, even better, to the master page. You would need to ensure that no other components also add a reference to the jQuery library. Here, the term "components" refer to anything that may inject code when the page renders, such as Web Parts. This is an acceptable approach if you control the entire page, but many times this is not possible due to the modular nature of SharePoint development. The next approach is to use the ScriptLink control. The ScriptLink control ensures that the script is loaded only once and will also ensure that other dependencies have been loaded first. Add the following ScriptLink server-side tag to your page to load the jQuery library. <SharePoint:ScriptLink </SharePoint:ScriptLink> The ScriptLink control requires that you put the jQuery library file in the LAYOUTS directory, C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\LAYOUTS. This may not be possible if you have limited rights to the server, such as when you are creating sandboxed solutions. Also, even if the JavaScript library is in the LAYOUTS folder, the ScriptLink control is not allowed to run as a sandboxed solution. Therefore, I do not recommend this approach. The third method, and the one that you should use, is to load jQuery using the Microsoft Ajax script loader, or another client-side script loader. One thing to be aware of is that the Microsoft ASP.NET Ajax library is now included as part of the Ajax Control Toolkit. This means that the ASP.NET Ajax library was split into server controls, which are now in the Ajax Control Toolkit, and client code, which is now done using jQuery. So, most of the functionality that was provided is now done in jQuery or through a jQuery plug-in, except the script loader. The Ajax library script loader has not been released yet for jQuery, so you will need to use the existing Start.js script loader library until it is released. Download the Start.js library to your Site Assets library’s Script folder that you created earlier to hold your scripts. You can find the current script loader on Microsoft’s CDN at the following URL. .microsoft.com/ajax/beta/0910/Start.js. You should also download the source version for development from the following URL.. Alternatively, you could load the Start.js library directly from the Microsoft CDN. There are two steps to loading the jQuery library, or any of your custom JavaScript libraries. First, reference the script loader on your page using the following script tag. <script src= "/SiteAssets/Scripts/Start.debug.js " type= "text/javascript "></script> Or, if you are loading the library from the CDN, use the following script tag instead. <script src= " "type= "text/javascript"></script> The second step is to reference the jQuery library or your own libraries using the Sys.loadScripts method, which is part of the Start.js library. The Sys.loadScripts method takes an array of scripts to load and a callback function to call when they have been loaded. Add the follow code to load the jQuery library. <script type= "text/javascript "> Sys.loadScripts(["/SiteAssets/Scripts/jquery-1.4.2.js "], function() { alert("jQuery Loaded "); }); </script> The Ajax Script Loader prevents the jQuery library from being loaded multiple times on the same page, even if you add many Web Parts that are using this code. Calling SharePoint web services with jQuery You have seen how to get SharePoint list data using the Client Object Model, but there are many types of SharePoint data that are not covered by the Client Object Model. The Client Object Model applies only to data in the Microsoft.SharePoint.dll, essentially functionality found in SharePoint Foundation only. To leverage other SharePoint data, such as profile data or search data, you will need to call the SharePoint web services. Calling these web services from the client using JavaScript has become much easier using the jQuery Ajax API. Let’s first take a quick look at how to retrieve list data, in this case the Announcements list, using jQuery. You could do this using the Client Object Model, but this example should serve as a bridge from doing it with the Client Object Model to doing it with jQuery. jQuery in the Content Editor web part To keep things simple and demonstrate another technique for using JavaScript on your pages, you will use the Content Editor Web Part (CEWP) to display a list of announcements. This example does not require Visual Studio; everything can be done using only a web browser. To display a list of announcements by using JavaScript Start by adding a CEWP to the right column of your home page. You can find the CEWP in the Web Part gallery under the Media and Content category. Put the Web Part into edit mode by selecting Edit Web Part from the Web Part’s context menu. Click the link in the Web Part titled Click here to add new content. Next, edit the source HTML for the Web Part. Click the Editing Tools context-sensitive Format Text tab on the ribbon. In the Markup Ribbon group, select Edit HTML source from the HTML drop-down button. In the HTML source dialog, add the following code. <!--Load the Script Loader--> <script src= "/SiteAssets/Scripts/Start.debug.js" type= "text/javascript"></script> <!-- Load jQuery library--> <script type= "text/javascript"> Sys.loadScripts(["/SiteAssets/Scripts/jquery-1.4.2.js"], function() { GetAnnouncements(); }); </script> <script type= "text/javascript"> function GetAnnouncements() { var</ul> The GetAnnouncements function builds the SOAP message and then uses the jQuery.ajax API to call the lists.asmx web service. The jQuery.ajax calls the GetListItemsCompleted callback method when the web service returns. The GetListItemsComplete method parses the XML data that returns from the lists.asmx web service. As it parses each record in the XML data, it appends a list item to the Announcements list using the appendTo function. There are two key pieces to calling various SharePoint web services. The first is to understand the exact SOAP XML that is required to call the service, and the second is to understand the returned XML data and how to parse it to extract the exact values required. Although these change between the various services, the code pattern is the same for all services. Unfortunately, discovering how to format the SOAP message can be a challenge. Although MSDN documents the methods, it does not tell you the exact SOAP format or which parameters are optional. One of the easiest ways to discover the syntax is to create a console application in Visual Studio that calls the web service you are interested in calling from JavaScript. Then use the web debugging tool Fiddler to intercept and inspect the web service calls. Summary In this chapter you have seen how the new Client Object Model makes accessing SharePoint data as easy on the client as it is on the server. The Client Object Model covers the Microsoft.SharePoint.dll API on the client through a proxy object model that closely mirrors the server object model. The Client Object Model offers a very efficient calling pattern that not only gives you control over when and how often you call the server but also gives you control over the amount of data that is returned. You have learned how you can leverage the power of jQuery to access SharePoint web services using the jQuery.Ajax API. You have also seen a number of different approaches to loading the jQuery library and other custom libraries. In the end, jQuery and the Client Object Model are complementary techniques to bring all the power of SharePoint to the client to create very rich applications that can run in both on-premise and online scenarios. These two techniques for accessing SharePoint data from the client will enable you to create dynamic branding solutions based on data in SharePoint. About the Authors Randy Drisgill is a consultant with SharePoint911. He is a Microsoft MVP for SharePoint Server and the coauthor of Professional SharePoint 2007 Design. John Ross is a consultant with SharePoint911 and a Microsoft MVP. He is an active member of the SharePoint community and a frequent guest speaker. Jacob J. Sanford is a senior consultant for Cornerstone Software Services. Paul Stubbs works at Microsoft and is a frequent speaker at Tech Ed and Dev Connections events about branding and SharePoint.
https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint-2010/gg701783(v=office.14)
CC-MAIN-2019-09
refinedweb
6,056
56.05
A simple devRant api wrapper for the lazy people Project description devRantSimple A simple devRant api wrapper for the lazy people Installation First, get it from pip pip install devRantSimple Then, import the library import devRantSimple as dRS That's it! Usage These are the avalible functions and vars: import devRantSimple as dRS # Rant Types # These are passed in to some functions to specify what data you want returned dRS.RantType.algo # Algo dRS.RantType.top # Top dRS.RantType.recent # Recent # Invalid Response # This is a string returned by some functions when something goes wrong. # It is always a smart idea to check if the thing returned by the function you are using is equal to this # If it is, you messsed up somewhere. (or the api changed) dRS.InvalidResponse # Functions dRS.getUserId("<username>") # Get a user id from a username (returns an int) dRS.userExists("<username>") # Check to see if a user exists with this username (returns a bool) dRS.getRant(dRS.RantType.<type>, <n>) # Get the n'th rant from list <type> # Example return data for these parameters: (dRS.RantType, 1): # {'id': 1604103, 'text:': "Oh yeah. Hey guys. 2 things. \nFirst off. Forgot to say. Officially got a job. Finally. So thank you for all the help/advice and patience with my depressive rants!! \n\nI'm in a new chapter of my life now so thanks. \n\nAnd secondly. \n\nI FUCKING HATE MY JOB", 'score': 66, 'username': 'al-m'} dRS.getUserRant("<username>", <n>) # Get the n'th most recent rant posted by <username> # Example return data for these parameters: ("ewpratten", 1): # {'text': 'I wonder..\n\nDo the new devduck capes say "devrant.com" on the back? Or do they still say "devrant.io"', 'score': 20, 'tags': ['devrant', 'i wonder'], 'id': 1600704} More functions, data, and info will come soon. Example This is an example script that gets every rant posted by a user (one at a time) and prints it to the screen: import devRantSimple ad dRS username = "ewpratten prevdata = "" i = 0 while prevdata != dRS.InvalidResponse: prevdata = dRS.getUserRant(username, i)["text"] print(prevdata) i+=1 Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/devRantSimple/
CC-MAIN-2019-47
refinedweb
373
66.84
This Python3 CGI HTTPS server used to work a few weeks (or months) ago, but now no longer works under Linux (Ubuntu). I tried on Ubuntu 10.04 and Ubuntu 14.04 and the behavior is the same. Now when I try to access any CGI script I am getting: Secure Connection Failed An error occurred during a connection to 127.0.0.1:4443. SSL received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long) import http.server import ssl import os server_address = ('', 4443) cert = os.path.abspath('./server.pem') handler = http.server.CGIHTTPRequestHandler handler.cgi_directories = ['/cgi-bin'] httpd = http.server.HTTPServer(server_address, handler) httpd.socket = ssl.wrap_socket(httpd.socket, server_side=True, certfile=cert) print ("Server started...") httpd.serve_forever() File "/usr/lib/python3.4/ssl.py", line 618, in read v = self._sslobj.read(len, buffer) ssl.SSLError: [SSL: SSLV3_ALERT_UNEXPECTED_MESSAGE] sslv3 alert unexpected message (_ssl.c:1767) I found the answer at: The solution is to add: http.server.CGIHTTPRequestHandler.have_fork=False # Force the use of a subprocess before starting the server. This is required for Mac and Unix implementation because, for efficiency reasons, they employ a fork to start the process that executes the CGI rather than creating a subprocess as used by other implementations (i.e. Windows). In a non-wrapped CGI implementation the fork works fine and the output is sent to the socket correctly, however, when the socket is SSL wrapped things go terribly wrong. The solution is to force the Unix and Mac implementations to use a subprocess leaving the SSL socket happily working and having the Python Server transfer the output of the CGI script to the client while translating the output into SSL. I still have no clue why this used to work!
https://codedump.io/share/HeLh49HD0ajY/1/python3-cgi-https-server-fails-on-unix
CC-MAIN-2017-09
refinedweb
293
59.9
Component should implements HasEnabled ? Component should implements HasEnabled ? Hello, I think Component class would benefit for implementing HasEnabled interface (there'd be no API change, since Component already has setEnabled(boolean)/isEnabled). The benefit is that when i use the enabled/disabled feature on a view of my MVP, i would declare HasEnabled in my view, and just return the component in the view implementation. For now, i need to wrap the component in a anonymous inner class of type HasEnabled and forward the call to setEnabled/isEnabled into the component's setEnabled/isEnabled. As an example : My view Code: public interface View { HasEnabled myOkButton(); interface Presenter { } } Code: public class ViewImpl implements View { private TextButton okButton; @Override public HasEnabled myOkButton() { return new HasEnabled() { @Override public void setEnabled(boolean enabled) { okButton.setEnabled(enabled); } @Override public boolean isEnabled() { return okButton.isEnabled(); } }; } } Code: public class ViewImpl implements View { private TextButton okButton; @Override public HasEnabled myOkButton() { return okButton; } } Hermann Thanks for the report. I've filed an API review ticket and will update this thread with any decision or changes we make. Like you said, Component already implements these methods, so this change seems to make sense at first glance. I've had a chance to review this and have committed the change to SVN as r2619. The fix for this shortcoming has been included in the public release of Sencha GXT 3.0.0. You should be able to observe this change in our source. Although we're confident that this issue has been resolved, please reply here (or start a new bug thread linking to this one) if you have any issues with this change.
https://www.sencha.com/forum/showthread.php?195217-Component-should-implements-HasEnabled&s=f9d1ca447914606da383a04609e68975&p=794682
CC-MAIN-2015-18
refinedweb
273
52.8
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode. Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript). Hello again; here's an observation that I find interesting. Python and C++ use different object storage concepts (e.g. a simple type like integer is actually a reference in Python but a value in C++), so whatever object is constructed in C++ cannot be used 1:1 in Python (no memory mirroring). That means for the Python API, every object returned by a function cannot be identical to the C++ object that C4D itself uses. Python would not understand the memory allocations there. So, the API function must return a proxy object that provides access for Python to the actual C++ objects. (Right?) Here's a test (sorry for using the GeListHead again, this topic has nothing to do with my previous one): GeListHead import c4d def main(): obj = op prevHead = None print "---------------" while obj != None: head = obj.GetListHead() print type(head), id(head), head if prevHead != None: print prevHead == head, prevHead is head else: print "None" prevHead = head obj = obj.GetNext() if __name__=='__main__': main() The script goes through some objects in the Object Manager which are all on the same tree level, starting with the selected one, and check the GeListHead for each. The result is compared with the previous GeListHead by equality comparison == and identity comparison is. The output looks like this: --------------- <type 'c4d.GeListHead'> 1809691147248 <c4d.GeListHead object at 0x000001A559FF7BF0> None <type 'c4d.GeListHead'> 1809691146800 <c4d.GeListHead object at 0x000001A559FF7A30> True False <type 'c4d.GeListHead'> 1809691146928 <c4d.GeListHead object at 0x000001A559FF7AB0> True False <type 'c4d.GeListHead'> 1809691147248 <c4d.GeListHead object at 0x000001A559FF7BF0> True False >>> Theoretically, the GeListHead should be the same object - identical, at the same address in memory - for all objects, as the objects belong to the same tree structure. That is not the case: the type is correct, but Python's own id() function returns a different number, and the memory address printed by the plain object printout is also different. These numbers repeat after three iterations, presumably because the garbage collection has destroyed the Python proxy by then and is reusing its properties. id() It is logical that the equality test returns True (as the object is identical on the C++ side) but the identity test returns False (as the proxy object is different)... at least, it is logical in this interpretation. True False Unfortunately, the objects should be identical too. If we were working in C++, the plain pointers would be the same, referencing the same object. The Python API adds an abstraction level which destroys the identity relation. I guess it would be necessary for the API to check for existing proxys and reuse those. (In the sample code, this would prevent the garbage collection from destroying the proxy until the very end of the script, as there would always be a reference to it through prevHead.) prevHead Is that interpretation of mine even correct? I'm kind of reverse engineering here... So, this raises first the question how to check for identity of two C4D objects from different references. The BaseObject has GetGUID() available for comparisons; a GeListHead as used here is no BaseObject though, and doesn't provide a unique ID. BaseObject GetGUID() Second, are there any more Python development details we have to be careful with, because the Python/C++ interface may introduce unforeseen difficulties? e.g. memory allocation: C++ has an explicit allocation, while Python uses a garbage collector. Python will destroy an object when there are no more references to it (which in turn may destroy other objects that become non-referenced). Will that properly happen to a C4D object too? Like, I Remove() a BaseObject from a tree and then the variable where I store it goes out of scope... I haven't found evidence to the contrary, but it must be hellishly complicated to count references if the code changes between C++ modifications and Python modifications to the "same" object when methods of a Python plugin are executed. Remove() e.g. destructor calls: when is the C4D object actually destroyed? The garbage collector may destroy the Python proxy later (C++: immediately) when it happens to run. Is the destructor of the C4D object called when the proxy is destroyed, or earlier (when the reference counter drops to 0), or later? If there are dependent objects (like tags for a BaseObject), when are these destroyed (provided the BaseObject C++ destructor takes care of them)? e.g. Generator functions: these keep their state and execution point between calls, so all variables are preserved; including any proxy objects handling C4D objects. These are obviously vulnerable to changes between calls (okay, that's a bad example as Python has the same issue). I've trying to find details on the C4D/Python interfacing in the Python manual but can't locate any deeper information. One way to "identify" objects in a C4D scene is to use the position in the object tree. That is what the "unique IP" system is using (GetUniqueIP(), GetUniqueID()). Hi, yes, that is an undocumented oddity of the Cinema 4D SDK. To test nodes for identity you have to use the equality operator and an equality comparison is performed by comparing the data containers of the nodes. One related and rather annoying side-effect of this memory location peek-a-boo is that almost all types in Cinema are not hashable, i.e. do not implement __hash__, even those where you would not really expect it, like for example c4d.Vector (although vectors are testable for identity via is). __hash__ c4d.Vector is They probably should document hat more thoroughly. Cheers, zipit Hi @Cairyn you are right, our Python Object is in most of the time a holder of a C++ object pointer, and copy the python Object actually copy the pointer, not the pointed object which is way more optimized. I would like to say that trust something on the pointer even in C++ is not really safe as Baseobject or GeListHead object can change. So that's why it recommended sticking to GetGUID/GeMarker/UniqueIP. And all of this stuff is accessible through python, for GeMarker you can retrieve a BitSeq with C4DAtom.FindUniqueID(c4d.MAXON_CREATOR_ID) see Layers with the same name?. For more information about how to identify objects, I let you read about Why are GUIDs not globally unique?. But I would say in Python you have the same tool than in C++ except that you can access raw data, but you shouldn't trust pointer as they can change a lot (e.g. GetActiveObject() before and after an undo will not return the same pointed BaseObject). Cheers, Maxime. @PluginStudent said in Identity of C4D objects in Python: One way to "identify" objects in a C4D scene is to use the position in the object tree. That is what the "unique IP" system is using (GetUniqueIP(), GetUniqueID()). Thanks for the suggestion - GetUniqueIP() is only available in BaseObject though. As far as I understand it, it is meant for the use with generators, so it's probably not "universal" enough to be used in arbitrary situations. @m_adam Thank you for the confirmation and the reading suggestions. Just for context, this is a general conceptual question and not connected to a specific code issue. I noticed the behavior while doing some test scripts with id() and is. It is worth noting that these Python properties need to be used carefully in concert with C4D objects. (I wouldn't exactly recommend using pointer comparisons in C++ either ) I will include an "advanced" chapter in my Python/C4D book to mention this.
https://plugincafe.maxon.net/topic/12512/identity-of-c4d-objects-in-python
CC-MAIN-2021-31
refinedweb
1,302
62.98
On Tue, 15 Jul 2003 03:16:08 +0100, Stephen Horne <intentionally at blank.co.uk> wrote: >On 14 Jul 2003 23:31:56 GMT, kamikaze at kuoi.asui.uidaho.edu (Mark >'Kamikaze' Hughes) wrote: > >> It looks like you've almost reached understanding, though you've been >>given a bad start with a very idiosyncratic and monolingual >>understanding of what "computer science" teaches--that was certainly not >>in the classes I took, but apparently yours were different. > >Really. My computer science lessons were taught in a way that >respected the source of most of the theory in mathematics - >algorithms, Church's lambda calculus and that kind of stuff. > >What did your lessons teach? > >> Since every operation in Python operates on pointers, there's no use >>in having a special syntax for it. You don't need all the * and & and >>-> line noise. > >The point of the thread is not pointers - they are a side issue. > >When you use variables, you are using a concept from mathematics. In >mathematics, variables bind to values. All values are immutable. > >Python binds variables to objects, not values. For immutable objects >this is an unimportant implementation detail. For mutable objects, it >breaks the mathematical principle of variables being bound to values. Only if you persist in thinking of Python name bindings as variable bindings in your sense. They are not. They are aliases. That concept exists in C/C++ and is e.g. a major concern in optimization. > >> Stop trying to make Python into C/C++, and you'll be happier with it. >>Or stop using Python, if you really don't like the design philosophy. >>There are plenty of Algol-derived languages out there. PHP and >>especially Perl are more C-like in their internal logic, and you might >>find them more pleasant. > >This is bogus. > >I don't want Python to become C or C++. I want Python to respect >principles that come from mathematics and computer science. Not for >reasons of theory pedanticism, but because the current system can and >does regularly cause confusion and errors. > That always happens when you use the wrong theory to interpret your data ;-) Quit projecting your "variable binding" idea onto Python's name bindings. It doesn't fit. Not because it's bad, but because it's the wrong theory. >The fact that Python claims to be a very high level language, and yet >you have to worry about the binding of variables to objects - Python's bindings are not about what you call variables. I think we should call them by a different word, e.g., aliases. I.e., different names for the same thing. >something that should be a low level implementation detail - has very >real everyday implications. > >Respect the idea of variables binding to values and suddenly the need >for pointers becomes more obvious. You cannot abuse mutable objects to >fake pointer functionality (another everyday fact of Python >programming) if the binding of variables to values (rather than just >objects) is respected. > Quit it with this "respect" thing ;-) Python's name bindings just aren't your variable bindings, and Python's names aren't variable names. They're aliases. Persisting in calling Python's names variables-in-your-sense is disrespect ;-) (See another post for a namespace (attibute name space of class NSHorne instances ;-) that forces name binding to copied objects having equal values). Regards, Bengt Richter
https://mail.python.org/pipermail/python-list/2003-July/220144.html
CC-MAIN-2016-40
refinedweb
564
66.23
Opened 2 years ago Last modified 9 months ago Compiling the module below works fine in GHC 6.4.2. In GHC 6.6 and 6.6.1, it gives a type error. /Koen {-# OPTIONS -fglasgow-exts #-} module Bug where import Control.Monad.ST import Data.STRef newtype M s a = MkM (STRef s Int -> ST s a) runM :: (forall s . M s a) -> a runM mm = runST ( do ref <- newSTRef 0 m ref ) where MkM m = mm -- the instance declaration and function definition -- of "inc" are just here for giving context; -- removing them still makes runM not type check in GHC 6.6 instance Monad (M s) where return x = MkM (\_ -> return x) MkM m >>= k = MkM (\ref -> do x <- m ref let MkM m' = k x m' ref ) inc :: M s Int inc = MkM (\ref -> do n <- readSTRef ref writeSTRef ref (n+1) return n ) Aha. You are the second person to discover that pattern bindings are monomorphic by default. See I've added this report to the feedback there. Meanwhile, you can use -fno-mono-pat-binds or don't use a pattern binding. I'll mark this 'wont-fix', because the above page is the place for feedback. Simon Interesting, that makes two of three be newtype-unwrapping. It'd be ugly to have an exception for that, though.... In this case could where MkM m = mm easily become where m = unM mm for the usual newtype-unwrapping "un" function unM :: M s a -> (STRef s Int -> ST s a) unM (MkM m) = m ? It compiles for me with that technique, unM either as a function or a record selector works fine. By Edgewall Software. To edit, login as user guest, password guest
http://hackage.haskell.org/trac/ghc/ticket/1369
crawl-002
refinedweb
288
82.54
Mike Waychison wrote:> Tim Hockin wrote:>> >On Wed, Aug 25, 2004 at 04:25:24PM -0400, Rik van Riel wrote:>> >>>You can think of this as chroot on steroids.> >>> >>Sounds like what you want is pretty much the namespace stuff> >>that has been in the kernel since the early 2.4 days.> >>> >>No need to replicate VFS functionality inside the filesystem.>>> >When I was at Sun, we talked a lot about this. Mike, does Sun have any> >iterest in this?>>> Not that I know of. I believe the functionality Hans is looking for has> already been handled by SELinux.Everybody who takes a 3 minute read of SELinux keeps saying it has, but it hasn't quite, not when you look at the details. SELinux is not written by filesystem folks, and there are scalability details that matter.> What is needed (if it doesn't already> exist) is a tool to gather these 'viewprints' automagically.It doesn't exist, and viewprints are also not stored with executables either, so it is not process oriented.People think the problem is allowing the OS to enact fine grained security. It is not. The problem is allowing the user to enact fine grained security, and without a lot of work to automate it, users will continue to be unable to bear that time cost.>> --> Mike Waychison> Sun Microsystems, Inc.> 1 (650) 352-5299 voice> 1 (416) 202-8336 voice>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~> NOTICE: The opinions expressed in this email are held by me,> and may not represent the views of Sun Microsystems, Inc.> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2004/8/26/26
CC-MAIN-2016-07
refinedweb
285
73.17
.6. See also the API Documentation, and the index of names and types. s build definition, scopes, and task graph. But we don’t promise that it’s a good idea to skip the other pages in the guide. It’s best to read in order, as later pages in the Getting Started Guide build on concepts introduced earlier. Thanks for trying out sbt and have fun! To create an sbt project, you’ll need to take these steps:. Follow Follow the link to install Java SE Development Kit 8. Download ZIP or TGZ package and expand it. Download msi installer and install it. Download ZIP or TGZ package and expand it. You must first install a JDK. We recommend Oracle JDK 8 or OpenJDK 8. The details around the package names differ from one distribution to another. For example, Ubuntu xenial (16.04LTS) has openjdk-8-jdk. Redhat family calls it java-1.8.0-openjdk-devel.: Note: There’s been reports about SSL error using Ubuntu: Server access Error: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty url=, which apparently stems from OpenJDK 9 using PKCS12 format for /etc/ssl/certs/java/cacerts cert-bug. According to it is fixed in Ubuntu Cosmic (18.10), but Ubuntu Bionic LTS (18.04) is still waiting for a release. See the answer for a woraround.. On Fedora, sbt 0.13.1 available on official repos. If you want to install sbt 1.1.6 or above, you may need to uninstall sbt 0.13 (if it’s install) and indicate that you want to install the newest version of sbt (i.e. sbt 1.1.6 or above) using bintray-sbt-rpm.repo then. sudo dnf remove sbt # uninstalling sbt if sbt 0.13 was installed (may not be necessary) sudo dnf --enablerepo=bintray--sbt-rpm install sbt Note: Please report any issues with these to the sbt-launcher-package project. The official tree contains ebuilds for sbt. To install the latest available version do: emerge dev-java/sbt This page assumes you’ve installed sbt 1. Let’s start with examples rather than explaining how sbt works or why. $ mkdir foo-build $ cd foo-build $ touch build.sbt $ sbt [info] Updated file /tmp/foo-build/project/build.properties: set sbt.version to 1.1.4 [info] Loading project definition from /tmp/foo-build/project [info] Loading settings from build.sbt ... [info] Set current project to foo-build (in build file:/tmp/foo-build/) [info] sbt server started at local:///Users/eed3si9n/.sbt/1.0/server/abc4fb6c89985a00fd95/sock sbt:foo-build> To leave sbt shell, type exit or use Ctrl+D (Unix) or Ctrl+Z (Windows). sbt:foo-build> exit As a convention, we will use the sbt:...> or > prompt to mean that we’re in the sbt interactive shell. $ sbt sbt:foo-build> compile Prefixing the compile command (or any other command) with ~ causes the command to be automatically re-executed whenever one of the source files within the project is modified. For example: sbt:foo-build> ~compile [success] Total time: 0 s, completed May 6, 2018 3:52:08 PM 1. Waiting for source changes... (press enter to interrupt) Leave the previous command running. From a different shell or in your file manager create in the project directory the following nested directories: src/main/scala/example. Then, create Hello.scala in the example directory using your favorite editor as follows: package example object Hello extends App { println("Hello") } This new file should be picked up by the running command: [info] Compiling 1 Scala source to /tmp/foo-build/target/scala-2.12/classes ... [info] Done compiling. [success] Total time: 2 s, completed May 6, 2018 3:53:42 PM 2. Waiting for source changes... (press enter to interrupt) Enter to exit ~compile. From sbt shell, press up-arrow twice to find the compile command that you executed at the beginning. sbt:foo-build> compile Use the help command to get basic help about the available commands. sbt:foo-build> help about Displays basic information about sbt and the build. tasks Lists the tasks defined for the current project. settings Lists the settings defined for the current project. reload (Re)loads the current project or changes to plugins project or returns from it. new Creates a new sbt build. projects Lists the names of available projects or temporarily adds/removes extra builds to the session. project Displays the current project or changes to the provided `project`. .... Display the description of a specific task: sbt:foo-build> help run Runs a main class, passing along arguments provided on the command line. sbt:foo-build> run [info] Packaging /tmp/foo-build/target/scala-2.12/foo-build_2.12-0.1.0-SNAPSHOT.jar ... [info] Done packaging. [info] Running example.Hello Hello [success] Total time: 1 s, completed May 6, 2018 4:10:44 PM sbt:foo-build> set ThisBuild / scalaVersion := "2.12.7" ThisBuild / organization := "com.example" lazy val hello = (project in file(".")) .settings( name := "Hello" ) Use the reload command to reload the build. The command causes the build.sbt file to be re-read, and its settings applied. sbt:foo-build> reload [info] Loading project definition from /tmp/foo-build/project [info] Loading settings from build.sbt ... [info] Set current project to Hello (in build file:/tmp/foo-build/) sbt:Hello> Note that the prompt has now changed to sbt:Hello>. Using an editor, change build.sbt as follows: ThisBuild / scalaVersion := "2.12.7" ThisBuild / organization := "com.example" lazy val hello = (project in file(".")) .settings( name := "Hello", libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.5" % Test, ) Use the reload command to reflect the change in build.sbt. sbt:Hello> reload sbt:Hello> test sbt:Hello> ~testQuick Leaving the previous command running, create a file named src/test/scala/HelloSpec.scala using an editor: import org.scalatest._ class HelloSpec extends FunSuite with DiagrammedAssertions { test("Hello should start with H") { assert("hello".startsWith("H")) } } ~testQuick should pick up the change: 2. Waiting for source changes... (press enter to interrupt) [info] Compiling 1 Scala source to /tmp/foo-build/target/scala-2.12/test-classes ... [info] Done compiling. [info] HelloSpec: [info] - Hello should start with H *** FAILED *** [info] assert("hello".startsWith("H")) [info] | | | [info] "hello" false "H" (HelloSpec.scala:5) [info] Run completed in 135 milliseconds. [info] Total number of tests run: 1 [info] Suites: completed 1, aborted 0 [info] Tests: succeeded 0, failed 1, canceled 0, ignored 0, pending 0 [info] *** 1 TEST FAILED *** [error] Failed tests: [error] HelloSpec [error] (Test / testQuick) sbt.TestsFailedException: Tests unsuccessful Using an editor, change src/test/scala/HelloSpec.scala to: import org.scalatest._ class HelloSpec extends FunSuite with DiagrammedAssertions { test("Hello should start with H") { // Hello, as opposed to hello assert("Hello".startsWith("H")) } } Confirm that the test passes, then press Enter to exit the continuous test. Using an editor,, ) We can find out the current weather in New York. sbt:Hello> console [info] Starting scala interpreter... Welcome to Scala 2.12.7 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_171). Type in expressions for evaluation. Or try :help. scala> :paste // Entering paste mode (ctrl-D to finish) import gigahorse._, support.okhttp.Gigahorse import scala.concurrent._, duration._ Gigahorse.withHttp(Gigahorse.config) { http => val r = Gigahorse.url("").get. addQueryString( "q" -> """select item.condition from weather.forecast where woeid in (select woeid from geo.places(1) where text='New York, NY') and u='c'""", "format" -> "json" ) val f = http.run(r, Gigahorse.asString) Await.result(f, 10.seconds) } // press Ctrl+D // Exiting paste mode, now interpreting. import gigahorse._ import support.okhttp.Gigahorse import scala.concurrent._ import duration._ res0: String = {"query":{"count":1,"created":"2018-05-06T22:49:55Z","lang":"en-US", "results":{"channel":{"item":{"condition":{"code":"26","date":"Sun, 06 May 2018 06:00 PM EDT", "temp":"16","text":"Cloudy"}}}}}} scala> :q // to quit, ) lazy val helloCore = (project in file("core")) .settings( name := "Hello Core", ) Use the reload command to reflect the change in build.sbt. sbt:Hello> projects [info] In file:/tmp/foo-build/ [info] * hello [info] helloCore sbt:Hello> helloCore/compile Change build.sbt as follows: ThisBuild / scalaVersion := "2.12.7" ThisBuild / organization := "com.example" val scalaTest = "org.scalatest" %% "scalatest" % "3.0.5" lazy val hello = (project in file(".")) .settings( name := "Hello", libraryDependencies += "com.eed3si9n" %% "gigahorse-okhttp" % "0.3.1", libraryDependencies += scalaTest % Test, ) lazy val helloCore = (project in file("core")) .settings( name := "Hello Core", libraryDependencies += scalaTest % Test, ) Set aggregate so that the command sent to hello is broadcast to helloCore too: ThisBuild / scalaVersion := "2.12.7" ThisBuild / organization := "com.example" val scalaTest = "org.scalatest" %% "scalatest" % "3.0.5" lazy val hello = (project in file(".")) .aggregate(helloCore) .settings( name := "Hello", libraryDependencies += "com.eed3si9n" %% "gigahorse-okhttp" % "0.3.1", libraryDependencies += scalaTest % Test, ) lazy val helloCore = (project in file("core")) .settings( name := "Hello Core", libraryDependencies += scalaTest % Test, ) After reload, ~testQuick now runs on both subprojects: sbt:Hello> ~testQuick Enter to exit the continuous test. Use .dependsOn(...) to a add dependency on other subprojects. Also let’s move the Gigahorse dependency to helloCore. ThisBuild / scalaVersion := "2.12.7" ThisBuild / organization := "com.example" val scalaTest = "org.scalatest" %% "scalatest" % "3.0.5" lazy val hello = (project in file(".")) .aggregate(helloCore) .dependsOn(helloCore) .settings( name := "Hello", libraryDependencies += scalaTest % Test, ) lazy val helloCore = (project in file("core")) .settings( name := "Hello Core", libraryDependencies += "com.eed3si9n" %% "gigahorse-okhttp" % "0.3.1", libraryDependencies += scalaTest % Test, ) Let’s add Play JSON to helloCore.) .settings( name := "Hello", libraryDependencies += scalaTest % Test, ) lazy val helloCore = (project in file("core")) .settings( name := "Hello Core", libraryDependencies ++= Seq(gigahorse, playJson), libraryDependencies += scalaTest % Test, ) After reload, add core/src/main/scala/example/core/Weather.scala: package example.core import gigahorse._, support.okhttp.Gigahorse import scala.concurrent._ import play.api.libs.json._ object Weather { lazy val http = Gigahorse.http(Gigahorse.config) def weather: Future[String] = { val r = Gigahorse.url("").get. addQueryString( "q" -> """select item.condition |from weather.forecast where woeid in (select woeid from geo.places(1) where text='New York, NY') |and u='c'""".stripMargin, "format" -> "json" ) import ExecutionContext.Implicits._ for { f <- http.run(r, Gigahorse.asString) x <- parse(f) } yield x } def parse(rawJson: String): Future[String] = { val js = Json.parse(rawJson) (js \\ "text").headOption match { case Some(JsString(x)) => Future.successful(x.toLowerCase) case _ => Future.failed(sys.error(rawJson)) } } } Next, change src/main/scala/example/Hello.scala as follows: package example import scala.concurrent._, duration._ import core.Weather object Hello extends App { val w = Await.result(Weather.weather, 10.seconds) println(s"Hello! The weather in New York is $w.") Weather.http.close() } Let’s run the app to see if it worked: sbt:Hello> run [info] Compiling 1 Scala source to /tmp/foo-build/core/target/scala-2.12/classes ... [info] Done compiling. [info] Compiling 1 Scala source to /tmp/foo-build/target/scala-2.12/classes ... [info] Packaging /tmp/foo-build/core/target/scala-2.12/hello-core_2.12-0.1.0-SNAPSHOT.jar ... [info] Done packaging. [info] Done compiling. [info] Packaging /tmp/foo-build/target/scala-2.12/hello_2.12-0.1.0-SNAPSHOT.jar ... [info] Done packaging. [info] Running example.Hello Hello! The weather in New York is mostly cloudy. Using an editor, create project/plugins.sbt: addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.3.4") Next change build.sbt as follows to add JavaAppPackaging:> dist [info] Wrote /tmp/foo-build/target/scala-2.12/hello_2.12-0.1.0-SNAPSHOT.pom [info] Wrote /tmp/foo-build/core/target/scala-2.12/hello-core_2.12-0.1.0-SNAPSHOT.pom [info] Your package is ready in /tmp/foo-build/target/universal/hello-0.1.0-SNAPSHOT.zip Here’s how you can run the packaged app: $ /tmp/someother $ cd /tmp/someother $ unzip -o -d /tmp/someother /tmp/foo-build/target/universal/hello-0.1.0-SNAPSHOT.zip $ ./hello-0.1.0-SNAPSHOT/bin/hello Hello! The weather in New York is mostly cloudy. sbt:Hello> Docker/publishLocal .... [info] Successfully built b6ce1b6ab2c0 [info] Successfully tagged hello:0.1.0-SNAPSHOT [info] Built image hello:0.1.0-SNAPSHOT Here’s how to run the Dockerized app: $ docker run hello:0.1.0-SNAPSHOT Hello! The weather in New York is mostly cloudy Change build.sbt as follows: ThisBuild / version := "0.1.0"> ++2.11.12! [info] Forcing Scala version to 2.11.12 on all projects. [info] Reapplying settings... [info] Set current project to Hello (in build file:/tmp/foo-build/) Check the scalaVersion setting: sbt:Hello> scalaVersion [info] helloCore / scalaVersion [info] 2.11.12 [info] scalaVersion [info] 2.11.12 scalaVersion [info] 2.12.7 This setting will go away after reload. To find out more about dist, try help and inspect. sbt:Hello> help dist Creates the distribution packages. sbt:Hello> inspect dist To call inspect recursively on the dependency tasks use inspect tree. sbt:Hello> inspect tree dist [info] dist = Task[java.io.File] [info] +-Universal / dist = Task[java.io.File] .... You can also run sbt in batch mode, passing sbt commands directly from the terminal. $ sbt clean "testOnly HelloSpec" Note: Running in batch mode requires JVM spinup and JIT each time, so your build will run much slower. For day-to-day coding, we recommend using the sbt shell or a continuous test like ~testQuick. You can use the sbt new command to quickly setup a simple “Hello world” build. $ sbt new sbt/scala-seed.g8 .... A minimal Scala project. name [My Something Project]: hello Template applied in ./hello When prompted for the project name, type hello. This will create a new project under a directory named hello. This page is based on the Essential sbt tutorial written by William “Scala William” Narmontas. This page assumes you’ve installed sbt and seen sbt by example. In sbt’s terminology, the “base directory” is the directory containing the project. So if you created a project hello containing /tmp/foo-build/build.sbt as in the sbt by example, /tmp/foo-build/). This page describes how to use sbt once you have set up your project. It assumes you’ve installed sbt and went through sbt by example. Run sbt in your project directory with no arguments: $ sbt Running sbt with no command line arguments starts sbt shell. sbt shell has a command prompt (with tab completion and history!). For example, you could type compile at the sbt shell: > compile To compile again, press up arrow and then enter. To run your program, type run. To leave sbt shell, type exit or use Ctrl+D (Unix) or Ctrl+Z (Windows). You can also run sbt in batch mode, specifying a space-separated list of sbt commands as arguments. For sbt commands that take arguments, pass the command and arguments as one argument to sbt by enclosing them in quotes. For example, $ sbt clean compile "testOnly TestA TestB" In this example, testOnly has arguments, TestA and TestB. The commands will be run in sequence ( clean, compile, then testOnly). Note: Running in batch mode requires JVM spinup and JIT each time, so your build will run much slower. For day-to-day coding, we recommend using the sbt shell or Continuous build and test feature described below. := true. To speed up your edit-compile-test cycle, you can ask sbt to automatically recompile or run tests whenever you save a source file. Make a command run when one or more source files change by prefixing the command with ~. For example, in sbt shell try: > ~testQuick Press enter to stop watching for changes. You can use the ~ prefix with either sbt shell or batch mode. See Triggered Execution for more details. Here are some of the most common sbt commands. For a more complete list, see Command Line Reference. sbt shell has tab completion, including at an empty prompt. A special sbt convention is that pressing tab once may show only a subset of most likely completions, while pressing it more times shows more verbose choices. sbt shell remembers history, even if you exit sbt and restart it. The simplest way to access history is with the up arrow key. The following commands are also supported: This page describes sbt build definitions, including some “theory” and the syntax of build.sbt. It assumes you have installed a recent version of sbt, such as sbt 1.2.6, know how to use sbt, and have read the previous pages in the Getting Started Guide. This page discusses the build.sbt build definition. As part of your build definition you will specify the version of sbt that your build uses. This allows people with different versions of the sbt launcher to build the same projects with consistent results. To do this, create a file named project/build.properties that specifies the sbt version as follows: sbt.version=1.2.6 If the required version is not available locally, the sbt launcher will download it for you. If this file is not present, the sbt launcher will choose an arbitrary version, which is discouraged because it makes your build non-portable. A build definition is defined in build.sbt, and it consists of a set of projects (of type Project). Because the term project can be ambiguous, we often call it a subproject in this guide. For instance, in build.sbt you define the subproject located in the current directory like this: lazy val root = (project in file(".")) .settings( name := "Hello", scalaVersion := "2.12.7" ) Each subproject is configured by key-value pairs. For example, one key is name and it maps to a string value, the name of your subproject. The key-value pairs are listed under the .settings(...) method as follows: lazy val root = (project in file(".")) .settings( name := "Hello", scalaVersion := "2.12" ) Let’s take a closer look at the build.sbt DSL: Each entry is called a setting expression. Some among them are also called task expressions. We will see more on the difference later in this page. A setting expression consists of three parts: := On the left-hand side, name, version, and scalaVersion are keys. A key is an instance of SettingKey[T], TaskKey[T], or InputKey[T] where T is the expected value type. The kinds of key are explained below. Because key name is typed to SettingKey[String], the := operator on name is also typed specifically to String. If you use the wrong value type, the build definition will not compile: lazy val root = (project in file(".")) .settings( name := 42 // will not compile ) build.sbt may also be interspersed with vals, lazy vals, and defs. Top-level objects and classes are not allowed in build.sbt. Those should go in the project/ directory as Scala source files. There are three flavors of key:. Check out Input Tasks for more details. The built-in keys are just fields in an object called Keys. A build.sbt implicitly has an import sbt.Keys._, so sbt.Keys.name can be referred to as name.. Note: Typically, lazy vals are used instead of vals to avoid initialization order problems. key-value pairs describing the subproject: lazy val hello = taskKey[Unit]("An example task") lazy val root = (project in file(".")) .settings( hello := { println("Hello!") } ) We already saw an example of defining settings when we defined the project’s name, lazy val root = (project in file(".")) .settings( name := "hello" ) task graph. In sbt shell,. You can place import statements at the top of build.sbt; they need not be separated by blank lines. There are some implied default imports, as follows: import sbt._ import Keys._ (In addition, if you have auto plugins, the names marked under autoImport will be imported.) the plugins. To depend on third-party libraries, there are two options. The first is to drop jars in lib/ (unmanaged dependencies) and the other is to add managed dependencies, which will look like this in build.sbt: val derby = "org.apache.derby" % "derby" % "10.4.1.3" ThisBuild / organization := "com.example" ThisBuild / scalaVersion := "2.12.7" ThisBuild / version := "0.1.0-SNAPSHOT" lazy val root = (project in file(".")) .settings( name := "Hello", libraryDependencies += derby ) This is how you add a managed dependency on the Apache Derby library, version 10.4.1.3. The libraryDependencies key involves two complexities: += rather than :=, and the % method. += appends to the key’s old value rather than replacing it, this is explained in Task Graph. The % method is used to construct an Ivy module ID from strings, explained in Library dependencies. We’ll skip over the details of library dependencies until later in the Getting Started Guide. There’s a whole page covering it later on. This. Continuing. This Zerotask, Zero config, Zero.. discuss scope delegation in detail later. +") This page describes scope delegation. It assumes you’ve read and understood the previous pages, build definition and scopes. Now that we’ve covered all the details of scoping, we can explain the .value lookup in detail. It’s ok to skip this section if this is your first time reading this page. To summarize what we’ve learned so far: Zerofor any of the scope axes. ThisBuildfor the subprojects axis only. Testextends Runtime, and Runtimeextends Compileconfiguration. ${current subproject} / Zero / Zeroby default. /operator. Now let’s suppose we have the following build definition: lazy val foo = settingKey[Int]("") lazy val bar = settingKey[Int]("") lazy val projX = (project in file("x")) .settings( foo := { (Test / bar).value + 1 }, Compile / bar := 1 ) Inside of foo’s setting body a dependency on the scoped key Test / bar is declared. However, despite Test / bar being undefined in projX, sbt is still able to resolve Test / bar to another scoped key, resulting in foo initialized as 2. sbt has a well-defined fallback search path called scope delegation. This feature allows you to set a value once in a more general scope, allowing multiple more-specific scopes to inherit the value. Here are the rules for scope delegation: Zero, which is non-task scoped version of the scope. Zero(same as unscoped configuration axis). ThisBuild, and then Zero. We will look at each rule in the rest of this page. In other words, given two scope candidates, if one has more specific value on the subproject axis, it will always win regardless of the configuration or the task scoping. Similarly, if subprojects are the same, one with more specific configuration value will always win regardless of the task scoping. We will see more rules to define more specific. Zero, which is non-task scoped version of the scope. Here we have a concrete rule for how sbt will generate delegate scopes given a key. Remember, we are trying to show the search path given an arbitrary (xxx / yyy).value. Exercise A: Given the following build definition: lazy val projA = (project in file("a")) .settings( name := { "foo-" + (packageBin / scalaVersion).value }, scalaVersion := "2.11.11" ) What is the value of projA / name? "foo-2.11.11" "foo-2.12.7" The answer is "foo-2.11.11". Inside of .settings(...), scalaVersion is automatically scoped to projA / Zero / Zero, so packageBin / scalaVersion becomes projA / Zero / packageBin / scalaVersion. That particular scoped key is undefined. By using Rule 2, sbt will substitute the task axis to Zero as projA / Zero / Zero (or projA / scalaVersion). That scoped key is defined to be "2.11.11". Zero(same as unscoped configuration axis). The example for that is projX that we saw earlier: lazy val foo = settingKey[Int]("") lazy val bar = settingKey[Int]("") lazy val projX = (project in file("x")) .settings( foo := { (Test / bar).value + 1 }, Compile / bar := 1 ) If we write out the full scope again, it’s projX / Test / Zero. Also recall that Test extends Runtime, and Runtime extends Compile. Test / bar is undefined, but due to Rule 3 sbt will look for bar scoped in projX / Test / Zero, projX / Runtime / Zero, and then projX / Compile / Zero. The last one is found, which is Compile / bar. ThisBuild, and then Zero. Exercise B: Given the following build definition: ThisBuild / organization := "com.example" lazy val projB = (project in file("b")) .settings( name := "abc-" + organization.value, organization := "org.tempuri" ) What is the value of projB / name? "abc-com.example" "abc-org.tempuri" The answer is abc-org.tempuri. So based on Rule 4, the first search path is organization scoped to projB / Zero / Zero, which is defined in projB as "org.tempuri". This has higher precedence than the build-level setting ThisBuild / organization. Exercise C: Given the following build definition: ThisBuild / packageBin / scalaVersion := "2.12.2" lazy val projC = (project in file("c")) .settings( name := { "foo-" + (packageBin / scalaVersion).value }, scalaVersion := "2.11.11" ) What is value of projC / name? "foo-2.12.2" "foo-2.11.11" The answer is foo-2.11.11. scalaVersion scoped to projC / Zero / packageBin is undefined. Rule 2 finds projC / Zero / Zero. Rule 4 finds ThisBuild / Zero / packageBin. In this case Rule 1 dictates that more specific value on the subproject axis wins, which is projC / Zero / Zero that is defined to "2.11.11". Exercise D: Given the following build definition: ThisBuild / scalacOptions += "-Ywarn-unused-import" lazy val projD = (project in file("d")) .settings( test := { println((Compile / console / scalacOptions).value) }, console / scalacOptions -= "-Ywarn-unused-import", Compile / scalacOptions := scalacOptions.value // added by sbt ) What would you see if you ran projD/test? List() List(-Ywarn-unused-import) The answer is List(-Ywarn-unused-import). Rule 2 finds projD / Compile / Zero, Rule 3 finds projD / Zero / console, and Rule 4 finds ThisBuild / Zero / Zero. Rule 1 selects projD / Compile / Zero because it has the subproject axis projD, and the configuration axis has higher precedence over the task axis. Compile / scalacOptions refers to scalacOptions.value, we next need to find a delegate for projD / Zero / Zero. Rule 4 finds ThisBuild / Zero / Zero and thus it resolves to List(-Ywarn-unused-import). You might want to look up quickly what is going on. This is where inspect can be used. sbt:projd> inspect projD / Compile / console / scalacOptions [info] Task: scala.collection.Seq[java.lang.String] [info] Description: [info] Options for the Scala compiler. [info] Provided by: [info] ProjectRef(uri("file:/tmp/projd/"), "projD") / Compile / scalacOptions [info] Defined at: [info] /tmp/projd/build.sbt:9 [info] Reverse dependencies: [info] projD / test [info] projD / Compile / console [info] Delegates: [info] projD / Compile / console / scalacOptions [info] projD / Compile / scalacOptions [info] projD / console / scalacOptions [info] projD / scalacOptions [info] ThisBuild / Compile / console / scalacOptions [info] ThisBuild / Compile / scalacOptions [info] ThisBuild / console / scalacOptions [info] ThisBuild / scalacOptions [info] Zero / Compile / console / scalacOptions [info] Zero / Compile / scalacOptions [info] Zero / console / scalacOptions [info] Global / scalacOptions Note how “Provided by” shows that projD / Compile / console / scalacOptions is provided by projD / Compile / scalacOptions. Also under “Delegates”, all of the possible delegate candidates listed in the order of precedence! projDscoping on the subproject axis are listed first, then ThisBuild, and Zero. Compilescoping on the configuration axis are listed first, then falls back to Zero. console /and the one without. Note that scope delegation feels similar to class inheritance in an object-oriented language, but there’s a difference. In an OO language like Scala if there’s a method named drawShape on a trait Shape, its subclasses can override the behavior even when drawShape is used by other methods in the Shape trait, which is called dynamic dispatch. In sbt, however, scope delegation can delegate a scope to a more general scope, like a project-level setting to a build-level settings, but that build-level setting cannot refer to the project-level setting. Exercise E: Given the following build definition: lazy val root = (project in file(".")) .settings( inThisBuild(List( organization := "com.example", scalaVersion := "2.12.2", version := scalaVersion.value + "_0.1.0" )), name := "Hello" ) lazy val projE = (project in file("e")) .settings( scalaVersion := "2.11.11" ) What will projE / version return? "2.12.2_0.1.0" "2.11.11_0.1.0" The answer is 2.12.2_0.1.0. projD / version delegates to ThisBuild / version, which depends on ThisBuild / scalaVersion. Because of this reason, build level setting should be limited mostly to simple value assignments. Exercise F: Given the following build definition: ThisBuild / scalacOptions += "-D0" scalacOptions += "-D1" lazy val projF = (project in file("f")) .settings( compile / scalacOptions += "-D2", Compile / scalacOptions += "-D3", Compile / compile / scalacOptions += "-D4", test := { println("bippy" + (Compile / compile / scalacOptions).value.mkString) } ) What will projF / test show? "bippy-D4" "bippy-D2-D4" "bippy-D0-D3-D4" The answer is "bippy-D0-D3-D4". This is a variation of an exercise originally created by Paul Phillips. It’s a great demonstration of all the rules because someKey += "x" expands to someKey := { val old = someKey.value old :+ "x" } Retrieving the old value would cause delegation, and due to Rule 5, it will go to another scoped key. Let’s get rid of += first, and annotate the delegates for old values: ThisBuild / scalacOptions := { // Global / scalacOptions <- Rule 4 val old = (ThisBuild / scalacOptions).value old :+ "-D0" } scalacOptions := { // ThisBuild / scalacOptions <- Rule 4 val old = scalacOptions.value old :+ "-D1" } lazy val projF = (project in file("f")) .settings( compile / scalacOptions := { // ThisBuild / scalacOptions <- Rules 2 and 4 val old = (compile / scalacOptions).value old :+ "-D2" }, Compile / scalacOptions := { // ThisBuild / scalacOptions <- Rules 3 and 4 val old = (Compile / scalacOptions).value old :+ "-D3" }, Compile / compile / scalacOptions := { // projF / Compile / scalacOptions <- Rules 1 and 2 val old = (Compile / compile / scalacOptions).value old :+ "-D4" }, test := { println("bippy" + (Compile / compile / scalacOptions).value.mkString) } ) This becomes: ThisBuild / scalacOptions := { Nil :+ "-D0" } scalacOptions := { List("-D0") :+ "-D1" } lazy val projF = (project in file("f")) .settings( compile / scalacOptions := List("-D0") :+ "-D2", Compile / scalacOptions := List("-D0") :+ "-D3", Compile / compile / scalacOptions := List("-D0", "-D3") :+ "-D4", test := { println("bippy" + (Compile / compile / scalacOptions).value.mkString) } ) This: Compile / unmanagedJars := binary Scala version to the artifact name. This is just a shortcut. You could write this without the %%: libraryDependencies += "org.scala-tools" % "scala-stm_2.11" % ... Please read the earlier pages in the Getting Started Guide first, in particular you need to understand build.sbt, task graph, library dependencies, before reading this page. A plugin extends the build definition, most commonly by adding new settings. The new settings could be new tasks. For example, a plugin could add a codeCoverage task which would generate a test coverage If your project is in directory hello, and you’re adding sbt-site plugin to the build definition, create hello/project/site.sbt and declare the plugin dependency by passing the plugin’s Ivy module ID to addSbtPlugin: addSbtPlugin("com.typesafe.sbt" % "sbt-site" % "0.7.0") If you’re adding sbt-assembly, create hello/project/assembly.sbt with the following: addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2") Not every plugin is located on one of the default repositories and a plugin’s documentation may instruct you to also add the repository where it can be found: resolvers += Resolver.sonatypeRepo("public") Plugins usually provide settings that get added to a project to enable the plugin’s functionality. This is described in the next section. A plugin can declare that its settings be automatically added to the build definition, in which case you don’t have to do anything to add them. As of sbt 0.13.5, there is a new auto plugins feature that enables plugins to automatically, and safely, ensure their settings and dependencies are on a project. Many auto plugins should have their default settings automatically, however some may require explicit enablement. If you’re using an auto plugin that requires explicit enablement, then you have to add the following to your build.sbt: lazy val util = (project in file("util")) .enablePlugins(FooPlugin, BarPlugin) .settings( name := "hello-util" ) The enablePlugins method allows projects to explicitly define the auto plugins they wish to consume. Projects can also exclude plugins using the disablePlugins method. For example, if we wish to remove the IvyPlugin settings from util, we modify our build.sbt as follows: lazy val util = (project in file("util")) .enablePlugins(FooPlugin, BarPlugin) .disablePlugins(plugins.IvyPlugin) .settings( name := "hello-util" ) Auto plugins should document whether they need to be explicitly enabled. If you’re curious which auto plugins are enabled for a given project, just run the plugins command on the sbt console. For example: > plugins In file:/home/jsuereth/projects/sbt/test-ivy-issues/ sbt.plugins.IvyPlugin: enabled in scala-sbt-org sbt.plugins.JvmPlugin: enabled in scala-sbt-org sbt.plugins.CorePlugin: enabled in scala-sbt-org sbt.plugins.JUnitXmlReportPlugin: enabled in scala-sbt-org Here, the plugins output is showing that the sbt default plugins are all enabled. sbt’s default settings are provided via three plugins: CorePlugin: Provides the core parallelism controls for tasks. IvyPlugin: Provides the mechanisms to publish/resolve modules. JvmPlugin: Provides the mechanisms to compile/test/run/package Java/Scala projects. In addition, JUnitXmlReportPlugin provides an experimental support for generating junit-xml. Older non-auto plugins often require settings to be added explicitly, so that multi-project build could have different types of projects. The plugin documentation will indicate how to configure it, but typically for older plugins this involves adding the base settings for the plugin and customizing as necessary. For example, for the sbt-site plugin, create site.sbt with the following content site.settings to enable it for that project. If the build defines multiple projects, instead add it directly to the project: // don't use the site plugin for the `util` project lazy val util = (project in file("util")) // enable the site plugin for the `core` project lazy val core = (project in file("core")) .settings(site.settings) Plugins can be installed for all your projects at once by declaring them in ~/.sbt/1.0/plugins/. ~/.sbt/1.0/plugins/ is an sbt project whose classpath is exported to all sbt build definition projects. Roughly speaking, any .sbt or .scala files in ~/.sbt/1.0/plugins/ behave as if they were in the project/ directory for all projects. You can create ~/.sbt/1.0/plugins/build.sbt and put addSbtPlugin() expressions in there to add plugins to all your projects at once. Because doing so would increase the dependency on the machine environment, this feature should be used sparingly. See Best Practices. There’s a list of available plugins. Some especially popular plugins are: For more details, including ways of developing plugins, see Plugins. For best practices, see Plugins-Best-Practices..") ThisBuild / organization := "com.example" ThisBuild / version := "0.1.0-SNAPSHOT" ThisBuild / scalaVersion := "2.12.7" lazy val library = (project in file("library")) .settings(.") ThisBuild / organization := "com.example" ThisBuild / version := "0.1.0-SNAPSHOT" ThisBuild / scalaVersion := "2.12.7" lazy val library = (project in file("library")) .settings(._ ThisBuild / organization := "com.example" ThisBuild / version := "0.1.0-SNAPSHOT" ThisBuild / scalaVersion := "2.12.7" lazy val backend = (project in file("backend")) .settings( name := "backend",. sbt organization is available for use by any sbt plugin. Developers who contribute their plugins into the community organization will still retain control over their repository and its access. The goal of the sbt organization is to organize sbt software into one central location. A side benefit to using the sbt organization for projects is that you can use gh-pages to host websites under the domain. Lightbend has provided a freely available Ivy Repository for sbt projects to use. This Ivy repository is mirrored from the freely available Bintray service. If you’d like to submit your plugin, please follow these instructions: Bintray For Plugins. See Cross Build Plugins. [Edit] this page to submit a pull request that adds your plugin to the list. apiMappingsfor common Scala libraries. The community repository has the following guideline for artifacts published to it: This is currently in Beta mode. sbt hosts their community plugin repository on Bintray. Bintray is a repository hosting site, similar to github, which allows users to contribute their own plugins, while sbt can aggregate them together in a common repository. This document walks you through the means to create your own repository for hosting your sbt plugins and then linking them into the sbt shared repository. This will make your plugins available for all sbt users without additional configuration (besides declaring a dependency on your plugin). To do this, we need to perform the following steps: First, go to to create an Open Source Distribution Bintray Account. If you end up at the Bintray home page, do NOT click on the Free Trial, but click on the link that reads “For Open Source Distribution Sign Up Here“. Now, we’ll create a repository to host our personal sbt plugins. In bintray, create a generic repository called sbt-plugins. First, go to your user page and click on the new repository link: You should see the following dialog: Fill it out similarly to the above image, the settings are: Once this is done, you can begin to configure your sbt-plugins to publish to bintray. First, add the sbt-bintray to your plugin build. First, create a project/bintray.sbt file addSbtPlugin("org.foundweekends" % "sbt-bintray" % "0.5.2") Next, a make sure your build.sbt file has the following settings>", publishMavenStyle := false, bintrayRepository := "sbt-plugins", bintrayOrganization in bintray := None ) Make sure your project has a valid license specified, as well as unique name and organization. - Note: bintray does not support snapshots. We recommend using git-revisions supplied by the sbt-git plugin*. Once your build is configured, open the sbt console in your build and run sbt> publish The plugin will ask you for your credentials. If you don’t know where they are, you can find them on Bintray. This will get you your password. The sbt-bintray plugin will save your API key for future use. Now that your plugin is packaged on bintray, you can include it in the community sbt repository. To do so, go to the Community sbt repository screen. include my packagebutton and select your plugin. From here on, any releases of your plugin will automatically appear in the community sbt repository. Congratulations and thank you so much for your contributions! If you’re a member of the sbt organization on bintray, you can link your package to the sbt organization, but via a different means. To do so, first navigate to the plugin you wish to include and click on the link button: After clicking this you should see a link like the following: Click on the sbt/sbt-plugin-releases repository and you’re done! Any future releases will be included in the sbt-plugin repository. After setting up the repository, all new releases will automatically be included the sbt-plugin-releases repository, available for all users. When you create a new plugin, after the initial release you’ll have to link it to the sbt community repository, but the rest of the setup should already be completed. Thanks for you contributions and happy hacking. Some. Deploying"), ... Below is a running list of potential areas of contribution. This list may become out of date quickly, so you may want to check on the sbt-dev mailing list if you are interested in a specific topic. There are plenty of possible visualization and analysis opportunities. ’compile’ produces an Analysis of the source code containing ~/.ivy2and the .xsland .cssare there as well, so you don’t even need to work with sbt. Other approaches described in the email thread set logLevel := Level.Warn or : set logLevel in Test := Level.Warn You could make commands that wrap this, like: warn test:run Also, trace is currently an integer, but should really be an abstract data type. A lot of the pages could probably have better names, and/or little 2-4 word blurbs to the right of them in the sidebar. These are changes made in each sbt release. .copy(...)") sbt 0.13, sbt 1.0, and sbt 1.1 required sbtPlugin setting and scripted plugin to develop an sbt plugin. sbt 1.2.1 combined both into SbtPlugin plugin. Remove scripted-plugin from project/plugins.sbt, and just use: lazy val root = (project in file(".")) .enablePlugins(SbtPlugin). In 0.13.x, you use other repositories instead of the Maven Central repository: externalResolvers := Resolver.withDefaultResolvers(resolvers.value, mavenCentral = false) After 1.x, withDefaultResolvers was renamed to combineDefaultResolvers. In the meantime, one of the parameters, userResolvers, was changed to Vector instead of Seq. You can use toVector to help migration. externalResolvers := Resolver.combineDefaultResolvers(resolvers.value.toVector, mavenCentral = false) Vectordirectly too. If!. A huge thank you to everyone who’s helped improve sbt and Zinc 1 by using them, reporting bugs, improving our documentation, porting builds, porting plugins, and submitting and reviewing pull requests. sbt 1.1.5 was brought to you by 21 constributors, according to git shortlog -sn --no-merges v1.1.4...v1.1.5 on sbt, zinc, librarymanagement, util, io, launcher-packege, and website: Eugene Yokota, Ethan Atkins, Jason Zaugg, Liu Fengyun, Antonio Cunei, Dale Wijnand, Roberto Bonvallet, Alexey Alekhin, Daniel Parks, Heikki Vesalainen, Jean-Luc Deprez, Jessica Hamilton, Kenji Yoshida (xuwei-k), Nikita Gazarov, OlegYch, Richard Summerhayes, Robert Walker, Seth Tisue, Som Snytt, oneill, and 杨博 (Yang Bo). run! This is a hotfix release for sbt 1.0.x series. ArrayIndexOutOfBoundsExceptionon Ivy when running on Java 9. ivy#27 by @xuwei-k -jvm-debugon Java 9. launcher-package197 by @mkurz runoutputing debug level logs. #3655/#3717 by @cunei testQuick. #3680/#3720 by @OlegYch templateStats()not being thread-safe. #3743 by @cunei http:and https:to be more plugin friendly. lm183 by @tpunder bcby using expr. launcher-package#199 by @thatfulvioguy A huge thank you to everyone who’s helped improve sbt and Zinc 1 by using them, reporting bugs, improving our documentation, porting builds, porting plugins, and submitting and reviewing pull requests. This release was brought to you by 17 contributors, according to git shortlog -sn --no-merges v1.0.3..v1.0.4 on sbt, zinc, librarymanagement, util, io, and website: Eugene Yokota, Kenji Yoshida (xuwei-k), Jorge Vicente Cantero (jvican), Dale Wijnand, Leonard Ehrenfried, Antonio Cunei, Brett Randall, Guillaume Martres, Arnout Engelen, Fulvio Valente, Jens Grassel, Matthias Kurz, OlegYch, Philippus Baalman, Sam Halliday, Tim Underwood, Tom Most. Thank you! This is a hotfix release for sbt 1.0.x series. ~recompiling in loop (when a source generator or sbt-buildinfo is present). #3501/#3634 by @dwijnand nullfor getGenericParameterTypes. zinc#446 by @jvican /in Ivy style patterns. lm#170 by @laughedelic sbt.watch.modesystem property to allow switching back to old polling behaviour for watch. See below for more details. A huge thank you to everyone who’s helped improve sbt and Zinc 1 by using them, reporting bugs, improving our documentation, porting builds, porting plugins, and submitting and reviewing pull requests. This release! This is a hotfix release for sbt 1.0.x series. delivertask, and adds makeIvyXmlas a more sensibly named task. #3487 by @cunei OkUrlFactory, and fixes connection leaks. lm#164 by @dpratt runand bgRunnot picking up changes to directories in the classpath. #3517 by @dwijnand ++so it won’t change the value of crossScalaVersion. #3495/#3526 by @dwijnand consoleProject. zinc#386 by @dwijnand sbt.gigahorseto enable/disable the internal use of Gigahorse to workaround NPE in JavaNetAuthenticatorwhen used in conjunction with repositoriesoverride. lm#167 by @cunei sbt.server.autostartto enable/disable the automatic starting of sbt server with the sbt shell. This also adds new startServercommand to manually start the server. by @eed3si9n A huge thank you to everyone who’s helped improve sbt and Zinc 1 by using them, reporting bugs, improving our documentation, porting plugins, and submitting and reviewing pull requests. This release was brought to you by 19 contributors, according to git shortlog -sn --no-merges v1.0.1..v1.0.2 on sbt, zinc, librarymanagement, and website: Dale Wijnand, Eugene Yokota, Kenji Yoshida (xuwei-k), Antonio Cunei, David Pratt, Karol Cz (kczulko), Amanj Sherwany, Emanuele Blanco, Eric Peters, Guillaume Bort, James Roper, Joost de Vries, Marko Elezovic, Martynas Mickevičius, Michael Stringer, Răzvan Flavius Panda, Peter Vlugter, Philippus Baalman, and Wiesław Popielarski. Thank you! improves the eviction warning presetation. Before: [warn] There may be incompatibilities among your library dependencies. [warn] Here are some of the libraries that were evicted: [warn] * com.google.code.findbugs:jsr305:2.0.1 -> 3 @jrudolph’s sbt-cross-building is a plugin author’s plugin. It adds cross command ^ and sbtVersion switch command ^^, similar to + and ++, but for switching between multiple sbt versions across major versions. sbt 0.13.16 merges these commands into sbt because the feature it provides is useful as we migrate plugins to sbt 1.0. To switch the sbtVersion in pluginCrossBuild from the shell use: ^^ 1.0 page is a relatively complete list of command line options, commands, and tasks you can use from the sbt interactive prompt or in batch mode. See Running in the Getting Started Guide for an intro to the basics, while this page has a lot more detail. toStringrepresentation of these values can be shown using show <task>to run the task instead of just <task>. cleanDeletes all generated files (the targetdirectory). publishLocalPublishes artifacts (such as jars) to the local Ivy repository as described in Publishing. publishPublishes artifacts (such as jars) to the repository defined by the publishTo setting, described in Publishing. updateResolves and retrieves external dependencies as described in library dependencies. Configuration-level tasks are tasks associated with a configuration. For example, compile, which is equivalent to compile:compile, compiles the main source code (the compile configuration). test:compile compiles the test source code (test test configuration). Most tasks for the compile configuration have an equivalent in the test configuration that can be run using a test: prefix. compileCompiles the main sources (in the src/main/scaladirectory). test:compilecompiles test sources (in the src/test/scala/ directory). consoleStarts the Scala interpreter with a classpath including the compiled sources, all jars in the lib directory, and managed libraries. To return to sbt, type :quit, Ctrl+D (Unix), or Ctrl+Z (Windows). Similarly, test:console starts the interpreter with the test classes and classpath. consoleQuickStarts the Scala interpreter with the project’s compile-time dependencies on the classpath. test:consoleQuick uses the test dependencies. This task differs from console in that it does not force compilation of the current project’s sources. consoleProjectEnters an interactive session with sbt and the build definition on the classpath. The build definition and related values are bound to variables and common packages and values are imported. See the consoleProject documentation for more information. docGenerates API documentation for Scala source files in src/main/scalausing scaladoc. test:docgenerates API documentation for source files in src/test/scala. packageCreates a jar file containing the files in src/main/resourcesand the classes compiled from src/main/scala. test:packagecreates a jar containing the files in src/test/resourcesand the class compiled from src/test/scala. packageDocCreates a jar file containing API documentation generated from Scala source files in src/main/scala. test:packageDoc creates a jar containing API documentation for test sources files in src/test/scala. packageSrc: Creates a jar file containing all main source files and resources. The packaged paths are relative to src/main/scala and src/main/resources. Similarly, test:packageSrc operates on test source files and resources. run <argument>*Runs the main class for the project in the same virtual machine as sbt. The main class is passed the arguments provided. Please see Running Project Code for details on the use of System.exit and multithreading (including GUIs) in code run by this action. test:runruns a main class in the test code. runMain <main-class> <argument>*Runs the specified main class for the project in the same virtual machine as sbt. The main class is passed the arguments provided. Please see Running Project Code for details on the use of System.exit and multithreading (including GUIs) in code run by this action. test:runMainruns the specified main class in the test code. testRuns all tests detected during test compilation. See Testing for details. testOnly <test>*Runs the tests provided as arguments. *(will be) interpreted as a wildcard in the test name. See Testing for details. testQuick <test>* Runs the tests specified as arguments (or all tests if no arguments are given) that: *(will be) interpreted as a wildcard in the test name. See [Testing][Testing] for details. exitor quitEnd the current interactive session or build. Additionally, Ctrl+D (Unix) or Ctrl+Z (Windows) will exit the interactive prompt. help <command>Displays detailed help for the specified command. If the command does not exist, help lists detailed help for commands whose name or description match the argument, which is interpreted as a regular expression. If no command is provided, displays brief descriptions of the main commands. Related commands are tasks and settings. projects [add|remove <URI>]List all available projects if no arguments provided or adds/removes the build at the provided URI. (See multi-project builds for details on multi-project builds.) project <project-id>Change the current project to the project with ID <project-id>. Further operations will be done in the context of the given project. (See multi-project builds for details on multiple project builds.) ~ <command>Executes the project specified action or method whenever source files change. See Triggered Execution for details. < filenameExecutes the commands in the given file. Each command should be on its own line. Empty lines and lines beginning with ’#’ are ignored + <command>Executes the project specified action or method for all versions of Scala defined in the crossScalaVersions setting. ++ <version|home-directory> <command>Temporarily changes the version of Scala building the project and executes the provided command. <command>is optional. The specified version of Scala is used until the project is reloaded, settings are modified (such as by the set or session commands), or ++ is run again. <version>does not need to be listed in the build definition, but it must be available in a repository. Alternatively, specify the path to a Scala installation. ; A ; BExecute A and if it succeeds, run B. Note that the leading semicolon is required. eval <Scala-expression> Evaluates the given Scala expression and returns the result and inferred type. This can be used to set system properties, as a calculator, to fork processes, etc … For example: > eval System.setProperty("demo", "true") > eval 1+1 > eval "ls -l" ! reload [plugins|return]If no argument is specified, reloads the build, recompiling any build or plugin definitions as necessary. reload plugins changes the current project to the build definition project (in project/). This can be useful to directly manipulate the build definition. For example, running clean on the build definition project will force snapshots to be updated and the build definition to be recompiled. reload return changes back to the main project. set <setting-expression>Evaluates and applies the given setting definition. The setting applies until sbt is restarted, the build is reloaded, or the setting is overridden by another set command or removed by the session command. See [.sbt build definition][Basic-Def] and [inspecting settings][Inspecting-Settings] for details. session <command>Manages session settings defined by the setcommand. It can persist settings configured at the prompt. See Inspecting-Settings for details. inspect <setting-key>Displays information about settings, such as the value, description, defining scope, dependencies, delegation chain, and related settings. See Inspecting Settings for details. System properties can be provided either as JVM options, or as SBT arguments, in both cases as -Dprop=value. The following properties influence SBT execution. Also see sbt launcher. The Different-core_2.10 when compiled against 2.10.0, 2.10.1" %% "dispatch-core" % "0.13.3" A nearly equivalent, manual alternative for a fixed version of Scala is: libraryDependencies += "net.databinder.dispatch" % "dispatch-core_2.12" % "0.13.3" Define the versions of Scala to build against in the crossScalaVersions setting. Versions of Scala 2.10.2 or later are allowed. For example, in a .sbt build definition: crossScalaVersions := Seq("2.11.11", "2.12.2") To build against all versions listed in crossScalaVersions,.12.7, ./target/becomes ./target/scala_2.12/ ./lib_managed/becomes ./lib_managed/scala_2.12/.11 for your 2.11.x build, the version compiled against 2.12 for your 2.12. Central. You can make a command run when certain files change by prefixing the command with ~. Monitoring is terminated when enter is pressed. This triggered execution is configured by the watch setting, but typically the basic settings watchSources and pollInterval are modified.Project dependencies). pollIntervalselects the interval between polling for changes in milliseconds. The default value is 500 ms. Some example usages are described below. The original use-case was continuous compilation: > ~ test:compile > ~ compile You can use the triggered execution feature to run any command or task. One use is for test driven development, as suggested by Erick on the mailing list. The following will poll for changes to your source code (main or test) and run testOnly for the specified test. > ~ testOnly example.TestA Occasionally, you may need to trigger the execution of multiple commands. You can use semicolons to separate the commands to be triggered. The following will poll for source changes and run clean and test. > ~ ;clean ;test sbt has two alternative entry points that may be used to: These entry points should be considered experimental. A notable disadvantage of these approaches is the startup time involved. To set up these entry points, you can either use conscript or manually construct the startup scripts. In addition, there is a setup script for the script mode that only requires a JRE installed. $ cs sbt/sbt --branch 1.2.6 This will create two scripts: screpl and scalas. Duplicate your standard sbt script, which was set up according to Setup, as scalas and screpl (or whatever names you like). scalas is the script runner and should use sbt.ScriptMain as the main class, by adding the -Dsbt.main.class=sbt.ScriptMain parameter to the java command. Its command line should look like: $ java -Dsbt.main.class=sbt.ScriptMain -Dsbt.boot.directory=/home/user/.sbt/boot -jar sbt-launch.jar "[email protected]" For the REPL runner screpl, use sbt.ConsoleMain as the main class: $ java -Dsbt.main.class=sbt.ConsoleMain -Dsbt.boot.directory=/home/user/.sbt/boot -jar sbt-launch.jar "[email protected]" In each case, /home/user/.sbt/boot should be replaced with wherever you want sbt’s boot directory to be; you might also need to give more memory to the JVM via -Xms512M -Xmx1536M or similar options, just like shown in Setup. The script runner can run a standard Scala script, but with the additional ability to configure sbt. sbt settings may be embedded in the script in a comment block that opens with /***. Copy the following script and make it executable. You may need to adjust the first line depending on your script name and operating system. When run, the example should retrieve Scala, the required dependencies, compile the script, and run it directly. For example, if you name it shout.scala, you would do on Unix: chmod u+x shout.scala ./shout.scala #!/usr/bin/env scalas /*** scalaVersion := "2.12.7" libraryDependencies += "org.scala-sbt" %% "io" % "1.2.6" */ import sbt.io.IO import sbt.io.Path._ import sbt.io.syntax._ import java.io.File import java.net.{URI, URL} import sys.process._ def file(s: String): File = new File(s) def uri(s: String): URI = new URI(s) val targetDir = file("./target/") val srcDir = file("./src/") val toTarget = rebase(srcDir, targetDir) def processFile(f: File): Unit = { val newParent = toTarget(f.getParentFile) getOrElse {sys.error("wat")} val file1 = newParent / f.name println(s"""$f => $file1""") val xs = IO.readLines(f) map { _ + "!" } IO.writeLines(file1, xs) } val fs: Seq[File] = (srcDir ** "*.scala").get fs foreach { processFile } This script will take all *.scala files under src/, append ”!” at the end of the line, and write them under target/. The arguments to the REPL mode configure the dependencies to use when starting up the REPL. An argument may be either a jar to include on the classpath, a dependency definition to retrieve and put on the classpath, or a resolver to use when retrieving dependencies. A dependency definition looks like: organization%module%revision Or, for a cross-built dependency: organization%%module%revision A repository argument looks like: "id at url" To add the Sonatype snapshots repository and add Scalaz 7.0-SNAPSHOT to REPL classpath: $ screpl "sonatype-releases at" "org.scalaz%%scalaz-core%7.0-SNAPSHOT" This syntax was a quick hack. Feel free to improve it. The relevant class is IvyConsole. sbt server is a feature that is newly introduced in sbt 1.x, and it’s still a work in progress. You might at first imagine server to be something that runs on remote servers, and does great things, but for now sbt server is not that. Actually, sbt server just adds network access to sbt’s shell command so, in addition to accepting input from the terminal, server also to accepts input from the network. This allows multiple clients to connect to a single session of sbt. The primary use case we have in mind for the client is tooling integration such as editors and IDEs. As a proof of concept, we created a Visual Studio Code extension called Scala (sbt). The wire protocol we use is Language Server Protocol 3.0 (LSP), which in turn is based on JSON-RPC. The base protocol consists of a header and a content part (comparable to HTTP). The header and content part are separated by a \r\n. Currently the following header fields are supported: Content-Length: The length of the content part in bytes. If you don’t provide this header, we’ll read until the end of the line. Content-Type: Must be set to application/vscode-jsonrpc; charset=utf-8or omit it. Here is an example: Content-Type: application/vscode-jsonrpc; charset=utf-8\r\n Content-Length: ...\r\n \r\n { "jsonrpc": "2.0", "id": 1, "method": "textDocument/didSave", "params": { ... } } A JSON-RPC request consists of an id number, a method name, and an optional params object. So all LSP requests are pairs of method name and params JSON. An example response to the JSON-RPC request is: Content-Type: application/vscode-jsonrpc; charset=utf-8\r\n Content-Length: ...\r\n \r\n { "jsonrpc": "2.0", "id": 1, "result": { ... } } Or the server might return an error response: Content-Type: application/vscode-jsonrpc; charset=utf-8\r\n Content-Length: ...\r\n \r\n { "jsonrpc": "2.0", "id": 1, "error": { "code": -32602, "message": "some error message" } } In addition to the responses, the server might also send events (“notifications” in LSP terminology). Content-Type: application/vscode-jsonrpc; charset=utf-8\r\n Content-Length: ...\r\n \r\n { "jsonrpc": "2.0", "method": "textDocument/publishDiagnostics", "params": { ... } } Sbt server can run in two modes, which differ in wire protocol and initialization. The default mode since sbt 1.1.x is domain socket mode, which uses either Unix domain sockets (on Unix) or named pipes (on Windows) for data transfer between server and client. In addition, there is a TCP mode, which uses TCP for data transfer. The mode which sbt server starts in is goverened by the key serverConnectionType, which can be set to ConnectionType.Local for domain socket/named pipe mode, or to ConnectionType.Tcp for TCP mode. To discover a running server, we use a port file. By default, sbt server will be running when a sbt shell session is active. When the server is up, it will create a file called the port file. The port file is located at ./project/target/active.json. The port file will look different depending on whether the server is running in TCP mode or domain socket/named pipe mode. They will look something like this: In domain socket/named pipe mode, on Unix: {"uri":"local:///Users/someone/.sbt/1.0/server/0845deda85cb41abdb9f/sock"} where the uri key will contain a string starting with local:// followed by the socket address sbt server is listening on. In domain socket/named pipe mode, on Windows, it will look something like {"uri":"local:sbt-server-0845deda85cb41abdb9f"} where the uri key will contain a string starting with local: followed by the name of the named pipe. In this example, the path of the named pipe will be \.\pipe\sbt-server-0845deda85cb41abdb9f. In TCP mode it will look something like the following: { "uri":"tcp://127.0.0.1:5010", "tokenfilePath":"/Users/xxx/.sbt/1.0/server/0845deda85cb41abdb9f/token.json", "tokenfileUri":"file:/Users/xxx/.sbt/1.0/server/0845deda85cb41abdb9f/token.json" } In this case, the uri key will hold a TCP uri with the address the server is listening on. In this mode, the port file will contain two additional keys, tokenfilePath and tokenfileUri. These point to the location of a token file. The location of the token file will not change between runs. It’s contents will look something like this: { "uri":"tcp://127.0.0.1:5010", "token":"12345678901234567890123456789012345678" } The uri field is the same, and the token field contains a 128-bits non-negative integer. To initiate communication with sbt server, the client (such as a tool like VS Code) must first send an `initialize` request. This means that the client must send a request with method set to “initialize” and the InitializeParams datatype as the params field. If the server is running in TCP mode, to authenticate yourself, you must pass in the token in initializationOptions as follows: type InitializationOptionsParams { token: String! } On telnet it would look as follows: $ telnet 127.0.0.1 5010 Content-Type: application/vscode-jsonrpc; charset=utf-8 Content-Length: 149 { "jsonrpc": "2.0", "id": 1, "method": "initialize", "params": { "initializationOptions": { "token": "84046191245433876643612047032303751629" } } } If the server is running in named pipe mode, no token is needed, and the initializationOptions should be the empty object {}. On Unix, using netcat, sending the initialize message in domain socket/named pipe mode will look something like this: $ nc -U /Users/foo/.sbt/1.0/server/0845deda85cb41abcdef/sock Content-Length: 99^M ^M { "jsonrpc": "2.0", "id": 1, "method": "initialize", "params": { "initializationOptions": { } } }^M Connections to the server when it’s running in named pipe mode are exclusive to the first process that connects to the socket or pipe. After sbt receives the request, it will send an `initialized` event. textDocument/publishDiagnosticsevent The compiler warnings and errors are sent to the client using the textDocument/publishDiagnostics event. textDocument/publishDiagnostics Here’s an example output (with JSON-RPC headers omitted): { "jsonrpc": "2.0", "method": "textDocument/publishDiagnostics", "params": { "uri": "file:/Users/xxx/work/hellotest/Hello.scala", "diagnostics": [ { "range": { "start": { "line": 2, "character": 0 }, "end": { "line": 2, "character": 1 } }, "severity": 1, "source": "sbt", "message": "')' expected but '}' found." } ] } } textDocument/didSaveevent As of sbt 1.1.0, sbt will execute the compile task upon receiving a textDocument/didSave notification. This behavior is subject to change. sbt/execrequest A sbt/exec request emulates the user typing into the shell. sbt/exec type SbtExecParams { commandLine: String! } On telnet it would look as follows: Content-Length: 91 { "jsonrpc": "2.0", "id": 2, "method": "sbt/exec", "params": { "commandLine": "clean" } } Note that there might be other commands running on the build, so in that case the request will be queued up. sbt/settingrequest A sbt/setting request can be used to query settings. sbt/setting type SettingQuery { setting: String! } On telnet it would look as follows: Content-Length: 102 { "jsonrpc": "2.0", "id": 3, "method": "sbt/setting", "params": { "setting": "root/scalaVersion" } } Content-Length: 87 Content-Type: application/vscode-jsonrpc; charset=utf-8 {"jsonrpc":"2.0","id":"3","result":{"value":"2.12.2","contentType":"java.lang.String"}} Unlike the command execution, this will respond immediately. sbt/completionrequest (sbt 1.3.0+) A sbt/completion request is used to emulate tab completions for sbt shell. sbt/completion ` type CompletionParams { query: String! }` On telnet it would look as follows: Content-Length: 100 { "jsonrpc": "2.0", "id": 15, "method": "sbt/completion", "params": { "query": "testOnly org." } } Content-Length: 79 Content-Type: application/vscode-jsonrpc; charset=utf-8 {"jsonrpc":"2.0","id":15,"result":{"items":["testOnly org.sbt.ExampleSpec"]}} This will respond immediatly based on the last available state of sbt. sbt/cancelRequest (sbt 1.3.0+) A sbt/cancelRequest request can be used to terminate the execution of an on-going task. sbt/cancelRequest ` type CancelRequestParams { id: String! }` On telnet it would look as follows (assuming a task with Id “foo” is currently running): Content-Length: 93 { "jsonrpc": "2.0", "id": "bar", "method": "sbt/cancelRequest", "params": { "id": "foo" } } Content-Length: 126 Content-Type: application/vscode-jsonrpc; charset=utf-8 {"jsonrpc":"2.0","id":"bar","result":{"status":"Task cancelled","channelName":"network-1","execId":"foo","commandQueue":[]}} This will respond back with the result of the action. Compiling..12 is implemented as a compiler plugin. You can use the compiler plugin support for this, as shown here. val continuationsVersion = "1.0.3" autoCompilerPlugins := true addCompilerPlugin("org.scala-lang.plugins" % "scala-continuations-plugin_2.12.2" % continuationsVersion) libraryDependencies += "org.scala-lang.plugins" %% "scala-continuations-library" % continuationsVersion scalacOptions += "-P:continuations:enable" Adding a version-specific compiler plugin can be done as follows: val continuationsVersion = "1.0.3" autoCompilerPlugins := true libraryDependencies += compilerPlugin("org.scala-lang.plugins" % ("scala-continuations-plugin_" + scalaVersion.value) % continuationsVersion) libraryDependencies += "org.scala-lang.plugins" %% "scala-continuations-library" % continuationsVersion scalacOptions += "-P:continuations:enable" s := ... Compile / unmanagedJars +=/")) Compile / unmanagedJars ++=.2.6, this version is Scala 2.12.7. Because this Scala version is needed before sbt runs, the repositories used to retrieve this version are configured in the sbt launcher. By only fork Compile / run and Compile / runMain: Compile / run / fork := true To only fork Test / run and Test / runMain: Test / run / fork := true Note: run and runMain share the same configuration and cannot be configured separately. To enable forking all test tasks only, set fork to true in the Test scope: Test / fork := run / baseDirectory := file("/path/to/working/directory/") // sets the working directory for `run` and `runMain` only Compile / run / baseDirectory := file("/path/to/working/directory/") // sets the working directory for `Test / run` and `Test / runMain` only Test / run / baseDirectory := file("/path/to/working/directory/") // sets the working directory for `test`, `testQuick`, and `testOnly` Test / baseDirectory := file("/path/to/working/directory/") To specify options to be provided to the forked JVM, set javaOptions: run / javaOptions += "-Xmx8G" or specify the configuration to affect only the main or test run tasks: Test / run / javaOptions += "-Xmx8G" or only affect the test tasks: Test / javaOptions += ": run / javaHome :=: run / connectInput :=) )]): Unit = { val s = Demo.desugar(List(1, 2, 3).reverse) println(s) } } This can be then run at the console: $ sbt > macroSub/test:run scala.collection.immutable.List.apply[Int](1, 2, 3).reverse Actual tests can be defined and run as usual with macro/test. The main project can use the macro in the same way that the tests do. For example, core/src/main/scala/MainUsage.scala: package demo object Usage { def main(args: Array[String]): Unit = { val s = Demo.desugar(List(6, 4, 5).sorted) println(s) } } $ sbt > core/run scala.collection.immutable.List.apply[Int](6, 4, 5).sorted[Int](math.this.Ordering Global /. For example, Global / concurrentRestrictions :=: Global / concurrentRestrictions := { val max = Runtime.getRuntime.availableProcessors Tags.limitAll(if(parallelExecution.value) max else 1) :: Nil } As before, parallelExecution in Test controls whether tests are mapped to separate tasks. To restrict the number of concurrently executing tests in all projects, use: Global / concurrentRestrictions += Tags.limit(Tags.Test, 1) To define a new tag, pass a String to the Tags.Tag method. For example: val Custom = Tags.Tag("custom") Then, use this tag as any other tag. For example: def aImpl = Def.task { ... } tag(Custom) aCustomTask := aImpl.value Global / concurrentRestrictions +=. Scala includes a process library to simplify working with external processes. Use import scala.sys.process._ to bring the implicit conversions into scope. To run an external command, follow it with an exclamation mark !: "find project -name *.jar" ! An implicit converts the String to scala.sys.process.ProcessBuilder, which defines the ! method. This method runs the constructed command, waits until the command completes, and returns the exit code. Alternatively, the run method defined on ProcessBuilder runs the command and returns an instance of scala.sys.process.Process, which can be used to destroy the process before it completes. With no arguments, the ! method sends output to standard output and standard error. You can pass a Logger to the ! method to send output to the Logger: "find project -name *.jar" ! log If you need to set the working directory or modify the environment, call scala.sys.process.Process explicitly, passing the command sequence (command and argument list) or command string first and the working directory second. Any environment variables can be passed as a vararg list of key/value String pairs. Process("ls" :: "-l" :: Nil, Path.userHome, "key1" -> value1, "key2" -> value2) ! log Operators are defined to combine commands. These operators start with # in order to keep the precedence the same and to separate them from the operators defined elsewhere in sbt for filters. In the following operator definitions, a and b are subcommands. a #&& bExecute a. If the exit code is nonzero, return that exit code and do not execute b. If the exit code is zero, execute b and return its exit code. a #|| bExecute a. If the exit code is zero, return zero for the exit code and do not execute b. If the exit code is nonzero, execute b and return its exit code. a #| bExecute aand b, piping the output of ato the input of b. There are also operators defined for redirecting output to Files and input from Files and URLs. In the following definitions, url is an instance of URL and file is an instance of File. a #< urlor url #> aUse urlas the input to a. amay be a File or a command. a #< fileor file #> aUse fileas the input to a. a may be a File or a command. a #> fileor file #< aWrite the output of ato file. a may be a File, URL, or a command. a #>> fileor file #<< aAppend the output of ato file. a may be a File, URL, or a command. There are some additional methods to get the output from a forked process into a String or the output lines as a Stream[String]. Here are some examples, but see the ProcessBuilder API for details. val listed: String = "ls" !! val lines2: Stream[String] = "ls" lines_! Finally, there is a cat method to send the contents of Files and URLs to standard output. Download a URL to a File: url("") #> file("About.html") ! // or file("About.html") #< url("") ! Copy a File: file("About.html") #> file("About_copy.html") ! // or file("About_copy.html") #< file("About.html") ! Append the contents of a URL to a File after filtering through grep: url("") #> "grep JSON" #>> file("About_JSON") ! // or file("About_JSON") #<< ( "grep JSON" #< url("") ) ! null in the source directory: "find src -name *.scala -exec grep null {} ;" #| "xargs test -z" #&& "echo null-free" #|| "echo null detected" ! Use cat: val spde = url("") val dispatch = url("") val build = file("project/build.properties") cat(spde, dispatch, build) #| "grep -i scala" ! Test / as well. These tasks include: Test / compile Test / console Test / consoleQuick Test / run Test / runMain See Running for details on these tasks. By default, logging is buffered for each test source file until all tests for that file complete. This can be disabled by setting logBuffered: Test / logBuffered := / testOptions += Tests.Argument("-verbosity", "1") To specify them for a specific test framework only: Test / testOptions += / testOptions += Tests.Setup( () => println("Setup") ) Test / testOptions += Tests.Cleanup( () => println("Cleanup") ) Test / testOptions += Tests.Setup( loader => ... ) Test / testOptions += Tests.Cleanup( loader => ... ) By default, sbt runs all tasks in parallel and within the same JVM as sbt itself. Because each test is mapped to a task, tests are also run in parallel by default. To make tests within a given project execute serially: : Test / parallelExecution := false Test can be replaced with IntegrationTest to only execute integration tests serially. Note that tests from different projects may still execute concurrently. If you want to only run test classes whose name ends with “Test”, use Tests.Filter: Test / testOptions := Seq(Tests.Filter(s => s.endsWith("Test"))) The setting: Test / fork := / testForkedParallel := scalatest = "org.scalatest" %% "scalatest" % "3.0.5" ThisBuild / organization := "com.example" ThisBuild / scalaVersion := "2.12.7" ThisBuild / version := "0.1.0-SNAPSHOT" lazy val root = (project in file(".")) .configs(IntegrationTest) .settings(, > IntegrationTest / testOnly org.example.AnIntegrationTest Similarly the standard settings may be configured for the IntegrationTest configuration. If not specified directly, most IntegrationTest settings delegate to Test settings by default. For example, if test options are specified as: Test / testOptions += ... then these will be picked up by the Test configuration and in turn by the IntegrationTest configuration. Options can be added specifically for integration tests by putting them in the IntegrationTest configuration: IntegrationTest / testOptions += ... Or, use := to overwrite any existing options, declaring these to be the definitive integration test options: IntegrationTest / testOptions := Seq(...) The previous example may be generalized to a custom test configuration. lazy val: FunTest / testOptions += ... Test tasks are run by prefixing them with fun: > FunTest / test An alternative to adding separate sets of test sources (and compilations) is to share sources. In this approach, the sources are compiled together using the same classpath and are packaged together. However, different tests are run depending on the configuration. lazy valTest"), prefix it with the configuration name as before: > FunTest / test > FunTest / default, the published artifacts are the main binary jar, a jar containing the main sources and resources, and a jar containing the API documentation. You can add artifacts for the test classes, sources, or API or you can disable some of the main artifacts. To add all test artifacts: publishArtifact in Test := true To add them individually: // enable publishing the jar produced by `test:package` publishArtifact in (Test, packageBin) := true // enable publishing the test API jar publishArtifact in (Test, packageDoc) := true // enable publishing the test sources jar publishArtifact in (Test, packageSrc) := true To disable main artifacts individually: // disable publishing the main jar produced by `package` publishArtifact in (Compile, packageBin) := false // disable publishing the main API jar publishArtifact in (Compile, packageDoc) := false // disable publishing the main sources jar publishArtifact in (Compile, packageSrc) := false Each built-in artifact has several configurable settings in addition to publishArtifact. The basic ones are artifact (of type SettingKey[Artifact]), mappings (of type TaskKey[(File,String)]), and artifactPath (of type SettingKey[File]). They are scoped by (<config>, <task>) as indicated in the previous section. To modify the type of the main artifact, for example: artifact in (Compile, packageBin) := { val previous: Artifact = (artifact in (Compile, packageBin)).value previous.withType("bundle") } The generated artifact name is determined by the artifactName setting. This setting is of type (ScalaVersion, ModuleID, Artifact) => String. The ScalaVersion argument provides the full Scala version String and the binary compatible part of the version String. The String result is the name of the file to produce. The default implementation is Artifact.artifactName _. The function may be modified to produce different local names for artifacts without affecting the published name, which is determined by the artifact definition combined with the repository pattern. For example, to produce a minimal name without a classifier or cross path: artifactName := { (sv: ScalaVersion, module: ModuleID, artifact: Artifact) => artifact.name + "-" + module.revision + "." + artifact.extension } (Note that in practice you rarely want to drop the classifier.) Finally, you can get the (Artifact, File) pair for the artifact by mapping the packagedArtifact task. Note that if you don’t need the Artifact, you can get just the File from the package task ( package, packageDoc, or packageSrc). In both cases, mapping the task to get the file ensures that the artifact is generated first and so the file is guaranteed to be up-to-date. For example: val myTask = taskKey[Unit]("My task.") myTask := { val (art, file) = packagedArtifact.in(Compile, packageBin).value println("Artifact definition: " + art) println("Packaged file: " + file.getAbsolutePath) } In addition to configuring the built-in artifacts, you can declare other artifacts to publish. Multiple artifacts are allowed when using Ivy metadata, but a Maven POM file only supports distinguishing artifacts based on classifiers and these are not recorded in the POM. Basic Artifact construction look like: Artifact("name", "type", "extension") Artifact("name", "classifier") Artifact("name", url: URL) Artifact("name", Map("extra1" -> "value1", "extra2" -> "value2")) For example: Artifact("myproject", "zip", "zip") Artifact("myproject", "image", "jpg") Artifact("myproject", "jdk15") See the Ivy documentation for more details on artifacts. See the Artifact API for combining the parameters above and specifying [Configurations] and extra attributes. To declare these artifacts for publishing, map them to the task that generates the artifact: val myImageTask = taskKey[File](...) myImageTask := { val artifact: File = makeArtifact(...) artifact } addArtifact( Artifact("myproject", "image", "jpg"), myImageTask ) addArtifact returns a sequence of settings (wrapped in a SettingsDefinition). In a full build configuration, usage looks like: ... lazy val proj = Project(...) .settings( addArtifact(...).settings ) ... A common use case for web applications is to publish the .war file instead of the .jar file. // disable .jar publishing publishArtifact in (Compile, packageBin) := false // create an Artifact for publishing the .war file artifact in (Compile, packageWar) := { val previous: Artifact = (artifact in (Compile, packageWar)).value previous.withType("war").withExtension("war") } // add the .war file to what gets published addArtifact(artifact in (Compile, packageWar), packageWar) To specify the artifacts to use from a dependency that has custom or multiple artifacts, use the artifacts method on your dependencies. For example: libraryDependencies += "org" % "name" % "rev" artifacts(Artifact("name", "type", "ext")) The from and classifer methods (described on the Library Management page) are actually convenience methods that translate to artifacts: def from(url: String) = artifacts( Artifact(name, new URL(url)) ) def classifier(c: String) = artifacts( Artifact(name, c) ) That is, the following two dependency declarations are equivalent: libraryDependencies += "org.testng" % "testng" % "5.7" classifier "jdk15" libraryDependencies += "org.testng" % "testng" % "5.7" artifacts(Artifact("testng", "jdk15") ) s-grep (run help last-grep/1.0.: Compile / unmanagedJars := (baseDirectory.value ** "*.jar").classpath If you want to add jars from multiple directories in addition to the default directory, you can do: Compile / unmanagedJars ++= {.combineDefault excluded: update / checksums := Nil To disable checksum creation during artifact publishing: publishLocal / checksums := Nil publish / checksums :=" sbt 0.13.6+ will try to reconstruct dependencies tree when it fails to resolve a managed dependency. This is an approximation, but it should help you figure out where the problematic dependency is coming from. When possible sbt will display the source position next to the modules: [warn] :::::::::::::::::::::::::::::::::::::::::::::: [warn] :: UNRESOLVED DEPENDENCIES :: [warn] :::::::::::::::::::::::::::::::::::::::::::::: [warn] :: foundrylogic.vpp#vpp;2.2.1: not found [warn] :::::::::::::::::::::::::::::::::::::::::::::: [warn] [warn] Note: Unresolved dependencies path: [warn] foundrylogic.vpp:vpp:2.2.1 [warn] +- org.apache.cayenne:cayenne-tools:3.0.2 [warn] +- org.apache.cayenne.plugins:maven-cayenne-plugin:3.0.2 (/foo/some-test/build.sbt#L28) [warn] +- d:d_2.10:0.1-SNAPSHOT "" Compile / resources ++=: Compile / classpathConfiguration := config("default") externalPom() or externalPom(Def.setting(baseDirectory.value / "custom-name.xml")) For example, a build.sbt using external Ivy files might look like: externalIvySettings() externalIvyFile(Def.setting(baseDirectory.value / "ivyA.xml")) Compile / classpathConfiguration := Compile Test / classpathConfiguration := Test Runtime / classpathConfiguration :=.Local. In case you need to define credentials to connect to your proxy repository, define an environment variable SBT_CREDENTIALS that points to the file containing your credentials: export SBT_CREDENTIALS="$HOME/.ivy2/.credentials" with file contents realm=My Nexus Repository Manager host=my.artifact.repo.net user=admin password=admin123"), simply set up two virtual/proxy repositories, one for maven and one for ivy. Here’s an example setup: NOTE: If using Nexus as the proxy repository, then it is very important that you set the layout policy to “permissive” for the proxy mapping that you create to the upstream repository. If you do not, Nexus will stop short of proxying the original request to this url and issue a HTTP 404 in its place and the dependency will not resolve. This / ".sbt" / ". Res”, “releases”) This is Sonatype OSS Maven Repository at Resolver.typesafeRepo("releases")(or “snapshots”) This is Typesafe Repository at Resolver.typesafeIvyRepo("releases")(or “snapshots”) This is Typesafe Ivy Repository at Resolver.sbtPluginRepo("releases")(or “snapshots”) This is sbt Community Repository at Resolver.bintrayRepo("owner", "repo")This is the Bintray repository at[owner]/[repo]/ Resolver.jcenterRepoThis is the Bintray JCenter repository at For example, to use the java.net repository, use the following setting in your build definition: resolvers += JavaNet1Repository Predefined repositories will go under Resolver going forward so they are in one place: Resolver.sonatypeRepo("releases") // Or "snapshots" sbt provides an interface to the repository types available in Ivy: file, URL, SSH, and SFTP. A key feature of repositories in Ivy is using patterns to configure repositories. Construct a repository definition using the factory in sbt.Resolver for the desired type. This factory creates a Repository object that can be further configured. The following table contains links to the Ivy documentation for the repository type and the API documentation for the factory and repository class. The SSH and SFTP repositories are configured identically except for the name of the factory. Use Resolver.ssh for SSH and Resolver.sftp for SFTP. These are basic examples that use the default Maven-style repository layout. Define a filesystem repository in the test directory of the current working directory and declare that publishing to this repository must be atomic. resolvers += Resolver.file("my-test-repo", file("test")) transactional() Define a URL repository at "". resolvers += Resolver.url("my-test-repo", url("")) To specify an Ivy repository, use: resolvers += Resolver.url("my-test-repo", url)(Resolver.ivyStylePatterns) or customize the layout pattern described in the Custom Layout section below. The following defines a repository that is served by SFTP from host "example.org": resolvers += Resolver.sftp("my-sftp-repo", "example.org") To explicitly specify the port: resolvers += Resolver.sftp("my-sftp-repo", "example.org", 22) To specify a base path: resolvers += Resolver.sftp("my-sftp-repo", "example.org", "maven2/repo-releases/") Authentication for the repositories returned by sftp and ssh can be configured by the as methods. To use password authentication: resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user", "password") or to be prompted for the password: resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user") To use key authentication: resolvers += { val keyFile: File = ... Resolver.ssh("my-ssh-repo", "example.org") as("user", keyFile, "keyFilePassword") } or if no keyfile password is required or if you want to be prompted for it: resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user", keyFile) To specify the permissions used when publishing to the server: resolvers += Resolver.ssh("my-ssh-repo", "example.org") withPermissions("0644") This is a chmod-like mode specification. These examples specify custom repository layouts using patterns. The factory methods accept an Patterns instance that defines the patterns to use. The patterns are first resolved against the base file or URL. The default patterns give the default Maven-style layout. Provide a different Patterns object to use a different layout. For example: resolvers += Resolver.url("my-test-repo", url)( Patterns("[organisation]/[module]/[revision]/[artifact].[ext]") ) You can specify multiple patterns or patterns for the metadata and artifacts separately. You can also specify whether the repository should be Maven compatible (as defined by Ivy). See the patterns API for the methods to use. For filesystem and URL repositories, you can specify absolute patterns by omitting the base URL, passing an empty Patterns instance, and using ivys and artifacts: resolvers += Resolver.url("my-test-repo") artifacts "[organisation]/[module]/[revision]/[artifact].[ext]" update and related tasks produce a value of type sbt.UpdateReport This data structure provides information about the resolved configurations, modules, and artifacts. At the top level, UpdateReport provides reports of type ConfigurationReport for each resolved configuration. A ConfigurationReport supplies reports (of type ModuleReport) for each module resolved for a given configuration. Finally, a ModuleReport lists each successfully retrieved Artifact and the File it was retrieved to as well as the Artifacts that couldn’t be downloaded. This missing Arifact list is always empty for update, which will fail if it is non-empty. However, it may be non-empty for updateClassifiers and updateSbtClassifers. A typical use of UpdateReport is to retrieve a list of files matching a filter. A conversion of type UpdateReport => RichUpdateReport implicitly provides these methods for UpdateReport. The filters are defined by the DependencyFilter, ConfigurationFilter, ModuleFilter, and ArtifactFilter types. Using these filter types, you can filter by the configuration name, the module organization, name, or revision, and the artifact name, type, extension, or classifier. The relevant methods (implicitly on UpdateReport) are: def matching(f: DependencyFilter): Seq[File] def select(configuration: ConfigurationFilter = ..., module: ModuleFilter = ..., artifact: ArtifactFilter = ...): Seq[File] Any argument to select may be omitted, in which case all values are allowed for the corresponding component. For example, if the ConfigurationFilter is not specified, all configurations are accepted. The individual filter types are discussed below. Configuration, module, and artifact filters are typically built by applying a NameFilter to each component of a Configuration, ModuleID, or Artifact. A basic NameFilter is implicitly constructed from a String, with * interpreted as a wildcard. import sbt._ // each argument is of type NameFilter val mf: ModuleFilter = moduleFilter(organization = "*sbt*", name = "main" | "actions", revision = "1.*" - "1.0") // unspecified arguments match everything by default val mf: ModuleFilter = moduleFilter(organization = "net.databinder") // specifying "*" is the same as omitting the argument val af: ArtifactFilter = artifactFilter(name = "*", `type` = "source", extension = "jar", classifier = "sources") val cf: ConfigurationFilter = configurationFilter(name = "compile" | "test") Alternatively, these filters, including a NameFilter, may be directly defined by an appropriate predicate (a single-argument function returning a Boolean). import sbt._ // here the function value of type String => Boolean is implicitly converted to a NameFilter val nf: NameFilter = (s: String) => s.startsWith("dispatch-") // a Set[String] is a function String => Boolean val acceptConfigs: Set[String] = Set("compile", "test") // implicitly converted to a ConfigurationFilter val cf: ConfigurationFilter = acceptConfigs val mf: ModuleFilter = (m: ModuleID) => m.organization contains "sbt" val af: ArtifactFilter = (a: Artifact) => a.classifier.isEmpty A configuration filter essentially wraps a NameFilter and is explicitly constructed by the configurationFilter method: def configurationFilter(name: NameFilter = ...): ConfigurationFilter If the argument is omitted, the filter matches all configurations. Functions of type String => Boolean are implicitly convertible to a ConfigurationFilter. As with ModuleFilter, ArtifactFilter, and NameFilter, the &, |, and - methods may be used to combine ConfigurationFilters. import sbt._ val a: ConfigurationFilter = Set("compile", "test") val b: ConfigurationFilter = (c: String) => c.startsWith("r") val c: ConfigurationFilter = a | b (The explicit types are optional here.) A module filter is defined by three NameFilters: one for the organization, one for the module name, and one for the revision. Each component filter must match for the whole module filter to match. A module filter is explicitly constructed by the moduleFilter method: def moduleFilter(organization: NameFilter = ..., name: NameFilter = ..., revision: NameFilter = ...): ModuleFilter An omitted argument does not contribute to the match. If all arguments are omitted, the filter matches all ModuleIDs. Functions of type ModuleID => Boolean are implicitly convertible to a ModuleFilter. As with ConfigurationFilter, ArtifactFilter, and NameFilter, the &, |, and - methods may be used to combine ModuleFilters: import sbt._ val a: ModuleFilter = moduleFilter(name = "dispatch-twitter", revision = "0.7.8") val b: ModuleFilter = moduleFilter(name = "dispatch-*") val c: ModuleFilter = b - a (The explicit types are optional here.) An artifact filter is defined by four NameFilters: one for the name, one for the type, one for the extension, and one for the classifier. Each component filter must match for the whole artifact filter to match. An artifact filter is explicitly constructed by the artifactFilter method: def artifactFilter(name: NameFilter = ..., `type`: NameFilter = ..., extension: NameFilter = ..., classifier: NameFilter = ...): ArtifactFilter Functions of type Artifact => Boolean are implicitly convertible to an ArtifactFilter. As with ConfigurationFilter, ModuleFilter, and NameFilter, the &, |, and - methods may be used to combine ArtifactFilters: import sbt._ val a: ArtifactFilter = artifactFilter(classifier = "javadoc") val b: ArtifactFilter = artifactFilter(`type` = "jar") val c: ArtifactFilter = b - a (The explicit types are optional here.) A DependencyFilter is typically constructed by combining other DependencyFilters together using &&, ||, and --. Configuration, module, and artifact filters are DependencyFilters themselves and can be used directly as a DependencyFilter or they can build up a DependencyFilter. Note that the symbols for the DependencyFilter combining methods are doubled up to distinguish them from the combinators of the more specific filters for configurations, modules, and artifacts. These double-character methods will always return a DependencyFilter, whereas the single character methods preserve the more specific filter type. For example: import sbt._ val df: DependencyFilter = configurationFilter(name = "compile" | "test") && artifactFilter(`type` = "jar") || moduleFilter(name = "dispatch-*") Here, we used && and || to combine individual component filters into a dependency filter, which can then be provided to the UpdateReport.matches method. Alternatively, the UpdateReport.select method may be used, which is equivalent to calling matches with its arguments combined with &&. Cached. The, the Cached Resolution feature creates minigraphs — one for each direct dependency appearing in all related subprojects. These minigraphs are resolved using Ivy’s resolution engine, and the result is stored locally under ~/.sbt/1.0/1.0 projects A, B, and C, all hitting the same set of json files. The actual speedup will vary case by case, but you should see significant speedup if you have many subprojects. An initial report from a user showed a change from 260s to 25s. Your mileage the seems experience may degrade. (This could be improved in the future) A setting key called updateOptions customizes the details of managed dependency resolution with the update task. One of its flags is called lastestSnapshots, which controls the behavior of the chained resolver. Up until 0.13.6, sbt was picking the first -SNAPSHOT revision it found along the chain. When latestSnapshots is enabled (default: true), it will look into all resolvers on the chain, and compare them using the publish date. The tradeoff is probably a longer resolution time if you have many remote repositories on the build or you live away from the severs. So here’s how to disable it: updateOptions := updateOptions.value.withLatestSnapshots(false) updateOptions can also be used to enable consolidated resolution for update task. updateOptions := updateOptions.value.withConsolidatedResolution(true) This feature is specifically designed to address Ivy resolution.. This part of the documentation has pages documenting particular sbt topics in detail. Before reading anything in here, you will need the information in the Getting Started Guide as a foundation. Tasks := sys sys := sys := sys := sys = sys.error("I didn't succeed.") def otherIntTask(): Int = try { intTask() } finally { println("finally") } intTask() It is obvious here that calling intTask() will never result in “finally” being printed. Input.7, the current sbt version is 1.2.6,: if the input derives from settings you need to use, for example, Def.taskDyn { ... }! A ._ ThisBuild / organization := "com.example" ThisBuild / scalaVersion := "2.12.7" ThisBuild / version := "0.1.0-SNAPSHOT" lazy val root = (project in file(".")) .settings(("[", ", ", "]") } This. State ... This" This part of the documentation has pages documenting particular sbt topics in detail. Before reading anything in here, you will need the information in the Getting Started Guide as a foundation. This page describes best practices for working with sbt. project/vs. ~/.sbt/ Anything that is necessary for building the project should go in project/. This includes things like the web plugin. ~/.sbt/ should contain local customizations and commands for working with a build, but are not necessary. An example is an IDE plugin. There are two options for settings that are specific to a user. An example of such a setting is inserting the local Maven repository at the beginning of the resolvers list: resolvers := { val localMaven = "Local Maven Repository" at "file://"+Path.userHome.absolutePath+"/.m2/repository" localMaven +: resolvers.value } .sbtfile, such as ~/.sbt/1.0/global.sbt. These settings will be applied to all projects. .sbtfile in a project that isn’t checked into version control, such as <project>/local.sbt. sbt combines the settings from multiple .sbt files, so you can still have the standard <project>/build.sbtand check that into version control. Put commands to be executed when sbt starts up in a .sbtrc file, one per line. These commands run before a project is loaded and are useful for defining aliases, for example. sbt executes commands in $HOME/.sbtrc (if it exists) and then <project>/.sbtrc (if it exists). Write any generated files to a subdirectory of the output directory, which is specified by the target setting. This makes it easy to clean up after a build and provides a single location to organize generated files. Any generated files that are specific to a Scala version should go in crossTarget for efficient cross-building. For generating sources and resources, see Generating Files. Don’t hard code constants, like the output directory target/. This is especially important for plugins. A user might change the target setting to point to build/, for example, and the plugin needs to respect that. Instead, use the setting, like: myDirectory := target.value / "sub-directory" A build naturally consists of a lot of file manipulation. How can we reconcile this with the task system, which otherwise helps us avoid mutable state? One approach, which is the recommended approach and the approach used by sbt’s default tasks, is to only write to any given file once and only from a single task. A build product (or by-product) should be written exactly once by only one task. The task should then, at a minimum, provide the Files created as its result. Another task that wants to use Files should map the task, simultaneously obtaining the File reference and ensuring that the task has run (and thus the file is constructed). Obviously you cannot do much about the user or other processes modifying the files, but you can make the I/O that is under the build’s control more predictable by treating file contents as immutable at the level of Tasks. For example: lazy val makeFile = taskKey[File]("Creates a file with some content.") // define a task that creates a file, // writes some content, and returns the File makeFile := { val f: File = file("/tmp/data.txt") IO.write(f, "Some content") f } // The result of makeFile is the constructed File, // so useFile can map makeFile and simultaneously // get the File and declare the dependency on makeFile useFile := doSomething( makeFile.value ) This arrangement is not always possible, but it should be the rule and not the exception. Construct only absolute Files. Either specify an absolute path file("/home/user/A.scala") or construct the file from an absolute base: base / "A.scala" This is related to the no hard coding best practice because the proper way involves referencing the baseDirectory setting. For example, the following defines the myPath setting to be the <base>/licenses/ directory. myPath := baseDirectory.value / "licenses" In Java (and thus in Scala), a relative File is relative to the current working directory. The working directory is not always the same as the build root directory for a number of reasons. The only exception to this rule is when specifying the base directory for a Project. Here, sbt will resolve a relative File against the build root directory for you for convenience. tokeneverywhere to clearly delimit tab completion boundaries. flatMapfor general recursion. sbt’s combinators are strict to limit the number of classes generated, so use flatMap like: lazy val parser: Parser[Int] = token(IntBasic) flatMap { i => if(i <= 0) success(i) else token(Space ~> parser) } This example defines a parser a whitespace-delimited list of integers, ending with a negative number, and returning that final, negative number. There enable SbtPlugin. lazy val root = (project in file(".")) .enablePlugins(SbtPlugin) .settings( name := "sbt-something" ). This page is intended primarily for sbt plugin authors. This page assumes you’ve read using plugins and Plugins. A plugin developer should strive for consistency and ease of use. Specifically: Here are some current plugin best practices. Note: Best practices are evolving, so check back frequently. Sometimes, you need a new key, because there is no existing sbt key. In this case, use a plugin-specific prefix. package sbtassembly import sbt._, Keys._ object AssemblyPlugin extends AutoPlugin { object autoImport { val assembly = taskKey[File]("Builds a deployable fat jar.") val assembleArtifact = settingKey[Boolean]("Enables (true) or disables (false) assembling an artifact.") val assemblyOption = taskKey[AssemblyOption]("Configuration for making a deployable fat jar.") val assembledMappings = taskKey[Seq[MappingSet]]("Keeps track of jar origins for each source.") val assemblyPackageScala = taskKey[File]("Produces the scala artifact.") val assemblyJarName = taskKey[String]("name of the fat jar") val assemblyMergeStrategy = settingKey[String => MergeStrategy]("mapping from archive member path to merge strategy") } import autoImport._ .... } In this approach, every val starts with assembly. A user of the plugin would refer to the settings like this in build.sbt: assembly / assemblyJarName := "something.jar" Inside sbt shell, the user can refer to the setting in the same way: sbt:helloworld> show assembly/assemblyJarName [info] helloworld-assembly-0.1.0-SNAPSHOT.jar Avoid sbt 0.12 style key names where the key’s Scala identifier and shell uses kebab-casing: val jarName = SettingKey[String]("assembly-jar-name") val jarName = SettingKey[String]("jar-name") val assemblyJarName = taskKey[String]("name of the fat jar") Because there’s a single namespace for keys both in build.sbt and in sbt shell, if different plugins use generic sounding key names like jarName and excludedFiles they will cause name conflict. Use the sbt-$projectname scheme to name your library and artifact. A plugin ecosystem with a consistent naming convention makes it easier for users to tell whether a project or dependency is an SBT plugin. If the project’s name is foobar the following holds: foobar foobar-sbt sbt-foobar-plugin sbt-foobar If your plugin provides an obvious “main” task, consider naming it foobar or foobar... to make it more intuitive to explore the capabilities of your plugin within the sbt shell and tab-completion. Name your plugin as FooBarPlugin. Users who have their build files in some package will not be able to use your plugin if it’s defined in default (no-name) package. Make sure people can find your plugin. Here are some of the recommended steps: sbt has a number of predefined keys. Where possible, reuse them in your plugin. For instance, don’t define: val sourceFiles = settingKey[Seq[File]]("Some source files") Instead, reuse sbt’s existing sources key. Your plugin should fit in naturally with the rest of the sbt ecosystem. The first thing you can do is to avoid defining commands, and use settings and tasks and task-scoping instead (see below for more on task-scoping). Most of the interesting things in sbt like compile, test and publish are provided using tasks. Tasks can take advantage of duplication reduction and parallel execution by the task engine. With features like ScopeFilter, many of the features that previously required commands are now possible using tasks. Settings can be composed from other settings and tasks. Tasks can be composed from other tasks and input tasks. Commands, on the other hand, cannot be composed from any of the above. In general, use the minimal thing that you need. One legitimate use of commands may be using plugin to access the build definition itself not the code. sbt-inspectr was implemented using a command before it became inspect tree. The core feature of sbt’s package task, for example, is implemented in sbt.Package, which can be called via its apply method. This allows greater reuse of the feature from other plugins such as sbt-assembly, which in return implements sbtassembly.Assembly object to implement its core feature. Follow their lead, and provide core feature in a plain old Scala object. If your plugin introduces either a new set of source code or its own library dependencies, only then you want your own configuration. Configurations should not be used to namespace keys for a plugin. If you’re merely adding tasks and settings, don’t define your own configuration. Instead, reuse an existing one or scope by the main task (see below). package sbtwhatever import sbt._, Keys._ object WhateverPlugin extends sbt.AutoPlugin { override def requires = plugins.JvmPlugin override def trigger = allRequirements object autoImport { // BAD sample lazy val Whatever = config("whatever") extend(Compile) lazy val specificKey = settingKey[String]("A plugin specific key") } import autoImport._ override lazy val projectSettings = Seq( specificKey in Whatever := "another opinion" // DON'T DO THIS ) } If your plugin introduces either a new set of source code or its own library dependencies, only then you want your own configuration. For instance, suppose you’ve built a plugin that performs fuzz testing that requires its own fuzzing library and fuzzing source code. scalaSource key can be reused similar to Compile and Test configuration, but scalaSource scoped to Fuzz configuration (denoted as scalaSource in Fuzz) can point to src/fuzz/scala so it is distinct from other Scala source directories. Thus, these three definitions use the same key, but they represent distinct values. So, in a user’s build.sbt, we might see: scalaSource in Fuzz := baseDirectory.value / "source" / "fuzz" / "scala" scalaSource in Compile := baseDirectory.value / "source" / "main" / "scala" In the fuzzing plugin, this is achieved with an inConfig definition: package sbtfuzz import sbt._, Keys._ object FuzzPlugin extends sbt.AutoPlugin { override def requires = plugins.JvmPlugin override def trigger = allRequirements object autoImport { lazy val Fuzz = config("fuzz") extend(Compile) } import autoImport._ lazy val baseFuzzSettings: Seq[Def.Setting[_]] = Seq( test := { println("fuzz test") } ) override lazy val projectSettings = inConfig(Fuzz)(baseFuzzSettings) } When defining a new type of configuration, e.g. lazy val Fuzz = config("fuzz") extend(Compile) should be used to create a configuration. Configurations actually tie into dependency resolution (with Ivy) and can alter generated pom files. Whether you ship with a configuration or not, a plugin should strive to support multiple configurations, including those created by the build user. Some tasks that are tied to a particular configuration can be re-used in other configurations. While you may not see the need immediately in your plugin, some project may and will ask you for the flexibility. Split your settings by the configuration axis like so: obfuscateStylesheet = settingKey[File]("obfuscate stylesheet") } import autoImport._ lazy val baseObfuscateSettings: Seq[Def.Setting[_]] = Seq( obfuscate := Obfuscate((sources in obfuscate).value), sources in obfuscate := sources.value ) override lazy val projectSettings = inConfig(Compile)(baseObfuscateSettings) } // core feature implemented here object Obfuscate { def apply(sources: Seq[File]): Seq[File] = { sources } } The baseObfuscateSettings value provides base configuration for the plugin’s tasks. This can be re-used in other configurations if projects require it. The obfuscateSettings value provides the default Compile scoped settings for projects to use directly. This gives the greatest flexibility in using features provided by a plugin. Here’s how the raw settings may be reused: import sbtobfuscate.ObfuscatePlugin lazy val app = (project in file("app")) .settings(inConfig(Test)(ObfuscatePlugin.baseObfuscateSettings)) In general, if a plugin provides keys (settings and tasks) with the widest scoping, and refer to them with the narrowest scoping, it will give the maximum flexibility to the build users. globalSettings If the default value of your settings or task does not transitively depend on a project-level settings (such as baseDirectory, compile, etc), define it in globalSettings. For example, in sbt.Defaults keys related to publishing such as licenses, developers, and scmInfo are all defined at the Global scope, typically to empty values like Nil and None. obfuscateOption = settingKey[ObfuscateOption]("options to configure obfuscate") } import autoImport._ override lazy val globalSettings = Seq( obfuscateOption := ObfuscateOption() ) override lazy val projectSettings = inConfig(Compile)( obfuscate := { Obfuscate( (obfuscate / sources).value, (obfuscate / obfuscateOption).value ) }, obfuscate / sources := sources.value ) } // core feature implemented here object Obfuscate { def apply(sources: Seq[File], opt: ObfuscateOption): Seq[File] = { sources } } In the above, obfuscateOption is set a default made-up value in the globalSettings; but is used as (obfuscate / obfuscateOption) in the projectSettings. This lets the user either set obfuscate / obfuscateOption at a particular subproject level, or scoped to ThisBuild affecting all subprojects: ThisBuild / obfuscate / obfuscateOption := ObfuscateOption().withX(true) Giving keys default values in global scope requires knowing that every key (if any) used to define that key must also be defined in global scope, otherwise it will fail at load time. Sometimes you want to define some settings for a particular “main” task in your plugin. In this instance, you can scope your settings using the task itself. See the baseObfuscateSettings: lazy val baseObfuscateSettings: Seq[Def.Setting[_]] = Seq( obfuscate := Obfuscate((sources in obfuscate).value), sources in obfuscate := sources.value ) In the above example, sources in obfuscate is scoped under the main task, obfuscate. globalSettings There may be times when you need to rewire an existing key in globalSettings. The general rule is be careful what you touch. Care should be taken to ensure previous settings from other plugins are not ignored. e.g. when creating a new onLoad handler, ensure that the previous onLoad handler is not removed. package sbtsomething import sbt._, Keys._ object MyPlugin extends AutoPlugin { override def requires = plugins.JvmPlugin override def trigger = allRequirements override val globalSettings: Seq[Def.Setting[_]] = Seq( onLoad in Global := (onLoad in Global).value andThen { state => ... return new state ... } ) } Travis=1.2.6 Your build will now use 1.2.6. jdk: oraclejdk8 scala: - 2.10.4 - 2.12.7 By default Travis CI executes sbt ++$TRAVIS_SCALA_VERSION test. Let’s specify that explicitly: language: scala jdk: oraclejdk8 scala: - 2.10.4 - 2.12.7 jdk: oraclejdk8: java -Dfile.encoding=UTF8 -Xms2048M -Xmx2048M -Xss6M -XX:ReservedCodeCacheSize=256M -jar /home/travis/.sbt/launchers/1.2: java -Xms2048M -Xmx2048M -Xss6M -Dfile.encoding=UTF8 -XX:ReservedCodeCacheSize=256M -Xms1024M -jar /home/travis/.sbt/launchers/1.2 jdk: oraclejdk8 scala: - 2.10.4 - 2.12.7 script: - sbt ++$TRAVIS_SCALA_VERSION test.. Here’s a sample that puts them all together. Remember, most of the sections are optional. # Use container-based infrastructure sudo: false language: scala jdk: oraclejdk8 # These directories are cached to S3 at the end of the build cache: directories: - $HOME/.ivy2/cache - $HOME/.sbt/boot/ jdk: oraclejdk8 env: # This splits the build into two parts matrix: - TEST_COMMAND="scripted sbt-assembly/*" - TEST_COMMAND="scripted merging/* caching/*" script: - sbt Let../[email protected],! s. Like we are able to cross build against multiple Scala versions, we can cross build sbt 1.0 plugins while staying on sbt 0.13. This is useful because we can port one plugin at a time. .settings( scalaVersion := "2.12.7", sbtVersion in Global := "1.2.6",. forSources := HiddenFileFilter || "*impl*" To have different filters for main and test libraries, configure Compile and Test separately: includeFilter in (Compile, unmanagedSources) := "*.scala" || "*.java" includeFilter in (Test, unmanagedSources) := HiddenFileFilter || "*impl*" Note: By default, sbt includes .scalaand .javasources, excluding hidden files. When sbt traverses unmanagedResourceDirectoriesResources := HiddenFileFilter || "*impl*" To have different filters for main and test libraries, configure Compile and Test separately: includeFilter in (Compile, unmanagedResources) := "*.txt" includeFilter in (Test, unmanagedResources) := "*. NOTE: For the efficiency of the build, sourceGenerators should avoid regenerating source files upon each call, and cache based on the input values using sbt.Tracked.{ inputChanged, outputChanged } etc instead.. NOTE: For the efficiency of the build, resourceGenerators should avoid regenerating resource files upon each call, and cache based on the input values using sbt.Tracked.{ inputChanged, outputChanged } etc instead. The) By := { (test in (core, Test)).value (test in (tools, Test)).value } A. One. s.2.6. If.6. Thus!") } ) ThisBuild / organization := "com.example" ThisBuild / scalaVersion := "2.12.7" ThisBuild / version := "0.1.0-SNAPSHOT" // An example project that only uses the Scalate utilities. lazy val a = (project in file("a")) .dependsOn(utils % "compile->scalate") //") // Defines the utilities project lazy val utils = (project in file("utils")) .settings( ) )) ) appendWithSession(transformed,.withRevision(revision = "1.8") else module } TL.2.6 = Classpaths.managedJars(proguardConfig, artifactTypes, update.value) // ... do something with , which includes proguard ... } It is possible to register additional jars that will be placed on sbt’s classpath.. This. This. The. The following is a conceptual diagram of the modular layers: This diagram is arranged such that each layer depends only on the layers underneath it. IO API is a low level API to deal with files and directories. Serialization API is an opinionated wrapper around Scala Pickling. The responsibility of the serialization API is to turn values into JSON. Util APIs provide commonly used features like logging and internal datatypes used by sbt. sbt’s library management system is based on Apache Ivy, and as such the concepts and terminology around the library management system are also influenced by Ivy. The responsibility of the library management API is to calculate the transitive dependency graph, and download artifacts from the given repositories. Incremental compilation of Scala is so fundamental that we now seldom think of it as a feature of sbt. There are number of subprojects/classes involved that are actually internal details, and we should use this opportunity to hide them. This is the part that’s exposed to build.sbt. The responsibility of the module is to load the build files and plugins, and provide a way for commands to be executed on the state. This might remain at sbt/sbt. The sbt launcher provides a generic container that can load and run programs resolved using the Ivy dependency manager. sbt uses this as the deployment mechanism, but it can be used for other purposes. See foundweekends/conscript and Launcher for more details. Currently developed in sbt/sbt-remote-control. sbt Server provides a JSON-based API wrapping functionality of the command line experience. One of the clients will be the “terminal client”, which subsumes the command line sbt shell. Other clients that are planned are IDE integrations. This website’s source.afmt) .... }")
https://www.scala-sbt.org/release/docs/Combined+Pages.html
CC-MAIN-2018-47
refinedweb
18,217
51.24
0 hi, i have written a function that can search through a folder and display a picture in a new window if the file is present but i always have to add the image file format or extension before the function can work. is there a way i can work around ignorin the exension like ".gif or .png" `from tkinter import * from PIL import * def display_pix(): hmm = Toplevel(window) hmm.geometry('300x300') label = Label(hmm,) label.pack() PhotoImage(file = 'C:\\Python34\\' + e.get()) logo = PhotoImage(file = 'C:\\Python34\\' + e.get()) label.img = logo label.config(image = label.img) window = Tk() e = Entry(window, width= 20) e.place(x = 50,y = 30) b = Button(window, text = 'search',command = display_pix) b.place(x = 70, y = 50) window.mainloop()`
https://www.daniweb.com/programming/software-development/threads/484323/displaying-an-image-without-adding-the-file-format-using-pil
CC-MAIN-2017-09
refinedweb
126
61.12
This lab will walk you through various tools in AI Platform Notebooks for exploring your data and prototyping ML models. What you learn You'll learn how to: - Create and customize an AI Platform Notebooks instance - Track your notebooks code with git, directly integrated into AI Platform Notebooks - Use the What-If Tool within your notebook The total cost to run this lab on Google Cloud is about $1. Full details on AI Platform Notebooks pricing can be found here. You'll need a Google Cloud Platform project with billing enabled to run this codelab. To create a project, follow the instructions here. Step 2: Enable the Compute Engine API Navigate to Compute Engine and select Enable if it isn't already enabled. You'll need this to create your notebook instance. Step 3: Create a notebook instance Navigate to AI Platform Notebooks section of your Cloud Console and click New Instance. Then select the latest TensorFlow 2 Enterprise instance type without GPUs: Give your instance a name or use the default. Then we'll explore the customization options. Click the Customize button: AI Platform Notebooks has many different customization options, including: the region your instance is deployed in, the image type, machine size, number of GPUs, and more. We'll use the defaults for region and environment. For machine configuration, we'll use an n1-standard-8 machine: We won't add any GPUs, and we'll use the defaults for boot disk, networking, and permission. Select Create to create your instance. This will take a few minutes to complete. Once the instance has been created you'll see a green checkmark next to it in the Notebooks UI. Select Open JupyterLab to open your instance and start prototyping: When you open the instance, create a new directory called codelab. This is the directory we'll be working from throughout this lab: Click into your newly created codelab directory by double-clicking on it and then select Python 3 notebook from the launcher: Rename the notebook to demo.ipynb, or whatever name you'd like to give it. Step 4: Import Python packages Create a new cell in the notebook and import the libraries we'll be using in this codelab: import pandas as pd import tensorflow as tf from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense import numpy as np import json from sklearn.model_selection import train_test_split from sklearn.utils import shuffle from google.cloud import bigquery from witwidget.notebook.visualization import WitWidget, WitConfigBuilder BigQuery, Google Cloud's big data warehouse, has made many datasets publicly available for your exploration. AI Platform Notebooks support direct integration with BigQuery without requiring authentication.. Step 2: Prepare the dataset for training Now that we've downloaded the dataset to our notebook as a Pandas DataFrame, we can do some pre-processing and split it into training and test sets. First, let's drop rows with null values from the dataset and shuffle the data: df = df.dropna() df = shuffle(df, random_state=2) Next, extract the label column into a separate variable and create a DataFrame with only our features. Since is_male is a boolean, we'll convert it to an integer so that all inputs to our model are numeric: labels = df['weight_pounds'] data = df.drop(columns=['weight_pounds']) data['is_male'] = data['is_male'].astype(int) Now if you preview our dataset by running data.head(), you should see the four features we'll be using for training. AI Platform Notebooks has a direct integration with git, so that you can do version control directly within your notebook environment. This supports committing code right in the notebook UI, or via the Terminal available in JupyterLab. In this section we'll initialize a git repository in our notebook and make our first commit via the UI. Step 1: Initialize a git repository From your codelab directory, select Git and then Init from the top menu bar in JupyterLab: When it asks if you want to make this directory a Git Repo, select Yes. Then select the Git icon on the left sidebar to see the status of your files and commits: Step 2: Make your first commit In this UI, you can add files to a commit, see file diffs (we'll get to that later), and commit your changes. Let's start by committing the notebook file we just added. Check the box next to your demo.ipynb notebook file to stage it for the commit (you can ignore the .ipynb_checkpoints/ directory). Enter a commit message in the text box and then click on the check mark to commit your changes: Enter your name and email when prompted. Then go back to the History tab to see your first commit: Note that the screenshots might not match your UI exactly, due to updates since this lab was published. We'll use the BigQuery natality dataset we've downloaded to our notebook to build a model that predicts baby weight. In this lab we'll be focusing on the notebook tooling, rather than the accuracy of the model itself. Step 1: Split your data into train and test sets We'll use the Scikit Learn train_test_split utility to split our data before building our model: x,y = data,labels x_train,x_test,y_train,y_test = train_test_split(x,y) Now we're ready to build our TensorFlow model! Step 2: Build and train the TensorFlow model We'll be building this model using the tf.keras Sequential model API, which lets us define our model as a stack of layers. All the code we need to build our model is here: model = Sequential([ Dense(64, activation='relu', input_shape=(len(x_train.iloc[0]),)), Dense(32, activation='relu'), Dense(1)] ) Then we'll compile our model so we can train it. Here we'll choose the model's optimizer, loss function, and metrics we'd like the model to log during training. Since this is a regression model (predicting a numerical value), we're using mean squared error instead of accuracy as our metric: model.compile(optimizer=tf.keras.optimizers.RMSprop(), loss=tf.keras.losses.MeanSquaredError(), metrics=['mae', 'mse']) You can use Keras's handy model.summary() function to see the shape and number of trainable parameters of your model at each layer. Now we're ready to train our model. All we need to do is call the fit() method, passing it our training data and labels. Here we'll use the optional validation_split parameter, which will hold a portion of our training data to validate the model at each step. Ideally you want to see training and validation loss both decreasing. But remember that in this example we're more focused on model and notebook tooling rather than model quality: model.fit(x_train, y_train, epochs=10, validation_split=0.1) Step 3: Generate predictions on test examples To see how our model is performing, let's generate some test predictions on the first 10 examples from our test dataset. num_examples = 10 predictions = model.predict(x_test[:num_examples]) And then we'll iterate over our model's predictions, comparing them to the actual value: for i in range(num_examples): print('Predicted val: ', predictions[i][0]) print('Actual val: ',y_test.iloc[i]) print() Step 4: Use git diff and commit your changes Now that you've made some changes to the notebook, you can try out the git diff feature available in the Notebooks git UI. The demo.ipynb notebook should now be under the "Changed" section in the UI. Hover over the filename and click on the diff icon: With that you should be able to see a diff of your changes, like the following: This time we'll commit our changes via the command line using Terminal. From the Git menu in the JupyterLab top menu bar, select Git Command in Terminal. If you have the git tab of your left sidebar open while you run the commands below, you'll be able to see your changes reflected in the git UI. In your new terminal instance, run the following to stage your notebook file for commit: git add demo.ipynb And then run the following to commit your changes (you can use whatever commit message you'd like): git commit -m "Build and train TF model" Then you should see your latest commit in the history: The What-If Tool is an interactive visual interface designed to help you visualize your datasets and better understand the output of your ML models. It is an open source tool created by the PAIR team at Google. While it works with any type of model, it has some features built exclusively for Cloud AI Platform. The What-If Tool comes pre-installed in Cloud AI Platform Notebooks instances with TensorFlow. Here we'll use it to see how our model is performing overall and inspect its behavior on data points from our test set. Step 1: Prepare data for the What-If Tool To make the most of the What-If Tool, we'll send it examples from our test set along with the ground truth labels for those examples ( y_test). That way we can compare what our model predicted to the ground truth. Run the line of code below to create a new DataFrame with our test examples and their labels: wit_data = pd.concat([x_test, y_test], axis=1) In this lab, we'll be connecting the What-If Tool to the model we've just trained in our notebook. In order to do that, we need to write a function that the tool will use to run these test data points to our model: def custom_predict(examples_to_infer): preds = model.predict(examples_to_infer) return preds Step 2: Instantiate the What-If Tool We'll instantiate the What-If Tool by passing it 500 examples from the concatenated test dataset + ground truth labels we just created. We create an instance of WitConfigBuilder to set up the tool, passing it our data, the custom predict function we defined above, along with our target (the thing we're predicting), and the model type: config_builder = (WitConfigBuilder(wit_data[:500].values.tolist(), data.columns.tolist() + ['weight_pounds']) .set_custom_predict_fn(custom_predict) .set_target_feature('weight_pounds') .set_model_type('regression')) WitWidget(config_builder, height=800) You should see something like this when the What-If Tool loads: On the x-axis, you can see your test data points spread out by the model's predicted weight value, weight_pounds. Step 3: Explore model behavior with the What-If Tool There are lots of cool things you can do with the What-If Tool. We'll explore just a few of them here. First, let's look at the datapoint editor. You can select any data point to see its features, and change the feature values. Start by clicking on any data point: On the left you'll see the feature values for the data point you've selected. You can also compare that data point's ground truth label with the value predicted by the model. In the left sidebar, you can also change feature values and re-run model prediction to see the effect this change had on your model. For example, we can change gestation_weeks to 30 for this data point by double-clicking on it an re-run prediction: Using the dropdown menus in the plot section of the What-If Tool, you can create all sorts of custom visualizations. For example, here's a chart with the models' predicted weight on the x-axis, the age of the mother on the y-axis, and points colored by their inference error (darker means a higher difference between predicted and actual weight). Here it looks like as weight decreases, the model's error increases slightly: Next, check the Partial dependence plots button on the left. This shows how each feature influences the model's prediction. For example, as gestation time increases, our model's predicted baby weight also increases: For more exploration ideas with the What-If Tool, check the links at the beginning of this section. Finally, we'll learn how to connect the git repo in our notebook instance to a repo in our GitHub account. If you'd like to do this step, you'll need a GitHub account. Step 1: Create a new repo on GitHub In your GitHub account, create a new repository. Give it a name and a description, decide if you'd like it to be public, and select Create repository (you don't need to initialize with a README). On the next page, you'll follow the instructions for pushing an existing repository from the command line. Open a Terminal window, and add your new repository as a remote. Replace username in the repo URL below with your GitHub username, and your-repo with the name of the one you just created: git remote add origin git@github.com:username/your-repo.git Step 2: Authenticate to GitHub in your notebooks instance Next you'll need to authenticate to GitHub from within your notebook instance. This process varies depending on whether you have two-factor authentication enabled on GitHub. If you're not sure where to start, follow the steps in the GitHub documentation to create an SSH key and then add the new key to GitHub. Step 3: Ensure you've correctly linked your GitHub repo To make sure you've set things up correctly, run git remote -v in your terminal. You should see your new repository listed as a remote. Once you see the URL of your GitHub repo and you've authenticated to GitHub from your notebook, you're ready to push directly to GitHub from your notebook instance. To sync your local notebook git repo with your newly created GitHub repo, click the cloud upload button at the top of the Git sidebar: Refresh your GitHub repository, and you should see your notebook code with your previous commits! If others have access to your GitHub repo and you'd like to pull down the latest changes to your notebook, click the cloud download icon to sync those changes. On the History tab of the Notebooks git UI, you can see if your local commits are synced with GitHub. In this example, origin/master corresponds with our repo on GitHub: Whenever you make new commits, just click the cloud upload button again to push those changes to your GitHub repo. You've done a lot in this lab 👏👏👏 To recap, you've learned how to: - Create an customize an AI Platform Notebook instance - Initialize a local git repo in that instance, add commits via the git UI or command line, view git diffs in the Notebook git UI - Build and train a simple TensorFlow 2 model - Use the What-If Tool within your Notebook instance - Connect your Notebook git repo to an external repository on GitHub. Using the Navigation menu in your Cloud Console, browse to Storage and delete both buckets you created to store your model assets.
https://codelabs.developers.google.com/codelabs/prototyping-caip-notebooks/?hl=tr
CC-MAIN-2021-43
refinedweb
2,497
59.43
CodePlexProject Hosting for Open Source Software I'm trying to follow the pattern found in the prototype branch in web api preview 5 for jsonp. The pattern seems to go like this: Create a handler to add a header value if some query string value is found. Create a formatter that checks for the header set in the response handler and modify the output. So what I have done is insert a few formatters into my config object. Is there a way for these to cascade? It seems like only one will ever execute. I would like for the OnWriteToStream to check for the header value and if it doesn't exist let the next formatter in the config check to see if it's header exists until it finds one and the output stream is written. If it doesn't find anything it defaults to the input accept type. Sounds like a switch statement... Do I need to create one formatter that acts as a factory/switch or is this already a built in path? Steve You should look into our MediaTypeMapping api. Using a mapping you could add logic that checks for the header and if the header is not found then it does not match. If you look at formatters they expose a collection of mappings. You can create your own derived one for custom logic. Glenn do you have a link to the api or docs or examples? or do I just do a find on the source code? Hi, As Glenn mentioned above, you can use the api called AddRequestHeaderMapping for your purpose. This AddRequestHeaderMapping adds a mapping into the MediaTypeMappings collection of the formatter. Example: public class FormatterA : MediaTypeFormatter { public FormatterA() { this. SupportedMediaTypes.Add("application/xyz"); this.AddRequestHeaderMapping("header1", "abc", StringComparison.OrdinalIgnoreCase, isValueSubstring: false, mediaType: "application/xyz"); } public class FormatterB : MediaTypeFormatter { public FormatterB() { this. SupportedMediaTypes.Add("application/yahoo"); this.AddRequestHeaderMapping("header1", "ghi", StringComparison.OrdinalIgnoreCase, isValueSubstring: false, mediaType: "application/yahoo"); } I am also looking for a way to *cascade* MediaTypeFormatters. first, i need JsonP, but I still want to use JsonFormatter second, i need pagination in my way, so I need another formatter before all other formatters any idea? thanks Only one formatter will be chosen, and you can't really *cascade* media type formatters together... Why do you want to cascade JsonPMediaTypeFormatter and JsonMediaTypeFormatter? By the way, JsonPMediaTypeFormatter does derive from JsonMediaTypeFormatter... Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
https://wcf.codeplex.com/discussions/275663
CC-MAIN-2017-09
refinedweb
434
57.87
How to: Use SharePoint Object Model Members Last modified: March 05, 2010 Applies to: InfoPath 2010 | InfoPath Forms Services | Office 2010 | SharePoint Server 2010 | Visual Studio | Visual Studio Tools for Microsoft Office Before you can program against members of the SharePoint object model from code running in an InfoPath form template, you must reference the Microsoft.SharePoint.dll assembly in the Microsoft Visual Studio Tools for Applications project for your form. To do that, you must have access to the file system of a licensed copy of Microsoft SharePoint Server 2010 or a server that is running Microsoft SharePoint Foundation 2010 so that you can obtain a copy of the Microsoft.SharePoint.dll assembly. Additionally, your form template must be deployed to the server as either a sandboxed or administrator-approved solution. For more information about these deployment options, see Publishing Forms with Code. By default, Microsoft.SharePoint.Server.dll is installed in C:\Program Files\Common Files\Microsoft Shared\Web Server\Extensions\14\ISAPI in the file system of SharePoint Server 2010 or a server that is running SharePoint Foundation 2010. To reference the Microsoft.SharePoint assembly from an InfoPath form's code project Copy the Microsoft.SharePoint.Server.dll assembly from the server to a local folder, or get access to the assembly from a shared folder. Open the form template project in Microsoft Visual Studio Tools for Applications. On the Project menu, click Add Reference. Click the Browse tab, locate and specify the assembly, and then click OK to add the reference. Now you can write code against members of the SharePoint object model from your form code. To make it easier to reference members of the Microsoft.SharePoint namespace, add using Microsoft.SharePoint; or Imports Microsoft.SharePoint to the directives at the beginning of your code file. For an example that shows how to use members of the SharePoint object model in an InfoPath form, see "Example 2: Managing Vendors in a SharePoint List" in Sample Sandboxed Solutions.
https://msdn.microsoft.com/en-us/library/ff660767(v=office.14).aspx
CC-MAIN-2017-47
refinedweb
332
54.93
source. To.. Some commands and plugins provide custom processing on files matching certain patterns. Per-user rule-based preferences are defined in BZR_HOME/rules. For further information on how rules are searched and the detailed syntax of the relevant files, see Rules in the Bazaar User Reference. Aliases.. While Bazaar is similar to other VCS tools in many ways, there are some important differences that are not necessarily obvious at first glance. This section attempts to explain some of the things users need to know in order to "grok" Bazaar, i.e. to deeply understand it. Note: It isn't necessary to fully understand this section to use Bazaar. You may wish to skim this section now and come back to it at a later time. All revisions in the mainline of a branch have a simple increasing integer. (First commit gets 1, 10th commit gets 10, etc.) This makes them fairly natural to use when you want to say "grab the 10th revision from my branch", or "fixed in revision 3050". For revisions which have been merged into a branch, a dotted notation is used (e.g., 3112.1.5). Dotted revision numbers have three numbers2. The first number indicates what mainline revision change is derived from. The second number is the branch counter. There can be many branches derived from the same revision, so they all get a unique number. The third number is the number of revisions since the branch started. For example, 3112.1.5 is the first branch from revision 3112, the fifth revision on that branch. Imagine a project with multiple developers contributing changes where many changes consist of a series of commits. To give a concrete example, consider the case where: - The tip of the project's trunk is revision 100. - Mary makes 3 changes to deliver feature X. - Bill makes 4 changes to deliver feature Y. If the developers are working in parallel and using a traditional centralized VCS approach, the project history will most likely be linear with Mary's changes and Bill's changes interleaved. It might look like this: 107: Add documentation for Y 106: Fix bug found in testing Y 105: Fix bug found in testing X 104: Add code for Y 103: Add documentation for X 102: Add code and tests for X 101: Add tests for Y 100: ... Many teams use this approach because their tools make branching and merging difficult. As a consequence, developers update from and commit to the trunk frequently, minimizing integration pain by spreading it over every commit. If you wish, you can use Bazaar exactly like this. Bazaar does offer other ways though that you ought to consider. An alternative approach encouraged by distributed VCS tools is to create feature branches and to integrate those when they are ready. In this case, Mary's feature branch would look like this: 103: Fix bug found in testing X 102: Add documentation for X 101: Add code and tests for X 100: ... And Bill's would look like this: 104: Add documentation for Y 103: Fix bug found in testing Y 102: Add code for Y 101: Add tests for Y 100: ... If the features were independent and you wanted to keep linear history, the changes could be pushed back into the trunk in batches. (Technically, there are several ways of doing that but that's beyond the scope of this discussion.) The resulting history might look like this: 107: Fix bug found in testing X 106: Add documentation for X 105: Add code and tests for X 104: Add documentation for Y 103: Fix bug found in testing Y 102: Add code for Y 101: Add tests for Y 100: ... While this takes a bit more effort to achieve, it has some advantages over having revisions randomly intermixed. Better still though, branches can be merged together forming a non-linear history. The result might look like this: 102: Merge feature X 100.2.3: Fix bug found in testing X 100.2.2: Add documentation for X 100.2.1: Add code and tests for X 101: Merge feature Y 100.1.4: Add documentation for Y 100.1.3: Fix bug found in testing Y 100.1.2: Add code for Y 100.1.1: Add tests for Y 100: ... Or more likely this: 102: Merge feature X 100.2.3: Fix bug 100.2.2: Add documentation 100.2.1: Add code and tests 101: Merge feature Y 100.1.4: Add documentation 100.1.3: Fix bug found in testing 100.1.2: Add code 100.1.1: Add tests 100: ... This is considered good for many reasons: - It makes it easier to understand the history of a project. Related changes are clustered together and clearly partitioned. - You can easily collapse history to see just the commits on the mainline of a branch. When viewing the trunk history like this, you only see high level commits (instead of a large number of commits uninteresting at this level). - If required, it makes backing out a feature much easier. - Continuous integration tools can be used to ensure that all tests still pass before committing a merge to the mainline. (In many cases, it isn't appropriate to trigger CI tools after every single commit as some tests will fail during development. In fact, adding the tests first - TDD style - will guarantee it!) In summary, the important points are: Organize your work using branches. Integrate changes using merge. Ordered revision numbers and hierarchy make history easier to follow. As explained above, Bazaar makes the distinction between: - mainline revisions, i.e. ones you committed in your branch, and - merged revisions, i.e. ones added as ancestors by committing a merge. Each branch effectively has its own view of history, i.e. different branches can give the same revision a different "local" revision number. Mainline revisions always get allocated single number revision numbers while merged revisions always get allocated dotted revision numbers. To extend the example above, here's what the revision history of Mary's branch would look like had she decided to merge the project trunk into her branch after completing her changes: 104: Merge mainline 100.2.1: Merge feature Y 100.1.4: Add documentation 100.1.3: Fix bug found in testing 100.1.2: Add code 100.1.1: Add tests 103: Fix bug found in testing X 102: Add documentation for X 101: Add code and tests for X 100: ... Once again, it's easy for Mary to look at just her top level of history to see the steps she has taken to develop this change. In this context, merging the trunk (and resolving any conflicts caused by doing that) is just one step as far as the history of this branch is concerned. It's important to remember that Bazaar is not changing history here, nor is it changing the global revision identifiers. You can always use the latter if you really want to. In fact, you can use the branch specific revision numbers when communicating as long as you provide the branch URL as context. (In many Bazaar projects, developers imply the central trunk branch if they exchange a revision number without a branch URL.) Merges do not change revision numbers in a branch, though they do allocate local revision numbers to newly merged revisions. The only time Bazaar will change revision numbers in a branch is when you explicitly ask it to mirror another branch. Note: Revisions are numbered in a stable way: if two branches have the same revision in their mainline, all revisions in the ancestry of that revision will have the same revision numbers. For example, if Alice and Bob's branches agree on revision 10, they will agree on all revisions before that. focused3. It has the same syntax as the per-project ignore file. Once you have completed some work, it's a good idea to review your changes prior to permanently recording it. This way, you can make sure you'll be committing what you intend>" The bzr log command shows a list of previous revisions. As distributed VCS tools like Bazaar make merging much easier than it is in central VCS tools, the history of a branch may often cat revision other than accidentally put the wrong tree under version control, simply delete the .bzr. If you want to undo changes to a particular file since the last commit but keep all the other changes in the tree, pass the filename as an argument to revert like this: bzr revert foo.pyjacket getting a copy of a branch, have a quick think about where to put it on your filesystem. For maximum storage efficiency down the track, it is recommended that branches be created somewhere under a directory that has been set up as a shared repository. (See Feature branches in n Organizing your workspace for a commonly used layout.) For example: bzr init-repo my-repo cd my-repo You are now ready to grab a branch from someone else and hack away. This example shows explicitly giving the directory name to use for the new branch: bzr branch /home/mary/cool-repo/cool-trunk cool. If you wish to see information about a branch including where it came from, use the info command. For example: bzr info cool If no branch is given, information on the current branch is displayed. Once someone has their own branch of a project, they can make and commit changes in parallel to any development proceeding on the original branch. Pretty soon though, these independent lines of development will need to be combined again. This process is known as merging. This sets the default merge branch if one is not already set. To change the default after it is set, use the --remember option.. As well as being reported by the merge command, the list of outstanding conflicts may be displayed at any time by using the conflicts command. It is also included as part of the output from the status command.. Advanced shared repository layouts) bzr push s PROJECT cd PROJECT bzr branch s bzr checkout --lightweight trunk my-sandbox cd my-sandbox (hack away) Note that ../PROJECT-1.0 bzr switch ../PROJECT-1.0 (fix bug in 1.0) bzr commit -m "blah, blah blah" bzr switch .. a checkout trunk bzr pull Use the... We hope that earlier chapters have given you a solid understanding of how Bazaar can assist you in being productive on your own and working effectively with others. If you are learning Bazaar for the first time, it might be good to try the procedures covered already for a while, coming back to this manual once you have mastered them. Remaining chapters covers various topics to guide you in further optimizing how you use Bazaar. Unless stated otherwise, the topics in this and remaining chapters are independent of each other and can therefore be read in whichever order you wish... For example, consider a working tree with one or more changes made ... $ bzr diff === modified file 'description.txt' --- description.txt +++ description.txt @@ -2,7 +2,7 @@ =============== These plugins -by Michael Ellerman +written by Michael Ellerman provide a very fine-grained 'undo' facility @@ -11,6 +11,6 @@ This allows you to undo some of your changes, -commit, and get +perform a commit, and get back to where you were before. The shelve command interactively asks which changes you want to retain in the working tree: $ bzr shelve --- description.txt +++ description.txt @@ -2,7 +2,7 @@ =============== These plugins -by Michael Ellerman +written by Michael Ellerman provide a very fine-grained 'undo' facility "1". If there are lots of changes in the working tree, you can provide the shelve command with a list of files and you will only be asked about changes in those files. After shelving changes, it's a good idea to use diff to confirm the tree has just the changes you expect: $ bzr diff === modified file 'description.txt' --- description.txt +++ description.txt @@ -2,7 +2,7 @@ =============== These plugins -by Michael Ellerman +written by Michael Ellerman provide a very fine-grained 'undo' facility Great - you're ready to commit: $ bzr commit -m "improve first sentence" At some later time, you can bring the shelved changes back into the working tree using unshelve: $ bzr unshelve Unshelving changes with id "1". M description.txt All changes applied successfully. If you want to, you can put multiple items on the shelf. Normally each time you run unshelve the most recently shelved changes will be reinstated. However, you can also unshelve changes in a different order by explicitly specifying which changes to unshelve.: Bazaar's default format does not yet support filtered views. That is likely to change in the near future. To use filtered views in the meantime, you currently need to upgrade to development-wt6 (or development-wt6-rich-root) format first.:. To create a stacked branch, use the stacked option of the branch command. For example: bzr branch --stacked source-url my-dir This will create my-dir as a stacked branch with no local revisions. If it is defined, the public branch associated with source-url will be used as the stacked-on location. Otherwise, source-url will be the stacked-on location. Direct creation of a stacked checkout is expected to be supported soon. In the meantime, a two step process is required: Most changes on most projects build on an existing branch such as the development trunk or current stable branch. Creating a new branch stacked on one of these is easy to do using the push command like this: bzr push --stacked-on reference-url my-url This creates a new branch at my-url that is stacked on reference-url and only contains the revisions in the current branch that are not already in the branch at reference-url.. B One way to customize Bazaar's behaviour is with hooks. Hooks allow you to perform actions before or after certain Bazaar operations. The operations include commit, push, pull, and uncommit. For a complete list of hooks and their parameters, see Hooks in the User Reference. Most hooks are run on the client, but a few are run on the server. (Also see the bzr-push-and-update plugin that handles one special case of server-side operations.) To use a hook, you should write a plugin. Instead of creating a new command, this plugin will define and install the hook. Here's an example: from bzrlib import branch def post_push_hook(push_result): print "The new revno is %d" % push_result.new_revno branch.Branch.hooks.install_named_hook('post_push', post_push_hook, 'My post_push hook') To use this example, create a file named push_hook.py, and stick it in plugins subdirectory of your configuration directory. (If you have never installed any plugins, you may need to create the plugins directory). That's it! The next time you push, it should show "The new revno is...". Of course, hooks can be much more elaborate than this, because you have the full power of Python at your disposal. Now that you know how to use hooks, what you do with them is up to you., use the hidden hooks command: bzr hooks If you only need the last revision number in your build scripts, you can use the revno command to get that value like this: $ bzr revno 3104 The version-info command can be used to output more information about the latest version like this: $ bzr version-info revision-id: pqm@pqm.ubuntu.com-20071211175118-s94sizduj201hrs5 date: 2007-12-11 17:51:18 +0000 build-date: 2007-12-13 13:14:51 +1000 revno: 3104 branch-nick: bzr.dev You can easily filter that output using operating system tools or scripts. For example (on Linux/Unix): $ bzr version-info | grep ^date date: 2007-12-11 17:51:18 +0000 The --all option will actually dump version information about every revision if you need that information for more advanced post-processing. If using a Makefile to build your project, you can generate the version information file as simply as: library/_version.py: bzr version-info --format python > library/_version.py This generates a file which contains 3 dictionaries: - version_info: A dictionary containing the basic information about the current state. - revisions: A dictionary listing all of the revisions in the history of the tree, along with the commit times and commit message. This defaults to being empty unless --all or --include-history is supplied. This is useful if you want to track what bug fixes, etc, might be included in the released version. But for many projects it is more information than needed. - file_revisions: A dictionary listing the last-modified revision for all files in the project. This can be used similarly to how $Id$ keywords are used in CVS-controlled files. The last modified date can be determined by looking in the revisions map. This is also empty by default, and enabled only by --all or --include-file-revisions. Bazaar supports a template-based method for getting version information in arbitrary formats. The --custom option to version-info can be used by providing a --template argument that contains variables that will be expanded based on the status of the working tree. For example, to generate a C header file with a formatted string containing the current revision number: bzr version-info --custom \ --template="#define VERSION_INFO \"Project 1.2.3 (r{revno})\"\n" \ > version_info.h where the {revno} will be replaced by the revision number of the working tree. (If the example above doesn't work on your OS, try entering the command all on one line.) For more information on the variables that can be used in templates, see Version Info in the Bazaar User Reference. Predefined formats for dumping version information in specific languages are currently in development. Please contact us on the mailing list about your requirements in this area. Most information about the contents of the project can be cheaply determined by just reading the revision entry. However, it can be useful to know if the working tree was completely up-to-date when it was packaged, or if there was a local modification. By supplying either --all or --check-clean, bzr will inspect the working tree, and set the clean flag in version_info, as well as set entries in file_revisions as modified where appropriate. BzrTools is a collection of useful enhancements to Bazaar. For installation instructions, see the BzrTools home page:. Here is a sample of the frequently used commands it provides. bzr shell starts up a command interpreter than understands Bazaar commands natively. This has several advantages: - There's no need to type bzr at the front of every command. - Intelligent auto-completion is provided. - Commands run slightly faster as there's no need to load Bazaar's libraries each time. bzr cdiff provides a colored version of bzr diff output. On GNU/Linux, UNIX and OS X, this is often used like this: bzr cdiff | less -R bzr-svn lets developers use Bazaar as their VCS client on projects still using a central Subversion repository. Access to Subversion repositories is largely transparent, i.e. you can use most bzr commands directly on Subversion repositories exactly the same as if you were using bzr on native Bazaar branches. Many bzr-svn users create a local mirror of the central Subversion trunk, work in local feature branches, and submit their overall change back to Subversion when it is ready to go. This lets them gain many of the advantages of distributed VCS tools without interrupting existing team-wide processes and tool integration hooks currently built on top of Subversion. Indeed, this is a common interim step for teams looking to adopt Bazaar but who are unable to do so yet for timing or non-technical reasons. For installation instructions, see the bzr-svn home page:. Here's a simple example of how you can use bzr-svn to hack on a GNOME project like beagle. Firstly, setup a local shared repository for storing your branches in and checkout the trunk: bzr init-repo --default-rich-root beagle-repo cd beagle-repo bzr checkout svn+ssh://svn.gnome.org/svn/beagle/trunk beagle-trunk Note that using the default-rich-root (hack, hack, hack) bzr commit -m "blah blah blah" (hack, hack, hack) bzr commit -m "blah blah blah" When the feature is cooked, refresh your trunk mirror and merge your change: cd ../beagle-trunk bzr update bzr merge ../beagle-feature1 bzr commit -m "Complete comment for SVN commit" As your trunk mirror is a checkout, committing to it implicitly commits to the real Subversion trunk. That's it! For large projects, it often makes sense to tweak the recipe given above. In particular, the initial checkout can get quite slow so you may wish to import the Subversion repository into a Bazaar one once and for all for your project, and then branch from that native Bazaar repository instead. bzr-svn provides the svn-import command for doing this repository-to-repository conversion. Here's an example of how to use it: bzr svn-import svn+ssh://svn.gnome.org/svn/beagle Here's the recipe from above updated to use a central Bazaar mirror: bzr init-repo --default-rich-root beagle-repo cd beagle-repo bzr branch bzr+ssh://bzr.gnome.org/beagle.bzr/trunk beagle-trunk bzr branch beagle-trunk beagle-feature1 cd beagle-feature1 (hack, hack, hack) bzr commit -m "blah blah blah" (hack, hack, hack) bzr commit -m "blah blah blah" cd ../beagle-trunk bzr pull bzr merge ../beagle-feature1 bzr commit -m "Complete comment for SVN commit" bzr push In this case, committing to the trunk only commits the merge locally. To commit back to the master Subversion trunk, an additional command (bzr push) is required. Note: You'll need to give pull and push the relevant URLs the first time you use those commands in the trunk branch. After that, bzr remembers them. The final piece of the puzzle in this setup is to put scripts in place to keep the central Bazaar mirror synchronized with the Subversion one. This can be done by adding a cron job, using a Subversion hook, or whatever makes sense in your environment. Bazaar and Subversion are different tools with different capabilities so there will always be some limited interoperability issues. Here are some examples current as of bzr-svn 0.5.4: - Bazaar doesn't support versioned properties - Bazaar doesn't support tracking of file copies. See the bzr-svn web page,, for the current list of constraints. There are a range of options available for providing a web view of a Bazaar repository, the main one being Loggerhead. The homepage of Loggerhead can be found at. A list of alternative web viewers including download links can be found on. Note: If your project is hosted or mirrored on Launchpad, Loggerhead code browsing is provided as part of the service. B. B a an internal revision ID, as shown by bzr log and some other commands. For example: $ bzr log -r revid:Matthieu.Moy@imag.fr-20051026185030-93c7cad63ee570df ' path may be the URL of a remote branch, or the file path to a local branch. For example, to get the differences between this and another branch: $ bzr diff -r branch:: project/ # The overall repository, *and* the project's mainline branch + joe/ # Developer Joe's primary branch of development | +- feature1/ # Developer Joe's feature1 development branch | | +- broken/ # A staging branch for Joe to develop feature1 | +- feature2/ # Joe's feature2 development branch | ... + barry/ # Barry's development branch | ... +. Which. <name@isp.com> For more information on the ini file format, see Configuration Settings in the Bazaar User Reference. The second approach is to set email on a branch by branch basis by using the locations.conf configuration file like this: [/some/branch/location] email=Your Name <name@other-isp.com> This will set your email address in the branch at /some/branch/location, overriding the default specified in the bazaar.conf above.. This document describes one way to set up a Bazaar HTTP smart server, using Apache 2.0 and FastCGI or mod_python. For more information on the smart server, and other ways to configure it see the main smart server documentation. is part of pocoo. You sould make sure you place modpywsgi.py in the same directory as bzr-smart.py (ie. /srv/example.com/scripts/). Now you can use bzr+http:// URLs, e.g.: bzr log bzr+ Plain HTTP access should continue to work: bzr log 'bzrlib.relpath' variable set. The make_app helper used in the example constructs a SmartWSGIApp with a transport based on the root path given to it, and calculates the 'bzrlib.relpath` for each request based on the prefix and path_var arguments. In the example above, it will take the 'REQUEST_URI' (which is set by Apache), strip the '/code/' prefix and the '/.bzr/smart' suffix, and set that as the 'bzrlib.relpath', so that a request for '/code/foo/bar/.bzr/smart' will result in a 'bzrlib.relpath' of 'foo. Plugins are very similar to bzr core functionality. They can import anything in bzrlib. A plugin may simply override standard functionality, but most plugins supply new commands.. Simply define version_info to be a tuple defining the current version number of your plugin. eg. version_info = (0, 9, 0) version_info = (0, 9, 0, 'dev', 0) Bzr will scan bzrlib/plugins and ~/.bazaar/plugins for plugins by default. You can override this with BZR_PLUGIN. Please feel free to contribute your plugin to BzrTools, if you think it would be useful to other people. See the Bazaar Developer Guide for details on Bazaar's development guidelines and policies.
http://doc.bazaar.canonical.com/bzr.1.16/en/user-guide/
CC-MAIN-2018-05
refinedweb
4,297
63.9
#include <SmiCoreCombineRule.hpp> Inheritance diagram for SmiCoreCombineRule: In the Stochastic MPS standard, stochastic data updates the underlying core lp data. To specify a new scenario, one only has to identify those data that are different. So, in a sense, the stochastic data is really a "diff" between the scenario and the core data. This class specifies how to perform the "undiff", that is, how to combine core and stochastic data. And of course, a complete implementation specifies the "diff" part as well. Now during a fit of original confusion in the birth of the SMPS standard, we decided to make default combine rule "replace", which has a rather special "diff", but we've learned to live with it. There only needs to be one of these classes. so they're singletons. Definition at line 39 of file SmiCoreCombineRule.hpp. Process. Definition at line 50 of file SmiCoreCombineRule.hpp.
http://www.coin-or.org/Doxygen/Smi/class_smi_core_combine_rule.html
crawl-003
refinedweb
148
56.96
here is the problem: Write a C/C++ program (call it string invert) that takes a string argument from the command line and outputs the string in reversed order. Here comes the twist: Each process can output at most one character. If you want to output more than a single character, you must fork off one or more processes in order to do that, and each of the forked processes in turn outputs a single character. After the call to program string invert with the command line argument, the output should appear, and no more processes should be running, in addition to the shell. Test your program on any UNIX/LINUX machine, and turn in the source code as part of the written assignment. and i really dunno how to use fork to make this work, should I recursive call the function to make this work? or by creating more child processes? I am really new to this fork function and have no idea how to really apply it in this situation, so please help me!!! thank you! #include <stdio.h> /* needed for printf() and fprintf() */ #include <stdlib.h> /* needed for EXIT_FAILURE/EXIT_SUCCESS */ #include <string.h> /* needed for strerror() */ #include <unistd.h> /* needed for fork() and getpid() */ #include <sys/types.h> /* needed for pid_t */ #include <iostream> using namespace std; string result; void print(string s){ pid_t pid; cout<<"once\n"; pid = fork(); cout<<"twice\n"; if(pid == 0){ //child print(s) exit(0); } else if (pid < 0){ //eror } else{ cout<<result; wait(NULL); } return 0; } int main(){ string s; cout<<"enter string you want to invert\n"; cin>>s print(s); }
https://www.daniweb.com/programming/software-development/threads/410364/invert-a-string-using-fork
CC-MAIN-2019-04
refinedweb
271
69.01
Hey everyone, I am wondering whether it is possible to get the x and y values of my mouse click on a graph. Basically something like a MATLAB or matplotlib’s ginput function. I have searched extensively online but didn’t really find a solution. The closest thing I have found is the scatter.on_click() function. This function takes a callback function but it returns nothing. I have seen the official example using it with the “click event” as an update to the figure (i.e. change symbol colors and size for the clicked-on points). Is it possible to define a new callback function that would return the value as something that I can then work with? I have managed to change the callback function in the official example to def return_point(trace, points, selector): xt=points.xs print(xt) but still I can only print but not store these x values. Really appreciate your help!
https://community.plotly.com/t/return-something-as-a-click-event/42133
CC-MAIN-2022-21
refinedweb
157
66.23
DBIx::Objects - Perl extension to ease creation of database-bound objects This module is intended to provide an object-oriented framework for accessing data sources. The source of the data is completely abstract, allowing for complete flexibility for the data back-end. This module is NOT intended to provide a persistance layer - please use another module (like TANGRAMS) if you require object persistance. I'm really not sure how to go about documenting this library, so let me start by explaining the history of why it was written I developed this module when I began to notice that most of my web-applications followed a very similar format - there was a data back end and web methods which could interoperate with them. When I started to need helper applications to work with the web-apps, I started porting all of my applications to use 2 layers. The lower layer was an object framework which contained the Perl code needed to work with the database. This way, I could be sure that all of the helper applications, and of course the web application, all used the same access methods to get to the database to eleminate the possibility that something was getting f#$%ed up in the database by a faulty query somewhere in the big mess of code. (The upper layer was the "business logic" layer, which was the web or helper application.) Then, I noticed that all of these database access objects were very similar: they all had access methods for each member of the class, which represented a single field in the database and had select/insert/update/delete routines. I'd also developed a "dynamic object" at this point, where I'd have a huge variable-length field in the database which conatained many fields. This way I could change the object without worrying about compatibility in the back end database if I added/changed/removed fields. (We'll get back to this later.) Beyond that, there were different ways of embedding objects (for example, a person object might have a phone number object embedded in it as part of an address-book application). (We'll get back to this later, too). So there were different ways of logically grouping different sets of data, but the objects all shared a unified way of accessing the data. Thus was DBIx::Objects born - it provided a framework which would reallly guarantee that the objects would really function in a logically similar way - similar to the way that most GUI applications work in logically similar ways (they all have that File menu with Open , Save, Exit... The Help menu with Help topics, an optional upgrade, etc). So I guess you could call this library an API for developing database bound objects. For more information, see The most basic type of object that can be used with this library simply get tied directly to fields in the database. This is your constructor. Anything that is important for you to add to the object's new() method should be done here. DO NOT DECLARE YOUR OWN new() FUNCTION! This function gets a bless()ed $self passed as the first parameter, followed by the argument list to the new() call. The constructor is expected to call the internal _blank() function described below, and should call _primary(), if applicable. In addition to being called by the constructor, this function is also called every time an empty (blank) instance of the object is needed. A sample structure for the blank() method is included for reference: sub blank { my $self=shift; $self->_register; $self->_blank("FOO", "BAR", ... , "LAST"); $self->_primary(1); # Optional - Marks as containing primary key } This is called internally by refresh() and should contain code to sync the data structure with the back end database. Note that the $package variable is optional, but should be checked for (META: Remove code that makes this necessary). The subroutine should either refresh the data from the database and call _validate() or, if no suitable data is found, should call blank() [NOT _blank()]. In either case, it should return $self. A sample structure for the _refresh() method is included here for reference: sub _refresh { my $self=shift; # Shift $self my $package=(UNIVERSAL::isa($_[0],__PACKAGE__) && shift) || ref($self); # Detect $package my $sth=$dbh->prepare_cached('SELECT FIRSTNAME, LASTNAME FROM PEOPLE WHERE (ID=?)'); # Set SQL $sth->execute(@_) or return $self->blank; # Run SQL or return blank() object if ($sth->rows!=1) { # A good SQL statement should always return EXACTLY one row $self->blank; # Bad SQL - return blank() } else { # Good SQL my $res=$sth->fetchrow_hashref; # Fetch results $self->{FIRSTNAME}=$res->{FIRSTNAME}; # Each element from the result set $self->{LASTNAME}=$res->{LASTNAME}; # gets passed to exactly one hash element $self->_validate; # And call _validate() to mark it as clean and in-sync with the DB } $sth->finish; # Finished with SQL return $self; # ...and return $self } Beyond storing simple values, you can also magically construct objects using the primary key of the target objects. Using the above example, we spoke of a phone book catalog, which has a person object and a phone number object. Would it be nice, if instead of writing: my $phone_number=Phone_Number->new($person->phone_number); print "Number is ".$phone_number->thenumber; ... you could simply write ... print "Number is ".$person->phone_number->thenumber; The advanced objects API lets you do just that, with almost no extra work on your behalf. This should be called in the blank() constructor. It is used to mark access method $var as an embedded object of type $package. If $package is not passed as an argument, it will default to the current package (eg, Your::Object). To use the feature, simply store the data you want passed to the object's constructor the same way you'd store any other data for a normal access method, and when you _validate the object, the data will be appropriately stored. The object will be constructed on the first call to the access method. This should be called in the blank() constructor. It is used to mark access method $var as an embedded array of objects of type $package. If $package is not passed as an argument, it will default to the current package (eg, Your::Object). The functionality is similar to that of _object except that we are dealing with arrays. As such, care must be taken to properly initialize the array. To use it, store an array (not a reference) in the _refresh function. When _validate is called on the object, the array will be stored and objects will be called on subsequent access via the method call $var. The method call will return the number of objects in the array if called in a scalar context, and the array of objects if called in a list context. The objects will be constructed upon access. Note that this functionality is still under construction. This object provides some shortucts for DBIx::Objects which use DBI as the backend datasource. Gets/sets the DBI connection to use in the object. Use is as follows: $dbh=new DBI($DSN); ... (in blank() ) $self->_dbidbh($dbh); Gets/sets the SQL statement to use in refresh() calls. Paramaters can be used, and will be set by the parameters actually passed to refresh() Remember that internally, DBIx::Objects assumes that it should receive valid data by calling $self->refresh($self->id); If that won't work for you, consider overloading the id() call, or implementing your own refresh() routine This should be called by the blank() constructor. The arguments should be all of the access methods provided by this class. It should *not* include inherited access methods, as they will automatically be discovered by AUTOLOAD. This will register all of the access methods in the registry under the module's namespace, so that AUTOLOAD can auto-load the module to refresh or update the database for it. This method should be called from the _refresh function. It tells the object that its data has been updated and to remark itself as having fresh unchanged data. Copyright (c) 2003, 2004 Issac Goldstand <margol@beamartyr.net> - All rights reserved. This library is free software. It can be redistributed and/or modified under the same terms as Perl itself.
http://search.cpan.org/~isaac/DBIx-Objects-0.04/Objects.pm
CC-MAIN-2014-15
refinedweb
1,373
60.65
Feb 24, 2010 06:54 AM|zielony|LINK I use data annotation validators (with communicates with errors in Polish language): using System.ComponentModel; using System.ComponentModel.DataAnnotations; namespace MvcApplication2.Models { [MetadataType(typeof(FirmMetaData))] public partial class Firm { } public class FirmMetaData { public int id { get; set; } [Required(ErrorMessage = "Musisz podać email.")] [RegularExpression(@"^([0-9a-zA-Z]([-\.\w]*[0-9a-zA-Z])*@([0-9a-zA-Z][-\w]*[0-9a-zA-Z]\.)+[a-zA-Z]{2,9})$", ErrorMessage = "Niepoprawny email.")] [StringLength(100, ErrorMessage = "Email nie może zawierać więcej niż 100. znaków.")] public string email { get; set; } [Required(ErrorMessage = "Musisz podać opis.")] [StringLength(10000, ErrorMessage = "Opis nie może zawierać więcej niż 10 000. znaków.")] public string description { get; set; } } } but when I submit form with empty field "description" I see validation message for this field: "The value '' is invalid." - but it should be "Musisz podać opis.". I see that other person has the same problem: what is going on ? Member 237 Points Feb 24, 2010 10:24 AM|johannes.hiemer|LINK Hi, having exactly the same issue... Feb 24, 2010 02:59 PM|bradwils|LINK Are the other validators working properly? If not, then your problem is probably that you have the wrong partial class namespace. If the other validators are working, then are you using Entity Framework? And is Description a non-nullable string in your database? If so, then this is a bug which is known and fixed for RTM. Member 237 Points Feb 24, 2010 05:01 PM|johannes.hiemer|LINK Fits for me, thanks Brad! Member 1 Points Feb 24, 2010 11:46 PM|bradwils|LINK vikrorpert actually say it appears only when client scripts are turn off Right, that's expected. The behavior comes from the fact that Entity Framework is generating your property setters to include code that throws exceptions whenever you try to set a null into the property. The error in question is the generic error when we see an exception without a custom message. So the client-side works just fine. It's not until the data gets to the server that the EF code throws the exception. None 0 Points Aug 09, 2010 04:35 PM|Tanvir Hussain|LINK Hi Brad Thanks for the post. Did the error been fixed yet? 7 replies Last post Aug 09, 2010 04:35 PM by Tanvir Hussain
http://forums.asp.net/p/1529205/3699143.aspx
CC-MAIN-2015-22
refinedweb
390
59.4
All Things SDK Friday, December 7th will be my last full day in the office at Microsoft. December 31st will be my last day as a Microsoft employee.. I'll be spending the next year working on some home remodeling projects, spending more time in Hawaii and California and in my garden. After that, it will be time to catch up on my traveling and scuba diving. I leave you with two quotes I've grown fond of recently: Doubt is not a pleasant condition, but certainty is absurd. Voltaire (1694 - 1778) Time is an illusion. Lunchtime doubly so. Douglas Adams (1952 - 2001) Lori Pearce Two articles caught my attention this week. The first was Walt Mossberg's review of SYNC, a joint project between Ford and Microsoft. .)" The fact that this system works with just about any USB media device (and not just an IPod) is great news for those of us who prefer devices that don't lock you into one format or one music store. I am currently using a Creative Zen player; it's my second Creative Zen device and I really like how well it works with Windows Media Player. The other article that caught my attention was an article on optimism and the brain by Robert Lee Hotz. ." Hope you enjoy learning about SYNC and the optimistic brain! Check out HP's on-line "Color Thesarus" here. I submited the ever popular "Burnt Sienna" and it displayed a color square, listed the RGB values for Burnt Sienna and displayed plus four similar (synonym) colors and four antonym colors..? Seems like underutilization of expensive hardware to me…must make the hardware engineers cry themselves to sleep at night.. Of course, it’s a whole other story when you are talking about server applications like SQL and Exchange. They can take full advantage of what 64 bit OSs offer. Most folks will tell you that it's the lack of good device drivers on 64 bit that is holding up client adoption. But apparently it’s more than that. Today, there was an article on vnunet.com about how poorly anti-virus software is doing on 64 bit Windows Vista. And where are the 64 bit games? It seems like some of the higher end games could take advantage of 64 bit. I figure they are likely to be waiting on the device drivers and watching the 64bit OS sales figures. I believe that when Microsoft starts producing 64 bit versions of its flagship client applications (Office, Visual Studio, etc), we will finally start to see the migration from 32 bit to 64 bit OS installs. Other developers will follow our lead. Question is, when will we step up? Check out this article on an interesting study done by Stanford researchers on what the brain is doing during pauses. Why has there been no posting here in a very long time? Frankly, it’s because I find reading other people’s content more interesting than writing or reading my own. If my blog is not interesting to me, why would it be interesting to you? On the off chance that you’ll find any of the sites I read regularly as interesting as I do, this blog post is for you. 1. I read the local news online—I haven’t read an actual hold-in-your-hands newspaper in a long time. For local news I go to the following web sites: The Seattle Times, The Seattle Post-Intelligencer, KOMO TV, KIRO TV. Why don’t I check out NWCN or KING5? Because they want me to register and that bugs me—I avoid sites that require registration whenever possible. Yeah, I know, I could just provide an anonymous Hotmail account, but why bother? There are plenty of places to get the news. I also read a SFGate since I spent the first 30 years of my life in the Bay Area. Occasionally I check out the Marin Independent Journal, the Sacramento Bee (requires registrationL), The Mercury News and KCRA (not nearly as good as KOMO or KIRO) since I have friends and family in the Bay Area and in the Sacramento area. Another site that makes my required reading list is the Bridges Detail traffic flow map. 2. For national and international news I check out the usual suspects: CNN and BBC and The Wall Street Journal (Microsoft employees have access). I get my stock quotes from MSN—I am sure there are better sites, but MSN is easy. I also get movie times from MSN. The main MSN site is a little too “pop culture” for me. Wikipedia is also a frequent source of quick info. 3. For sports (mostly baseball), I occasionally check out a game with ESPN’s GameCast or listen in with MLB’s GameDay Audio. 4. For tech related news I go to CNET, ZDNet (yeah…I know…it’s mostly the same content), InfoWorld (I still remember when I used to read the actual hardcopy) and Silicon Valley.Com. I have been a fan of Good Morning Silicon Valley for many years and have recently added All Things Digital to my list of regularly sites after John Paczkowski moved there (and Mossberg is always worth a read too J). I used to visit a lot of other tech related sites but got fed up with ads that cover the screen or the ads they make you watch or skip before getting to the real site. 5. I am a good corporate citizen and use as my search site. It works great for me, but if ever fails me, I know who to send feedback to. My home page at work is the internal Microsoft Web site. They revamp and refresh the site from time to time and I think they do an excellent job on it overall. In general, Microsoft has a wealth of internal websites with all the information you could ever need…until they don’t and then you’re hosed and have to find a real, live person to talk to. Easier said than done. 6. What do I read “for fun”? Not much—for fun I usually turn to actual hardcopy books—but I do have a few I check out from time to time. Wet Pixel is one of my favorite scuba related sites. Their Photo of the Week contest is wonderful. I also check out Kona Web, Waikoloa Weather and West Hawaii Today (requires registrationL) for news about my favorite places in Hawaii. 7. And lastly, there are shopping sites. In general, I hate shopping and shopping malls, so I am an internet retailer’s dream customer. Lands’ End and LL Bean (I swear those two companies were separated at birth), Coldwater Creek, Amazon.Com, and occasionally TravelSmith, Orvis, and FrontGate. With the exception of Amazon, I get paper catalogs from all those companies—the USPS must really hate me around the holidays. Now that I have bored you silly, I’ll sign off. Lori Hack, cough, cough...man, it's getting really dusty around here... Yeah...I know...it's been a while...my bad. No excuses. Let me get straight to the new data. The Windows SDK Team recently posted the RC1 version of the Windows SDK for Windows Vista and .NET 3.0 RC1. You can download it here. This summer, we welcomed 7 new team members: Tom Archer, Karen Dominguez, Abhita Chugh, Lisa Supinksi, Sean Grimaldi, Pat Litherland and just this week, Shawn Henry. It's exciting to have so many new faces on the team. I can feel the energy and enthusiams as the newbies dive in to their new roles. I'd like to use this post to welcome Tom Archer to the Windows SDK Team. Today is Tom's official first day--though he's been working with us for many months in his prior role as Content Strategist for MSDN. Tom will be taking over the Tools Program Manager role from Brent Rector who has moved on to bigger and better things on another team (Congrat's Brent!). Tools is a big, complicated area in the Windows SDK so Tom is going to be quite busy. We have three Program Manager Openings: The Windows SDK Team is a great team that touches on all the developer technologies in the Windows Operating system including WinFX. You'll have the opportunity to work with great people from all across the Platforms group. We're a small team, so each and every PM owns their areas from start to finish. If you're a customer focused person who thinks software development can change the world, come join us on the Windows SDK Team.. The gallery can be seen here. The flippant answer is: "Because old APIs never die, we just add new ones." The biggest single part of the SDK is the documentaion. The biggest part of the documentation is the WinFX API reference. Below is a breakdown of the January CTP: Win32: 211 MB (reference and conceptual) WinFX: 467 MB - Conceptual/Tools/GenRef/Portals: 47.3 MB - Integrated Samples: 103 MB - Managed Ref: 253 MB - Collection-level files: 63.7 MB As you can see, the managed API reference pages alone are over 250 MB (that's in a compressed help file). These represent 80 assemblies with 281 namespaces and 10741 types with 103704 members. Each one has a reference page.. The other key parts of the Windows SDK include the tools. In the January CTP we ship a bunch of C++ compilers and other tools. Last but not least, are the samples that exist only in the file system--most of the Win32 samples fall into this bucket. Luckily, these compress very well for download.
http://blogs.msdn.com/loripe/
crawl-002
refinedweb
1,630
72.16
YouTube’s mixed easy movie access with community uploads to create a startling new service. The online problem is that you can only access YouTube when you are online. How can you access those movies when you are offline? Let’s solve that problem by building a downloader with Flex and AIR. In this article we will build a cross platform application that searches for YouTube videos and then provides a mechanism to download those videos and view them locally. You will be able to take your favorite YouTube videos with you wherever you go. Requirements Sample files: YouTube.zip Let’s start by building the search user interface. Searching YouTube YouTube provides a set of RSS feeds that keep you up to date with the latest videos. These feeds take lots of parameters to refine what you are looking for, one of those is an arbitrary keyword search. The Flex code below uses the YouTube search feed to request a set of videos based on the user’s search criteria. The code then uses the e4x language extensions in ActionScript 3 to parse the feed and present the video thumbnails in a TileList. <?xml version="1.0" encoding="utf-8"?> <mx:WindowedApplication xmlns: <mx:Script> <![CDATA[ import mx.rpc.events.ResultEvent; import mx.rpc.http.HTTPService; namespace <mx:TextInput <mx:Button </mx:HBox> <mx:TileList <mx:itemRenderer> <mx:Component> <mx:HBox <mx:Image </mx:Image> </mx:HBox> </mx:Component> </mx:itemRenderer> </mx:TileList> </mx:WindowedApplication> The onSearch method is called from the search button. It creates a new HTTPService on the fly with the URL of the YouTube feed for the search. It then registers onSearchResult as an event handler for the result. The onSearchResult method uses e4x to parse through each ‘entry’ tag in the RSS feed. For each movie it builds an object with three fields. The ‘id’ field holds the URL of the HTML page for the video. The ‘description’ field holds the textual description of the video. And the ‘thumbnail’ field holds the URL of the thumbnail. When I launch this MXML in an AIR project within Flex Builder 3 I see something like Figure 1. Figure 1-1. Just the YouTube search In this case I’ve typed in ‘super mario’ and pressed ‘search’ to get a list of the movies that matched that criteria. From here we need to add the ability to download the flash video, as well as play it back. Downloading From YouTube The user interface for the download version of the project is going to be a lot more complex than the search interface. We need to show the search results, allow for playback, and add a Save button to save the movie locally. The finished product is shown in Figure 2. Figure 1-2. The search and the downloader The interface is separated into two pieces, defined by panels. One panel is for searching and the other panel is shows the downloaded movies. The vertical divider allows you to scale the size of each of these panels to your preferred size. The search section is largely the same though we will add a progress indicator (invisible in the figure) next to the search button. That progress indicator will be used during the downloads of the movies since those tend to be fairly big files. The downloaded movies panel has the list of downloaded movies on the left, and the movie player on the right. As well as a Save button that is only visible if a movie is selected. The Save button allows the user to copy the downloaded movie out of the temporary directory onto the desktop (or wherever they choose). The user interface portion of the MXML code for this example is shown below: <?xml version="1.0" encoding="utf-8"?> <mx:WindowedApplication xmlns: <mx:VDividedBox <mx:Panel <mx:HBox <mx:TextInput <mx:Button <mx:ProgressBar </mx:HBox> <mx:TileList ... </mx:TileList> </mx:Panel> <mx:Panel <mx:HBox <mx:List <mx:itemRenderer> <mx:Component> <mx:HBox <mx:Image </mx:Image> </mx:HBox> </mx:Component> </mx:itemRenderer> </mx:List> <mx:VBox <mx:VideoDisplay <mx:Button </mx:VBox> </mx:HBox> </mx:Panel> </mx:VDividedBox> To keep the code sample short I’ve omit a bit of the original code from the display of the movies found during the search. Don’t be put off by the quantity of tags. The MXML is quite straightforward. Most of the tags define the ‘itemRenderer’s used by the list control to show the video thumbnails. The other elements, the panels, the video display, the save button and so on, are just single elements with a few attributes to refine them. With the interface all laid out it’s time to dig into the code. The search code remains exactly the same, but we’ve now added a thumbClick event handler which is called when the user double clicks on a thumbnail in the search panel. The thumbClick handler starts the request for the HTML page associated with the YouTube video. The onHTMLComplete method receives the HTML code for the page as text. It then calls getFLVURL to get the URL for the FLV data for the movie. The code for this is shown below: <mx:Script> <![CDATA[ import mx.rpc.events.ResultEvent; import mx.rpc.http.HTTPService; import com.adobe.serialization.json.JSONDecoder; namespace atom = "" ; namespace media = "" ; private function onSearch() : void { ...} private function onSearchResult( event:ResultEvent ) : void { ... } public function getFLVURL( sHTML:String ) : String { var swfArgsFound:Array = sHTML.match( /var swfArgs =(.*?);/ ); var swfArgsJS:JSONDecoder = new JSONDecoder( swfArgsFound[1] ); var swfArgs:Object = swfArgsJS.getValue(); var url:String = '' ; var first:Boolean = true ; for ( var k:String in swfArgs ) { if ( swfArgs[k] != null && swfArgs[k].toString().length > 0 ) { url += first ? '?' : '&' ; first = false ; url += k+'=' +escape(swfArgs[k]); } } return url; } private function onHTMLComplete( movie:Object, event:Event ) : void { var loader:URLLoader = event.target as URLLoader; var movieID:String = movie.id.split( /=/ )[1]; var flvStream:URLStream = startRequest( movieID+'.flv' , getFLVURL( loader.data ) ); startRequest( movieID+'.jpg' , movie.thumbnail ); flvStream.addEventListener(Event.COMPLETE,function (event:Event):void { downloadProgress.visible = false ; updateLocalVideos(); } ); downloadProgress.source = flvStream; downloadProgress.visible = true ; } private function onThumbClick() : void { var htmlLoader:URLLoader = new URLLoader(); htmlLoader.addEventListener(Event.COMPLETE, function ( event:Event ) : void { onHTMLComplete( thumbList.selectedItem, event ); } ); htmlLoader.load(new URLRequest(thumbList.selectedItem.id)); } Let me step back for a second and talk briefly about Flash Video. YouTube, like any other site that uses a Flash player to show videos, uses FLV as the movie format. What YouTube does is provide their Flash video application with enough data for it to construct a ‘source’ URL for it’s video display object. That ‘source’ URL is get_video.php along with a set of parameters. Those parameters are stored in a Javascript variable called ‘swfVars’ which is embedded in the page. The GetFLVURL takes the Javascript from the page and extracts the ‘swfVars’. It then uses the JSONDecode class provided by the as3corelib () to decode the Javascript into an ActionScript object. From there it construct the FLV URL from the parameters in the ActionScript object. The weakness in this example application is that it uses this ‘screen scraping’ technique to get to the FLV URL. Unfortunately there is no easier way to do it. If the format of the YouTube HTML changes, then this code might need to be rewritten to compensate for the changes. Once the FLV URL is constructed the onHTMLComplete method calls startRequest on both the thumbnail and the FLV to download the data and store it locally. The code for this is shown below: private function onReqComplete( fileName:String, event:Event ) : void { var stream:URLStream = event.target as URLStream; var byteLength:int = stream.bytesAvailable; var bytes:ByteArray = new ByteArray(); stream.readBytes( bytes, 0, byteLength ); stream.close(); if ( File.applicationStorageDirectory.exists == false ) File.applicationStorageDirectory.createDirectory(); var f:File = new File( File.applicationStorageDirectory.nativePath + File.separator + fileName ); var fs:FileStream = new FileStream(); fs.open( f, FileMode.WRITE ); fs.writeBytes( bytes, 0, byteLength ); fs.close(); } private function startRequest( fileName:String, url:String ) : URLStream { var req:URLStream = new URLStream(); req.addEventListener(Event.COMPLETE, function ( event:Event ) : void { onReqComplete( fileName, event ); } ); req.load( new URLRequest( url ) ); return req; } The startRequest builds a URLStream object to get the data for the given URL. It then sets up onReqComplete as an event handler for when the download is complete. The onReqComplete uses the AIR File API to store the data, which is read directly into an AIR ByteArray class, into a file stored in the application storage directory. The application storage directory is maintained by AIR automatically for you. Once the files are downloaded the updateLocalVideos method is called. This method, shown below, updates the list of local videos in the downloaded videos panel. private function updateLocalVideos() : void { var fileNames:Object = new Object(); for each ( var file:File in File.applicationStorageDirectory.getDirectoryListing() ) { var fName:String = file.name.split( /[.]/ )[0]; fileNames[ fName ] = true ; } var movieList:Array = []; for ( var fileKey:String in fileNames ) { var thumb:File = new File( File.applicationStorageDirectory.nativePath + File.separator + fileKey + '.jpg' ); var movie:File = new File( File.applicationStorageDirectory.nativePath + File.separator + fileKey + '.flv' ); if ( thumb.exists && movie.exists ) movieList.push( { thumbnail: thumb, movie: movie } ); } localList.dataProvider = movieList; } To do that the method uses the getDirectoryListing method, provided by the AIR File API, to get all of the files in the application storage directory. It then chops off the extensions and creates a list of just the file names. From there builds the list of local movies by iterating through the file names and checking to make sure that both the ‘.flv’ file for the video, and the ‘.jpg’ file for the thumbnail, are available. With the list of local videos in hand the method sets the dataProvider of the local list to the movie list that it generated. From there the final few functions handle the playback and the saving of the FLV to the desktop. These methods are shown below: private function onMovieClick() : void { player.source = localList.selectedItem.movie.url; btnSave.visible = true ; } private function onSave() : void { var f:File = File.desktopDirectory; f.addEventListener(Event.SELECT,onSaveSelect); f.browseForSave("Save FLV" ); } private function onSaveSelect( event:Event ) : void { var f:File = event.target as File; var lf:File = localList.selectedItem.movie as File; lf.copyTo( f, true ); } ]]> </mx:Script> </mx:WindowedApplication> The onMovieClick method is called when the user double clicks on a movie in the local list. It just sets the source of the VideoDisplay to the url of the local ‘.flv’ file. The onSave method is called when the user clicks the Save button. It pops up a Save dialog using the AIR File API. If the user hits Save then the onSaveSelect method is called which copies the ‘.flv’ file from the local storage directory to the location they specify. You can see the interface for this in action in Figure 3. Figure 1-3. The save window On it’s face it seems like a lot of code, but it’s really not all that daunting when you break it down into it’s component pieces. Your Next Steps I hope you can leverage the code that I have presented in this article in your own work. There are some good reusable fragments including the file request and storage code in on ReqComplete. The JSON parsing in GetFLVURL can also come in handy when dealing with websites that lack a web services interface. Or in this case, have a web service interface that lacks the information you require. If you do make use of this code, be sure to let me know. You can contact me directly through my website;. July 8th, 2008 at 3:56 pm This only works for AIR. On Flex you get sandbox error. July 30th, 2008 at 3:49 pm Is it legal to download youtube movies? I’ve always wondered. July 30th, 2008 at 8:51 pm Hi Nate, I really dunno. What i know is that there is lot’s of software from known companies that allows you to download the Youtube videos and others. So, if that big companies release software with that feature, i’m intend to agree that is legal. August 11th, 2008 at 2:43 pm Thanks for this, ill take a look this afternoon. I dont think its legal or illegal to download from youtube. It will reflect anti-piracy. For example, if you view a video on youtube like a music video, id say that is youtubes issue to deal with the legalities. However, if you download the FLV and keep it, then its your issue because you own pirated material. Youtube never intended to allow flv downloads and these are mere hacks which could one day be patched. My 2 cents October 15th, 2008 at 9:41 pm A few people asked about the legality of downloading from YouTube, so I thought I’d throw in my .02. This type of application is against the YouTube terms of service. Flex Authority ( ). a quarterly print journal about Flex and related technologies, was going to print this article in our first issue. The tech editor raised questions on the legality and we ran it by the Google legal team who told us not to publish it; so we didn’t. I am not a lawyer, so you can argue amongst yourselves how YouTube’s terms of service relate to laws in your jurisdiction of choice. October 27th, 2008 at 1:26 pm This is some amazing stuff. Can this downloader be used on a website? November 28th, 2008 at 8:36 am there is no way to do run it, may seems youtube changed some url code April 29th, 2009 at 5:50 pm Nice article thanks May 5th, 2009 at 9:37 pm Does anyone know how to delete or remove a movie from the library after it has been downloaded and added to the list? June 9th, 2009 at 11:35 am hmhmh.. i liked it nC … i have one error while i am trying to run .swf files( from Air APPLICATIONS , with flex 3… ) but ok i hope i will fix it… good tutorial , continue !
http://www.thetechlabs.com/tutorials/audionvideo/creating-a-downloader-for-youtube-with-flexair-2/
crawl-002
refinedweb
2,353
66.33
#include <aio.h>)). If mode is LIO_NOWAIT, lio_listio() returns 0 if all I/O operations are successfully queued. Otherwise, −1 is returned, and errno is set to indicate the error. If mode is LIO_WAIT, lio_listio() returns 0 when all of the I/O operations have completed successfully. Otherwise, −1). The lio_listio() function may fail for the following reasons: Out of resources. The number of I/O operations specified by nitems would cause the limit AIO_MAX to be exceeded. mode was LIO_WAIT and a signal was caught before all I/O operations completed; see signal(7). (This may even be one of the signals used for asynchronous I/O completion notification.) mode is invalid, or nitems exceeds the limit AIO_LISTIO_MAX.. For an explanation of the terms used in this section, see attributes(7).. aio_cancel(3), aio_error(3), aio_fsync(3), aio_return(3), aio_suspend(3), aio_write(3), aio(7)
http://manpages.courier-mta.org/htmlman3/lio_listio.3.html
CC-MAIN-2021-17
refinedweb
146
58.18
MANUAL Documentation author K-Team Ch. de Vuasset, CP 111 1028 Prverenges Switzerland email: info@k-team.com WWW: Trademark Acknowledgements IBM PC: International Business Machines Corp. Macintosh: Apple Corp. SUN Sparc-Station: SUN Microsystems Corp. LabVIEW: National Instruments Corp. Khepera: K-Team NOTICE: The contents of this manual are subject to change without notice. All efforts have been made to ensure the accuracy of the content of this manual. However, should any error be detected, please inform K-Team. The above notwithstanding, K-Team can assume no responsibility for any error in this manual. TABLE OF CONTENTIntroduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 How to Use this Manual . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Safety Precautions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Recycling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Unpacking and Inspection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 The Robot and its Accessories . . . . . . . . . . . . . . . . . . . . . . . . . . 4 The Khepera miniature robot . . . . . . . . . . . . . . . . . . . . . . . 4 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 ON - OFF battery switch . . . . . . . . . . . . . . . . . . . . . 4 Jumpers, reset button and settings . . . . . . . . . . . . . . 5 The S serial line . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 Motors and motor control . . . . . . . . . . . . . . . . . . . . . 6 Infra-red proximity sensors . . . . . . . . . . . . . . . . . . . 8 Ambient light measurements . . . . . . . . . . . . . 9 Reflected light measurements . . . . . . . . . . . . 10 Batteries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 Cables and accessories . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 Power supply . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 Interface and charger module . . . . . . . . . . . . . . . . . . . . . 13 Software support floppy disks . . . . . . . . . . . . . . . . . . . . . 13 Unpacking Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Charging configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Configuration for robot-computer communication . . . . . 15 The serial communication protocol . . . . . . . . . . . . . . . . . . . . . . 18 The tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 The control protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 Testing a simple interaction . . . . . . . . . . . . . . . . . . . . . . . 20 Using LabVIEW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 Hardware configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 22 Set up of the serial link . . . . . . . . . . . . . . . . . . . . . . . . . . 22 Motors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 Braitenberg's vehicle . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 Advanced programming . . . . . . . . . . . . . . . . . . . . . . . . . 30 Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 Example of Braitenberg's vehicle . . . . . . . . . . . . . . 32 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 Appendix A: Communication protocol to control the robot . . . 34 Appendix B: Connectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 Appendix C: How to open the robot and change the ROM . . . 42 Appendix D: RS232 configuration . . . . . . . . . . . . . . . . . . . . . . 45 Appendix E: Running modes . . . . . . . . . . . . . . . . . . . . . . . . . . 48 INTRODUCTION Khepera has originally been designed as a research and teaching tool in the framework of a Swiss Research Priority Program. It allows confrontation to the real world of algorithms developed in simulation for trajectory execution, obstacle avoidance, preprocessing of sensory information, hypothesis on behaviours processing. To be able to program the robot easily, LabVIEW is proposed as a development environment. It is a graphical programming software, basically dedicated to instrumentation, which allows quick development of input-output interfaces, a necessity when dealing with the real world, by denition unpredictable and noisy. Please note that LabVIEW is just a suggestion and is absolutely not needed to use the robot. Any other environment able to deal with the serial port of your computer can be use instead of LabVIEW. The communication protocol implemented on Khepera and used by LabVIEW is presented in chapter 6 and described in detail in appendix A. 1.1 This manual is organised into six chapters and one appendix. To learn how to make the best use of your Khepera robot you are urged to read all of chapters 1 through 5. Chapter 6 presents the serial communication protocol that makes a remote control from a workstation possible. You need to read the chapter 7 if you use the software LabVIEW. The appendix can be referred to as necessary. Chapter 1 Chapter 2 Chapter 3 Chapter 4 Chapter 5 Chapter 6 Chapter 7 Gives an introduction to this manual and the Khepera robot. explains the contents of the package. explains the functionality of every item present in the package. explains how to make the rst test of the robot after unpacking. gives the standard working congurations. presents the serial communication protocol. is addressed to users of LabVIEW. It shows simple virtual instruments (VI) to control the robot functionality and a little example of programming in this environment. details the commands of the serial communication protocol. details the connectors pinning. details how to open the robot, which means disconnect the upper from the lower part, and change the ROM. Both these operations has to be made only if really necessary! gives some detail on the conguration of some common terminal emulators. details the different running modes. Appendix D Appendix E 1.2 Safety Precautions Check the unit's operating voltage before operation. It must be identical with that of your local power supply. The operating voltage is indicated on the nameplate at the rear of the power supply. Don't plug or unplug any connector when the system is switched ON. All connections (including extension addition or disconnection) must be made when the robot and the interface are switched OFF. Otherwise damages can occur. Switch OFF the robot if you will not use it for more than a day. Disconnect the power supply removing it from the wall socket. Do not open the robot (separate upper from lower part) if you do not have been explicitly allowed. Disconnect the CPU to the sensory-motor board only if you have been allowed by a specific documentation. Perform this operation following carefully the instructions given in appendix C. Do not manually force any mechanical movement. Avoid to force, by any mechanical way, the movement of the wheels or any other part. Avoid to push the robot in a way that forces the wheels. If you have any question or problem concerning the robot, please contact your Khepera dealer. 1.3 Recycling Think about the end of life of your robot! Parts of the robot can be recycled and it is important to do so. It is for instance important to keep Ni-Cd batteries out of the solid waste stream. When you throw away a Ni-Cd battery, it eventually ends up in a landll or municipal incinerator. These batteries, which contain heavy metals, can contribute to the toxicity levels of landlls or incinerator ash. By recycling the Ni-Cd batteries through recycling programs, you can help to create a cleaner and safer environment for generations to come. For those reasons please take care to the recycling of your robot at the end of its life cycle, for instance sending back the robot to the manufacturer or to your local dealer. Thanks for your contribution to a cleaner environment! 7 1 6 2 5 3 4 Open the bag and check each item in the box against figure 1: 1. The documentation you are reading now 2. Power supply 3. Interface and charger module 4. Three disks with the software modules (VIs) for LabVIEW - on SUN - on Macintosh - on PC 5. Cables: - Serial S - Battery 6. Spare parts: tires and jumpers 7. Your Khepera robot in the basic version 33.1 3.1.1 Side view 5 5 Bottom view 7 Make an external inspection of the robot. Note the location of the following parts: 1. LEDs 2. Serial line (S) connector. 3. Reset button. 4. Jumpers for the running mode selection. 5. Infra-Red proximity sensors. 6. Battery recharge connector. 7. ON - OFF battery switch. 8. Second reset button (same function as 3). 3.1.2 ON OFF OFF ON It allows the user to switch the battery of the robot ON or OFF. When ON, the robot is powered by the Ni-Cd internal batteries. In this case the robot cannot be powered by an external supply. When OFF, the batteries are disconnected and the robot can be powered by the S serial line connector. 3.1.3 Top view MC6833 1 The ROM installed on your robot has an important library of software modules for the real time control of the Khepera robot. Part of these modules (building the BIOS) ensure the basic functionnalities of the Khepera robot, like motor control, sensors scanning etc. Another part of these modules ensure the interface with the user through the serial line. Depending on your use of the robot (remote control, downloading, test, demo etc.) you can select a specic module by setting the correspondent running mode. The 3 jumpers (see Overview on page 4) allow the selection of the most important running modes in several congurations. You have the choice between the following jumper congurations (see gure 4 for the corresponding jumper positions): 0. Demonstration mode: Braitenberg vehicle algorithm (number 3 according to the Vehicle book [Braitenberg84]) for obstacle avoidance. 1. Mode for the control of the robot by the serial communication protocol (see The serial communication protocol on page 18) using a serial link with a communication speed of 9600 Baud. 2. Same as mode 1 but with the communication speed of 19200 Baud. 3. Same as mode 1 but with the communication speed of 38400 Baud. 4. User application mode: start an application stored into the EPROM you can add to the standard modules (if any). 5. Downloading mode: in this mode the robot waits for a program to be transferred and executes it when downloaded (S format, 9600 Baud). 6. Same as mode 5 but with the serial link at. The reset button can be used at any time to reset the robot. 3.1.4 The S serial line is an asynchronous serial line with TTL levels (0-5V). An interface is necessary to connect this line to a standard RS232 port. This interface is included in the interface/charger module present in the package (see Interface and charger module on page 13). The S serial line can power the robot. The length of the S serial cable should be limited to two meters for proper operation. 3.1.5 Every wheel is moved by a DC motor coupled with the wheel through a 25:1 reduction gear. An incremental encoder is placed on the motor axis and gives 24 pulses per revolution of the motor. This allows a resolution of 600 pulses per revolution of the wheel that corresponds to 12 pulses per millimetre of path of the robot. The Khepera main processor has the direct control on the motor power supply and can read the pulses of the incremental encoder. An interrupt routine detects every pulse of the incremental encoder and updates a wheel position counter. The motor power supply can be adjusted by the main processor by switching it ON and OFF at a given frequency and during a given time. The basic switching frequency is constant and sufciently high not to let the motor react to the single switching. By this way, the motor react to the time average of the power supply, which can be modied by changing the period the motor is switched ON. This means that only the ratio between ON and OFF periods is modied, as illustrated in gure 5. This power control method is called pulse width modulation (PWM). The PWM value is dened as the time the motor is switched ON.basic period 50s ON period ON power time powerbasic period ON 95% ON period OFF OFF ON 5% ON period powerbasic period time Figure 5: The pulse width modulation (PWM) power supply mode is based on a ratio between the ON time and the total time. The basic switching frequency is constant. The PWM values can be set directly, or can be managed by a local motor controller. The motor controller can perform the control of the speed or position of the motor, setting the correct PWM value according to the real speed or position read on the incremental encoders.target position control trajectory position generator + +- position error speed control +dx/dt speed error Both DC motors can be controlled by a PID controller executed in an interrupt routine of the main processor. Every term of this controller (Proportional, Integral, Derivative) is associated to a constant, setting the weight of the corresponding term: Kp for the proportional, Ki for the integral, Kd for the derivative. The motor controller can be used in two control modes: The speed and the position modes. The active control mode is set according to the kind of command received. If the controller receives a speed control command, it switches to the speed mode. If the controller receives a position control command, the control mode is automatically switched to the position mode. Different control parameters (Kp, Ki and Kd) can be set for each of the two control modes. Used in speed mode, the controller has as input a speed value of the wheels, and controls the motor to keep this wheel speed. The speed modication is made as quick as possible, in an abrupt way. No limitation in acceleration is considered in this mode. Used in position mode, the controller has as input a target position of the wheel, an acceleration and a maximal speed. Using this values, the controller accelerates the wheel until the maximal speed is reached, and decelerates in order to reach the target position. This movement follows a trapezoidal speed prole, as described in gure 7. The input values and the control mode of this controller can be changed at every moment. The controller will update and execute the new prole in the position mode, or control the wheel speed following the new value in the speed mode. A status of the controller indicates the active control mode, the phase of the speed prole (on target or in movement) and the position error of the controller. speed target position 3.1.6 Eight sensors are placed around the robot and are positioned and numbered as shown in gure 8. position 2 1 0 Scale: 1:1 3 4 5 These sensors embed an infra-red light emitter and a receiver. For more information about these particular devices, please refer to the documentation of the sensor manufacturer. The exact part name is SFH900-2 and the manufacturer is SIEMENS. This sensor device allows two measures: The normal ambient light. This measure is made using only the receiver part of the device, without emitting light with the emitter. A new measurement is made every 20 ms. During the 20 ms, the sensors are read in a sequential way every 2.5 ms. The value returned at a given time is the result of the last measurement made. The light reected by obstacles. This measure is made emitting light using the emitter part of the device. The returned value is the difference between the measurement made emitting light and the light measured without light emission (ambient light). A new measurement is made every 20 ms. During the 20 ms, the sensors are read in a sequential way every 2.5 ms. The value returned at a given time is the result of the last measurement made. The output of each measurement is an analogue value converted by a 10 bit A/D converter. The following two sections (3.1.6.1 and 3.1.6.2) illustrate the meaning of this 10 bit values. 3.1.6.1 The measurement of the ambient light versus the distance of a light source has the following shape:500 Dark400 Measured value 300 200 100 As it can be seen, the measured value decreases when the intensity of the light increases. The standard value in the dark is around 450. The measurement of the ambient light versus the angle between the forward direction of the robot and the direction of the light has the shape illustrated in gure 10. Dark 350 300 250 200 150 100 50 0 -180 -150 -120 -90 -60 -30 0 30 60 90 120 150 sensor left 85 sensor left 45 sensor left 10 sensor right 10 sensor right 45 sensor right 85 180 210 240 angle []Figure 10: Typical measurement of the ambient light with a light source moving around the robot. The angle on the X axis is measured between the forward direction of the robot and the direction of the light. All these measurements depend very strongly from various factors like the distance of the light source, the colour, the intensity, the vertical position, etc. These two gures show only the global shape of the sensor's response. 3.1.6.2 The measurement of the proximity of obstacles by reected light depends on two major factors: the reexivity of the obstacle (colour, kind of surface...) and the ambient light. Figure 9 shows some measurements giving an idea of the response of the sensor.1100 1000 900 black plastic green styropor pink sponge white plastic grey plastic wood 1 wood 2 copper 60 Distance to the wall [mm] Figure 11: Measurements of the light reected by various kinds of objects versus the distance to the object. 10 The directionality of the sensor measurement is illustrated in Figure 10: these sensors have a very large eld of view.1000 Obstacle Measured reflection value 750 m 7mAngle 15 mm Sensor 0 10 20 30 40 50 60 70 80 90 Angle []1100 1000 sensor 1 sensor 2 sensor 3 sensor 4 sensor 5 sensor 6 900 800 700 600 500 400 300 200 100 0 10 20 30 40 50 60 500 250 Figure 12: Typical response of a proximity sensor for an obstacle (7 mm in width) at a distance of 15 mm. The measurement is given versus the angle between the forward orientation of the robot and the orientation of the obstacle. The characteristics of the different physical sensors can change in a large range. Figure 13 shows the measurements made on six sensors of the same robot placed in identical conditions. Small differences of conditions (vertical orientation of the sensor, ambient light conditions, colour of the oor) can bring additional differences. 11 3.1.7 Batteries The robot is equipped with 4 rechargeable Nickel-Cadmium batteries with a capacity of 180 mAh (older versions can have 110 mAh). This capacity allows the robot an autonomy of about 45 minutes in the basic conguration. The battery can be charged with the interface/charger module (see Interface and charger module on page 13) There is no specic management of the battery charge level. During a life cycle of the robot, the battery discharges as indicated in gure 14. When the battery voltage is under 4 V, the robot processor stops working correctly and the robot has no more control. At this voltage there is still sufcient power to make the motors move, which means that the robot can move without control.Battery voltage [V] 5.5 5.0 4.5 4.0 3.5 3.0 0 5 10 15 20 25 30 35 40 45 Time [min] End of robot life 3.2 The S cable (2 m long, with a 6 pins connector) allows the connection between the robot and the interface/charger module to support the communication to the host computer (see Interface and charger module on page 13). The recharging cable (0.5 m long, with a 4 pins connector) allows the connection between the robot and the interface/charger module to recharge the robot. Some exchange parts are also included in the package: 4 new tires, 10 jumpers. 3.3 Power supply A transformer provides the power supply to the interface module and, if the battery switch of the robot is OFF, to the robot itself. The connection between the power supply and the interface/charger module is made by the power supply jack (see Interface and charger module on page 13). SAFETY PRECAUTION: The power supply must be connected to the wall socket only when all other connections are already made. 12 3.4 Power supply jack RS232 Charging Robot RxD Robot TxD Recharging connector S serial line connector This module supports the interface between the robot and the host computer, the power supply of the robot from the S serial line cable and the charge of the battery. To work, the module needs to be connected to the power supply. The following features are present in this module: The battery charger: a four pins connector allows the connection with the Khepera robot to charge its battery. When charging, the robot must be disconnected to all other systems and switched OFF. Avoid to recharge full batteries, this can cause damages! It would be optimal to discharge the robot completely (leave the robot swathed on) before recharging it! During the charging period, a yellow led indicates the activity. The charging time for an empty battery is about 45 minutes. See Charging conguration on page 14 for more details. The S - RS232 interface: this interface allows the connection between the robot (S serial line) and a host computer (over a RS232 port). Two connectors are available: a standard female DB25 (the interface module is a DCE) for the RS232 link toward the host computer and a S six pins connector for the link toward the robot. The link towards the robot also powers the robot if the battery switch is OFF. See Conguration for robot-computer communication on page 15 for more detail on this working conguration. 3.5 Three 3,5 oppy disks contain all the modules for interfacing the Khepera robot with LabVIEW on PC, Macintosh and SUN (see Using LabVIEW on page 22). The software LabVIEW is a product of National Instruments and is not included in the package. 13 UNPACKING TEST After unpacking it is important to test the functionality of the robot. A test that uses most of the possible functionnalities is available with the running mode 0: a Braitenberg vehicle (see Jumpers, reset button and settings on page 5). To obtain this running mode operate as following: Put the robot on a at surface without danger for the robot. Water, edge of the table or metallic objects have to be considered as dangerous for the robot. Be aware that the robot will move rather quickly and cover long distances in a short delay. The robot is normally charged when delivered. Verify that the three jumpers are not connected (setting for the running mode 0, see Jumpers, reset button and settings on page 5). Top viewMC6833 1 Figure 16: Settings of the jumpers for the Braitenberg vehicle test. Switch the robot battery switch ON and put the robot on the surface. The robot must start to go forward while avoiding obstacles. The obstacles must be bright to better reect the light of the proximity sensors. If the robot does not operate properly, check the three points mentioned above, recharge the robot and retry. If the robot does not correctly avoid the obstacles, please contact your Khepera dealer. CONNECTIONS There are two standard congurations: a rst one used to charge the robot and a second one that allows the communication between the robot and the host computer. 5.1 Charging conguration Warning: It is necessary to discharge the batteries before recharging. Avoid to start a recharging process on charged batteries. This can cause damages. To charge the battery of the robot, the following connections have to be made: Between the robot and the interface/charger module with the charging cable (4 pins). Warning: the robot battery switch must be on the OFF position. 14 Between the interface/charger module and the power supply with the jack. Plug the power supply to the wall socket only when these connections are established. The charger will perform some measurements and then start to charge. These operations can take 10 minutes if the batteries are hot (after long use) or too discharged. When charging, the charging indication led is ON. If the led does not switch ON after 10 minutes, unplug and re-plug the power supply. The led is switched OFF at the end of the charging process. The charging time for an empty battery is about 40 minutes. At this moment you can unplug the power supply and remove the charger cable. When charging, the battery can be hot (50C). This is normal. 5.2 This conguration allows the communication between the robot and a host computer through a serial link. On the host computer side the link is made by a RS232 line. The interface module converts the RS232 line into the S serial line available on the robot. The following connections must be made: Between the robot and the interface/charger module by the S serial cable. This cable also supports the power supply of the robot. This external power supply is available when the general battery switch is OFF. If the switch is ON, the robot uses its own batteries for power supply. Between the interface module and the host computer by a standard RS232 cable. This cable is not in the package because there are several standards at the level of the host connector. You can easily nd this cable by your host computer dealer. If your host computer RS232 port has a DB25 male connector you can plug the interface module directly in it. 15 Figure 18: Conguration for the communication between the robot and the host computer. Between the interface/charger module and the power supply with the cable xed to the power supply. Set the jumpers according to the desired running mode (see Jumpers, reset button and settings on page 5). Be careful that in running mode 0 the robot starts moving when powered. Plug the power supply to the wall socket. To test the connection and the settings of the serial port do the following manipulations: Put the robot on a at surface on which the robot can move around easily. The battery switch must be OFF. Insert all jumpers like in gure 19 (setting for the running mode 0, see Jumpers, reset button and settings on page 5). Top viewMC68331 16 Run a terminal emulator on your host computer (for instance Hyper Terminal on PCs, Microphone on MACs, tip on UNIX or minicom on linux, see RS232 conguration on page 45) connected to the serial port on which you have connected the robot. Congure your terminal as following: 9600 Baud, 8 bit, 1 start bit, 2 stop bit, no parity. Plug the mural power supply to the main, or, if it is already connected, reset the robot pressing on the reset button. Your terminal should display:ROM of minirobot KHEPERA,... The transmit data (Robot TxD on the interface module) green lamp should blink after reset. If the robot does not respond as indicated (the green led does not blink after reset), check the points mentioned above and retry. If the green led blinks but your computer does not show any message, check the conguration of your serial port and terminal emulator, as well as the connection between interface/charger module and host computer. 17 The serial communication protocol allows the complete control of the functionnalities of the robot through an RS232 serial line. It correspond to running modes 1 to 3 (see Jumpers, reset button and settings on page 5). The connection conguration necessary to use these functionnalities is presented in the section 5.2 of this manual. The conguration (baudrate as well as data, start, stop and parity bits) of the serial line of your host computer must correspond to the one set on the robot with the jumpers (running modes 1 to 3, always 8 bit, 1 start bit, 2 stop bit, no parity). The communication between the host computer and the Khepera robot is made sending and receiving ASCII messages. Every interaction is composed by: A command, sent by the host computer to the Khepera robot and followed by a carriage return or a line feed. When needed, a response, sent by the Khepera to the host computer. In all communications the host computer plays the role of master and the Khepera the role of slave. All communications are initiated by the master. The communication is based on two types of interactions: one type of interaction for the set-up of the robot (for instance to set the running modes, the transmission baudrate...), and one type of interaction for the control of the functionality of the robot (for instance to set the speed of the motors, to get the values of the sensors...). The interactions allowing the set-up of the robot are based on commands called tools. The interactions for the control of the robot functionality uses protocol commands and responses. 6.1 The toolsHere is the description of some basic tools: run Starts a function stored in the ROM. It has to be followed by the function name. The functions available in the ROM can be listed with the list tool and have an identication string beginning with FU. Some of the functions correspond to the running modes presented in the section 3.1.3. Using the run tool it is possible, for instance, to start the demo mode (Braitenberg vehicle, corresponding to the running mode 0, as described in section 3.1.3) typing run demo and return. serial Sets the serial channel to a baud-rate, given as parameter. For instance, typing serial 19200 and return you set the baudrate to 19200 Baud. help Shows the help message of the modules available in the ROM. A parameter can be used to indicate a need of help on a particular item. help list gives an help message for the list tool. help demo gives an help message on the demo function mentioned above. list Gives the list of all the tools, functions, protocol commands 18 k-team net and other modules available in ROM. For every item listed you get an ID, a name, a description and a version. The ID is composed by four letters. The rst two letters dene the family of the module: IDs starting by TA dene tasks running on Khepera, FU functions that can be executed with the run command, PR protocol commands, TO tools like this one and BI BIOS components. Gives a short description of the K-Team active members. Gives an information about the intelligent extension turrets installed on the robot. For each item listed you get a name, an ID (to be used to address the turret, see also the command T in appendix A), a description and a revision. Gives an information about the memory used by the system. Resets the robot. This action is equivalent to a hardware reset. Gives the list of all processes running on Khepera in parallel to the serial communication protocol management. Starts a Motorola S format downloader. This downloader does not start the execution of the code at the end of the download. To download and execute a code, please use the sloader function, started by the command run sloader. 6.2 To control the functionnalities of the Khepera robot (motors, sensors etc.), a set of command are implemented in the control protocol. Also in this case, the communication with the Khepera robot is made sending and receiving ASCII messages. Every interaction between host computer and Khepera is composed by: A command, beginning comma and terminated by a carriage return and a line feed, sent by the Khepera to the host computer. In all communications the host computer plays the role of master and the Khepera the role of slave. All communications are initiated by the master. The protocol commands are 18 and a complete description is given in Appendix A. 19 6.3 To better understand both tools and protocol commands, we propose to do a very simple test as following: Set the jumpers to select the running mode 1 (see Jumpers, reset button and settings on page 5) Top viewMC68331 Figure 20: Settings of the jumpers for the 9600 baud communication mode. Set the connection conguration presented in section 5.2. Start on your host computer a terminal emulator (for instance Hyper Terminal on PCs, Microphone on MACs, tip on UNIX or minicom on linux, see RS232 conguration on page 45) with the serial line set to 9600 Baud, 8 bit data, 1 start bit, 2 stop bit, no parity. We start testing some protocol commands: robot proximity sensors presented in section 3.1.6. Retry the same command (N) putting some obstacles in front of the robot. The response must change. Type the protocol command D,5,-5 followed by a carriage return or a line feed. The robot must start turning on place. 20 nal position. Try other commands following the description given in Appendix A. We continue testing some tools: Type the command help followed by a carriage return or a line feed. The robot must respond with the list of all tools available. Type the command help serial followed by a carriage return or a line feed. The robot must respond with the description of the serial tool. Type the command help D followed by a carriage return or a line feed. The robot must respond with the description of the D protocol command. Type the command list followed by a carriage return or a line feed. The robot must respond with the list of all software modules present in the ROM. In addition to the tools (characterised by a TOXX ID) and the protocol commands (characterised by a PRXX ID) you can nd on the list functions (characterised by a FUXX ID) and BIOS modules (characterised by a BIXX ID). On every module you can have an help message. 21 USING LABVIEW This chapter is to familiarise you with the LabVIEW environment in the context of Khepera use. LabVIEW is a product of National Instruments (). To this end, the examples are presented in an increasing order of complexity. Our advice is to follow the chronological order of presentation. Please refer to the LabVIEW manuals for more information about this software. The following examples and the les distributed with this product are based on LabVIEW version 5. LabVIEW runs on your PC, Macintosh or SUN workstations, and can control the functionality of the Khepera robot using the serial communication protocol described in section 6.2. 7.1 Hardware conguration Set your environment as illustrated in section 5.2. The jumpers must be set as showed in gure 21, to obtain the running mode 2 (for more details on running modes please refer to section 3.1.3). Figure 21: Settings of the jumpers for using LabVIEW and a serial connection at 19200 baud. 7.2 To enable the exchange of information between your computer and the robot, you have to congure the serial link of your host computer, according to the osetting chosen on the Khepera robot. Be sure that the connection cable is connected at both ends (Khepera and interface), that the robot is powered (power adaptor), then start LabVIEW and open the Setup virtual instrument (called VI) present in your oppy disk. The panel illustrated in gure 22 should appear. 22 Now, select the serial port on which the robot is connected. This selection depend on which port you use and the type of computer you have. This choice must be made for every module that you will use. Then click once on the run arrow A stop icon its initial state. at the top of the window. appears for a few seconds, after what the front panel returns to That's all! The serial link with Khepera is set to 19200 baud. It will remain so until you quit LabVIEW. 23 7.3 Motors We will now control the displacement of the robot. Be sure that the serial link has been correctly installed, then open the Motors VI present on the oppy disk. Now your screen displays the following panel: Figure 23: Motors panel: 2 sliders controlling the speed of each wheel. Before to move the robot, you must learn how to stop it. Different means are available. Starting from the most efcient: Press the reset button on the robot once. Put the value 0 for each speed using the sliders. Don't forget that these new values will only be taken into account at the next execution (i.e., click on the arrow). Click on the button labelled Stop. You have also to click on the arrow again so that the robot takes your last decision into account. Before trying to give another values to the motors, click again on the button (to de-select this option). This last option is the best way to stop the robot. You control directly each one of the motors by simply putting the desired speed values in the corresponding slider. This can be made moving the slider or entering the desired speed in the digital display placed between the sliders and their names. 24 Possible values are constrained (only on the sliders) between -20 and +20 so to take care of the mechanics. To transmit your order to the robot, just click once on the arrow. You can change the values and click on the arrow again to validate your choice. You see that the robot continues moving at the same speed until new values are send. If you are getting bored with clicking on the arrow, try one click on the double arrow . Click on the stop icon to stop the execution. The robot has two incremental encoders on the wheels. Using these sensors it is possible to measure the displacement and the speed of each wheel at every moment. The VI Get_position ask for the content of the increment counter, which represent the displacement of the wheel. The unit of displacement correspond to 0.08mm. Figure 24: Get_position panel: 2 indicators show the position of each wheel. To test the functionality of this module just click on the double arrow to start the recurrent running mode. At this moment the VI will show you the actual position of each wheel. To change the position of the wheel use the Motors VI as described above, and observe the result on the Get_position VI. To set the position counter to a given value you can use the Set_position VI. The Get_speed VI display the speed of each wheel. This value is computed on the robot, based on the information of the position and the time. Figure 25: Get_speed panel: 2 indicators show the speed of each wheel. 25 To test the functionality of this module just click on the double arrow to start the recurrent running mode. At this moment the VI will show you the actual speed of each motor. To change the speed of the motors use the Motors VI as described above. You can also try to set the motor speed to 5 using the Motors VI, then slow down the wheels with your ngers and look to the result on the Get_speed VI. It is also possible to give to the robot a position to reach, expressed using the position of the two wheels as described above. The position given as target will be reached using a trapezoidal speed prole (as described in section 3.1.5 of this manual). The target position can be given to the robot with the Control_position VI, illustrated in gure 26. Also here the commands can be made moving the slider or entering the desired position in the digital display placed between the sliders and their names. Figure 26: Control_position panel: 2 sliders controlling the position to reach for every wheel. As described in section 3.1.5 of this manual, every displacement in position control mode will be made following a precise speed prole. The parameters of this speed prole can be congured using the VI Conf_pos_param, showed in gure 27. For each motor you can set acceleration and maximal speed. Deceleration will be set identical to acceleration. Figure 27 shows default values. 26 Figure 27: Conf_pos_param panel: all the parameters of the speed prole can be controlled. To test the position control, please start setting to 0 the two position counters of the two wheels, using the Set_position VI or resetting the robot. At this moment you can try the Control_position VI setting a distance of 1000 on both sliders and running the VI once, clicking on the run arrow. The robot should move forward showing an acceleration, a xed speed and then a deceleration. Test other movements keeping both left and right displacements identical to move on a line. To start other kinds of trajectory, bring back the robot to the 0 position or use the Set_position VI to reset the counter. Then use the Conf_pos_param VI to set the left maximal speed to 10, the left acceleration to 32, and keeping the values of the right trajectory to the default value indicated in gure 27. Run the VI once to make these values effective. Then set, on the Control_position VI, the goal position of the left wheel to 1000 and the goal position of the right wheel to 2000. Run the VI once and observe the trajectory of the robot. The robot should make a circular trajectory. 7.4 Sensors In its basic version, Khepera has eight infra-red sensors, as described in section 3.1.6. You will now easily understand their characteristics. Be sure of the set-up of the serial link, then open the Sensors VI that you have on the oppy disk. You should see on your screen the panel illustrated in gure 28. 27 Figure 28: Sensors panel: 8 gauges displaying the infra red values. Each proximity sensor value is displayed as a gauge. The gauges are placed on the panel like the corresponding sensors on the robot. The exact value received is written underneath. Values are between 0 and 1023. Start the acquisition as before by clicking on the double arrow (to stop the execution, click on the stop icon that will appear). Now you are free to test the response of the sensors. In particular, some materials reect better than others the infra-red light emitted by the sensors. You are also able to state difference in the individual response of the proximity sensors. The graph on the left of the panel shows the values for each sensor against time. 7.5 Braitenberg's vehicle At this stage, you have a good understanding of motors and sensors functionality. In this section we will combine these two modules, as sub-VIs of a main sensory-motor loop VI. Let's open both panels (Motors and Sensors), but do not start them. This way, you will be able to watch, at the same time, data issued from the sensors and those send to the motors. Open the instrument called KheBraitenberg3c. You should see on your screen the panel illustrated in gure 29. 28 Figure 29: Braitenbergs vehicle panel: 3 sliders dening the link weights. Each of these three sliders denes the sensibility of the motors reaction to the obstacles of a given group of sensors: Front corresponds to the two central front sensors (2 and 3), 45 degrees corresponds to the two lateral front sensors (1 and 4), 90 degrees corresponds to the two sensors on the side of the robot (0 and 5). Set carefully the serial port and the baudrate according to the settings of your Khepera robot. Start the application by clicking on the arrow. It is not necessary to click on the double arrow in the present case. The sensibility can be modied by moving the cursors or writing directly the desired values. Khepera now moves, avoiding bumping into obstacles. In a parallel way you can observe the motor commands and the sensor readings from the Motors and Sensors panels. Test different sensibilities on its behaviour. This control structure is inspired from the work of V. Braitenberg [Braitenberg84]. Note that this VI uses Motors and Sensors as sub-VIs. One of the advantages of LabVIEW is to allow a context-free use of the building modules. This fact is particularly interesting for creating programs in a structured way and for debugging. The button (inside the panel) labelled stop stops the robot and the execution of the VI. It is a much better way than using the stop icon (above rubber), because the stop button stops the robot before stopping the VI. 29 7.6 Advanced programming Now that we have executed different manipulations using LabVIEW and Khepera, it becomes interesting to present how it has been programmed. First, it is important to be able to manipulate LabVIEW and its rolling menus. Select the Motors panel and open the diagram using the option Show diagram of the menu Windows. Your screen now displays the following schema: Each element of the panel used for displaying or getting data corresponds to an icon. So, the box I32 under Speed motor right is the getting variable of the corresponding slider on the front panel (of type 32 bits Integer). The same is true for the icon TF, labelled Stop on the left, which is a boolean variable (type True False). This variable allows to stop the motors by sending to each one a null speed value. The triangle represents an indirection controlled by the boolean variable Stop: If Stop is true, then the output value (on the right) will be 0, if Stop is false, then the output value is the Speed variable. These values are formatted by the next two icons to a string of ASCII characters. The character D is placed at the beginning of the string. Then, successively, the two speed values are added and the string is terminated by a carriage return (\n). This string is send to the robot using the serial link. To control the speed of the wheels from another VI, the Motors VI can be used as sub-VI. This use is demonstrated with the KheBraitenberg3c VI. The help window (gure 31) displays positions and semantics associated to the icon. 30 7.6.1 Select the Sensors panel and open the diagram using the option Show diagram of the menu Windows. Your screen now displays the following schema: On the contrary to the Motors panel which sends values, this panel receives values from the serial link. Eight values are extracted in four steps from the string. These values are put into the variables (type Unsigned 16 bits) corresponding to the panel gauges. They are also put into a vector so to be displayed by Sensors_displ. These 8 values are transmitted to others modules through the icon used as a con- 31 nector. This use is demonstrated with the experiment on Braitenberg's vehicle. The help window (gure 33) displays positions and semantics associated to the icon. 7.6.2 Select the Braitenberg panel and open the diagram using the option Show diagram of the menu Windows. Your screen now displays the following schema: 32 This schema may appear complex at rst glance. But, as we will see, it takes back elements already studied. It can be decomposed into two parts: serial link initialisation and avoiding behaviour. LabVIEW is a data-ow controlled language, so the execution order of non-dependent control structures is not xed. Serial link initialisation and avoidance behaviour are independent; but initialisation must occur rst. The sequential structure allows the denition of an execution order, so initialisation is the rst element of the sequence. The second element contains the while loop where the avoidance behaviour is executed until the boolean variable Stop becomes True (note the presence of a logical inversion). This boolean variable is also transmitted to the Motors VI through its icon. Its action will be to stop both motors. In the same way, the two speeds computed here are sent to the Motors VI to be sent through the serial link to Khepera. Sensors values are received from the Sensors icon. They are normalised (/1000) and multiplied by the corresponding sensibility value (type real Single). Front corresponds to the two central sensors (numbers 2 and 3), 45 degree corresponds to the two oblique sensors (numbers 1 and 4) and 90 degree corresponds to the two side sensors (numbers 0 and 5). The result (type single) is added to the value 10,0. The sum of the left sensors (3, 4 and 5) corresponds to motor right (0). The sum of the right sensors (0, 1 and 2) corresponds to motor left (1). These two values are then formatted (I32) and send to the motors. The computation needed is simple and fast enough to control Khepera in real-time without big delays. However, displaying Motors and Sensors panels is a computational expensive operation. If you want Khepera to move faster, close these panels but don't close the Braitenberg panel! REFERENCES [Braitenberg84] Braitenberg V., Vehicles: Experiments in synthetic psychology, MIT Press, 1984. [Mondada93b] Mondada F., Franzi E. and Ienne P., Mobile robot miniaturisation: a tool for investigation in control algorithms., ISER3, Kyoto, Japan, 1993. [National91] National Instruments, LabVIEW manuals for the release 5, january 1998. 33 APPENDIX A COMMUNICATION PROTOCOLTO CONTROL THE ROBOT This communication protocol allows complete control of the functionnalities of the robot through a RS232 serial line. The connection conguration needed is presented in section 5.2. The set-up of the serial line of your host computer must correspond to the one set on the robot with the jumpers (running modes 1 to 3). The protocol is constituted by commands and responses, all in standard ASCII codes. A command goes from the host computer to the robot: it is constituted by a capital letter followed, if necessary, by numerical or literal parameters separated by a comma and terminated by a line feed. The response goes from the robot to the host computer: it is constituted by the same letter of the command but in lower case, followed, if necessary, by numerical or literal parameters separated by a comma and terminated by a line feed. To better understand this protocol we propose a very simple test as following: Set the jumpers of the robot for running mode number 1 (See gure 20). Set the connection conguration presented in section 5.2. Start a terminal emulator on your host computer with the serial line set to 9600 Baud, 8 bit data, 1 start bit, 2 stop bits, no parity. proximity sensors presents on the robots. Retry the same command (N) putting some obstacles on the front of the robot. The response must change. Try other commands: 34 CongureA, Kp, Ki, Kd a Set the proportional (Kp), integral (Ki) and derivative (Kd) parameters of the speed controller. At the reset, these parameters are set to standard values: Kp to 3800, Ki to 800, Kd to 100. Give the version of the software present in the EPROM of the robot. Indicate to the wheel position controller an absolute position to be reached. The motion control perform the movement using the three control phases of a trapezoidal speed shape: an acceleration, a constant speed and a deceleration period. These phases are performed according to the parameters selected for the trapezoidal speed controller (command J). The maximum distance that can be given by this command is (2**23)-2 pulses that correspond to 670m. The unit is the pulse that corresponds to 0.08mm. The movement is done immediately after the command is sent. In the case another command is under execution (speed or position control) the last command replaces the precedent one. Any replacement transition follows acceleration and maximal speed constraints. 35 Set speedD, speed_motor_left, speed_motor_right d Set the speed of the two motors. The unit is the pulse/10 ms that corresponds to 8 millimetres per second. The maximum speed is 127 pulses/ 10ms that correspond to 1m/s. Read speedE e, speed_motor_left, speed_motor_right Read the instantaneous speed of the two motors. The unit is the pulse/10 ms that corresponds to 8 millimetres per second. Set the proportional (Kp), the integral (Ki) and the derivative (Kd) parameters of the position regulator. At the reset, these parameters are set to standard values: Kp to 3000, Ki to 20, Kd to 4000. Set the 32 bit position counter of the two motors. The unit is the pulse, that corresponds to 0,08 mm. Read positionH h, position_motor_left, position_motor_right 36 Effect: Read the 32 bit position counter of the two motors. The unit is the pulse, that corresponds to 0,08 mm. Read the 10 bit value corresponding to the channel_number analog input. The value 1024 corresponds to an analog value of 4,09 Volts. Set the speed and the acceleration for the trapezoidal speed shape of the position controller. The max_speed parameter indicates the maximal speed reached during the displacement. The unit for the speed is the pulse/ 10ms that corresponds to 8 mm/s. The unit for the acceleration is the ((pulse/256)/10 ms)/10 ms, that correspond to 3,125 mm/s2. At the reset, these parameters are set to standard values: max_speed to 20, acc to 64.speed end position 37 Read the status of the motion controller. The status is given by three numbers for every motor: T (target), M (mode) and E (error). T=0 means that the robot is still on movement. T=1 means that the robot is on the target position. M=0 means that the motor control is in the speed mode. M=1 means that the control is in position mode. M=2 means that the control is in PWM mode. E indicates controller position or speed error. Perform an action on one of the two LEDs of the robot. Possible actions are: 0: turn OFF, 1: turn ON, 2: change status. The LED number 0 is the lateral one, the LED number 1 is the frontal one. Read the 10 bit values of the 8 proximity sensors (section 2.1.6.2), from the front sensor situated at the left of the robot, turning clockwise to the back-left sensor. Read the 10 bit values of the 8 light sensors (section 2.1.6.1), from the front left sensor turning clockwise to the back-left sensor. 38 Set the desired PWM amplitude (see Motors and motor control on page 6 for more details) on the two motors. The minimum PWM ratio is 0 (0%). The maximal forward ratio (100%) correspond to a value of 255. The maximal backwards ratio (100%) correspond to a value of -255. Send a command and return the response of the intelligent extension turret with turret_ID. The list of turrets connected and their ID can be requested with the tool net. The command parameter takes the same format as a standard command, including an identication capital letter followed, if necessary, by numerical parameters separated by commas and terminated by a line feed. The response takes the same format, starting with the same letter but in lower case, followed, if necessary, by numerical parameters separated by commas and terminated by a line feed. The command and response formats are specic for every module. Read the data byte available at the relative_address (0...63) of the extension bus. Write the data byte at the relative_address (0...63) of the extension bus. 39 APPENDIX B CONNECTORS Female connector Power supply +5V Serial receive data (TTL levels) Serial transmit data (TTL levels) Power supply ground Power supply ground same signal Power supply ground Figure 35: Serial line S connector (RED), placed on the CPU board and on the interface-charger module. Figure 36: Battery recharger connector, placed on the motor-sensors board and on the charger module. 7 6 22 21 23 20 47 48 53 54 46 49 52 5 51 3 1 37 2 4 38 36 Pin Signal 1 2 3 4 5 6 7 18 19 20 21 22 23 36 37 38 39 40 /Reset VCC_Ext GND VCC GND VRef GNA D8 D9 D10 D11 D12 D13 CH3 CH4 CH5 PAI D14 Pin Signal 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 D15 A0 A1 A2 R/W /CSExt F7 /IRQ6 MISO MOSI SCK /CSCOM TxD RxD A3 A4 A5 Top view39 50 44 43 41 40 42 55 56 57 45 19 18 40 Not connected Not connected Transmit data (data from PC to Khepera) Not connected Receive data (data from Khepera to PC) Not connected Not connected Not connected Not connected Not connected Not connected Not connected Ground Not connected Data Carrier Detect (from Khepera to PC, always set to +12) Not connected Not connected Not connected Not connected Not connected Not connected Not connected Not connected Not connected Not connected GND +8.5V= 41 APPENDIX C Changing the ROM of Khepera needs very fine operations. The complete sequence of actions is described in this appendix. Please follow very carefully these instructions. A wrong action can cause mechanical damages to your Khepera robot. We do assume no responsibility for your wrong manipulations. The ROM is situated inside the robot. To change it please: Disconnect the robot from the power supply or the battery charger, switch the robot OFF. Open the robot by separating the CPU board to the motor-sensors board. Be very careful in this operation and follow the indications given in gure 40 and gure 41. GOOD BAD Figure 40: Good and bad way to remove a CPU board from the basic motor and sensor board. To do this operation without damages to the pins, the CPU board must be disconnected carefully and all pins have to be disconnected together. This can be made using a big plastic screwdriver and operating between the CPU board and the white motor blocks. Open some millimetres right, then left, then right.... and be very careful! CPU board Motor-sensors boardFigure 41: Be very careful opening the robot. The tool should not damage electronic components. The tool should make the effort directly to the green print board or on stable mechanical parts like the white motor boxes indicated in the gure. 42 When the CPU and motor board are disconnected, remove the ROM. The ROM is situated on the bottom of the CPU board, as indicated in gure 42. ROMFigure 42: ROM position, on the bottom side of the CPU board. To extract the EPROM operate carefully as with the robot. There are two access holes that make the extraction possible as indicated in gure 43. Figure 43: Location of the access holes for the extraction of the ROM. Use a specic extraction tool to push out the EPROM, very carefully, and in a parallel way as indicated in gure 44. DAMAGE BADFigure 44: Correct and wrong ROM extraction operation. On the left, the ROM extraction tool. 43 Finally plug the new EPROM. Be careful about the orientation, given by a corner of the ROM, as showed in figure 45. Assemble again CPU and motor-sensors board. Be sure to completely insert the connectors to ensure a good contact. 44 APPENDIX D RS232 CONFIGURATION On PCs, a common terminal emulator is Hyper Terminal. Figure 46 shows the conguration panel set for a communication speed of 9600 baud on COM1. 45 On MAC you can use the software Microphone. The main conguration panel is illustrated in gure 47. You have of course to choose the correct Connection Port where the Khepera is connected. On LINUX you can use MINICOM, where the settings can be adjusted running minicom -s as root or by changing the settings with the CTRL-A O command. The correct settings are illustrated in gure 48 and gure 49. 46 On SUN, a common tool is tip, used on the /dev/ttya or /dev/ttyb ports. More details are given in the manual man tip. 47 APPENDIX ETop viewMC6833 1 RUNNING MODES Running modes (see Jumpers, reset button and settings on page 5): 0. Demonstration mode: execution of a Braitenberg vehicle algorithm (number 3 according to the Vehicle book [Braitenberg84]) for obstacle avoidance. 1. Mode for the control of the robot by the serial communication protocol using a serial link with a communication speed of 9600 Baud. 2. Mode for the control of the robot by the serial communication protocol using a serial link with a communication speed of 19200 Baud. 3. Mode for the control of the robot by the serial communication protocol using a serial link with a communication speed of 38400 Baud. 4. User application mode: start an application stored into the EPROM (if any). 5. Downloading mode: in this mode the robot waits for a program to be transferred (S format, 9600 Baud). 6. Downloading mode: in this mode the robot waits for a program to be transferred (S format,.
https://de.scribd.com/document/57754666/KheperaUserManual
CC-MAIN-2019-47
refinedweb
9,994
64.41
Sketch Notes At UIKonf 2017 I attended UIKonf (iOS developer conference in Berlin) this week. Great conference. I have been at UIKonf in 2014 but this time it was so much better for me because I talked to so many people. It’s not that the people have been more open this time. It’s more like this time I met many people I knew from Twitter and Slack channels. In addition I have been grown more into the community since 2014. Something else was different this time. For the first time I started to do sketch notes during the talks. I’m not good with sketch notes yet meaning that I cannot draw. As a result often I had to write words instead of drawing images. Nevertheless doing sketch notes helped me to stay focused and concentrated. And I still remember what was presented in most of the talks. After each talk I published the sketch note to twitter and shared it with the speaker(s). This way the speaker got feedback about what key points I digested.Read more... Using.Read more... TestingRead more... FBSnapshotTestCase. This class allows you create snapshot tests. A snapshot test compares the UI of a view with a snapshot of how the view should look like. Let’s see how this works. Creating a "smart" Xcode file template Did you know that you can create your own file templates for Xcode? Sure you did. But did you also know that you can create a template that takes a string and puts it into a field you define? Let’s build a file template for a Swift protocol and a protocol extension. Create a file template First things first. Xcode looks for your custom templates at the locationRead more... ~/Library/Developer/Xcode/Templates/. We will start by copying a template that comes with Xcode to that location. 4. Using a Playground instead of Keynote Yesterday I gave a talk at the Cologne Swift meetup and I tried something new. I used a Swift Playground for the “Slides”. Read more... What.“ #selector()as described in this awesome post by Andyy Hope that looks like this: Read more... private extension Selector { static let showDetail = #selector(DetailShowable.showDetail) } Feedback to "The best table view controller" I got some feedback to my last blog post about The best table view controller. In this post I want to comment on the feedback because the comment sections is not really good for detailed discussion. Storyboards I couldn’t get Interface Builder in Xcode 7.3 to support typed view controllers. While you can type in the associated class, the field becomes blank when the storyboard is saved. Interesting. I wasn’t aware of this. But this again tells me, that storyboards (and the Interface Builder in general) are not suited for my style of iOS development. As you may know, I don’t like the Interface Builder. If you depend on storyboards for your User Interface, than this kind of table view controller isn’t for you (yet).Read more... The best table view controller (Mar 2016 edition) Since I read the objc.io post about light view controller, every few month I come back to the same problem: find the best way to write a table view controller. I have tried several different approaches like putting the data source and delegate in a separate class or using MVVM to populate the cell. This post is the March 2016 solution to this problem. And as most of the times, I’m quite happy with the current solution. It uses generics, protocols and value types. The main part is the base table view controller. It holds the array to store the model data, is responsible for registering the cell class and it implements the needed table view data source methods. Let’s start with the class declaration: Read more... import UIKit class TableViewController<T, Cell: UITableViewCell where Cell: Configurable>: UITableViewController { }
http://swiftandpainless.com/
CC-MAIN-2018-34
refinedweb
659
74.39
Strings into Names - RafaŁ Buchner last edited by gferreira Hi, I'm working on extension that will perform some operations on strings. My question is: Is there any way to translate python string object into list of nice names, that later I could use for calling glyphs with those names (with CurrentFont()['name'])? Thanks in advance for your help hi Rafał, if I understand it correctly, you wish to convert a unicode string into PostScript names? if so, try this out: from fontTools.agl import UV2AGL myText = "héllø" myText = myText.decode("utf-8") f = CurrentFont() for char in myText: uni = ord(char) glyphName = UV2AGL.get(uni) if glyphName and glyphName in f: print f[glyphName] based on this example let us know if it works! - RafaŁ Buchner last edited by RafaŁ Buchner @gferreira Thanks! Works perfectly!
http://forum.robofont.com/topic/447/strings-into-names/3
CC-MAIN-2018-34
refinedweb
136
63.9
Hey all, i am tinkering with an arduino equippet with the adafruit motorshield (the old first version). I use a photodiode for a photoelectric barrier. As soon as something passes the barrier the stepper should start and accelerate for a given period of time. The sensor readings are fine, the if / else works too and so does the motor shield and the acceleration. But the whole thing together does not work and i am just not good enough for finding the problem. At the moment the stepper starts as soon as something enters the barrier and rotates if the object stays in there. Of course it should only trigger the start but i do not know how to do this. Furthermore this only works one time until i need to restart. Thank you so much! #include <AccelStepper.h> #include <AFMotor.h> AF_Stepper motor(200, 2); int senRead = 5; int limit = 300; void forwardstep() { motor.onestep(FORWARD, SINGLE); } void backwardstep() { motor.onestep(BACKWARD, DOUBLE); } AccelStepper stepper(forwardstep, backwardstep); void setup() { stepper.setMaxSpeed(600.0); stepper.setAcceleration(900.0); stepper.moveTo(900); Serial.begin(9600); } void loop() { int val = analogRead(senRead); //photodiode reading if (val <= limit) { } else if (val > limit) { stepper.run(); } }
https://forum.arduino.cc/t/reading-value-from-sensor-start-stepper-motor-with-accel-and-motorshield/644774
CC-MAIN-2021-31
refinedweb
200
58.38
Join devRant Search - "crap" - - - - - - You know you've made it when you can quickly catch programming crap in other website images. This site promoted "Top 10 things that make you a good programmer."8 - - PHP 🐘 is so damn easy to learn, run straighforward in all OSs, that anyone can start coding in no time. Therefore, the amount of crap code around, made by unskilled devs, is just *unbelievable*. 💩18 - - - - Dear WhatsApp developers, GOD DAMNIT CHECK IF THE USER IS TYPING A (LONG) MESSAGE BEFORE BLOCKING AND ULTIMATELY CRASHING YOUR APP BECAUSE OF YOUR DAMN BACKUPS. Yours, Everyone P.S. First rant *yay*, feels good 😁6 - - I just woke up, the sun is out... Crap I must missed my alarm! Wait is only 6am.... I love summer? Except for the heat7 - - Fucking useless languages that compile into other languages but provide no real benefit other than some trendy syntax crap.14 - Me: Mom, I'm learning a new programming language Mom: How is it called Me: go Mom: do u like it? Me: yes, it's pretty Mom: do u like it more than linux?42 - - - - My employer gave everyone the day off, so now I have to do Xmas crap instead of coding. Thanks, I guess. :/3 - - - - - - - Guys... DevRant logged in automatically on my new phone. I don't know how I feel about this. Or maybe I've already forgotten. Help. I'm getting old and senile - - Apple, instead of inventing new file systems, stop fucking littering my directories with .DS_Store crap.2 - - - - - Coded in C language for first time (due to college assignments)... Just found out that there are no strings in c language 😐32 - OH MY GOSH SHUT THE CRAP UP. I AM NOT INTERESTED. I DON'T EVEN CODE JAVA. I'm a JavaScript developer. And I don't respond to this spam. Ugh!!!!!14 - Story time..m So I have "computer classes" ay school every monday. As I ranted before we are working with pascal... And one of my classmates told the teacher he is used to using % insted of MODULE from python. So i was like hmm so im not thr only one "coding" here? I was wrong ... He is a totall moron, thats probably the only thing he knows but he is always acting like the fucking smartass from now. HE DIDNT EVEN KNOW WHAT VARIABLES ARE! He is telling random things to teacher so he can look like he knows whats doing. Like litrrwlly its just random crap like Teacher : " so for loop anyone knows what it is?" Him :" yes its false" Teacher "so you put variable here" Him "yea i know python i know what im doing" ... "What is variablr again?" FOR FUCKS SAKE WHY IS HE DOING THIS I LITETALLY WANT TO SMASH KEYBOARD INTO HIS FACE2 - Me at QA, talking about a nasty bug I found in legacy code. QA: what was the root cause? Me: pos code. QA: pos?! Me: piece o' shit. QA: ...1 - - "Don't bother reading the docs, they're just technical crap" Well guess who knows how the framework we're using works 🤔1 - I hope it will be finished soon since it will be fucking fun to play skyrim on linux on Lenovo miix 3-830 So excited11 - - MATLAB sucks so bad. Why my university professors keep forcing us using that crap instead of Python?!21 - - - - - When the code is so bad that the only meaningful thing to do should be executing rm -Rf *; git add -A; git commit -m "bugfix" and then start the project again.4 - Invited by a company who desperately needed me to fix their messed up code and the reception lady talks crap and threatens me. :( Guess who is not replying to their emails ever?4 - - When you see this notif and realise that all the crap you put up with in work has been worth it... - - git commit -m "well described comment explaining addition" oh crap forgot to take out this one thing - ctrl-S - git commit -m "ahskdbejjeebdosjeb - Found out I had two days to write this crap for an 8086 in class. Our instructor doesn't even know assembly. I stopped crying after the tenth jump - Manager: With all the horror stories why are we even developers? Me: Because once we get part the horror, we become geniuses. Manager: So what you're saying is that being a developer is like taking a crap after being constipated for three weeks - I really hate fucking Wordpress! I hate it's stupid API, with it's stupid hooks and actions and all those stupid functions and no fucking logic to any of it! I hate it's stupid plugin system, with all that fucking overhead that brings no real value and adds all that complexity for nothing! I hate stupid fucking multiple calls for the same fucking assets, loading them over and over again because every stupid plugin calls them again and again! I hate motherfucking SHORTTAGS, or whatever the fuck they are called! I hate that every stupid fucking plugin and shortcode and fucking every little fucking piece of HTML comes from a different fucking place, with different fucking structure and different fucking classes and stupid fucking loading seaquences that make no fucking sense! And I hate fucking page builders !!!!! Fuck!!!! I should be fucking coding on this fucking peace of shit, but I just cannot fucking take it any more!!! IT NEEDS TO FUCKING DIE! It should be relegated to the darkest corners of the internet and all the servers that have it's fucking code anyware on their systems should be disconnected and buried in the deepest pits of hell, just to be sure it never, EVER, surfaces again!!! AAARRRRGGGHHHHHHH !!!!!!!!!5 - - Just found there is an alternative to create-react-app with parcel. It's called create-react-app-parcel or in short CRAP. To initialise a new project you even do crap sample-project Nice!3 - When the damn whiteboard is completely filled up with crap and it says "do not erase" in three different places.4 - - Connect a pen drive, format it successfully. Connect to a new machine to copy data and see the data exists. Crap! which drive did I formatted :(1 - -.3 - Thanks windows 😉 I can free ~4TB on my ~32GB drive 😂😂 I astonished how much crap of this OS can be on such a small plate 👌😂2 - Shittttt today i got PC to repair and it have 64MB of ram fuckkkkkkkkkkkkkkkk its impossible to repair this crap because it too old buy new one11 - - - - Sleeping the Thread for 1 sec, because the database had no real timestamp and a transaction on the same item within the same second would lead to a doubled primary key... No real feature, but it is a bug and this makes it a feature I guess.1 - - Bloody Windows kept waking up after a few seconds of being suspended. Someone here mentioned it could be the mouse, so I disabled its function to wake the pc . And you know what? That piece of trash windows still kept waking up for no reason. Makes me angry. So I found this magic command 'powercfg/lastwake', which shows the reason for the last wake. And look at that! The fucking realtek network shitcard is allowed to wake windows. Why would windows enable that on its own? Why? Because I for sure did not make this change and suspend was working for me until a few months ago (yes, these kind of problems take me very long to fix, even though it would tske only two minutes).4 - - - - - Well, my client likes the sailboat picture that I put up on his site as the hero area. Now he wants to know if I can animate the water and put the sounds of waves and seagulls in the background. I can, but fuck you. I won't. I have respect for the people that visit your site.6 - - Spent about an hour going through some bullshit paper for uni. I mean, I like research but I also think we should hang some people for how bullshit and unreadable and full of crap their papers and articles are.7 - - I saw someone rant about XML earlier, but truly the thing that puts me on the edge is XSLT. Who invented that crap.10 - My phone crashed (probably bricked), so I had to return it to the retailer for repairs. They lend me a phone until mine returns from the shop. It is ancient. I feel like an archaeologist.5 - - - function Life(crap):void { crap = Lemons; Return crap; } function Solution(liquid) { liquid = Tequila; Return liquid; } function whatYaGonnaDo():void { Life = null; Liquid = null; life (Life); if (life="Lemons") { solution(liquid); } } //sorry i was bored. (not sorry)4 - - - True story... That moment your phone internet is more reliable than your broadband... Very annoying! 😠4 - - Domain server goes down, it's the gateway and DNS too. Ok I'll just remove the domain, it's been orphaned really since you went to the cloud. Don't have local admin password. Ok call old it company who set up gear Out of business Ok boot to Linux and reset Usb boot locked Don't have bios password Call old it company Still out of business. Wait, can I just set manual ipv4 ? Ok domain without a domain controller... If it works it works.2 - - That late night sunday before sleep feeling, when your are almost eager to know what crap the next day is going to bring you.3 - I'm really excited to have a job where I'm not the lone developer. Kinda nervous, too. Now other people will see the crap I write. :P1 - anyone ever had a relative download a virus and when confronted about it they say no? even though you specifically told them not to open weird looking emails. well, pops apparently did NOT open a ransomware email 😂 Baby, bye, bye, bye... to all the files7 -.9 - - - - Pissed that employers try to post fake happy reviews RIGHT AFTER a bad review. How about fixing your company and not treating people like crap?!2 - -7 - - Converting an int to a string for use in a switch statement... And nothing else. Who the HELL wrote this crap?! - - - After years of working on projects where you git clone, npm install, npm start, I get catapulted into this PHP nightmare built on Symfony, and that has zero documentation or tests... I hate Mondays.3 - - How a node js developer's Terminal history look like -- npm install random_package npm install shitty_package npm install I_don't_know_what_it_does npm install crap2 - When I discovered that our awesome eForm solution for paramedics was a web frame in an iOS app containing a crappy HTML form powered by a 2000 line JS file3 - - - - update : love my new job... but, holly crap the sharepoint environment is a cluster fuck... guess who has to fix it... -!-6 - The shitty trainticket-app doesn't sell me any tickets. Who the fuck coded this crap. I really tried hard to pay for a fucking ticket...2 - - Ninjaaaaaaaaaaaaaaaaa i fucking hate you you fucking piece of useless crap. When i say you build with one thread build it with one fucking damn thread and not the entire damn CPU. See good old make can understand it and does as i say him to do. Why cant you be like him and listen to what i tell you to do ? Nooooo you just have to replace the entire fucking arguments i gave you with your fucking own just for fucking fun and make me fucking mad at you. You really are fucking useless piece of crap but other then android build you are pretty decent. But still piece of crap for android. Not to say that build takes 2 hours longer with you. Im fucking lucky to be building 7.1 android where i can turn you fucking off since i cant do that with 8.1 since google thinks you are fucking better. - - - just woke up in the middle of the night dreaming of merging some branches after working a 16 hour shift straight because my company is too fucking broke to hire another freakin' dev... well fuck it who needs sleep at all! let's get some coffee...3 - When a gamer is also a developer: Idiot: What kind of game is that? I only has a crap load of words! Me: Yeah, it's called Android Studio2 - - 6 Months later... Me: Oh God! This code is horrible! Who wrote this crap? Also Me: Shit, it was me.2 - Dear dev rant. I stopped being a web designer today and am now a software "developer". I feel like I don't know crap again :)3 - - function isBool(input) { return (mixed_var === true || mixed_var === false); } at least this crap wasn't used anywhere in the code base #gemoftheday #wtf - - - - The battery of my good old Huawei Y300 is slowly dying. So I thought it was time to cut the battery consumption a little. What a delusion. A new battery costs < $5 btw, but I'm too lazy to order :) I've tested 16 highly acclaimed (of about 20,000, didn't count all of them) battery apps - they're all!, and I mean all!, total crap. There is not a single app that does what it promises. And all totally fucked up with advertising - including some of the paid apps. Most apps consume more power than they actually save. The winner of all this shit was the app "Battery Repair", which supposedly repairs broken cells. Well, well. All that junk should be thrown out of the store. But, no, these crap apps have ratings of 4.5 - 4.8 with millions of downloads. I don't get it. The only app that actually works is, hard to believe, Kaspersky Battery Saver. So if someone else wants to "optimize" their battery - forget it, it's not even worth looking for it.8 - - - Why do companies feel its ok to put dumb crap like this when their services go down? Shit like "oopsies" or "Oh no" doesn't make your users feel better about your shitty service. Btw this is from Mixpanel, who I wish we hadn't chosen as our analytics platform. - Moving tomorrow to our new place. I'd rather be fixing horrible coding with no comments than hauling all this crap around...1 - First proper day of work today. Started at 11. Listened to multiple guys shout about how much everyone made in this crap-infested cesspit of a sales job. Got to a city I'd never been in at 2. Walked around in the fucking rain knocking on doors till 9. Got on the bus at 11. I earned £20 the whole fucking day, which I'm not even going to get for a fucking month. I'm would firebomb the office tomorrow, but I need this job. Badly :/ - I hate ppl who don't indent code and those who don't add proper comments to explain three crap that they've created6 - - - Achievement unlocked for fuckerfox. You are the first program to permanently freeze my arch install, forcing me to restart. Fuck you!2 - Today i did it, after WEEKS of stuggle and suffering, this fcking odbc driver with multi db login WORKS on cpanel with php. I'm happy, ashamed, and i need a third beer. - Algorithms teacher: I don't know how to work this thing. (computer) ... Students constantly have to go up front and help him do basic computer stuff. Even worse, he types with two fingers! Why education system must you suck so bad???6 - Exercising. Which I stopped doing a few months ago. Which probably explains why everything I’m doing is turning to crap. - - So: git add . - marks the files in my local folder to be added to the repo git rm -rf . - deletes the files off my file system ? Crap!2 - Gotta stop providing better code solutions to crap that's not my business, roped into so much crap and now I have to put all these bad code fixes into some intern training scenarios or something...fml - That feeling when you think your workflow is bad and happens to see"Professional" Degree level team of coders writing crap code with even crappier workflow - - Fuck off OneDriveSetup.exe, nobody asked you to install anything. This "i7" is only dual-core, and I need both of them to run my code, kthx. - - After nearly 30 years developing, I've seen plenty of irrelevant job emails, but this one shows I can still be surprised! I'm struggling to think what in my profile could even come close to matching me for this role.2 - C++ absolute madness. Most C++ devs are just writing hacky stuff, that isnt even near to a great solution...10 - - Just wasted 30 mins of my life wondering where the fuck this bug is coming from. This is why i fucking hate javascript.8 - Freaking Magento! I've been learning how to use Magento for about the past week and i feel like i'm no closer than when i started.. - I just tried to updated mySQL on my development server, the mysql.user table is corrupted, all passwords got expired and i cant set new passwords because as i said, the user table is corrupted.4 - - - At a coffee shop, not sure what is happening, but there is a 1/4 Chance of me connecting to a server each time I try to open a page! Fuck it is frustrating! - I hate SugarCRM! Honestly. I just cannot understand how they manage to sell this peace of php crap.1 - The moment, when a "vr ready" alienware laptop (which is crap, because alienware is crap) cant handle a selfmade vr game made with unity3d...6 - Holy crap, just ran npm install on the vueJS webpack template. node_modules is 272.7mb with 21785 items :/10 - Look at this my university's WEB TECHNOLOGIES paper. It is totally full of crap and shitty questions.😅8 - - - (body.get_parent().get_children())[0].play("open") I really gotta learn how godot works to avoid doing crap like this - Omg I really like IntelliJ, but its GUI form designer is just completely and utter crap! Can't even resize a button! >:(3 - Taking a data structures Edx course. The guy who prepared the question should have been like, "Oh crap, I need to give a fourth option?" - - JSON is crap sold through marketing and doesn't live up to it's proclaimed goals: - - - - - Monday Feelings When you arrive to work and remember you have to work with Windows and you are a linux lover.2 - - - 10 - Well, I guess even googles listing of solutions on the fly from external websites doesnt always work perfectly..1 - - I wish my workday were 8 hours instead of 9. The last hour goes reading crap on the net anyway. Might as well rest in that time.3 - - Instagram new API app submission models is a piece of crap .. Mostly developers can not get applications approved .. Public data should be accessible to developera - When you get so immersed in coding something awesome & people get you off your computer for petty crap.3 - - Oh crap seems like we will switch our automation from java to python. Any recomendation to good recources or books to start with python?4 - Website looks crap but marketing be like "no one visits it anyway".. maybe if it wasn't shit and had information on it people would visit it2 - Hospital? More like i have to stop anime if i want to search for documentation because of the crap wifi.5 - If I have to listen to one more trendy, simplistic, shallow, preppy pop song I'm going to snap and beat the living tar out of whatever happens to be in swinging range. ...ok, I won't. But man I'll be tempted.3 - One of the biggest mistakes of my life: Buying a Microsoft Surface. Haven't been able to sleep in peace since I bought - - - - - - - I see all these rants about crappy employers and bosses, and it reminds to be greatful that I work at a really awesome company. I feel the pain for those who have to deal with crap, because I've been there.2 - Client:' I think that one email account is sending some spam emails. Can you make some checks?' Server:'Queued emails:730511...please kill me'3 - - crap... was about to curse Windows for randomly rebooting again to install some noncritical update... But guess I'm not allowed to do that anymore...2 - That moment you take a crap after eating lunch and equate your digestive system as a circular buffer.. -?! - Right now anything I write quickly becomes crap. I'm happy. It means the whole "lets try out raw javascript again" thing actually helps me learn something. - Windows Memory Diagnostic Tool did not bring me good news.... Hello 2017 you seem to be serving me a crap sandwich already.. - - today our senior dev said that (part of my code) is crap...I asked him how to do this the wright way...he did'nt answer.... :/5 - rant === true I despise university. Since I went there, I have stopped learning exciting and new technologies. Instead, I do mips, lisp and Java. I mean I wouldn't mind java, but it's boring repetitive crap. Making stupid simulations - all the fucking time. I can not be bothered to learn this shit anymore. It's not worth 9k a year. I'm lost. I don't know what to do. I can not physically do this anymore. Edit: Also, I hate this industry. All they want is a cs degree u til you have 2 years experience and then fuck it. It's a 50 k passport... wtf.3 - $ sudo pacman -S npm $ npm install -g @angular/cli $ ng new crap $ du -h crap 366M crap/ me like: "WHAT THE ACTUAL FUCK!!!1" $ rm -rf crap $ npm uninstall -g @angular/cli $ sudo pacman -Rs npm1 - I could work from home sometimes, the coworkers would be cool but focused, new tech would be encouraged and pms would defend devs against crappy clients, oh and no windows allowed, yeah I dont like sunlight (like linus house) - - When a software improvement organization (cough Scrum.org) does this stupid crap with their passwords, causing us all to be pwned.2 - Never had a real fight referring to any development - stuff, because most devs around me are more experienced and most of the time I can see the point in their opinions. I definitely had some discussions about some constructs in C++ for example. But thats it. - Lua and its goddamn metatables. I love the crap out of lua and its syntax, and metatables are really important, but setting them up is unbelievably confusing.1 - - Anyone else has company group chat where you troll the shit out of it besides chat where you post "politically correct" crap? 😬 - just came to the conclusion that my current side-project is a bunch of crap... now desperate finding a new app idea :/4 - My favorite slack bot throwing some crap on my face right before my week ends. Then, tries to motivate me at the same time.1 - When all of the dev cycle process crap is done. So maybe 2-3 hours per week, not including the "underwear programming" on the weekends. - Homebrew is crap ! Cannot uninstall git cannot uninstall PHP, pecl cannot install every fucking thing is restricted.6 - So I am wondering. Am I the only one who finds IBM to be a piece of crap that has slow services? I hate them so much, in that they have added so much time to my work!1 - - - The makefile: ' GOPATH=$(pwd) clean: @echo "cleaning :)" @rm -rf $(GPATH)/bin ' the "oh crap": make clean1 - - How one's inner and outer behaviour would be, when you boss is bullshitting you ? Outer me : oh is it, wow you are knowledgeable person. Inner me: fuck you bitch, get the fuck out. You lame sob..!!!3 - Using Resharper to change a var name: "Hey! I didn't break anything!" "wait... why didn't anything break? crap..." - - - Finishing thesis, Passing degree, Work in a new professional environment, starting Master studies ^^ - - - Exchange...services always failing causing email outages...it's 2017 already and business still trying to save money by using this crappy software - - - Please use re.VERBOSE for regexps and add comments. They help a lot to understand this Pile of symbolic crap. - Out of interest does anyone else have a Microsoft keyboard, and does it just die (requiring it to be plugged out and in again) every now and again? I honestly thought VS and Outlook were bad until I got this piece of crap6 - Started at a new place... Use pretty niche tech, not transferable elsewhere (Vaadin frame work )... Crap..... - - Microsoft needs to deal with getting native compilation of the full .net runtime rather than concentrating efforts just doing store apps that no one wants with the cut down runtime. Winds me up that5 - When you feel like crap but your boss might make you work on the weekend, unpaid, if you take the day off. -_-4 - - - I'd have the power to lint developer brains so they'd write clean code and I wouldn't spend so much time refactoring crap.2 - - This moment, you know ur movement mechanic in Unity works just as u want it to be. But its so damn hacky, u are afraid of refactoring, because u know.. u would break it...1 -. - - That moment when you waste two hours of your work life trying to find a dataset in a sea of crap to answer your bosse's question... - !rant My phone is buggy. I just accidentally downvoted a rant. Crap. Sorry whoever's rant that was. I don't even know what I downvoted it for. Any way I can find out what I've downvoted?2 - - - So guys, I'm going to buy a new Notebook for coding and multimedia usage. The question that bothers me the most right now: Is there a huge difference between IPS and Backlight LED Displays? Anyone got some experiences regarding this topic?1 - Only in Windows does turning the power off on my monitor mean to make it my primary monitor and show login screen there. I swear they'll never get this shit right. I'm forced to turn it on/off so windows can figure out what kind of signal to send it. Partially blame my questionable hdmi switch. No issues with MacOS and Linux though. - In the first place I dont do it that often in private projects because the estimation is always wrong. At work i just think about best and worst case scenario and the average time it could take. If the the worst case scenario is really time intensive and there are a lot of factors that could go wrong in contrast to the best case, I significantly increase the estimated time for the task. Otherwise its 1/6 best case; 1/6 worst case; 4/6 average time2 -...2 - The video shorts done by the devRant team surpasses the crap that shows like "Silicon Valley" or "Halt and Catch Fire" put out. - Designs and artistic impressions...on a website.. . Those two have lots of difference dear non programmers. Stop appreciating random crap. - - - - Windows is crap! Linux is better but does not support many important programs. And I think, a good system should be usable just fine without using the Terminal. So would it be possible to create a 4th big system architecture (Open-Source), which supports Windows Apps & Games without being shitty and without harming their laws?17 - Acumatica, left my last job because of that crap. Their implementation of a query language ('BQL') using generic types is horrendous - Is it just me who finds coding so relaxing? In all the messy crap flying around it's just me and code..my precious! Top Tags
https://devrant.com/search?term=crap
CC-MAIN-2019-22
refinedweb
4,695
74.59
pc over http will certainly work, on the ouside AS WELL on the outside. " it's getting the external domain name to point to the server that is the issue you see." no, when inside it will see server.mydomain.local, and when outside it will see mail.mydomain.com rpc over http is the way to go, he needs to get rid of that pop3, only disadvantages there. a nice tutorial for rpc over http can be found here: really, why make things MUCH more difficult with ALOT more disadvantages when you can just use rpc over http with full exchange ... Have the client to use DHCP to always get the correct DNS-settings independent if he connects internal or external. As the server propably already is registered in DNS as server.company.com, you should create an alias (CNAME) instead of an extra host (A) record, and point the CNAME-record to the FQDN for the server. If creating extra A-records, you can get some confusing resolving errors when reverse lookup doesn't match (pop->IP->server). WatchGuard DNSWatch reduces malware infections by detecting and blocking malicious DNS requests, improving your ability to protect employees from phishing attacks. Learn more about our newest service included in Total Security Suite today! I put in "pop.company.com" as the alias name, I point that to the FQDN (which is our SBS server) e.g. server.company.local but when I ping "pop.company.com" I still get a reply from the external IP, not the internal IP What am I doing wrong? how are you configuring the pop3, via the pop3 connection manager on the sbs management console, or straight into the user's outlook? the FQDN of your server (externally) is for example sbs.yourdomain.com or mail.yourdomain.com. the internal FQDN is server.yourcompany.local. Here is a tutoruial on how to configure pop3 on sbs: but actually, you shouldn't use pop3 at all, you'd better switch to SMTP: > switching pop3 to SMTP. Regards, suppsaws If you want to use pop3 for your domain use the sbs pop3 connection manager or (and you better should) use full SMTP or you will get all sorts of trouble. Don't forget this is SBS, not w2k3 server ... use the WIZARDS ... :p why are you using a separate pop server on the sbs when the pop3 connection manager is there? The POP3 connection manager is for the SBS server to download email but this SBS server doesn't use POP3, it uses SMTP to receive email. We're using the POP server on the SBS box so that the laptop can connect using POP, we're not using the POP connector to get email to the server. so you mean you are using full SMTP to receive mail, why are you trying to configure pop3 on the laptop then? just use rpc over http to get his email if he is outside the company, not pop3 .... . I really don't see the point using pop3 when you can use full smtp, please explain. RPC over HTTP on his laptop won't work either, because when he inside the internal LAN he won't be able to reach the server - it's getting the external domain name to point to the server that is the issue you see. show him the advantages of both solutions, he should be easilly conviced but it stays a ... manager :p they mostly don't care about the technical stuff , they don't have time to listen, things must only WORK :) Did you enter pop.company.com as the alias name and got pop.company.com.company.co If having separated namespace for internal and external DNS and this is the only one single machine that shall be handled this way to have its external DNS-suffix accesible on the intranet, you can create a zone and name it as the FQDN for the machine (pop.company.com). Creae an alias without name and point it on the internal server-FQDN. On your DNS if your local domain is company.com you can create a new alias called pop and point it to the local IP of your pop server or a new host A record and point it to the servers IP Address. If you domain is not company.com you will need to create a new forward dns zone called company.com and create a new host A record called pop and point it to the servers IP Address. Ensure your open a command prompt and do a ipconfig /flushdns to remove just incase its still picking up the external IP Address. Also ensure the laptop is configure to point to your internal dns server.
https://www.experts-exchange.com/questions/23492352/Outlook-2003-cannot-access-Fully-Qualified-mail-server-names-when-behind-Netgear-router.html
CC-MAIN-2018-22
refinedweb
795
70.84
Class holding the list of selected owners. More... #include <AIS_Selection.hxx> Class holding the list of selected owners. creates a new selection. the object is always add int the selection. faster when the number of objects selected is great. Append the owner into the current selection if filter is Ok. removes all the object of the selection. clears the selection and adds the object in the selection. Return the number of selected objects. Start iteration through selected objects. Return true if list of selected objects is empty. checks if the object is in the selection. Return true if iterator points to selected object. Continue iteration through selected objects. Return the list of selected objects. if the object is not yet in the selection, it will be added. if the object is already in the selection, it will be removed. Select or deselect owners depending on the selection scheme. Return selected object at iterator position.
https://dev.opencascade.org/doc/occt-7.6.0/refman/html/class_a_i_s___selection.html
CC-MAIN-2022-27
refinedweb
154
54.79
The SQL Server engine execution operators are of two kinds: blocking and non-blocking. Blocking operators need to consume the complete dataset before returning an output. Think about the COUNT(*) operator: in order to tell you the result it has to go over all your rows. Another example of blocking operator is the Sort operator. Proper sorting requires the dataset to be completely sorted. Non-blocking operators, on the contrary, can produce output before having processed all the input rows. Think about at this query: SELECT a,b, a+b FROM Table SQL Server can tell the exact result of a+b of each row as soon as it has read the row itself. In this case SQL Server will not wait for the whole table to be read from disk and will stream the data to the output as soon as it’s ready. As you might have guessed by now non-blocking operators are generally better: the less bottlenecks your execution plan has the better. The SQL Server optimizer, of course, knows when it’s best to use – and where – these operators on its own. You should however be mindful of the implications of a blocking function in your code, especially when dealing with slow-to-retrieve datasets. As a real example suppose we want to access the OpenData Lombardia and perform some statistical calculations. The data is available as an URL: (rows are available in other formats such as JSON and XML but for the purpose of this article we will work with CSV). In order to show a real example I’ve created a much bigger version of the recordset (exactly 1,305,487,617 bytes) replicating the original dataset. Using the CLR we can access the remote recordset and return as SQL NVARCHAR rows. A first blocking approach can be: [SqlFunction( DataAccess = DataAccessKind.None, SystemDataAccess = SystemDataAccessKind.None, FillRowMethodName = "_StreamLine", IsDeterministic = false, IsPrecise = true, TableDefinition = (@"Line NVARCHAR(MAX)"))] public static System.Collections.IEnumerable BlockingFileLine( SqlString fileName) { List<string> lStreams = new List<string>(); using (System.IO.FileStream fs = new System.IO.FileStream( fileName.Value, System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.Read)) { using (System.IO.StreamReader sr = new System.IO.StreamReader(fs)) { string str; while ((str = sr.ReadLine()) != null) lStreams.Add(str); } } return lStreams; } This undoubtedly works as expected generating a line-by-line TVF. However if we look at the memory footprint (I suppose a recently started instance for the sake of simplicity): This is far from optimal. Look at the virtual_memory_committed_kb delta: it’s a LOT. SQL went from 13264 KB to 4132364 KB. It’s almost 4GB just for this function (~3.5 times the initial dataset)! In SQL CLR functions we can – and should - use lazy evaluation instead: your main function is expected to return a System.Collections.IEnumerable instance. This is good: IEnumerable instances are not required to be able to tell the exact number of items in advance. This means that we can populate an IEnumerable while it’s being traversed. Using the streaming approach we don’t need to read all the file in advance. We can just read the relevant bits when requested. In the previous example we returned a List (that implements IEnumerable). The list was populated in advance (hence the heavy memory usage). To use a streaming approach we need to return a custom class that implements IEnumerable directly: [SqlFunction( DataAccess = DataAccessKind.None, SystemDataAccess = SystemDataAccessKind.None, FillRowMethodName = "_StreamLine", IsDeterministic = false, IsPrecise = true, TableDefinition = (@"Line NVARCHAR(MAX)"))] public static System.Collections.IEnumerable StreamFileLine( SqlString fileName) { System.IO.FileStream fs = new System.IO.FileStream( fileName.Value, System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.Read); return new LineStreamer(fs); } Here our LineStreamer class is just a wrapper around our real implementation: public class LineStreamer : Streamer, System.Collections.IEnumerable { public LineStreamer(System.IO.Stream stream) : base(stream) { } System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { return new LineStreamerEnumerator(stream); } } The real code is in LineStreamerEnumerator class. That class will implement System.Collections.IEnumerator allowing it to be traversed in a foreach loop. In order to achieve streaming we should produce a row only when explicitly requested. We should also discard any non essential information in order to minimize the memory footprint. Here we go: public class LineStreamerEnumerator : System.Collections.IEnumerator, IDisposable { protected System.IO.StreamReader sr = null; protected string _CurrentLine = null; public LineStreamerEnumerator(System.IO.Stream s) { sr = new System.IO.StreamReader(s); } object System.Collections.IEnumerator.Current { get { return _CurrentLine; } } bool System.Collections.IEnumerator.MoveNext() { _CurrentLine = sr.ReadLine(); return _CurrentLine != null; } void System.Collections.IEnumerator.Reset() { throw new NotImplementedException(); } #region IDisposable / destructor void IDisposable.Dispose() { if (sr != null) { sr.Dispose(); sr = null; } } // Finalizers are not supported in SQL CLR //~LineStreamerEnumerator() //{ // if (sr != null) // sr.Dispose(); //} #endregion } Notice that I’ve implemented the IDisposable interface explicitly. SQL Server will call Dispose on our enumerator as soon as it’s done with it. This will give us a chance to cleanup any lingering streams. We cannot use finalizers in SQLCLR as the example above illustrated. Let’s try streaming in action: Here the memory delta is a LOT less (107728 – 16844 = 90884 kb). We lowered our memory footprint from 4GB to 100MB. Impressive isn’t it? Of course this is an ad-hoc example so you should not expect such big gains in real world. It is however a good practice to avoid blocking operators whenever possible. More on the topic: [MSDN] IEnumerable Interface (). [MSDN] IEnumerator Interface (). [MSDN] IDisposable Interface (). Remember also that most .Net streams have automatic buffering so you don’t have to worry about it. This feature fits nicely in the streaming pattern; try it using the System.Net.WebRequest to understand why (or just look here if you’re lazy :)). Happy Coding, Francesco Cogno where is code for _StreamLine Can you provide the zip file file containing the Visual Studio Solution. Hi SQLClrGuy, the _StreamLine method is here: . I suggest you to get the whole solution from here (it's an open source project); alle the relevant classes are in the ITPCfSQL.Azure.Streaming namespace. Cheers, Francesco
http://blogs.technet.com/b/italian_premier_center_for_sql_server/archive/2014/01/23/streaming-in-sql-server-using-sql-clr.aspx
CC-MAIN-2015-22
refinedweb
1,016
51.65
Forum Index Inspiration from this r/C_Programming post What's something you learnt or realised, a habit you developed, something you read or project you worked on that helped accelerate your understanding and/or productivity in D? For example, mixin templates finally clicking and you realise how to use them in your code base. Stuff like that. On Tuesday, 6 July 2021 at 20:53:12 UTC, Dylan Graham wrote: D really did change how I approach programming tasks in any language. D templates with CTFE D syntax; ! for templates, UFCS, optional parenthesis ! These features were a game changer for me because I could finally see through the syntax to semantics and I could easily create more fluent and monadic-like interfaces in my own code. C++ never had this clarity and for me personally it was always a muddy quagmire of Type<typename T::SomeVector<etc<etc>>>. With a better understanding of templates I started thinking in terms of functional types instead of just values and data structures, which was universal to all programming languages, even dynamically typed languages like Python. ... How any combination of UFCS, dynamic code generation and introspection, shortened syntax for calling functions, the ability to pass lambdas to templates (which can also be completely inlined by the compiler without sacrificing on code readability), etc can all be used to create and model code almost exactly how you want it to. I very rarely feel constricted in D like I do in other languages. There's pros and cons to that of course, but when I feel like just tapping out something random, more often than not I can match the code model to my mental model. e.g. the very existence of UDAs can allow for pretty natural looking code: @Command("command", "This is a command that is totally super complicated.") struct ComplexCommand { @CommandArgGroup("Debug", "Arguments related to debugging.") { @CommandNamedArg("verbose|v", "Enables verbose logging.") Nullable!bool verbose; @CommandNamedArg("log|l", "Specifies a log file to direct output to.") Nullable!string log; } } The above is doable in C#, but has to be done at runtime (assume the same with JVM languages). However in C++ you probably have to do weird hacks or use an external tool. C is either the same as C++ or even worse. Another example would be ranges, which we all know about (taken from D's main page): import std.stdio, std.array, std.algorithm; void main() { stdin .byLineCopy .array .sort!((a, b) => a > b) // descending order .each!writeln; } Or maybe I want to design a UI system where, with a small bit of descriptive boiler plate, I can automatically enable new controls to be used within a UI definition file: @DataBinding struct MyControlBinding { @BindingFor("stringVal") string someString; @BindingFor("intVal") @Name("number") int someInt; } @UsesBinding!MyControlBinding class MyControl : UIBase { public string stringVal; public int intVal; } UI:view { name "Some sexy custom UI" MyControl { someString "Hey" number 69 } } Or literally even just simple things like custom error messages: struct Vector(size_t dims) { static assert(dims <= 4, "The dimension is too large!") } Then there's things like pegged, vibe-d's diet templates, etc... All of this is possible within a single language, using standard tooling. Once I learned about how to introspect code and to generate code, the amount of possibilities really opened up to me. D's expressiveness and modeling power is probably the biggest thing I'd say it has over most over languages. I feel even dynamic languages fail to do some of these things as elegantly. On Wednesday, 7 July 2021 at 00:13:44 UTC, SealabJaster wrote: There's also cool shenanigans with things like this: UFCS + ranges made me how algorithms are supposed to compose You can make useful but really horrible interfaces (i.e. OpenCL) rather nice with reflection and CTFE For me, it is the indirect value I reaped by lurking in D's forums. There used to be some interesting discussions by language theorists that pushed me to learn and appreciate quite a bit of "abstract nonsense" and shown how many of what I thought was a cool new thing was just a hackish reincarnation of an old and well explored concept (including mixin templates). On Wednesday, 7 July 2021 at 08:41:05 UTC, Max Samukha wrote: shown showed, etc I wish I could edit. Not really, in Java you can do it with annotation processors and compiler plugins at compile time, whereas in C# you can do it with code generators or T4 templates. On 7/6/21 4:53 PM, Dylan Graham wrote: Learning to not care about using the GC. Learning to have the compiler write code the way I would write it. vibe.d was immensely inspiring for this! -Steve
https://forum.dlang.org/thread/hstrkgyxoyxxlryqnixw@forum.dlang.org
CC-MAIN-2021-39
refinedweb
792
59.53