text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Definition The class checksummer_base is the base class for all LEDA checksummers. It cannot be instantiated. In order to avoid repeating this documentation for each derived class below we will discuss its members here. When a checksummer is used in encoding mode it can add checksums to the output stream. This is controlled by the checksums_in_stream flag. If this flag is set and the block size is zero (default setting) then one checksum is appended at the end of the stream. If the flag is switched on and the block size b is positive then a checksum is written for every block of b characters. When the checksummer is used in decoding mode this flag specifies whether the source stream contains checksums. If so, they will be compared against the computed checksum for the stream (or for the respective block if the block size is positive). If you use a checksummer in a coder pipe (cf. Section Coder Pipes) then it should be the first coder in the pipe. This ensures that the checksum is computed for the original input. Finally, we want to point out that all checksummers provide fast seek operations (cf. Section Decoding File Stream). #include < LEDA/coding/checksum.h > Creation Operations Standard Operations Additional Operations
http://www.algorithmic-solutions.info/leda_manual/checksummer_base.html
CC-MAIN-2017-13
refinedweb
209
66.03
table of contents NAME¶ Tk_InitStubs - initialize the Tk stubs mechanism SYNOPSIS¶ #include <tk.h> const char * Tk_InitStubs(interp, version, exact) ARGUMENTS¶ - Tcl_Interp *interp (in) - Tcl interpreter handle. - char *version (in) - A version string consisting of one or more decimal numbers separated by dots. - int exact (in) - Non-zero means that only the particular Tk version specified by version is acceptable. Zero means that versions newer than version are also acceptable as long as they have the same major version number as version. INTRODUCTION¶ The Tcl stubs mechanism defines a way to dynamically bind extensions to a particular Tcl implementation at run time. the stubs mechanism requires no changes to applications incorporating: - 1) - Call Tcl_InitStubs in the extension before calling any other Tcl functions. - 2) - Call Tk_InitStubs if the extension before calling any other Tk functions. - 2) - Define the USE_TCL_STUBS and the USE_TK_STUBS symbols. Typically, you would include the -DUSE_TCL_STUBS and the -DUSE_TK_STUBS flags when compiling the extension. - 3) - Link the extension with the Tcl and Tk stubs libraries instead of the standard Tcl and Tk libraries. On Unix platforms, the library names are libtclstub8.4.a and libtkstub8.4.a; on Windows platforms, the library names are tclstub84.lib and tkstub84.lib. Adjust the library names with appropriate version number but note that the extension may only be used with versions of Tcl/Tk that have that version number or higher. DESCRIPTION¶ Tk_InitStubs attempts to initialize the Tk stub table pointers and ensure that the correct version of Tk is loaded. In addition to an interpreter handle, it accepts as arguments a version number and a Boolean flag indicating whether the extension requires an exact version match or not. If exact is 0, then the extension is indicating that newer versions of Tk are acceptable as long as they have the same major version number as version; non-zero means that only the specified version is acceptable. Tcl_InitStubs returns a string containing the actual version of Tk satisfying the request, or NULL if the Tk version is not acceptable, does not support the stubs mechanism, or any other error condition occurred. SEE ALSO¶ Tcl_InitStubs KEYWORDS¶ stubs
https://manpages.debian.org/unstable/tk8.6-doc/Tk_InitStubs.3tk.en.html
CC-MAIN-2022-21
refinedweb
356
54.83
ATTRIBUTE(3) Library Functions Manual ATTRIBUTE(3) NAME attribute -- non-standard GCC attribute extensions SYNOPSIS #include <<sys/cdefs.h>> __dead __pure __constfunc __noinline __unused __used __packed __aligned(x); __section(section); __read_mostly __cacheline_aligned __predict_true(exp); __predict_false(exp); DESCRIPTION The GNU Compiler Collection (GCC) provides many extensions to the standard C language. Among these are the so-called attributes. In NetBSD all attributes are provided in a restricted namespace. The described macros should be preferred instead of using the GCC's __attribute__ extension directly. ATTRIBUTES __dead The gcc(1) compiler knows that certain functions such as abort(3) and exit(3) can never return any value. When such a function is equipped with __dead, certain optimizations are possible. GCC is known for aggressive function inlining. Sometimes it is known that inlining is undesirable or that a function will perform incorrectly when inlined. The __noinline macro expands to a function attribute that prevents GCC for inlining the function, irrespective whether the function was declared with the inline keyword. The attribute takes precedence over all other compiler options related to inlining. __unused In most GCC versions the common -Wall flag enables warnings produced by functions that are defined but unused. Marking an unused function with the __unused macro inhibits these warnings. __used The __used macro expands to an attribute that informs GCC that a static variable or function is to be always retained in the object file even if it is unreferenced. _: o Mixing assembly and C code. o Dealing with hardware that may impose alignment requirements greater than the architecture itself. o equals 1. _. SEE ALSO 6.1.5 December 19, 2010 NetBSD 6.1.5
http://modman.unixdev.net/?sektion=3&page=__predict_false&manpath=NetBSD-6.1.5
CC-MAIN-2017-17
refinedweb
275
57.77
It will enable the on screen navigation soft keys. it will also disable the capacitive buttons AND the home button. I used the Framework-res.apk from the CM 10 Preview 6 build. This will NOT disable the capacitive lights, just the keys themselves. To disable the lights go to settings -> Advanced -> Screen and uncheck "Backlight" Why did I do this mod? Well, I really liked the on screen nav buttons on the GNEX. At the time I had the Galaxy S II and enabled them with AOKP but as much as I liked them I hated losing the screen real estate on the 4.3 inch screen. When I got the GS3 I felt the screen was big enough to give up a little screen RE for the nav buttons. I also did it because when I play games or hold my phone in landscape I would accidently hit the capacitive buttons so this mod also eliminated that for me EDIT: Member NemesisRE Has posted a way to universally do this on CM 10 and other roms! Taking this mod a step further and making it easy on me so I wouldnt have to do the mod every time the CM 10 FW changed: From NemesisRE's post() Based upon the work of graffixnyc and labbe- I made a Script that searches for the entries and changes them. All other entries (if there are others) are not affected! Script doesn´t need BusyBox binary so its a very small Package. No Framework-res.apk is added so it should be universal. Please make a Backup before you try this! And report if working! ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- NAV_Only: none of the hardware keys are enabled (except Volume and Power) NAV_HomeWake: Home button wakes device but has no other function NAV_HomeCamera: Home button opens Camera and takes Pictures but has no other function NAV_StockKeys: Functions as normal but with on Screen Navbar NAV_Remove: Removes the mod Install Instructions: - Boot into recovery - flash with CWM - reboot This should work on any device with the same keybindings: Code: key 172 HOME key 158 BACK key 139 MENU Code: /system/usr/keylayout/sec_touchkey.kl /system/usr/keylayout/gpio-keys.kl Code: #include <std_disclaimer.h> /* * Your warranty is now void. * * I am not responsible for bricked devices, dead SD cards, *. Hard. A lot. */ Steven
https://forum.xda-developers.com/galaxy-s3/themes-apps/mod-enable-screen-nav-soft-keys-disable-t1788780
CC-MAIN-2017-43
refinedweb
386
70.53
Don't Try This at Home Dr. Josh Bloch and Dr. Neal Gafter (Click and Hack) guest starred on Mary's Friday Free Stuff Puzzler last week. Every week, Mary, a Sun marketing geek, boxes up random items she finds around her office and ships them off to the weekly winner. Her blog is a fun read. Check it. Their provided example method performs the same function as the throw clause, except the caller is not forced to handle checked exceptions: public class Thrower { public static void sneakyThrow(Throwable t) { Thread.currentThread().stop(t); // deprecated } } See for yourself by running this example program: public class Test { public static void main(String[] args) { Thrower.sneakyThrow(new Exception("Ouch")); } } The first solution takes advantage of a bad design decision in the Class.newInstance() method. Class.newInstance() directly propagates any exception thrown by the default constructor. The API designers have since avoided problems in other reflective methods by wrapping target exceptions. For example, Constructor.newInstance(Object[]) throws InvocationTargetException which wraps any exception thrown by the target constructor. This helps the client differentiate between target exceptions and exceptions thrown by newInstance() itself. public class Thrower { private static Throwable throwable; private Thrower() throws Throwable { throw throwable; } public static synchronized void sneakyThrow(Throwable throwable) { // can't handle these types. if (throwable instanceof IllegalAccessException || throwable instanceof InstantiationException) throw new IllegalArgumentException(); Thrower.throwable = throwable; try { Thrower.class.newInstance(); } catch (InstantiationException e) { // can't happen. e.printStackTrace(); } catch (IllegalAccessException e) { // can't happen. e.printStackTrace(); } finally { Thrower.throwable = null; } } } Note that the code sets the static throwable field back to null in a finally block. Neal added this to prevent a memory leak (this is why we trust these guys with our core code). "Who cares about how much memory one exception takes up?" some might ask. Though this is a trivial example, the real world implications are much larger. For example, without the finally block, if Thrower is in the system class path, we call sneakyThrow() with an exception whose class loaded in a web application, and then we try to hot deploy that web application, the VM can not garbage collect the old deployment. The system class loader holds a strong reference to the exception instance, which holds a strong reference to the class, which holds a strong reference to the web application classloader, which holds a strong reference to all of the classes in the web application (substantially more than one exception). Whether we're talking about a framework that handles AOP, dependency injection, or logging, the implementor has to keep this in mind. The final JDK 1.5 specific solution takes advantage of the fact that the compiler does not type check generics at runtime. In defense of JDK 1.5, the compiler does print a warning. public class TigerThrower { private void sneakyThrow2(Throwable t) throws T { throw (T) t; // Unchecked cast!!! } public static void sneakyThrow(Throwable t) { new TigerThrower ().sneakyThrow2(t); } } - Login or register to post comments - Printer-friendly version - crazybob's blog - 1226 reads
https://weblogs.java.net/blog/crazybob/archive/2004/09/dont_try_this_a.html
CC-MAIN-2014-10
refinedweb
501
57.47
lp:charms/trusty/apache-hadoop-plugin Created by Kevin W Monroe on 2015-06-01 and last modified on 2015-12-16 - Get this branch: - bzr branch lp:charms/trusty/apache-hadoop-plugin Members of Big Data Charmers can upload to this branch. Log in for directions. Branch merges Propose for merging Related bugs Related blueprints Branch information - Owner: - Big Data Charmers - Status: - Mature Recent revisions - 107. By Kevin W Monroe on 2015-12-16 merge compatible bigdata-dev changes: use s3 for java-installer, update benchmarks to work in our .venv - 106. By Cory Johns on 2015-12-02 More explicit version pin - 105. By Cory Johns on 2015-12-02 Fixed version pin - 104. By Cory Johns on 2015-12-02 Pin charm-benchmark to work around update breaking plugin - 103. By Cory Johns on 2015-10-07 Get Hadoop binaries to S3 and cleanup tests to favor and improve bundle tests - 102. By Cory Johns on 2015-09-16 Merged test fixes for CWR - 101. By Kevin W Monroe on 2015-09-15 [merge] merge bigdata-dev r116..117 into bigdata-charmers - 100. By Kevin W Monroe on 2015-08-24 [merge] merge bigdata-dev r106..r115 into bigdata-charmers - 99. By Kevin W Monroe on 2015-06-29 bundle resources into charm for ease of install; add extended status messages; use updated java-installer.sh that ensures java is on the path; update to latest jujubigdata library - 98. By Kevin W Monroe on 2015-06-18 remove namespace refs from readmes now that we are promulgated; update DEV-README with jujubigdata info Branch metadata - Branch format: - Branch format 7 - Repository format: - Bazaar repository format 2a (needs bzr 1.16 or later)
https://code.launchpad.net/~bigdata-charmers/charms/trusty/apache-hadoop-plugin/trunk
CC-MAIN-2017-13
refinedweb
285
65.73
Enums are a double-edged sword. They are extremely useful to create a set of possible values, but they can be a versioning problem if you ever add a value to that enum. In a perfect world, an enum represents a closed set of values, so versioning is never a problem because you never add a value to an enum. However, we live in the real, non-perfect world and what seemed like a closed set of values often turns out to be open. So, let's dive in. Beer API My example API is a Beer API! I have a GET that returns a Beer, and a POST that accepts a Beer. [HttpGet] public ActionResult<Models.Beer> GetBeer() { return new ActionResult<Models.Beer>(new Models.Beer() { Name = "Hop Drop", PourType = Beer.Common.PourType.Draft }); } [HttpPost] public ActionResult PostBeer(Models.Beer beer) { return Ok(); } The Beer class: public class Beer { public string Name { get; set; } public PourType PourType { get; set; } } And the PourType enum: public enum PourType { Draft = 1, Bottle = 2 } The API also converts all enums to strings, instead of integers which I recommend as a best practice. services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2) .AddJsonOptions(options => { options.SerializerSettings.Converters.Add(new Newtonsoft.Json.Converters.StringEnumConverter()); }); So, the big question comes down to this definition of PourType in the Beer class. public PourType PourType { get; set; } Should it be this insted? public string PourType { get; set; } We're going to investigate this question by considering what happens if we add a new value to PourType, Can = 3. Let's look at the pros/cons. Define As Enum Pros When you define PourType as an Enum on Beer, you create discoverability and validation by default. When you add Swagger (as you should do), it defines the possible values of PourType as part of your API. Even better, when you generate client code off of the Swagger, it defines the Enum on the client-side, so they can easily send you the correct value. Cons Backwards compatibility is now an issue. When we add Can to the PourType, we have created a new value that the client does not know about. So, if the client requests a Beer, and we return a Beer with the PourType of Can, it will error on deserialization. Define As String Pros This allows new values to be backwards compatible with clients as far as deserialization goes. This will work great in cases where the client doesn't actually care about the value or the client never uses it as an enum. However, from the API's perspective, you have no idea if that is true or not. It could easily cause a runtime error anyway. If the client attempts to convert it to an enum it will error. If the client is using the value in an IF or SWITCH statement, it will lead to unexpected behavior and possibly error. Cons The biggest issue is discoverability is gone. The client has no idea what the possible set of values are, it has to pass a string, but has no idea what string. This could be handled with documentation, but documentation is notoriously out of date and defining it on the API is a much easier process for a client. So What Do We Do? Here's what I've settled on. Enum! The API should describe itself as completely as possible, including the possible values for an enum value. Without these values, the client has no idea what the possible values are. So, a new enum should be considered a version change to the API. There are a couple ways to handle this version change. Filter The V1 controller could now filter the Beer list to remove any Beer's that have a PourType of Can. This may be okay if the Beer only makes sense to clients if they can understand the PourType. Unknown Value The Filter method will work in some cases, but in other cases you may still want to return the results because that enum value is not a critical part of the resource. In this case, make sure your enum has an Unknown value. It will need to be there at V1 for this to work. When the V1 controller gets a Beer with a Can PourType, it can change it to Unknown. Here's the enum for PourType: public enum PourType { /// <summary> /// Represents an undefined PourType, could be a new PourType that is not yet supported. /// </summary> Unknown = 0, Draft = 1, Bottle = 2 } Because Unknown was listed in the V1 API contract, all clients should have anticipated Unknown as a possibility and handled it. The client can determine how to handle this situation... it could have no impact, it could have a UI to show the specific feature is unavailable, or it could choose to error. The important thing is that the client should already expect this as a possibility. Resource Solution One thing that should be considered in this situation is that the enum is actually a resource. PourType is a set of values that could expand as more ways to drink Beer are invented (Hooray!). It may make more sense to expose the list of PourType values from the API. This prevents any version changes when the PourType adds a new value. This works well when the client only cares about the list of values (e.g. displaying the values in a combobox). But if the client needs to write logic based on the value it can still have issues with new values, as they will land in the default case. Exposing the enum as a resource also allows additional behavior to be added to the value, which can help with client logic. For example, we could add a property to PourType for RequiresBottleOpener, so the client could make logic decisions without relying on the "Bottle" value, but just on the RequiresBottleOpener property. The PourType resource definition: public class PourType { public string Name { get; set; } public bool RequiresBottleOpener { get; set; } } The PourType controller: [HttpGet] public ActionResult<IEnumerable<PourType>> GetPourTypes() { // In real life, store these values in a database. return new ActionResult<IEnumerable<PourType>>( new List<PourType>{ new PourType {Name = "Draft"}, new PourType {Name = "Bottle", RequiresBottleOpener = true}, new PourType {Name = "Can"} }); } However, this path does increase complexity at the API and client, so I do not recommend this for every enum. Use the resource approach when you have a clear case of an enum that will have additional values over time. Conclusion I have spent a lot of time thinking about this and I believe this is the best path forward for my specific needs. If you have tackled this issue in a different way, please discuss in the comments. I don't believe there is a perfect solution to this, so it'd be interesting to see other's solutions. Discussion (4) I don't actually suggest it, but if the API is versioned, doesn't that mean that the V2 version could use an extended enum that is different from the first by the new value? We usually have a mixed approach for the enums which are likely to change in the future. let's say WalletTransactionType is an enum which holds Credit, Debit. It may change in future versions when we support transactions with digital wallet or EMI. Client : Client sdk is generated as if WalletTransactionType properties as a string. We generate the allowed types in the code documentation tags so that as frontend dev tries to set the value, it shows up the documented code to assist him what are allowed. Server : Server holds the enum type so that its easy to validate for the values passed from client. Case 1 : Lets say if EMI is added in new version, V1 : Client sends only Credit, Debit and server validates it fine. V2 : Client sends all three and server validates it fine. Case 2 : Let's say we removed Debit option and added EMI. V1 : Client sends Credit or Debit, server takes the values, takes necessary steps in its v1 version services. V2 : Works as normal. Basically we maintain different services, controllers and routers for each version and so does the DTO objects. For a new version, we just clone the existing code and work on it. If anything breaks, like Case 2, we get a compilation error for V1 versions and we adapt them to match the domain functionality accordingly. PS : I liked the Unknown extra value though as it can be helpful in some scenarios. I can easily generate unknown for every enum listed in the spec automatically through code generator. Hope it makes sense. I came across the same problem recently and there were some suggestions to include this unknown literal for all enums at code generation level. One concern i have with that is it introduce unwanted code complexity for all enum handlings. And developers end up in implementing undefined behavior in consumer code for all enums. Do you see the same concerns ? I like the version API for enum values solution and also if you really want to use enum for something potential change, then define the unknown as part of the API contract and describe it. So API designers can take that decision than a code generator does. Another option i consider is representing the new value as a attribute in current version of the API provided that this new value can be optional for current consumers. I have enums. And I usually expose a versioned endpoint to get the valid values. But for that master data that can change in some form, I use the database.
https://dev.to/timothymcgrath/enums-apis-15n4?utm_campaign=dotNET%20Weekly&utm_medium=email&utm_source=week-37_year-2019
CC-MAIN-2021-43
refinedweb
1,606
62.58
py2app 0.8 Create standalone Mac OS X applications with Python py. Release history py2app 0.8 py2app 0.8 is a feature release Fixed argv emulator on OSX 10.9, the way the code detected that the application was launched through the Finder didn’t work on that OSX release. The launcher binary is now linked with Cocoa, that should avoid some problems with sandboxed applications (in particular: standard open panels don’t seem to work properly in a sandboxed application when the main binary is not linked to AppKit) Don’t copy Python’s Makefile, Setup file and the like into a bundle when sysconfig and distutils.sysconfig don’t need these files (basicly, when using any recent python version). Fix some issues with virtualenv support: - detection of system installs of Python didn’t work properly when using a virtualenv. Because of this py2app did not create a “semi-standalone” bundle when using a virtualenv created with /usr/bin/python. - “semi-standalone” bundles created from a virtualenv included more files when they should (in particular bits of the stdlib) Issue #92: Add option ‘ ‘email’ recipe, but require a new enough version of modulegraph instead. Because of this py2app now requires modulegraph 0.11 or later. py2app 0.7.4 Issue #77: the stdout/stderr streams of application and plugin bundles did not end up in Console.app on OSX 10.8 (as they do on earlier releases of OSX). This is due to a change in OSX. With this version the application executable converts writes to the stdout and stderr streams to the ASL logging subsystem with the options needed to end up in the default view of Console.app. NOTE: The stdout and stderr streams of plugin bundles are not redirected, as it is rather bad form to change the global environment of the host application. The i386, x86_64 and intel stub binaries are now compiled with clang on OSX 10.8, instead of an older version of GCC. The other stub versions still are compiled on OSX 10.6. Issue #111: The site.py generated by py2app now contains a USER_SITE variable (with a default value of None) because some software tries to import the value. Py2app didn’t preserve timestamps for files copied into application bundles, and this can cause a bytecompiled file to appear older than the corresponding source file (for packages copied in the bundle using the ‘packages’ option). Related to issue #101 Py2app also didn’t copy file permissions for files copied into application bundles, which isn’t a problem in general but did cause binaries to lose there executable permissions (as noted on Stackoverflow) Issue #101: Set “PYTHONDONTWRITEBYTECODE” in the environment before calling Py_Initialize to ensure that the interpreter won’t try to write bytecode files (which can cause problems when using sandboxed applications). Issue #105: py2app can now create app and plugin bundles when the main script has an encoding other than ASCII, in particular for Python 3. Issue #106: Ensure that the PIL recipe works on Python 3. PIL itself isn’t ported yet, but Pillow does work with Python 3. “python setup.py install” now fails unless the machine is running Mac OS X. I’ve seen a number of reports of users that try to use py2app on Windows or Linux to build OSX applications. That doesn’t work, py2app now fails during installation do make this clear. Disabled the ‘email’ ‘pkg/foo.py’ to be in namespace package ‘pkg’ unless there is a zipfile entry for the ‘pkg’ folder (or there is a ‘pkg/__init__.py’ entry). Issue #97: Fixes a problem with the pyside and sip recipes when the ‘qt_plugins’ option is used for ‘image_plugins’. Issue #96: py2app should work with python 2.6 again (previous releases didn’t work due to using the sysconfig module introduced in python 2.7) Issue #99: appstore requires a number of symlinks in embedded frameworks. (Version 0.7 already added a link Python.frameworks/Versions/Current, this versions also adds Python.framework/Python and Python.framework/Resources with the value required by the appstore upload tool). Py2app copied stdlib packages into the app bundle for semi-standalone builds when they are mentioned in the ‘–packages’ option (either explicitly or by a recipe). This was unintentional, semi-standlone builds should rely on the external Python framework for the stdlib. Note Because of this bug parts of the stdlib of /usr/bin/python could be copied into app bundles created with py2app. ‘.git’ and ‘.hg’ directories while copying package data (‘.svn’ and ‘C ‘pkg ‘raw-backends (or ‘matplotlib_backends’ in setup.py) is a list of plugins to include. Use ‘-‘ to not include backends other than those found by the import statement analysis, and ‘*’ to include all backends (without necessarily including all of matplotlib) As an example, use --matplotlib-backends ‘python ‘includes’ ‘Versions’ ‘ ‘ ‘Resources’ folder is no longer on the python search path, it contains the scripts while Python modules and packages are located in the site-packages directory. This change is related to issue #30. The folder ‘Resources ‘argv_emulation’ to False when your using a 64-bit build of python, because that option is not supported on such builds. py2app now clears the temporary directory in ‘build’ and the output directory in ‘dist’ ‘examples’ ‘) - Downloads (All Versions): - 326 downloads in the last day - 1825 downloads in the last week - 8108 downloads in the last month - Author: Ronald Oussoren - Documentation: py2app package documentation - - Programming Language :: Python :: 2 - Programming Language :: Python :: 2.7 - Programming Language :: Python :: 3 - Programming Language :: Python :: 3.3 - Programming Language :: Python :: 3.4 - Topic :: Software Development :: Build Tools - Topic :: Software Development :: Libraries :: Python Modules - Topic :: Software Development :: User Interfaces - Package Index Owner: bob, ronaldoussoren - DOAP record: py2app-0.8.xml
https://pypi.python.org/pypi/py2app/0.8
CC-MAIN-2015-22
refinedweb
962
63.09
I am working on a simple program that will grab the memory address of a given variable upto 64 bits(unsigned long). currently this is the code I have but for someo reason the compiler is throwing me warnings saying that my method is returning address of a local variable when that is what I have intended. int main(int argc, char *argv[]) { char* one = argv[1]; long memaddress = address(one); } uint64_t address( char * strin) { return (uint64_t) &strin; } You can imagine the function definition and its call long address = address(one); //... uint64_t address( char * strin) { return (uint64_t) &strin; } the following way long address = address(one); //... uint64_t address( void ) { char * strin = one; return (uint64_t) &strin; } As you see variable strin is a local variable of the function. It will be destroyed after exiting the function. Thus its address after exiting the function will be invalid. And the compiler warns you about this. To avoid the warning you could write the function at least the following way uint64_t address( char ** strin) { return (uint64_t) &*strin; } and call it like long address = address(&one);
https://codedump.io/share/3WSPaeDI7PBt/1/function-returns-address-of-local-variable--wreturn-local-addr
CC-MAIN-2017-17
refinedweb
180
60.45
. Receiving key events in Java ME This code snippet demonstrates how to handle key events using the MIDP 2.0 API. Series 40 Series 40 DP 2.0 Series 40 6th Edition FP1 Series 40 3rd Edition FP1 Symbian Nokia Belle S60 5th Edition S60 3rd Edition FP2 S60 3rd Edition FP1 Overview Only some displayables can handle key events. In MIDP 2.0 those are: - CustomItem - Canvas - GameCanvas This example implements some Canvas class methods. In order to handle a key press, the Canvas.keyPressed method is implemented. This method is called by the framework when a 'key press' event occurs. The method takes the key code integer value as a parameter, which can be used for special handling. The second implemented method is Canvas.paint. It is used for displaying the code and name of the pressed key. Canvas can also handle 'key release', 'key repeat', 'pointer drag', 'pointer press', and 'pointer release' events. The methods for handling these events are: - keyReleased - keyRepeated - pointerDragged - pointerPressed - pointerReleased For more info, see API documentation. Source file: MyCanvas.java import javax.microedition.lcdui.Canvas; import javax.microedition.lcdui.Font; import javax.microedition.lcdui.Graphics; /** * Canvas demostrates key handling procedure example. */ public class MyCanvas extends Canvas { /** * Array holds string displayed with key code and key name. */ private String[] displayStrings; /** * Holds last pressed key code. */ private int pressedKey; /** * */ public MyCanvas() { pressedKey = 0; displayStrings = new String[2]; displayStrings[0] = "Key code: "; displayStrings[1] = "Key: "; } /** * From Canvas. * Method fills canvas draw area with grey color, draws a rectangle border * for a canvas and displays last presend key code and it's name. * @param g */ protected void paint( Graphics g ) { // Setting up text font to default one. g.setFont( Font.getDefaultFont() ); // Setting up Grey color as the current one. g.setColor( 0xC0C0C0 ); // Filling the screen with the current color. g.fillRect( 0, 0, getWidth(), getHeight() ); // Setting text and canvas border color. g.setColor( 0x000000 ); // Draw a simple border for our canvas. g.drawRect( 0 + 2, 0 + 2, getWidth() - 4, getHeight() - 4 ); // Changing pen position to where our key codes will be displayed g.translate( 4, getHeight() / 2 ); // drawing a key code // If pressedKey variable holds '0' as a value no key code ot it's name // will be displayed then. if( pressedKey != 0) { // Drawing a string containing pressed key code g.drawString( displayStrings[0] + pressedKey, 0, 0, Graphics.TOP | Graphics.LEFT ); // Drawing a string containing pressed key code name g.drawString( displayStrings[1] + "'" + getKeyName( pressedKey ) + "'", 0, 20, Graphics.TOP | Graphics.LEFT ); } else { g.drawString( "Press a key ... ", 0, 20, Graphics.TOP | Graphics.LEFT ); } // moving cursor back to where it was g.translate( 0, - getHeight() / 2 ); } /** * From Canvas. * Method saves code of a pressed key to the pressedKey variable. * @param keyCode */ protected void keyPressed( int keyCode ) { pressedKey = keyCode; repaint(); } } Source file: ReceivingKeyEvents.java /** * Reference to our Canvas implementation object. */ private MyCanvas canvas; public ReceivingKeyEvents() { display = Display.getDisplay( this ); setupMainForm(); setupCanvas(); } /** * Instantiates a canvas variable with MyCanvas instance. */ private void setupCanvas() { canvas = new MyCanvas(); // Adding 'Back' softkey to be able to return to app's main form. canvas.addCommand( BACK_COMMAND ); // Setting up our MIDlet class as canvas command listener. canvas.setCommandListener( this ); } /** *) { // Bringing canvas to the foreground. display.setCurrent( canvas ); } else if ( command == BACK_COMMAND ) { // Switching back from canvas to mainForm. display.setCurrent( mainForm ); } } Postconditions When the MIDlet is started, the main form with a text field will be displayed. Press the 'Start' softkey and a canvas will be displayed, waiting for key inputs from the user. If you press one of the numpad buttons, the appropriate key code and name will be displayed on the canvas. To get back to the main form, press the 'Back' softkey.:ReceivingKeyEvents.zip. - You can view all the changes that are required to implement the above-mentioned features. The changes are provided in unified diff and colour-coded diff (HTML) formats in Media:ReceivingKeyEvents.diff.zip. - For general information on applying the patch, see Using Diffs. - For unpatched stub applications, see Example app stubs with logging framework.
http://developer.nokia.com/community/wiki/Receiving_key_events_in_Java_ME
CC-MAIN-2014-42
refinedweb
665
60.21
); } 10 Comments Permalink Now I see why you didn’t include specific commands using the GNU C compiler for “The Correct Solution”. $ g++ -o yourprogram somecode.o othercode.o /usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/8/../../../x86_64-linux-gnu/Scrt1.o: in function _start':main’ (.text+0x20): undefined reference to collect2: error: ld returned 1 exit status It doesn’t compile. I am beginning to think that it is not possible to call a C function from a C++ main using the GNU C compiler. Permalink I call C functions from C++ code and it compiles every day. Your linker error seems to be something different. According to the error message, neither of the two object files contains a main function. If you show the actual sources that resulted in those object files, I can do more than guesswork as to why the linker doesn’t find main. Permalink Hi Arne, thanks so much for writing. I am trying to include two C headers to my C++ program that have the same name, types, and functions (they are the same utility with different parameters). With only one instance of the C code in use, I used the extern “C” {#include “foo.h”} in my C++ .cpp file. Now that I try to include the second, there are name conflicts. Note, I have individual cpp files wrapping use of the each C-code utility. I implemented your foo_for_cpp.h solution to no avail. To my question: Is it possible to create aliases for the C types and functions in foo_for_cpp.h to avoid the conflicts? Sorry if my vocab isn’t quite right; I’m not a professional. Respectfully, John Permalink Hi John, I am afraid there is no easy solution to that. If the two headers are header only (i.e. all functions are inlined, there are no .c source files having definitions), you might get away with putting the includes in anonymous namespaces and include only one of those headers in any .cpp source file. That way, the symbols in those headers are only visible locally in the respective .cpp file. If that is not the case, i.e. there are definitions in .c source files, this is not possible due to linking rules: There will be two different definitions of the same function in two translation units, which violates the One Definition Rule and makes your program invalid and undefined behavior. Permalink I’d like to point to the ISO C++ FAQ regarding these questions:++".
https://arne-mertz.de/2018/10/calling-cpp-code-from-c-with-extern-c/
CC-MAIN-2022-05
refinedweb
420
67.15
SHSkipJunction function Checks a bind context to see if it is safe to bind to a particular component object. Syntax Parameters - pbc [in, optional] A pointer to an IBindCtx interface that specifies the bind context you want to check. This value can be NULL. - pclsid [in] Type: const CLSID* A pointer to a variable that specifies the CLSID of the object being tested to see if it must be skipped. Typically, this is the CLSID of the object that IShellFolder::BindToObject is about to create. Return value Type: BOOL Returns TRUE if the object specified by pclsid must be skipped, or FALSE otherwise. Remarks This function can be used to avoid infinite cycles in namespace binding. For example, a folder shortcut that refers to a folder above it in the namespace tree can produce an infinitely recursive loop. Requirements Show:
https://msdn.microsoft.com/en-us/library/dd378431(v=vs.85).aspx
CC-MAIN-2017-09
refinedweb
140
62.27
rails pass variable to model from controller not working Im trying to pass a variable from controller to my form to no avail. Its working in another controller but cannot make it work for new user registrations. def update @profile = Profile.find(params[:id]) @profile.form = "signup" ... end Inside model I can do @profile.form to get the value This does not work for new user registrations: inside create action: undefined method `form=' for nil:NilClass how to pass a variable to my model upon new user registration? User = User.new then @user.form == "signup" does not work either, i have :form in my attributes accessible list Part of model: attr_accessible form ... validates :city_id, :presence => true, :if => :signup? def signup? #@profile.form == "signup" #@user.form == "signup" if self.form == "signup" return true else return false end end EDIT 1: Still unable to pass a param to the model :S:S:S tried every possible way and google found solution. The create method for my registrations#create = def create profile = Profile.new profile.form = "signup" super end Answers I am not sure what exactly your problem is . What I understand after I read your problem statement twice is that you tried to "new" a instance of Profile, assign its form attribute. And use its instance method signup? to do validation. But you get an error after on self.form because self reference to the nil Class I think you should change profile to @profile inside the create method. Just by guess after reading ruby guide about instance variable, My thought is that declaring the instance variable that will be added dynamically inside the controller structure and later they are passed to the model. And it is always @... inside controller generated by rails g scaffold. Just give it a try :) Interesting. I believe what you want to do is have a virtual attribute 'form' of an object a certain value. I.e. does form exist in the database as well. If it doesn't you should be using attr_accessor :form Documentation This defines the getter and setter method for form in the class where it is being invoked. However, the error you stated is something radically different. undefined method `form=' for nil:NilClass This merely means a profile with the id being passed in the params does not exist. i.e. if we are updating params id = 222, a database record with id 222 does not exist for Profile. Need Your Help How can I add a Foursquare-like Badge / Achievement Tab on FB Users' profiles facebook facebook-graph-api foursquare social-gamingI have already created a Game app in Facebook, and have posted achievements. So far so good...
http://unixresources.net/faq/12347474.shtml
CC-MAIN-2019-13
refinedweb
447
65.62
34761/how-to-export-data-from-mysql-to-a-csv-file If you are using MySql workbench then exporting data from a table is just a matter of few clicks. All you need to know is which table's data you need, what number of rows and columns you want and how your data is going to be exported. Here is an elaborate screenshot of the process:- Step 1:- Choose the table you want your data from. Here I need my data from record1 table. Step 2:- Right-click on your table and click on Export Data wizard. Step 3:- Check which columns you need to be exported and click on next. Step 4:- Select the path where you want to save the file, select the way your file should look like and click on next. Step 5:- Everything is ready click on next. Step 6:- Wait for few minutes, you can also see the progress bar and logs of data being exported. Step 7:- Everything is complete just click on next. Step 8:- Everything is done you can see the number of rows that is exported. Click on finish. Done, you will have your exported file in the directory you mentioned. Hope this helps. You can try out the following query: INSERT ...READ MORE Reference the System.Web dll in your model ...READ MORE From the version 5.1.7 onward, MySQL allows ...READ MORE using MySql.Data; using MySql.Data.MySqlClient; namespace Data { ...READ MORE MySQL Workbench is a designing or a ...READ MORE There are mainly two MySQL Enterprise Backup ...READ MORE There are majorly three ways to export ...READ MORE The reasons are as follows: The MySQL extension: Does ...READ MORE You can download the all in one ...READ MORE To install MySQL on ubuntu you have ...READ MORE OR
https://www.edureka.co/community/34761/how-to-export-data-from-mysql-to-a-csv-file
CC-MAIN-2019-22
refinedweb
304
86.81
During Liferay upgrades from v6.1 to v6.2, we often see our portlets not working correctly once deployed. The portlets often deploy correctly without any errors, and they show up in the category menu as expected. When it comes to using the forms, this is the piece that often does not work as expected. If you are experiencing an issue where your form parameters are not being passed to the portlet on Liferay, then you should know there is a simple fix. This issue with form parameters not being passed to portlets on Liferay will not happen for every portlet. It depends on how the portlet was written and what portlet framework was used. If you don’t want to change any of your portlet code, then you can usually add this snippet to your liferay-portlet.xml: <requires-namespaced-parameters>false</requires-namespaced-parameters> For more information on this topic, see this blog post from Liferay:
https://www.xtivia.com/form-parameters-not-passed-portlets-liferay/
CC-MAIN-2019-04
refinedweb
158
65.01
![if !(IE 9)]> <![endif]>. CryEngine is a game engine created by the German company Crytek in the year 2002, and originally used in the first-person shooter Far Cry. There are a lot of great games made on the basis of different versions of CryEngine, by many studios who have licensed the engine: Far Cry, Crysis, Entropia Universe, Blue Mars, Warface, Homefront: The Revolution, Sniper: Ghost Warrior, Armored Warfare, Evolve and many others. In March 2016 the Crytek company announced the release of the new CryEngine V, and soon after, posted the source code on GitHub. To perform the source code analysis, we used the PVS-Studio for Linux. Now, it has become even more convenient for the developers of cross-platform projects to track the quality of their code, with one static analysis tool. The Linux version can be downloaded as an archive, or a package for a package manager. You can set up the installation and update for the majority of distributions, using our repository. This article only covers the general analysis warnings, and only the "High" certainty level (there are also "Medium" and "Low"). To be honest, I didn't even examine all of the "High" level warnings, because there was already enough material for an article after even a quick look. I started working on the article several times over a period of a few months, so I can say with certainty that the bugs described here have living in the code for some months already. Some of the bugs that had been found during the previous check of the project, also weren't fixed. It was very easy to download and check the source code in Linux. Here is a list of all necessary commands: mkdir ~/projects && cd ~/projects git clone cd CRYENGINE/ git checkout main chmod +x ./download_sdks.py ./download_sdks.py pvs-studio-analyzer trace -- \ sh ./cry_waf.sh build_linux_x64_clang_profile -p gamesdk pvs-studio-analyzer analyze \ -l /path/to/PVS-Studio.lic \ -o ~/projects/CRYENGINE/cryengine.log \ -r ~/projects/CRYENGINE/ \ -C clang++-3.8 -C clang-3.8 \ -e ~/projects/CRYENGINE/Code/SDKs \ -j4 plog-converter -a GA:1,2 -t tasklist \ -o ~/projects/CRYENGINE/cryengine_ga.tasks \ ~/projects/CRYENGINE/cryengine.log The report file cryengine_ga.tasks can be opened and viewed in QtCreator. What did we manage to find in the source code of CryEngine V? V501 There are identical sub-expressions to the left and to the right of the '==' operator: bActive == bActive LightEntity.h 124 void SetActive(bool bActive) { if (bActive == bActive) return; m_bActive = bActive; OnResetState(); } The function does nothing because of a typo. It seems to me that if there was a contest, "Super Typo", this code fragment would definitely take first place. I think this error has every chance to get into the section, "C/C++ bugs of the month". But that's not all, here is a function from another class: V501 There are identical sub-expressions 'm_staticObjects' to the left and to the right of the '||' operator. FeatureCollision.h 66 class CFeatureCollision : public CParticleFeature { public: CRY_PFX2_DECLARE_FEATURE public: CFeatureCollision(); .... bool IsActive() const { return m_terrain || m_staticObjects || m_staticObjects; } .... bool m_terrain; bool m_staticObjects; bool m_dynamicObjects; }; The variable m_staticObjects is used twice in the function IsActive(), although there is an unused variable m_dynamicObjects. Perhaps, it was this variable that was meant to be used. V547 Expression 'outArrIndices[i] < 0' is always false. Unsigned type value is never < 0. CGFLoader.cpp 881 static bool CompactBoneVertices(...., DynArray<uint16>& outArrIndices, ....) // <= uint16 { .... outArrIndices.resize(3 * inFaceCount, -1); int outVertexCount = 0; for (int i = 0; i < verts.size(); ++i) { .... outArrIndices[....] = outVertexCount - 1; } // Making sure that the code above has no bugs // <= LOL for (int i = 0; i < outArrIndices.size(); ++i) { if (outArrIndices[i] < 0) // <= LOL { return false; } } return true; } This error is worthy of a separate section. In general, in the CryEngine code, there are a lot of fragments where unsigned variables are pointlessly compared with zero. There are hundreds of such places, but this fragment deserves special attention, because the code was written deliberately. So, there is an array of unsigned numbers - outArrIndices. Then the array is filled according to some algorithm. After that we see a brilliant check of every array element, so that none of them has a negative number. The array elements have the uint16 type. V512 A call of the 'memcpy' function will lead to underflow of the buffer 'hashableData'. GeomCacheRenderNode.cpp 285 void CGeomCacheRenderNode::Render(....) { .... CREGeomCache* pCREGeomCache = iter->second.m_pRenderElement; .... uint8 hashableData[] = { 0, 0, 0, 0, 0, 0, 0, 0, (uint8)std::distance(pCREGeomCache->....->begin(), &meshData), (uint8)std::distance(meshData....->....begin(), &chunk), (uint8)std::distance(meshData.m_instances.begin(), &instance) }; memcpy(hashableData, pCREGeomCache, sizeof(pCREGeomCache)); .... } Pay attention to the arguments of the memcpy() function. The programmer plans to copy the object pCREGeomCache to the array hashableData, but he accidentally copies not the size of the object, but the size of the pointer using the sizeof operator. Due to the error, the object is not copied completely, only 4 or 8 bytes. V568 It's odd that 'sizeof()' operator evaluates the size of a pointer to a class, but not the size of the 'this' class object. ClipVolumeManager.cpp 145 void CClipVolumeManager::GetMemoryUsage(class ICrySizer* pSizer) const { pSizer->AddObject(this, sizeof(this)); for (size_t i = 0; i < m_ClipVolumes.size(); ++i) pSizer->AddObject(m_ClipVolumes[i].m_pVolume); } A similar mistake was made when the programmer evaluated the size of this pointer instead of the size of a class. Correct variant: sizeof(*this). V530 The return value of function 'release' is required to be utilized. ClipVolumes.cpp 492 vector<unique_ptr<CFullscreenPass>> m_jitteredDepthPassArray; void CClipVolumesStage::PrepareVolumetricFog() { .... for (int32 i = 0; i < m_jitteredDepthPassArray.size(); ++i) { m_jitteredDepthPassArray[i].release(); } m_jitteredDepthPassArray.resize(depth); for (int32 i = 0; i < depth; ++i) { m_jitteredDepthPassArray[i] = CryMakeUnique<....>(); m_jitteredDepthPassArray[i]->SetViewport(viewport); m_jitteredDepthPassArray[i]->SetFlags(....); } .... } If we look at the documentation for the class std::unique_ptr, the release() function should be used as follows: std::unique_ptr<Foo> up(new Foo()); Foo* fp = up.release(); delete fp; Most likely, it was meant to use the reset() function instead of the release() one. V549 The first argument of 'memcpy' function is equal to the second argument. ObjectsTree_Serialize.cpp 1135 void COctreeNode::LoadSingleObject(....) { .... float* pAuxDataDst = pObj->GetAuxSerializationDataPtr(....); const float* pAuxDataSrc = StepData<float>(....); memcpy(pAuxDataDst, pAuxDataDst, min(....) * sizeof(float)); .... } It was forgotten, to pass pAuxDataSrc to the memcpy() function. Instead of this, the same variable pAuxDataDst is used as both source and destination. No one is immune to errors. By the way, those who are willing, may test their programming skills and attentiveness, by doing a quiz on the detection of similar bugs: q.viva64.com. V501 There are identical sub-expressions to the left and to the right of the '||' operator: val == 0 || val == - 0 XMLCPB_AttrWriter.cpp 363 void CAttrWriter::PackFloatInSemiConstType(float val, ....) { uint32 type = PFSC_VAL; if (val == 0 || val == -0) // <= type = PFSC_0; else if (val == 1) type = PFSC_1; else if (val == -1) type = PFSC_N1; .... } The developers planned to compare a real val variable with a positive zero and with a negative zero, but did this incorrectly. The values of zeros became the same after the integer constants were declared. Most likely, the code should be corrected in the following way, by declaring real-type constants: if (val == 0.0f || val == -0.0f) type = PFSC_0; On the other hand, the conditional expression is redundant, as it is enough to compare the variable with a usual zero. This is why the code is executed in the way the programmer expected. But, if it is necessary to identify the negative zero, then it would be more correct to do it with the std::signbit function. V501 There are identical sub-expressions 'm_joints[i].limits[1][j]' to the left and to the right of the '-' operator. articulatedentity.cpp 1326 int CArticulatedEntity::Step(float time_interval) { .... for (j=0;j<3;j++) if (!(m_joints[i].flags & angle0_locked<<j)&& isneg(m_joints[i].limits[0][j]-m_joints[i].qext[j]) + isneg(m_joints[i].qext[j]-m_joints[i].limits[1][j]) + isneg(m_joints[i].limits[1][j]-m_joints[i].limits[1][j]) < 2) { .... } In the last part of the conditional expression there is subtraction of the variable m_joints[i].limits[1][j] from itself. The code looks suspicious. There are a lot of indexes in the expression, one of them probably has an error. One more similar fragment: V590 Consider inspecting this expression. The expression is excessive or contains a misprint. GoalOp_Crysis2.cpp 3779 void COPCrysis2FlightFireWeapons::ParseParam(....) { .... bool paused; value.GetValue(paused); if (paused && (m_State != eFP_PAUSED) && (m_State != eFP_PAUSED_OVERRIDE)) { m_NextState = m_State; m_State = eFP_PAUSED; m_PausedTime = 0.0f; m_PauseOverrideTime = 0.0f; } else if (!paused && (m_State == eFP_PAUSED) && // <= (m_State != eFP_PAUSED_OVERRIDE)) // <= { m_State = m_NextState; m_NextState = eFP_STOP; m_PausedTime = 0.0f; m_PauseOverrideTime = 0.0f; } .... } A conditional expression is written in such a way that the result does not depend on the subexpression m_State != eFP_PAUSED_OVERRIDE. But is it really worth speaking about here if this code fragment is still not fixed after the first article? In case it is interesting, I have already described the same kind of errors in the article "Logical Expressions in C/C++. Mistakes Made by Professionals". V529 Odd semicolon ';' after 'for' operator. boolean3d.cpp 1077 int CTriMesh::Slice(...) { .... pmd->pMesh[0]=pmd->pMesh[1] = this; AddRef();AddRef(); for(pmd0=m_pMeshUpdate; pmd0->next; pmd0=pmd0->next); // <= pmd0->next = pmd; .... } One more code fragment that remained uncorrected since the last project check. But it is still unclear if this is a formatting error, or a mistake in logic. V522 Dereferencing of the null pointer 'pCEntity' might take place. BreakableManager.cpp 2396 int CBreakableManager::HandlePhysics_UpdateMeshEvent(....) { CEntity* pCEntity = 0; .... if (pmu && pSrcStatObj && GetSurfaceType(pSrcStatObj)) { .... if (pEffect) { .... if (normal.len2() > 0) pEffect->Spawn(true, pCEntity->GetSlotWorldTM(...); // <= } } .... if (iForeignData == PHYS_FOREIGN_ID_ENTITY) { pCEntity = (CEntity*)pForeignData; if (!pCEntity || !pCEntity->GetPhysicalProxy()) return 1; } .... } The analyzer detected null pointer dereference. The code of the function is written or refactored in such a way that there is now a branch of code, where the pointer pCEntity will be, initialized by a zero. Now let's have a look at the variant of a potential dereference of a null pointer. V595 The 'pTrack' pointer was utilized before it was verified against nullptr. Check lines: 60, 61. AudioNode.cpp 60 void CAudioNode::Animate(SAnimContext& animContext) { .... const bool bMuted = gEnv->IsEditor() && (pTrack->GetFlags() & IAnimTrack::eAnimTrackFlags_Muted); if (!pTrack || pTrack->GetNumKeys() == 0 || pTrack->GetFlags() & IAnimTrack::eAnimTrackFlags_Disabled) { continue; } .... } The author of this code first used the pointer pTrack, but its validity is checked on the next line of code before the dereference. Most likely, this is not how the program should work. There were a lot of V595 warnings, they won't really fit into the article. Very often, such code is a real error, but thanks to luck, the code works correctly. V571 Recurring check. The 'if (rLightInfo.m_pDynTexture)' condition was already verified in line 69. ObjMan.cpp 70 // Safe memory helpers #define SAFE_RELEASE(p){ if (p) { (p)->Release(); (p) = NULL; } } void CObjManager::UnloadVegetationModels(bool bDeleteAll) { .... SVegetationSpriteLightInfo& rLightInfo = ....; if (rLightInfo.m_pDynTexture) SAFE_RELEASE(rLightInfo.m_pDynTexture); .... } In this fragment there is no serious error, but it is not necessary to write extra code, if the corresponding checks are already included in the special macro. One more fragment with redundant code: V575 The 'memcpy' function doesn't copy the whole string. Use 'strcpy / strcpy_s' function to preserve terminal null. SystemInit.cpp 4045 class CLvlRes_finalstep : public CLvlRes_base { .... for (;; ) { if (*p == '/' || *p == '\\' || *p == 0) { char cOldChar = *p; *p = 0; // create zero termination _finddata_t fd; bool bOk = FindFile(szFilePath, szFile, fd); if (bOk) assert(strlen(szFile) == strlen(fd.name)); *p = cOldChar; // get back the old separator if (!bOk) return; memcpy((void*)szFile, fd.name, strlen(fd.name)); // <= if (*p == 0) break; ++p; szFile = p; } else ++p; } .... } There might be an error in this code. The last terminal null is lost during the copying of the last string. In this case it is necessary to copy the strlen() + 1 symbol or use special functions for copying the strings: strcpy or strcpy_s. V521 Such expressions using the ',' operator are dangerous. Make sure the expression '!sWords[iWord].empty(), iWord ++' is correct. TacticalPointSystem.cpp 3243 bool CTacticalPointSystem::Parse(....) const { string sInput(sSpec); const int MAXWORDS = 8; string sWords[MAXWORDS]; int iC = 0, iWord = 0; for (; iWord < MAXWORDS; !sWords[iWord].empty(), iWord++) // <= { sWords[iWord] = sInput.Tokenize("_", iC); } .... } Note the section of the for loop with the counters. What is a logic expression doing there? Most likely, it should be moved to the loop condition; thus we'll have the following code: for (; iWord < MAXWORDS && !sWords[iWord].empty(); iWord++) {...} V521 Such expressions using the ',' operator are dangerous. Make sure the expression is correct. HommingSwarmProjectile.cpp 187 void CHommingSwarmProjectile::HandleEvent(....) { .... explodeDesc.normal = -pCollision->n,pCollision->vloc[0]; .... } One more strange code fragment with the ',' operator. V571 Recurring check. The 'if (pos == npos)' condition was already verified in line 1530. CryString.h 1539 //! Find last single character. // \return -1 if not found, distance from beginning otherwise. template<class T> inline typename CryStringT<T>::....::rfind(....) const { const_str str; if (pos == npos) { // find last single character str = _strrchr(m_str, ch); // return -1 if not found, distance from beginning otherwise return (str == NULL) ? (size_type) - 1 : (size_type)(str - m_str); } else { if (pos == npos) { pos = length(); } if (pos > length()) { return npos; } value_type tmp = m_str[pos + 1]; m_str[pos + 1] = 0; str = _strrchr(m_str, ch); m_str[pos + 1] = tmp; } return (str == NULL) ? (size_type) - 1 : (size_type)(str - m_str); } The analyzer detected a repeated check of the pos variable. A part of the code will never be executed because of this error. There is also duplicate code in the function, that's why this function is worth rewriting. This code was successfully duplicated in another place: V523 The 'then' statement is equivalent to the 'else' statement. ScriptTable.cpp 789 bool CScriptTable::AddFunction(const SUserFunctionDesc& fd) { .... char sFuncSignature[256]; if (fd.sGlobalName[0] != 0) cry_sprintf(sFuncSignature, "%s.%s(%s)", fd.sGlobalName, fd.sFunctionName, fd.sFunctionParams); else cry_sprintf(sFuncSignature, "%s.%s(%s)", fd.sGlobalName, fd.sFunctionName, fd.sFunctionParams); .... } There is an attempt to print the string regardless of its content. There are many such fragments in the code, here are some of them: V610 Undefined behavior. Check the shift operator '<<'. The left operand '-1' is negative. physicalplaceholder.h 25 class CPhysicalEntity; const int NO_GRID_REG = -1<<14; const int GRID_REG_PENDING = NO_GRID_REG+1; const int GRID_REG_LAST = NO_GRID_REG+2; The analyzer can find several types of error which lead to undefined behavior. According to the latest standard of the language, the shift of a negative number to the left results in undefined behavior. Here are some more dubious places: Another type of undefined behavior is related to the repeated changes of a variable between two sequence points: V567 Undefined behavior. The 'm_current' variable is modified while being used twice between sequence points. OperatorQueue.cpp 101 boolCOperatorQueue::Prepare(....) { ++m_current &= 1; m_ops[m_current].clear(); return true; } Unfortunately, this fragment is not the only one. In the CryEngine V code I saw quite an amusing way of communication between the developers with the help of comments. Here is the most hilarious comment that I found with the help of the warning: V763 Parameter 'enable' is always rewritten in function body before being used. void CNetContext::EnableBackgroundPassthrough(bool enable) { SCOPED_GLOBAL_LOCK; // THIS IS A TEMPORARY HACK TO MAKE THE GAME PLAY NICELY, // ASK peter@crytek WHY IT'S STILL HERE enable = false; .... } Further on, I decided to look for similar texts and note down a couple of them: .... // please ask me when you want to change [tetsuji] .... // please ask me when you want to change [dejan] .... //if there are problems with this function, ask Ivo uint32 numAnims = pCharacter->GetISkeletonAnim()->GetNumAnimsInFIFO(layer); if (numAnims) return pH->EndFunction(true); .... //ask Ivo for details //if (pCharacter->GetCurAnimation() && // pCharacter->GetCurAnimation()[0] != '\0') // return pH->EndFunction(pCharacter->GetCurAnimation()); .... ///////////////////////////////////////////////////////////////// // Strange, !do not remove... ask Timur for the meaning of this. ///////////////////////////////////////////////////////////////// if (m_nStrangeRatio > 32767) { gEnv->pScriptSystem->SetGCFrequency(-1); // lets get nasty. } ///////////////////////////////////////////////////////////////// // Strange, !do not remove... ask Timur for the meaning of this. ///////////////////////////////////////////////////////////////// if (m_nStrangeRatio > 1000) { if (m_pProcess && (m_pProcess->GetFlags() & PROC_3DENGINE)) m_nStrangeRatio += cry_random(1, 11); } ///////////////////////////////////////////////////////////////// .... // tank specific: // avoid steering input around 0.5 (ask Anton) .... CryWarning(VALIDATOR_MODULE_EDITOR, VALIDATOR_WARNING, "....: Wrong edited item. Ask AlexL to fix this."); .... // If this renders black ask McJohn what's wrong. glGenerateMipmap(GL_TEXTURE_2D); .... The most important question to the developers: why don't they use specialized tools for the improvement of their code? Of course, I mean PVS-Studio. :) I should note once again that this article provides only some of the errors we found. I didn't even get to the end of the High level warnings. So, the project is still waiting for those who may come and check it more thoroughly. Unfortunately, I cannot spend that much time, because dozens of other projects are waiting for me. Having worked on the development of an analyzer, I came to the conclusion that it is just impossible to avoid errors, if the team increases or decreases in size. I am really not against Code Review, but it's not hard to count the amount of time that a team lead will have to spend reviewing the code of ten people. What about the next day? What if the number of developers is more than 10? In this case, the Code Review would only be necessary when editing key components of the product. This approach would be extremely ineffective if there is more code, and more people, in a team. The automated check of the code with the help of static analyzers will greatly help the situation. It is not a substitute for the existing tests, but a completely different approach to the code quality (by the way, static analyzers find errors in the tests too). Fixing bugs at the earliest stages of development doesn't really cost anything, unlike those that are found during the testing phase; the errors in the released product may have enormous cost. You may download and try PVS-Studio by this link. In case you want to discuss the licensing options, prices, and discounts, contact us at support. Don't make the unicorn sad by writing bad ...
https://www.viva64.com/en/b/0495/
CC-MAIN-2019-09
refinedweb
2,989
50.94
The incredible power of voice Published: 10/10/2018 Last Updated: 10/10/2018 Here we are with a new post and with this week we finish the description of the main features of our project TraIND40. In the next and last post of this Challenge we will be ready to tell you about our project in its totality. This week we want to address another aspect that can greatly enhance the user experience within a VR/MR app. During the post about UX and UI we talked about the "learnability" of UX, that is how long the user discover and remember the main functions and mechanisms of interaction, and “usability”, how easy it is for the user to interact with the system. Very often in VR apps these aspects can be very critical. In the post number 3 we dealt with the theme of Mixed Reality device controllers like HPXXX and we saw that they are very feature-rich and can be used for various purposes. All these possibilities can be difficult for the user to manage and voice commands can be an easy to remember shortcut to speed up many operations. Voice input is a natural way to interact with an object in mixed reality. It can help users to reduce the time and minimize effort. Windows Mixed Reality shell provides system level commands such as “Select”, “Close”, “Move This” and “Face Me”, but with MRTK, we can easily add voice commands to our experiences. The steps to be taken are the following: - Enable microphone capability - Add SpeechInputSource - Define new Keyword and relative keyboard shortcut - Create a script that implement interface ISpeechHandler - Handle the event OnSpeechKeywordRecognize public class SpeechCommandManager : MonoBehaviour, ISpeechHandler { public void OnSpeechKeywordRecognized(SpeechEventData eventData) { // ..... string word = eventData.RecognizedText.ToLower(); switch (word) { case "close menu": menuScript.UpdateState(); break; case "remove": menuScript.Remove(); break; // .... default: break; } } } And this is the result. See you next week with the final video and final post. Product and Performance Information Performance varies by use, configuration and other factors. Learn more at.
https://www.intel.com/content/www/us/en/developer/articles/technical/the-incredible-power-of-voice.html
CC-MAIN-2022-33
refinedweb
339
53
`ortoolpy` is a package for Operations Research. Project description ortoolpy is a package for Operations Research. It is user’s responsibility for the use of ortoolpy. from ortoolpy import knapsack size = [21, 11, 15, 9, 34, 25, 41, 52] weight = [22, 12, 16, 10, 35, 26, 42, 53] capacity = 100 knapsack(size, weight, capacity) Show Table import ortoolpy.optimization %typical_optimization Requirements - Python 3, pandas, pulp, more-itertools Features - This is a sample. So it may not be efficient. - ortools_vrp using Google OR-Tools ( ). Setup $ pip install ortoolpy History 0.0.1 (2015-6-26) - first release Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages. Built Distribution ortoolpy-0.3.0-py3-none-any.whl (24.7 kB view hashes)
https://pypi.org/project/ortoolpy/
CC-MAIN-2022-21
refinedweb
140
52.66
§JSON automated mapping If the JSON maps directly to a class, we provide a handy macro so that you don’t have to write the Reads[T], Writes[T], or Format[T] manually. Given the following case class : case class Resident(name: String, age: Int, role: Option[String]) The following macro will create a Reads[Resident] based on its structure and the name of its fields : import play.api.libs.json._ implicit val residentReads = Json.reads[Resident] When compiling, the macro will inspect the given class and inject the following code, exactly as if you had written it manually : import play.api.libs.json._ import play.api.libs.functional.syntax._ implicit val residentReads = ( (__ \ "name").read[String] and (__ \ "age").read[Int] and (__ \ "role").readNullable[String] )(Resident) This is done at compile-time, so you don’t lose any type safety or performance. Similar macros exists for a Writes[T] or a Format[T] : import play.api.libs.json._ implicit val residentWrites = Json.writes[Resident] import play.api.libs.json._ implicit val residentFormat = Json.format[Resident]Requirements These macros rely on a few assumptions about the type they’re working with : - It must have a companion object having apply and unapply methods - The return types of the unapply must match the argument types of the apply method. - The parameter names of the apply method must be the same as the property names desired in the JSON. Case classes natively meet these requirements. For more custom classes or traits, you might have to implement them.
https://www.playframework.com/documentation/2.6.6/ScalaJsonAutomated
CC-MAIN-2018-22
refinedweb
260
50.23
In This Section SqlTypes and the DataSet Describes type support for SqlTypes in the DataSet. Handling Null Values Demonstrates how to work with null values and three-valued logic. Comparing GUID and uniqueidentifier Values Demonstrates how to work with GUID and uniqueidentifier values in SQL Server and the .NET Framework. Date and Time Data Describes how to use the new date and time data types introduced in SQL Server 2008. Large UDTs Demonstrates how to retrieve data from large value UDTs introduced in SQL Server 2008. XML Data in SQL Server Describes how to work with XML data retrieved from SQL Server. Reference DataSet Describes the DataSet class and all of its members. System.Data.SqlTypes Describes the SqlTypes namespace and all of its members. SqlDbType Describes the SqlDbType enumeration and all of its members. DbType Describes the DbType enumeration and all of its members.
https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/sql-server-data-types?view=netframework-4.7
CC-MAIN-2019-47
refinedweb
145
65.52
Trying to Add VM Tags Via vRO Workflownateconehealth Jan 15, 2018 7:13 AM UPDATE: Just realized that for the property (vRA.Deployment.Tags) I only have the tag name entered in the Value column for the dropdown. Do I need to add a way to reference the tag category? Not sure how to add that to the property. UPDATE 2: Doesn't look like the workflow requires a tag category. I'm trying to force the workflow to give me more information in the logs, see at what step it's failing. UPDATE 3: Found the issue, but not sure how to fix it. So the issue is that in the Variables section of the last run of the setVcTags workflow, I can see the following: vmName: (here it gives the correct VM name) errorCode: "Could not find VM (Dynamic Script Module name : findVcVmByName#7)" Because the result of vmName isn't getting properly passed to the vm attribute (type: VC:VirtualMachine), or at least it can't find it based off that name, the workflow ends. I may end up opening a ticket with VMware to see why it's not getting passed properly. Hey everybody, Novice vRO user here (yaaaay) - so I've been tasked with having vRA assign 1 of 2 tags to every single VM when it's being created (for these examples, "Backup Tag 1" and "Backup Tag 2"). We're doing this to enable auto backups in Avamar. Here's the site that I've been using: Assigning vCenter tags using vRealize Orchestrator – The vGoodie-bag So far I have: - In vRO client, run "Import vAPI metamodel" successfully. - Imported this package: vCenterTagging/com.vGoodie-Bag.library.vapi.package at master · KnutssonDevelopment/vCenterTagging · GitHub - Went to the setVcTags workflow in vRO (vGoodie-Bag > setVcTags) > edit > endpoint > changed to "https://[my vCenter FQDN]/api" - Created property "vRA.Deployment.Tags" as a dropdown (X = Backup Tag 1, Y = Backup Tag 2) and added it to the Blueprint. - (I already have a LifeCycle Property Group with pretty much all the "VMPSMasterWorkflow32" entries, plus two "CloneWorkflow" ones, so just made sure that was in the Blueprint as well.) - Created a new Subscription: setVcTags > Machine provisioning > Lifecycle state State selected the "setVcTags" workflow from the imported package > Finish - NOTE: This is one of the steps where I got confused, since they mention sorting workflows in vRO but don't give any information at all - I'm assuming the fact that I chose a state phase when creating the subscription means I'm good? Now when I build a VM in vRA, it successfully builds, and after a while vRO says that the setVcTags workflow ran successfully (under Logs it has two messages - "Settings vCenter Tags" and then "Tags: Backup Tag 1") - however, no Tags appear in vCenter for that VM. I'm assuming I'm just missing something with how I've set this up in vRO - obviously vRA is calling the workflow because I see it running, but nothing's making it back to the VM. Maybe I haven't tied in the right action at the end of the workflow or something? I'm really sorry if I've missed something obvious, this is my second time trying any serious workflow, and the first one I'm piecing together myself. Let me know if any other info is helpful, and I would love any suggestions of what to check next. Thanks! 1. Re: Trying to Add VM Tags Via vRO Workflowdaphnissov Jan 12, 2018 12:29 PM (in response to nateconehealth) Especially if you're a vRO novice, I'd highly recommend you check out the SovLabs module for vSphere tagging. Using this requires no custom vRO code and is as simple as adding a custom property to your blueprint or anywhere else that creates a tag, and the module does the rest. I actually have a blog post coming out soon about this, so I'll try and post it here to my profile first so you can get the idea of how simple that is. 2. Re: Trying to Add VM Tags Via vRO Workflownateconehealth Jan 12, 2018 12:44 PM (in response to daphnissov) Their vRO modules seem awesome, the problem is that their stuff costs money. Don't think I'd be able to get approval for that at the moment. 3. Re: Trying to Add VM Tags Via vRO Workflowdaphnissov Jan 12, 2018 2:53 PM (in response to nateconehealth) 4. Re: Trying to Add VM Tags Via vRO Workflowhawks76 Jan 25, 2018 11:34 AM (in response to daphnissov)We add tags by invoking a powershell script and running powercli commands. Works like a champ everytime. Very simple and straighforward. 5. Re: Trying to Add VM Tags Via vRO Workflownateconehealth Jan 25, 2018 11:45 AM (in response to hawks76) Huh, that's awesome. For someone just getting into this, would you mind sharing how you have it setup and what the script is? Thanks! 6. Re: Trying to Add VM Tags Via vRO Workflowhawks76 Jan 25, 2018 12:28 PM (in response to nateconehealth)Sure. Give me a little bit to put all the info together and i'll post it. 7. Re: Trying to Add VM Tags Via vRO Workflownateconehealth Jan 25, 2018 12:33 PM (in response to hawks76) Awesome, thanks! You have no idea how excited I am - been trying to tweak a workflow I imported for days now and getting nothing but errors. 8. Re: Trying to Add VM Tags Via vRO Workflowhawks76 Jan 26, 2018 8:45 AM (in response to nateconehealth) Ok, So, here is how we do tags Components: Resource Element with the following script: $DC = "%%location%%" $appCode = "%%appcode%%" If ($DC -eq "DCNAME1") { $vc = "VCNAME1" } If ($DC -eq "DCNAME2") { $vc = "VCNAME2" } ##### Imports Core Module ##### If (!(Get-Module VMWare.VimAutomation.Core)) { Import-Module VMWare.VimAutomation.Core } ##### Connects to vCenter ##### Connect-VIServer $vc -Credential $creds | Out-Null ##### Validates tag exist, and if not, creates it, and attaches it to specific category ##### If (!(Get-Tag $appCode -ErrorAction SilentlyContinue)) { New-Tag -Name $appCode -Category (Get-TagCategory APP_CODE) -ErrorAction SilentlyContinue } ##### Executes tag assignment ##### if (!(New-TagAssignment -Entity (Get-VM %%vmName%%) -Tag (Get-Tag $appCode) -ErrorAction SilentlyContinue)) { return $false } else { return $true } ##### Closes connection to vCenter ##### Disconnect-VIServer * -Force -Confirm:$false Custom Action called "invokeScript" Here is Custom Action Script Contents to copy: var output; var session; try { session = host.openSession(); output = invokeScript(host,script,session.getSessionId()) ; } finally { if (session){ host.closeSession(session.getSessionId()); } } return output; function invokeScript(host,script,sessionId){ if(sessionId == null){ throw "PowerShellInvocationError: Invalid session." } var oSession = host.getSession(sessionId) if (oSession == null ) { throw "PowerShellInvocationError: Invalid session." } System.debug("Invoke command in session " + sessionId); var result = oSession.invokeScript(script); if (result.invocationState == 'Failed'){ throw "PowerShellInvocationError: Errors found while executing script \n" + result.getErrors(); } //System.log ( result.getHostOutput() ); return ( result.getHostOutput() ); } Here is the workflow setup Here, we are tagging each vm with an application code. Ofcourse, you could change the property that gets sent in to anything you want. As well, the Categorys we use are already setup. Not all the App codes are setup, so as shown in the ResourceElement, it checks for it first, and creates it if it doesn't exist. We trigger it to run base on an Event Subscription for conditions MachineActivated and POST. I'm sure there will be questions, as i'm terrible at explaining things. Thanks! 9. Re: Trying to Add VM Tags Via vRO Workflownateconehealth Jan 29, 2018 12:38 PM (in response to hawks76) That's very helpful, thanks! This is new to me so getting the PowerShell host added and working with scripts is going to take a while, but this is a great start. I do have one question so far - I created the workflow and entered the script as you put, just changing the following: "site.com.rf.app.code" to "vRA.Deployment.Tags" (had already created this as a custom property) "site.com.rf.actions" to (module name that I created) However, when I go to save it, it's saying there's a syntax error on this line: if (!(actionResult.match( "True" ) != null)) {throw ("TagError"); ) ; Any ideas on that? Thanks! 10. Re: Trying to Add VM Tags Via vRO Workflowhawks76 Jan 29, 2018 1:12 PM (in response to nateconehealth) Check the quotes. Sometimes when quotes get copied, they don't copy correctly. just remove and replace them and see if that helps. Just saw the error. You need to change this ("TagError"); ) ; to this ("TagError"); } ; Closing bracket is ) instead of }. 11. Re: Trying to Add VM Tags Via vRO WorkflowBrian Knutsson Feb 6, 2018 5:02 AM (in response to nateconehealth) Hi, if you ask your questions on the article on my blog I notice them, and try to respond. Assigning vCenter tags using vRealize Orchestrator – The vGoodie-bag Did you remember to create the tags in vCenter? 12. Re: Trying to Add VM Tags Via vRO WorkflowBrian Knutsson Feb 6, 2018 5:06 AM (in response to hawks76) The benefit of using javascript in orchestrator is that it is MUCH faster and easier when returning data.
https://communities.vmware.com/thread/579949
CC-MAIN-2018-13
refinedweb
1,527
60.04
16 April 2013 20:13 [Source: ICIS news] SAN FRANCISCO (ICIS)--On-purpose butadiene (BD) will account for about 8% of global supply by 2017, an industry consultant said on Tuesday. Bill Hyde, a C4 consultant for IHS Chemical, said that by 2022 that figure will increase to 10-12%. He made his comments during a speech at the International Institute for Synthetic Rubber Producers (IISRP) Annual General Meeting. “[On-purpose] butadiene will become the global price-setting mechanism, but not until the mid-2020s,” he said. While much of the talk in the US has been about TPC Group’s on-purpose BD project, expected to come online in 2017, Hyde said two on-purpose BD facilities are already up and running. These facilities are capable of producing about 800,000 tonnes of BD, but Hyde said the plants are running at about 60% of capacity. That’s because over the next few years the petrochemical industry will be adding more capacity than is necessary, Hyde said. He sees “very soft growth” in BD demand for the rest of 2013, a slight uptick in 2014 and then perhaps stronger growth – but still modest – after 2015. The BD market has been “terrible” for about a year, Hyde said. “So far this year, we don’t have a lot of optimism [for demand],” he said. “But we also haven’t seen fire-sale type prices, either.” Back in January, the general forecast for the US BD contract price was for prices to rise slowly, generally getting back to over $1/lb (€0.77/lb). That forecast has now been significantly reduced, and no one in the BD industry expects the price to get back to $1/lb in 2013. In March, the US Gulf contract price for BD rose by 8 cents/lb to 84 cents/lb from 76 cents/lb. For April, the contract price rolled over and, according to most market sources, it is expected to drop by a few cents when the May contract price is negotiated over the next few weeks. Looking forward, Hyde said that on-purpose BD will be “the key game-changer” for the industry. He said there are only two places in the world where on-purpose BD will work – the ?xml:namespace> “Anywhere else in the world you don’t have low-cost feedstock, so you don’t have the same drivers,”
http://www.icis.com/Articles/2013/04/16/9659659/on-purpose-bd-will-be-8-of-global-production-by-2017-consultant.html
CC-MAIN-2014-42
refinedweb
400
67.49
I am new to this and I am sorry if this is an obvious question but I cannot seem to find an answer. I am trying to understand the proper method to do processing on camera streams using VPI 1.0 and I was wondering where I can get the source code for the VPI Remap Demo v1.1, especially loading the input from cameras and displaying on screen. I tried the VPI Python API but I am stuck, see the code below import jetson.inference import jetson.utils import vpi import numpy as np from PIL import Image # create display window display = jetson.utils.glDisplay() # create camera device camera = jetson.utils.gstCamera(1920, 1280, '0') # open the camera for streaming camera.Open() # capture frames until user exits while display.IsOpen(): image, width, height = camera.CaptureRGBA() copied_img = jetson.utils.cudaAllocMapped(width=image.width, height=image.height, format=image.format) jetson.utils.cudaMemcpy(copied_img, image) arr=jetson.utils.cudaToNumpy(copied_img); input=vpi.asimage(np.uint8(arr)) with vpi.Backend.CUDA: output = input.convert(vpi.Format.U8).box_filter(5, border=vpi.Border.ZERO) display.RenderOnce(jetson.utils.cudaFromNumpy(np.array(Image.fromarray(output.cpu()))), width, height) display.SetTitle("{:s} | {:d}x{:d} | {:.1f} FPS".format("Camera Viewer", width, height, display.GetFPS())) # close the camera camera.Close() I tried the jetson.utils to open a camera and read an image (which is saved as CudaImg). I converted it to VPI image, did the processing but I could not get it to render back using jetson.utils renderer. I tried the same with opencv and I do not understand how to convert VPIimage to an image that can be used with cv2.imshow. Is there a full python API reference with all functions for VPI somewhere, I could not find that either. Ideally, if there was the source code for the VPI demos published somewhere, it would be very helpful. Kindly let me know how to proceed with this.
https://forums.developer.nvidia.com/t/vpi-pipeline-with-csi-camera-and-output-render-on-screen/195731
CC-MAIN-2022-33
refinedweb
325
52.46
Using SPA With Offload™ Offload KB - case-studies Old Content Alert Please note that this is a old document archive and the will most likely be out-dated or superseded by various other products and is purely here for historical purposes. Using the Offload™ compiler does not stop you from doing all the funky SPU tricks you could have ever dreamed of. The Offload™ compilers intention is simply to get code onto the SPU quickly and with type safety. #include <liboffload> int func() { int result; int bar1 = 13; __blockingoffload() { int bar2 = 42; result = foo1(bar2); result += foo2(&bar2); result += foo3(&bar1); }; return result; } In the above example, we are trying to call three functions, foo1, foo2 & foo3 from within an Offload™ block. For arguments sake, say we want to write each of these three functions in SPA, for performance reasons. The first step is to change the code like so: #include <liboffload> extern "C" __offload int foo1(int); extern "C" __offload int foo2(int * ); extern "C" __offload int foo3(__outer int * ); int func() { int result; int bar1 = 13; __blockingoffload() { int bar2 = 42; result = foo1(bar2); result += foo2(&bar2); result += foo3(&bar1); }; return result; } We have added three extern "C" __offload definitions to the code. Each of these definitions is telling the compiler there is an external method, using C calling conventions (e.g. no mangling of the function name!) that can be called only from the SPU. the Offload™ compiler will generate prototype functions for the SPU code to link against these functions. Content has been removed. To link an Offload™ block against an SPU object file we need to put the SPU object file into an archive, by using spu-lv2-ar. For this example, lets say we generate libSPA.a. To link this into the Offload™ block we add -BEspuL"C:pathtolibSPA" -BEspulSPA to the command line (see /kb/125.html & /kb/126.html for more details).We've just linked our type-safe, easily offloaded Offload™ block to SPA, with relatively little effort! Offload™ can be linked against any SPU archive in this manner too, but crucially only against extern "C" functions - functions that are not mangled. Care should be taken when passing __outer pointer arguments into these functions, as once passed to code not going through the Offload™ compiler the pointer loses its type safety.
https://www.codeplay.com/products/offload/kb/using-spa-with-offload.html
CC-MAIN-2021-04
refinedweb
389
59.33
0 So for my program, I have to allow the user to enter a character then ask them how many of that character they want. I'm saving the character and the quantity into two different arrays. I've attached my code but when I go to display the array, it doesn't print the character that I have put in. #include <stdio.h> #include <stdlib.h> #define pause system ("pause") main(){ char characterOfChoice = '0', charChoice = 'c', charArray[100]; int quantityOfChar = 0, i = 0, intChoice = 0, intArray[100]; printf("Enter a character of your choice "); scanf("%c", &charArray[charChoice]); //save char to a temp char printf("Enter a quantity of that character "); scanf("%d", &intArray[intChoice]); //save int to a temp value for(i=0; i<100; i++){ charArray[charChoice] = '!'; intArray[i] = 0; }//End for Loop for(i=0; i<10; i++){ printf("%c ", charArray[charChoice]); } pause; }//End main
https://www.daniweb.com/programming/software-development/threads/439278/storing-user-input-to-an-array
CC-MAIN-2016-50
refinedweb
149
58.62
Conn's journey through Phoenix If you want to understand the Phoenix web framework, you really want to understand Plug. Plug is an Elixir web request library that's hard to define succinctly because it plays several roles. It's an adapter that takes an HTTP request from the Cowboy web server and returns a struct representing both the request and the eventual response. It's also a specification for thin middleware layers (called plugs) that accept and return the Plug.Conn struct. The struct is referred to in function arguments as 'conn' so I'll mostly refer to it that way here. You can easily build your own plugs to insert into an existing framework or you could stack an entire framework on top of Plug , as Chris McCord did with Phoenix. The struct is defined right here. In this blog post I want to follow the journey of conn, in the context of how this blog post was rendered. The first Phoenix plug is Endpoint, which in its 'call' function takes the base struct from the Plug library, adds the path name and your application's secret key, and then pipes to all of the plugs listed in your application's endpoint.ex file. The very first of them (assuming default configuration) is Plug.Static, which is used to bypass most of the Phoenix framework when a request for a static asset like a CSS file comes in. The bypassing part is done like this: def halt(%Conn{} = conn) do %{conn | halted: true} end In other words, if conn's halted key is set to true, the rest of the pipeline won't touch it. Right after that, Plug.RequestId generates a unique ID for the request and sets conn's req_headers key accordingly. One of the next plug modules is Plug.MethodOverride, which takes an HTTP verb from a POST request's _method parameter and uses it to set conn's method key. Let's take a look: # Plug.MethodOverride @allowed_methods ~w(DELETE PUT PATCH) defp override_method(conn, body_params) do method = (body_params["_method"] || "") |> String.upcase cond do method in @allowed_methods -> %{conn | method: method} true -> conn end end If the method from body_params pattern matches against the list of allowed methods, the method key is changed. Otherwise, conn is just returned. There are a few more "administrative" plugs like that and then finally... plug AdamczDotCom.Router Yep, the router is yet another plug. It takes conn, transforms a few keys and spits it back out. In a Phoenix router, you'll see a pipeline macro that looks like this: pipeline :browser do plug :accepts, ["html"] plug :fetch_session plug :fetch_flash plug :protect_from_forgery plug :put_secure_browser_headers end Those are the default plugs, and on this site there's also an auth plug in there to stop random folks from editing this blog. Hopefully it works and I really wrote all of this. Those plugs get added to conn's before_send key. The router also adds the blog post you requested to the params key and and the appropriate controller action to the private key: before_send: (all the plugs from pipeline :browser) params: %{"slug" => "conns-journey-through-phoenix"} private: %{:phoenix_action => :show, phoenix_controller => AdamczDotCom.BlogController, :phoenix_format => "html", etc.} Just as with any MVC framework, the Phoenix controller (which is of course a plug) asks the data layer for resources specific to this request. Then in the final step, it passes those resources to a render function as "assigns." For this page, "assigns" contains things I added like the blog post's title & body content, plus some things Phoenix added behind the scenes like the layout that wraps my blog template. def render(conn, template, assigns) # bunch of code send_resp(conn, conn.status || 200, content_type, data) end I'm glossing over how 'render' works because that's surely a blog post of its own. But at a high level it takes the completed conn, the blog template, the blog content, and hands a response to the very last plug, Plug.Conn.send_resp. And now you're looking at it. This approach of transforming a data structure through a series of functions until it contains a completed HTML string is both simple, and fast. It's also really flexible, since it is so easy to inject your own functions (provided they uphold the plug contract) at any point along the way, or halt and skip the rest of the stack. I'm having a lot of fun learning Phoenix, and gladly welcome feedback and/or corrections if I have any of the details wrong. Feel free to send me an email, aczerepinski at Google's email service. Thanks for reading! -Adam
https://www.adamcz.com/blog/conns-journey-through-phoenix
CC-MAIN-2022-05
refinedweb
776
69.82
monday july 25 Filter by Day: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 Failed to enable constraints. Posted by Mike Moore at 7/25/2005 8:21:02 PM In my web form I recieve the following message - Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints. I believe that part of the problem resolves around having a nulls (which i need) in a foreign key column. When i pu... more >> my issues with FT Search Posted by barak.benezer NO[at]SPAM gmail.com at 7/25/2005 6:35:20 >> creating trigger on indexed view Posted by Zeng at 7/25/2005 5:42:21 PM Hi, Is it possible to create trigger on an indexed view? I tried and it keep give me this error: Server: Msg 208, Level 16, State 4, Procedure Tr_TmpTrigger, Line 1 Invalid object name 'dbo.VIEW_MYVIEW'. I also attempted to use Enterprise Manager tool to create the trigger on the indexe... more >> Can update accumulate? Posted by David at 7/25/2005 5:14:18 PM I need to write an UPDATE statement that adds to a field from data in another table. Can someone help? below is sample: UPDATE TableA SET Total = Total + TableB.Amount FROM TableB JOIN TableA ON TableB.EmpNo = TableA.EmpNo WHERE TableB.PrdYr = 2005 When I do this, it does not add in the... more >> Simple nested SP without Recordset Posted by realraven2000 NO[at]SPAM hotmail.com at 7/25/2005 3:39:25 PM HI I want to make a nested SP that finds out whether a certain record exists (keys: stockItemId and cartId) and return a contents of a "Quantity" column, else 0; I can't find anything in the BOL on how to select into a local variable. Also how do I avoid the SP returning an open recordset? An... more >> find identity column Posted by Britney at 7/25/2005 3:27:18 PM hi, how to find out whether my database have identity columns or not? I don't want to go check through all the tables one by one. ... more >> SQL2K:How to insure that data is NOT recoverable by forensic metho Posted by BAG at 7/25/2005 3:21:01 PM My customer has a lot of govt customers and has a non-classified SharePoint 2003 implementation with SQL2K for the backend (all SPS 2003 content is stored in SQL server). They're very concerned about what to do if/when a user unintentionally uploads a classified document to the site. Deleting ... more >> DBCC Shrinkfile with Emptyfile clause Posted by MichaelW at 7/25/2005 3:20:04 PM I have 3 datafiles (A,B,C) in one filegroup and due to space I want to move the data from one of the files to another drive. Will the following work? 1) Create new file (D) on seperate drive 2) Do not allow growth for A and B 3) Run DBCC shrinkfile with the empty file clause against C. If... more >> Don't see what you're looking for? Search DevelopmentNow.com. a price range dimension question Posted by === Steve L === at 7/25/2005 3:03:57 PM I'm using sql2k. I'm providing a simplified scenario here. I'm trying to build a fact table on sales (ie. item, price, quantity, price*quantity). I'd like build a cube that I can look up the price by range ($0-$5, $5-10, $10-$15, etc...). What's the best way to handle this? do i need a price... more >> Severity 20 Error Posted by Kalvin at 7/25/2005 2:58:06 PM I keep getting Severity 20 errors from my server. How can I find out what is causing the error? I have version 8.00.760 and is used with a large 3rd party application. We can't upgrade to SP 4 and still be able to get help from them on other issues since they haven't "blessed" the service pa... more >> export to unicode textfile with tsql Posted by gerben at 7/25/2005 2:48:01 PM Hello, is it possible to write the he result of query to a unicode textfile instead of an ansi textfile. i need this because i want to generate udl files on the fly. any help appriciated. Gerben... more >> Stored Procedure Posted by news.microsoft.com at 7/25/2005 2:15:26 PM When calling multi select statements in a stored procedure how do you set the namespace to each select statement? Thanks Chris ... more >> Complex queries using WHERE and mix of OR and AND Posted by Kate at 7/25/2005 2:04:04 PM How do you effectively mix OR and AND together? I have the query below for my SEARCH page. I would like the users to have the option of selecting one field to search with OR selecting pairs of fields together to search the database with. The problem is, with the SQL statement below, OR works (... more >> Dynamic Decimal Format Posted by Scott2624 at 7/25/2005 1:56:01 PM I have a number: 3320.8000000. My expected result is 3320.80 based on a dynamic decimal parameter for example: declare idecimal int select @idecimal=3 convert(decimal(20,@idecimal),number) does not work - error message. How can I accomplish this? ... more >> UPDATE SQL HELP Posted by MS User at 7/25/2005 1:40:20 PM SQL 2K I have a table 'TripMovement' with columns CarID, TripType, TripDate, ....... (These three columns form the PRIMARY-KEY) Each trip will have an entry in table 'TripMovement' , there are four different 'TripType' (A, B, C and D) For a trip cycle, Trip will start with type 'A... more >> cdosysmail with attachment Posted by 26point2er at 7/25/2005 1:32:01 PM With cdosysmail I have gotten my code to work to programatically send emails out as HTML. Does anyone know how to tweak the standard CDO example to send the query results as an attachment? thanks ... more >> Embeded case in where clause - causing problems. Posted by Madler at 7/25/2005 1:04:38 PM Hi. I have a sp that allows the user to search based on a variable set of = parameters for example just a home phone or just a buss. phone however = when searching by last name the user also has to supply the ZIP optionally he can filter that search by first name or address. I am = trying to ... more >> Aggregate problem Posted by David C at 7/25/2005 12:50:35 PM I am getting the following error in my SQL. Column 'dbo.ClientWorkerStatus.WorkerTypeID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. My code is below. What I want to get is up to 1, 2 or 3 counts for one ClientID as t... more >> Stored Procedure Question Posted by Raymond Yap at 7/25/2005 12:17:46 PM Dear All, Got a question with stored procedure, is there any way with in a stored procedure to capture where or what is calling the stored procedure, an ASP page or another stored procedure? other than using sp_depends. Thanks for help. Raymond... more >> VARCHAR column index -- dirty pages -- checkpoint Posted by Srini at 7/25/2005 12:12:02 PM Hi, I have a problem with index pages on a varchar column in my database. The varchar field is an Identifier which is not unique. The field length is 15 DataId varchar (15) I have an index on that column. When we are inserting data in the table, we do lot of transactions per second (~2000)... more >> TOP 1 Posted by Arjen at 7/25/2005 11:29:11 AM Hi, See the statement below. SET NOCOUNT ON SELECT TOP 1 [Name] FROM [Persons] Can I use "TOP 1" in combination with "SET NOCOUNT ON"? Is this statement faster with or without "TOP 1"? Thanks! ... more >> Confuse !! XACT_Abort + Begin Transaction Posted by Jonathan at 7/25/2005 11:28:02 AM Hi all. I'm very confuse. I was thinking that I understand Stored Procedure, Begin, Commit and RollBack Transaction, but I think that is false. I have a stored procedure that is called from a Visual Basic application. My stored procedure look like that. Create procedure [sp_Test] as... more >> timestamp Posted by ReTF at 7/25/2005 10:51:02 AM Hi All. Where I should use timestamp? Thanks. ... more >> How to get an sp to error on compile, when an underlying table is missing? Posted by Sylvia at 7/25/2005 10:37:30 AM Hello, I seem to remember in previous versions of sql server, you could try to compile a procedure with missing tables, and it would fail. For example, I believe this used to fail on old versions: create procedure TestABC as select * from TestTableDoesNotExist Now (sql 2000) this gives... more >> Missing VarBinary(MAX) filestream storage attribute Posted by vihs at 7/25/2005 9:51:15 AM Hi I found the filestream attribute in Beta 1 new features list, but cannot find any reference to it in the CTP documentation. It allows direct access to word documents stored in a VarBinary(MAX) column. By direct access I mean a file path\name to the file as stored in the database using SQL ... more >> sp Encrypting??? Posted by Richard K at 7/25/2005 9:50:04 AM I am building a software system for a client and SQL Server is my back end. I am hoping to take this system and commercially market it BUT alot of my logic is wrapped up in stored procecedures that I don't want my clients to touch, let alone even see just like my compiled VB code. Question... more >> Truncating text when updating table Posted by Rob C at 7/25/2005 9:47:24 AM Hello all - I'm looking for some simple SQL statements to truncate text when we copy data from one field in a table to another field in the same table. The problem is this; the copy from field is 100 char in length and the copy to field is 15 char in length. We want to only copy the first 15... more >> SP not updating both Posted by David C at 7/25/2005 9:44:22 AM I have a stored proc that updates 2 fields on a table but only 1 of the 2 fields is getting updated. The pay table (PayInfo) should update either PayFirst or PaySecond based on the DAY of a date field. Can anyone see anything wrong? Thanks. CREATE PROCEDURE [dbo].[mc_updPayTotals] (@Prd... more >> SP that handles both scenarios Posted by Mike Moore at 7/25/2005 9:38:02 AM The goal is to have one SP that handles both scenarios. Scenario 1 is when there is a real number and scneario 2 is when there is no number to store.The column in the database is numeric. Anyone have suggestions on how to pass that variable to the SP such that this variable can be both a n... more >> ERROR USING BULK INSERT Posted by Macisu at 7/25/2005 8:32:08 AM Hi I'm executing declare @ARCHIVOidx varchar(300) set @ARCHIVOidx = (Select top 1 MyFile from task) declare @sentencia varchar(300) set @sentencia = 'bulk insert a from ''' + @ARCHIVOidx+'''' + ' with (formatfile ='''+'c:\input\bcpfmt.txt' + ''',batchsize=100) ' exec (@sentencia) and ... more >> create trigger help Posted by mike at 7/25/2005 8:10:03 AM Hi. I'm not familiar with using triggers, but that seems like the best tool for the job at hand. I have a table with phone number records, and I get updates from multiple sources. The phone numbers are often in one of two formats -- 000-000-0000 or 0000000000 -- and I need a consistent format.... more >> Varchar Storage and Search Techniques Posted by Garrett at 7/25/2005 8:08:03 AM Hi all In a project i've been working on recently I came across a problem. I have a table with a varchar(7) column consisting of around 20million rows. This column needs to be searchable by substring - i.e a LIKE '%AB%' but this obviously is taking forever. The shortest search string i... more >> "The Return of the bugs" Posted by x-rays at 7/25/2005 8:03:03 AM Hello "precious" developers of Microsoft SQL Server 2000. Huge bugs are performed when: 1) sp4 has applied for my default instance (developer edition) 2) everytime I setup a new named instance (MSDE Last release) a letter from the registry key SQLPath (located under HKEY_LOCAL_MACHINE\S... more >> Execute a sProc from within a cursor Posted by marcmc at 7/25/2005 8:03:02 AM I want to execute a number of sProcs from a table thru the below cursor. It only executes the first and stops without executing the second. Am I missing something blatently obvious? set nocount on declare @pCommand varchar(50) declare trigger_cursor cursor for select pCommand from MIS_R... more >> Using a cursor to skip through individual records Posted by Stephen at 7/25/2005 7:35:39 AM I've been given some logic which i've been asked to but into code. Basically i've been given a table of records and based on certain values of each record I have been asked to either insert the row into another table or don't insert and skip unto the next record. i've been asked to produce som... more >> Truncate Decimal without rounding Posted by Scott2624 at 7/25/2005 7:03:07 AM I have a decimal 15.14567 and I want to tuncate to 2 decimal places without rounding. My expected result is 15.14. I have tried cast, convert, and string and all round to 15.15. How can I accomplish this in T-SQL?... more >> 0x0000275D - anyone know this error? Posted by s_m_b at 7/25/2005 6:28:13 AM getErrorInfo just produces 'Description NULL HelpFile NULL HelpId 0 comes up in the process of using sp_OAmethod, using OpenTextFile. The file is there, and as the script has already opened and processed around 80 files in the same folder, its not a coding issue?... more >> Query Help Posted by JP at 7/25/2005 6:12:03 AM Hi; I need some help with quering. I have three tables as follows. EmpIoyees -------------- EmpNo FirstName LastName EmployeeTraining ---------------------- TrainingId EmpNo TrainingCode TrainingDate Training ------------ TrainingId TrainingCode Description What I w... more >> How can I merge Identity tables? Posted by trint at 7/25/2005 4:57:24 AM Ok, I have tables t1, t2, t3 and t4. I also have identical tables in a CopyOfFirstDataBase, except this one contains older data that needs to combine with the newer. It's like: DATABASE1 -------------> DATABASE2 [1995 through 2003] [2004 through Present] Both are struc... more >> Converting Integer into date and time stamp Posted by Liam Mac at 7/25/2005 3:58:01 AM Hi Folks, I am having a problem with a proprietary application that is writing a transaction record into a MS SQL database. The transaction date is store as an integer i.e. “1097683526†, how can I convert this into a readable date and time stamp format. I have also developed my own ap... more >> Update via subquery Posted by hals_left at 7/25/2005 2:03:02 AM Hi - want to update records using a subquery view the value to update them too is also returned from the subquery. What goes at the @?? UPDATE tblRegistration SET Outcome=@?? WHERE RegistrationID IN (SELECT RegistrationID, Outcome FROM UnitsCompleted WHERE CourseID=@CourseID AND Compl... more >> Insert fixed length file Posted by Shirish Nair at 7/25/2005 1:09:02 AM Hi, Is it possible to use BULK insert or any other faster way to insert data from a fixed length file ( file which does not have a delimeter as a column separator). The column are defined as fixed length. e.g. col1 - 10 char col2 - 5 char Thank you ... more >> Copying Data Posted by thomson at 7/25/2005 12:35:30 AM Hi all, Can i do have two different databases[onse server]. I need to copy the data from one table to another table[another database]. I dont want to recreate the table, i need to copy only the data how is it possible? Thanks in Advance thomson ... more >> Looking for a Good HEX editor Posted by M.Siler at 7/25/2005 12:00:00 AM I didn't know what group to post this in... I'm looking for a good hex editor. One that would permit me to view two files at the same time and as I move the position in one it would also move it in the other. I'm trying to compare the position location of two files. ... more >> Duplicate records - difference method? Posted by tw at 7/25/2005 12:00:00 AM Hi, My scenario is that i have 2 system with name and adress (100.000 names), that have to be merged into 1 system without any duplicates. The problem is that the spelling is not 100% between the system. One way to find duplicate is to group name,adress and count > 1. My dream is to us... more >> Date Posted by Gérard Leclercq at 7/25/2005 12:00:00 AM Hi, i store dates in a smalldatetime field. Now i want to retrieves all records between 2 dates SELECT * FROM myTable WHERE addDate BETWEEN '2005/7/24' AND '2005/7/25' (suppose to retrieve all records added yesterday and today.) I get no error but get no results back ? What do i wrong... more >> Drop an unnamed primary key.... Posted by Frédéric at 7/25/2005 12:00:00 AM Hello! How can I drop an unnamed primary key with a query? Else how can I recover its name assigned by SQLServer? Thanks for your help ... more >> Best Performance Posted by Bpk. Adi Wira Kusuma at 7/25/2005 12:00:00 AM I wanna know. I have 2 ways to copy data. way I INSERT into A SELECT * from B Way II DECLARE CURSOR ... For SELECT * from B WHILE ... BEGIN /* insert one by one to A END .... .... Which the best, way I or II? How much its speed comparison? ... more >> update problem Posted by tw at 7/25/2005 12:00:00 AM My scenario is like follow: id number teamid 1 0 1 2 0 1 3 3 1 4 1 1 5 0 2 6 0 2 7 2 2 8 0 3 9 ... more >> Help me? Posted by Bpk. Adi Wira Kusuma at 7/25/2005 12:00:00 AM I've data like this noid fdate ftime fstatus --------------------------------------- 1 1/1/2005 1/1/2005 6:30:00 1 1 1/1/2005 1/1/2005 6:30:00 1 1 1/1/2005 1/1/2005 6:31:00 1 1 ... more >> How to write conditional Where statement Posted by Mital at 7/25/2005 12:00:00 AM Hi, I would like to write IF or Case expression inside where condition lilke: DECLARE @Allow_All_Customers BIT SET @Allow_All_Customers = 0 -- This will be 0 or 1 depends on permission table settings. SELECT CustomerCode FROM tblCustomer WHERE (Normal Where Condition...) AND IF (... more >> Apply filter on dbcc log query Posted by Pushkar at 7/25/2005 12:00:00 AM Hi, I am reading online transaction log using dbcc log command. But I don't want to read the whole online transaction log, because it = will take too long time. Is there any way through which I can filter the = record and get lesser no. of records. Is there any way through which I can use d... more >> HELP ME........... Posted by Bpk. Adi Wira Kusuma at 7/25/2005 12:00:00 AM It my ddl table: CREATE TABLE [dbo].[TTEMP_BC] ( [RecID] [int] IDENTITY (1, 1) Primary Key, [FDATE] [smalldatetime] NULL , [FTIME] [smalldatetime] NULL , [NOID] [nvarchar] (6) COLLATE Latin1_General_CI_AS NULL , [FSTATUS] [nvarchar] (3) COLLATE Latin1_General_CI_AS NULL ) and... more >> · · groups Questions? Comments? Contact the d n
http://www.developmentnow.com/g/113_2005_7_0_25_0/sql-server-programming.htm
crawl-001
refinedweb
3,414
72.56
Concise programming 2022-06-24 07:34:52 阅读数:47 Read text from character input stream , Buffer characters so that Provide characters 、 Effective reading of arrays and rows . You can specify the buffer size , You can also use the default size . this The default value is large enough for most purposes In particular, we have more readLine Method can read a whole line import java.io.BufferedReader; import java.io.FileNotFoundException; import java.io.FileReader; import java.io.IOException; public class demo11 { public static void main(String[] args) throws IOException { String path = "C:\\Users\\Syf200208161018\\Desktop\\ New text document .txt"; BufferedReader reader = new BufferedReader(new FileReader(path)); String line = null; while ((line= reader.readLine())!=null){ System.out.println(line); } reader.close(); } } Write text to character output stream , Buffer characters so that Provide a single character 、 Efficient writing of arrays and strings . You can specify the buffer size , You can also accept the default size . For most purposes , The default value is large enough . Provides a newLine() Method , It uses the platform's own concept Line delimiters defined by system properties line.separator. Not all platforms use line breaks (‘\n’) To terminate the row . therefore , It is best to call this method to terminate each output line Write a line break directly . import java.io.BufferedWriter; import java.io.FileWriter; import java.io.IOException; public class demo12 { public static void main(String[] args) throws IOException { String path = "C:\\Users\\Syf200208161018\\Desktop\\neww.txt"; BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter(path)); bufferedWriter.write(" Satan card or something adsadhsahdkk"); bufferedWriter.newLine(); bufferedWriter.write("565446464"); bufferedWriter.flush(); bufferedWriter.close(); } } Encapsulates the print() and println() Method Print the formatted representation of the object to the text output stream . This Class implements all print Methods found in PrintStream. It does not contain methods for writing raw bytes , among The program should use an uncoded byte stream . Unlike PrintStream class , If auto refresh is enabled Only if one of them println, printf, perhaps format Method is called , Instead of every newline It happens to be the output . These methods use the platform's own line concept Separator instead of newline . Methods in this class never throw I/O abnormal , Although some of it Constructors may . The customer can ask if there are any errors Occurs by calling checkError(). This class always replaces malformed and unmapped character sequences with The default replacement string for the character set . CharsetEncoder Use Classes should be used more often The coding process needs to be controlled import entity.Stu; import java.io.FileNotFoundException; import java.io.PrintWriter; public class demo13 { public static void main(String[] args) throws FileNotFoundException { String path = "C:\\Users\\Syf200208161018\\Desktop\\neww.txt"; Stu stu = new Stu("zhansgan",55,102.31); PrintWriter printWriter = new PrintWriter(path); printWriter.println(209.31); printWriter.println("helloword"); printWriter.write("new"); printWriter.print(2012); printWriter.close(); } } You can convert a byte stream to a character stream !!! You can set the character encoding method !!! InputStreamReader It's a bridge from byte stream to character stream : it Reads bytes and decodes them into characters using the specified characters charset. The character set it uses You can specify by name , It can also be given explicitly , Or platform You can accept the default character set . Every time you call InputStreamReader Of read() One of the methods may be Causes one or more bytes to be read from the underlying byte input stream . In order to effectively convert bytes to characters , More bytes can be used Read ahead from the underlying stream , Instead of being satisfied Current read operation . For maximum efficiency , Please consider that InputStreamReader It's packed in Buffer reader import java.io.*; public class demo14 { public static void main(String[] args) throws IOException { String path = "C:\\Users\\Syf200208161018\\Desktop\\neww.txt"; InputStreamReader inputStreamReader = new InputStreamReader(new FileInputStream(path), "UTF-8"); int count = 0; while ((count= inputStreamReader.read())!=-1){ System.out.print((char) count); } inputStreamReader.close(); } } OutputStreamWriter It's a bridge from character stream to byte stream : The characters written to it are encoded as charset. The character set it uses You can specify by name , It can also be given explicitly , Or platform You can accept the default character set . Every time you call write() Methods will cause the codec Call... On a given character . The result bytes accumulate in a Buffer before writing to the underlying output stream . Please note that , Pass to write() Method's characters are not buffered . For maximum efficiency , Please consider that OutputStreamWriter It's packed in BufferedWriter To avoid frequent converter calls . import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.IOException; import java.io.OutputStreamWriter; public class demo15 { public static void main(String[] args) throws IOException { String path = "C:\\Users\\Syf200208161018\\Desktop\\neww.txt"; OutputStreamWriter outputStreamWriter = new OutputStreamWriter(new FileOutputStream(path),"UTF-8"); outputStreamWriter.write("hello world !!! Hello world "); outputStreamWriter.flush(); outputStreamWriter.close(); } }
https://en.javamana.com/2022/175/202206240734459052.html
CC-MAIN-2022-33
refinedweb
807
50.02
This guide will walkthrough the set up for being able to load csv and excel files to Google Drive with python. Uploading files to Google Drive is a useful to have as you can upload much larger data sets than can be loaded into Google Sheets (Check out this tutorial for how to upload data to Google Sheets with python). As usual with python there is a package that takes cares of most of the legwork. We will be using the PyDrive package to upload files to Google Drive. Prerequisites: - Python with PyDrive installed - Google Account The PyDrive package has documentation to get set up using the tool. This is what I followed below, only with more screenshots. Setting up Authorisation Go to the Google API console and select Drive API If this is the first time using the Google API console we may need to accept some terms and conditions To enable the Google Drive API we need to first create a project. Click “Create” Give the project a name and click “Create” This will make the project and take us back to the main Google Drive API screen where we can now enable this functionality. Now we need to navigate to the credentials area to download some credentials we can use. Select “Other UI (e.g. Windows, CLI tool)” in the first drop down and “User data” in the radio buttons options. Then click “What credentials do I need”? We should now be at a screen where we are setting up client details for an “OAuth 2.0 client ID“. Name the client and click “Create client ID” Give the product a name and click “Continue” Click “Download” to download the credentials then hit “Done” We now have a client_id.json file on our computer. This needs to be renamed to “client_secrets.json and placed in your working directory. Run Some Code Run the following code in a python interpreter from pydrive.auth import GoogleAuth gauth = GoogleAuth() gauth.LocalWebserverAuth() drive = GoogleDrive(gauth) file1 = drive.CreateFile({'title': 'Hello.txt'}) # Create GoogleDriveFile instance with title 'Hello.txt'. file1.SetContentString('Hello World!') # Set content of the file from given string. file1.Upload() This will open a new tab in your browser asking you to allow the project we just made in the Google API console. Click “allow”. Then this code snippet will upload a text file into your Google drive called Hello.txt, with “Hello world” inside. What we really want however is to upload large csv files that are stored on local drives. This snippet (following allowing the project access) will upload a local csv file called “something.csv” into a Google Drive account. file1 = drive.CreateFile({"mimeType": "text/csv"}) file1.SetContentFile("something.csv") file1.Upload() Congratulations, you made it to the end! If you followed along you now have the ability to upload text and csv files to Google drive. Another tool in the tool belt to tame any data that might come your way. 8 thoughts on “Loading files to Google Drive using python” I am a python user great ur done.Thank you for the wonderful info NameError: name ‘drive’ is not defined please help? Hi QF, you are right. I missed the line defining what drive is here: ‘drive = GoogleDrive(gauth)’. Ive updated the code snippet now, this should work for you. You forgot to import GoogleDrive for that line to work: from pydrive.drive import GoogleDrive how do we upload when we have only the file variable and not the file name? Does creating a new project in the Google API Console have a cost in the short/long run? Hi, creating a project in the Google Cloud Console doesn’t cost anything. However there are many different tools to make use of within a project. Enabling the Google Drive API doesn’t have any associated costs. If you were to use other tools such as the Cloud Vision API or Big Query then those will have costs. is google drive api free to use in application?
https://marquinsmith.com/2017/08/08/loading-files-to-google-drive-using-python/
CC-MAIN-2020-40
refinedweb
672
82.65
This question already has an answer here: I have a server centOS 5.9 i386. I contacted cPanel for some issue, so they informed me that my server had been compromised and /lib64/libkeyutils-1.2.so.2 is the direct indication that the server has been hacked. So, I followed the instructions from here and removed the file, but I think i did not linked the libkeyutil with the previous version so when I restarted SSH, the server denided the access with a message i.e. (Server unexpectedly closed network connection). Now I cannot access my server over ssh. But I access my server with KVM console, so I logged into my server to reinstall keyutils but yum is not working now. It gives me error i.e. Traceback (most recent call last): File "/usr/bin/yum", line 4, in ? import yum File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 50, in ? import config File "/usr/lib/python2.4/site-packages/yum/config.py", line 27, in ? from parser import ConfigPreProcessor File "/usr/lib/python2.4/site-packages/yum/parser.py", line 3, in ? import urlgrabber File "/usr/lib/python2.4/site-packages/urlgrabber/__init__.py", line 53, in ? from grabber import urlgrab, urlopen, urlread File "/usr/lib/python2.4/site-packages/urlgrabber/grabber.py", line 412, in ? import keepalive File "/usr/lib/python2.4/site-packages/urlgrabber/keepalive.py", line 339, in ? class HTTPSHandler(KeepAliveHandler, urllib2.HTTPSHandler): AttributeError: 'module' object has no attribute 'HTTPSHandler' Unfortunately I restarted the server machine (I though it will configure itself), now apache cannot be started, it gives me the following error /usr/local/pache/bin/httpd: error while loading shared libraries: libkeyutils.so.1: cannot shared object file: No such file or directory Now, all the sites are down as apache is not up. I tried to reinstall yum using rpm following the instructions here Need Help In: Can I get back the file i removed (which is /lib64/libkeyutils-1.2.so.2) ? How can I install keyutils without yum? Please help. Thanks This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question. If you did not have the compromised libkeyutils.so.1.9 library on your system then you can download the CentOS 5.9 rpm for the libs from here or use wget to get it. Once you've done this use rpm -Fvh keyutils-libs-1.2-1.el5.i386.rpm to install it. If you did haver the compromised libkeyutils.so.1.9 library then you really should wipe it and reinstall from a known good backup. --replacefiles --replacepkgs libkeyutils-1.2.so.2 libkeyutils.so.1.9 rpm -Uvh --replacefiles --replacepkgs asked 2 years ago viewed 933 times active
http://serverfault.com/questions/482669/how-to-install-keyutils-on-centos
CC-MAIN-2015-32
refinedweb
471
61.12
Opened 6 years ago Last modified 6 years ago Fix formatting. I suspect this is not a problem with the plugin. The line you reference is from pkg_resources import resource_filename Which has nothing really to do with XmlRpcPlugin. At a guess I'd suspect something is wrong with your setuptools installation. I have updated setuptools installation, but to no use. Still the same error. Importing os.path module from empty .py script gives no errors. I have resolved it myself - this plugin requires pydoc. It would be nicer if it was mentioned ;) pydoc has been part of the standard library since 2.1. It doesn't seem feasible to enumerate every module a plugin uses. By Edgewall Software. Hosting sponsored by
http://trac-hacks.org/ticket/1536
CC-MAIN-2013-20
refinedweb
121
70.8
So have I. And after some intensive testing (you'll need a dual beam oscilloscope to see this properly), here's what I've discovered... It suffers from some inherent built-in flaws. One of which is where the receiver picks up the signal from the transmitter before the sound has even 'bounced off' any object at all. Try putting a small tube about 20cm long over either the transmitter or receiver (it doesn't matter which one) to see how this overcomes the problem. The worst problem of all is when the sound waves are reflected off objects that are at oblique angles, and this is particularly worse if they're at some distance away. This results in the most erratic 'Echo' signals imaginable. This could mean your program seems to 'lock up' if you use the 'conventional' way of interfacing to this device. In fact, I understand lots of people have given trying to use the HC-SR04 because they think it's so unreliable. However, here's a program I've written for my Pi Car, which uses the HC-SR04 as the distance sensor. Check out the 'get_distance' function to see how I've overcome all the problems with the sensor. I hope this program is helpful to all you robotics fans out there. Please feel free to comment. Code: Select all # ******************************************************************* # * DUAL MOTOR DRIVER USING A Wii REMOTE CONTROL AND THE GPIO BUS * # ******************************************************************* # * This program is designed to control the L298N Dual Full Bridge * # * Driver IC, which in turn drives two DC motors. These could be * # * the left and right hand wheels of a small motorised vehicle. * # * The L298N has 6 control signals: In1, In2, In3, In4, EnA & EnB. * # * All of the signals are active high. See the manufacturers data * # * sheet for full details of the L298N device. * # * * # * Run the program from a Terminal window (root user). It will * # * not run from within the Python Shell GUI. * # * * # * List of the GPIO pins used: * # * GPIO25 - Motor 1 (left motor) Enable - EnA - (output - bit 0) * # * GPIO24 - Motor 1 (left motor) Forward - In1 -(output - bit 1) * # * GPIO23 - Motor 1 (left motor) Reverse - In2 -(output - bit 2) * # * * # * GPIO22 - Motor 2 (right motor) Enable - EnB - (output - bit 3) * # * GPIO27 - Motor 2 (right motor) Forward - In3 - (output - bit 4) * # * GPIO18 - Motor 2 (right motor) Reverse - In4 - (output - bit 5) * # * * # * GPIO17 - Trigger for the ultrasonic sensor (output - bit 6) * # * * # * GPIO11 - Spare GPIO output pin (output - bit 7) * # * * # * GPIO10 - Echo input from the ultrasonic sensor (input - bit 8) * # * * # * GPIO9 - Spare GPIO Input pin (input - bit 9) * # * GPIO8 - Spare GPIO Input pin (input - bit 10) * # * GPIO7 - Spare GPIO Input pin (input - bit 11) * # ******************************************************************* import RPi.GPIO as GPIO # Import the GPIO module as 'GPIO' import cwiid # Import the Nintendo Wii controller module import time # Import the 'time' module import random GPIO.setmode (GPIO.BCM) # Set the GPIO mode to BCM numbering # ******************************************************************* # * DEFINE THE CONSTANTS * # ******************************************************************* # * IMPORTANT: For a Rev 1 RPi, replace 27 below with 21 instead * # ******************************************************************* output_ports = [25, 24, 23, 22, 27, 18, 17, 11] # Define the GPIO output port numbers input_ports = [10, 9, 8, 7] # Define the GPIO input port numbers m1_en = 1 # Motor 1 Enable (left motor) m1_fwd = 2 # Motor 1 Forward (left motor) m1_rev = 4 # Motor 1 Reverse (left motor) m2_en = 8 # Motor 2 Enable (right motor) m2_fwd = 16 # Motor 2 Forward (right motor) m2_rev = 32 # Motor 2 Reverse (right motor) trig = 17 # Trigger output for the ultrasonic sensor echo = 10 # Echo return from the ultrasonic sensor stop = 0 # Drive value for no movement left = m1_en + m1_rev + m2_en + m2_fwd # Drive value for turning left right = m1_en + m1_fwd + m2_en + m2_rev # Drive value for turning right fwd = m1_en + m1_fwd + m2_en + m2_fwd # Drive value for moving forwards rev = m1_en + m1_rev + m2_en + m2_rev # Drive value for moving backwards auto = -1 # Drive value for automatic mode # ******************************************************************* # * FUNCTION TO DRIVE THE MOTORS USING THE SUPPLIED VALUE * # ******************************************************************* # * Although the 'global' command is used, this is simply to allow * # * access to the program variables outside the function. These * # * are never altered by any function throughout the entire program * # * On exit, 'b' contains a 12-bit binary string representing the * # * states of the 12 GPIO pins - all eight outputs - 0 to 7 and all * # * four inputs - 8 to 11 * # ******************************************************************* def motor_drive (value): # Accept the motor drive value global input_ports, output_ports # Allow acces to the assigned GPIO ports b = bin (value) # Create a binary string from the supplied value b = b [2:len(b)] # Strip off the '0b' from the start of the string b = b.zfill(8) # Make sure the string is eight bits long output_pointer = len (b) -1 # Start with the LSB of Binary string for port in output_ports: # Pick out the individual GPIO port required output_state = int (b[output_pointer]) # Select whether it needs to be on or off (1 or 0) GPIO.output (port,output_state) # Turn on or off the relevant GPIO bit output_pointer = output_pointer - 1 # Move to the next bit in the string for port in input_ports: # Get the status of the four input bits b = str (GPIO.input (port)) + b # Add the bit values to the Binary string return (b) # Exit with the Binary string in 'b' # ******************************************************************* # * FUNCTION TO REVERSE THE CAR TO AVOID OBJECTS IN FRONT OF IT * # ******************************************************************* # * If an obstruction is detected in front of the car which is less * # * than 15cm away, then back away from it to a distance of 20cm. * # * This happens irrespective of whether the car is being driven * # * manually or autonomously. To prevent excess current surges and * # * sudden 'unnatural' reversing, stop the car first, then wait for * # * half a second before and after backing up * # ******************************************************************* def back_away (direction): movement = motor_drive (False) # First of all, stop the car time.sleep (0.5) # Wait for half a second distance = get_distance() # Get the current object distance while distance < 20: # Whilst it's less than 20cm movement = motor_drive (direction) # keep reversing the car distance = get_distance () # and checking the distance movement = motor_drive (False) # Otherwise, stop the car, time.sleep (0.5) # wait for half a second, return (False) # then exit with no direction (stop) # ******************************************************************* # * FUNCTION TO DRIVE THE CAR AUTONOMOUSLY * # ******************************************************************* # * This function is called if the 'A' button, and ONLY the 'A' * # * button on the Wii Remote Control is being held down. The car * # * will normally attempt to drive forwards. If an obstruction is * # * detected in front of it then the 'back_away' function is called.* # * The car will then turn either left or right - depending on a * # * random number between 0 and 7. Between 0 and 3 the the car * # * will turn left. Between 4 and 7 and the car will turn right. * # * It will then attempt to continue in a forwards direction * # ******************************************************************* def automatic (buttons): global stop, left, right, fwd, rev # Allow access to the direction constants while (buttons - cwiid.BTN_A == 0): # Run autonomously only while 'A' pressed direction = fwd # Set the initial direction to forwards distance = get_distance () # Check the distance if direction == fwd and distance < 15: # If the distance is less than 15cm then movement = back_away (rev) # reverse the car if driving forward direction = right # Set initial turn direction to 'right' turn = random.randint (0,7) # Generate a random number between 0 and 7 if turn < 4: # If the random number is less than 4 direction = left # then turn left instead movement = motor_drive (direction) # Now turn in that direction time.sleep (0.5) # for half a second movement = motor_drive (stop) # Then stop for half a second time.sleep (0.5) movement = motor_drive (direction) # Continue moving in a forwards direction buttons = wii.state['buttons'] # Get the button data from the Wii remote return () # Exit if the 'A' button isn't being pressed # ******************************************************************* # * FUNCTION TO CALCULATE THE DISTANCE FROM OBSTRUCTIONS IN FRONT * # ******************************************************************* # * THIS USES THE HC-SR04 ULTRASONIC DISTANCE SENSOR * # ******************************************************************* # * Unfortunately, the HC-SR04 suffers from some inherent built-in * # * flaws, particularly if the reflected sound waves bounce off * # * objects at a distance and/or at oblique angles. This results * # * in very erratic signals being generated on the 'Echo' pin. * # * Since we're only interested in short distances up to 20cm, we * # * need to trap these errors within this function, otherwise the * # * program would be very slow to respond to the Wii Remote buttons.* # * In worst-case situations the program could simply 'hang' whilst * # * waiting for an echo signal which never ends! On exit, if a * # * valid short distance is calculated, then 'distance' contains * # * the value in centimetres. If a sensor error occurs, then a * # * value of 100 is returned instead. * # ******************************************************************* def get_distance (): global trig, echo # Allow access to 'trig' and 'echo' constants if GPIO.input (echo): # If the 'Echo' pin is already high return (100) # then exit with 100 (sensor fault) distance = 0 # Set initial distance to zero GPIO.output (trig,False) # Ensure the 'Trig' pin is low for at time.sleep (0.05) # least 50mS (recommended re-sample time) GPIO.output (trig,True) # Turn on the 'Trig' pin for 10uS (ish!) dummy_variable = 0 # No need to use the 'time' module here, dummy_variable = 0 # a couple of 'dummy' statements will do fine GPIO.output (trig,False) # Turn off the 'Trig' pin time1, time2 = time.time(), time.time() # Set inital time values to current time while not GPIO.input (echo): # Wait for the start of the 'Echo' pulse time1 = time.time() # Get the time the 'Echo' pin goes high if time1 - time2 > 0.02: # If the 'Echo' pin doesn't go high after 20mS distance = 100 # then set 'distance' to 100 break # and break out of the loop if distance == 100: # If a sensor error has occurred return (distance) # then exit with 100 (sensor fault) while GPIO.input (echo): # Otherwise, wait for the 'Echo' pin to go low time2 = time.time() # Get the time the 'Echo' pin goes low if time2 - time1 > 0.02: # If the 'Echo' pin doesn't go low after 20mS distance = 100 # then ignore it and set 'distance' to 100 break # and break out of the loop if distance == 100: # If a sensor error has occurred return (distance) # then exit with 100 (sensor fault) # Sound travels at approximately 2.95uS per mm # and the reflected sound has travelled twice # the distance we need to measure (sound out, # bounced off object, sound returned) distance = (time2 - time1) / 0.00000295 / 2 / 10 # Convert the timer values into centimetres return (distance) # Exit with the distance in centimetres # ******************************************************************* # * FUNCTION TO EXIT THE PROGRAM CLEANLY * # ******************************************************************* # * Use this function to turn off both motor and to ensure the GPIO * # * ports are reset properly on exit, or if a controllable error * # * occurs within the program. Note: This will not work if CTRL-C * # * is used to quit the program prematurely. In such case, a GPIO * # * error message will be displayed when the program is run again. * # * However, the program will continue to function as normal * # ******************************************************************* def exit_program(): z = motor_drive (False) # Turn off all the GPIO outputs print ("\n\n") # Print a couple of blank lines GPIO.cleanup() # Clean up the GPIO ports exit() # And quit the program # ******************************************************************* # * START OF THE MAIN PROGRAM * # ******************************************************************* for bit in output_ports: # Set up the six output bits GPIO.setup (bit,GPIO.OUT) GPIO.output (bit,False) # Initially turn them all off for bit in input_ports: # Set up the six input bits GPIO.setup (bit,GPIO.IN, pull_up_down = GPIO.PUD_DOWN) # Set the inputs as normally low # ******************************************************************* # * CONNECT TO THE Wii REMOTE CONTROL. QUIT IF IT TIMES OUT 3 TRIES * # ******************************************************************* # * The Wii Remote Control is connected via a bluetooth adaptor and * # * by importing the 'cwiid' module. To connect, press and release * # * the '1' and '2' buttons simultaneously on the Wii Remote. The * # * program will try to connect up to 3 times. If it fails the * # * program will terminate * # ******************************************************************* print ("\n\n\n\nPress 1 & 2 together on your Wii Remote now ...") # Print some instructions attempt = ['first', 'second', 'last'] # Make them a bit informative word = 0 # Number of attempts to connect while True: # Start an infinite loop try: print ("\n\nTrying to connect for the"), attempt [word], print ("time...\n\nAttempt"), word+1 # Print current attempt wii=cwiid.Wiimote() # Wait for a response from the Wii remote break # If successful then exit the loop except RuntimeError: # If it times out... word = word + 1 # Try again if word == 3: # If it fails after 3 attempts... print ("\n\nFailed to connect to the Wii remote control") # Print a failure message print ("\nProgram Terminated\n") print ("Please restart the program to begin again\n\n") terminate = exit_program() # And exit the program # ******************************************************************* # * SUCCESSFULLY CONNECTED TO THE Wii REMOTE CONTROL * # ******************************************************************* wii.rumble = 1 # Briefly vibrate the Wii remote time.sleep(0.2) wii.rumble = 0 wii.rpt_mode = cwiid.RPT_BTN | cwiid.RPT_ACC # Report button and accelerometer data print ("\n\n\n\nThe Wii Remote is now connected...\n") # Print a few instructions print ("Use the direction pad to steer the car\n") print ("or...\n") print ("Hold the 'B' button and tilt the Wii Remote to steer\n") print ("or...\n") print ("Press and hold down the 'A' button for Autonomous Mode\n") print ("Press '+' and '-' buttons at the same time quit.\n") while True: # Begin an infinite loop direction = stop # Set the initial direction to none (stop) buttons = wii.state['buttons'] # Get the button data from the Wii remote x, y, z = wii.state['acc'] # Also get the accelerometer data if not (buttons & cwiid.BTN_B): # Only use accelerometer data if x, y, z = 125, 125, 125 # the 'B' button is being pressed if (buttons - cwiid.BTN_PLUS - cwiid.BTN_MINUS == 0): # Are both the '+' and '-' buttons pressed? print ("\nThe Wii Remote connection has been closed\n") print ("Please restart the program to begin again\n") # Yes - Print a message wii.rumble = 1 # Briefly vibrate the Wii remote time.sleep(0.2) wii.rumble = 0 terminate = exit_program() # And quit the program if (buttons - cwiid.BTN_A == 0): # If ONLY the 'A' button is pressed movement = automatic (buttons) # then run autonomously # ******************************************************************* # * Using the data from the Wii Remote Control, check which buttons * # * are pressed using a bitwise AND of the buttons bit value and * # * the predefined 'cwiid' constant for each button. If more than * # * one button is pressed, only the last one in the sequence 'left',* # * 'right', 'up' or 'down' will be selected. It has to be done * # * this way because the L298N controller cannot drive the motors * # * in two directions at the same time - eg: forwards and left * # ******************************************************************* if (buttons & cwiid.BTN_LEFT) or x < 110: direction = left # Prepare to turn the car to the left if(buttons & cwiid.BTN_RIGHT) or x > 130: direction = right # Prepare to turn the car to the right if (buttons & cwiid.BTN_UP) or y > 130: direction = fwd # Prepare to drive the car forwards if (buttons & cwiid.BTN_DOWN) or y < 110: direction = rev # Prepare to drive the car backwards distance = get_distance () # Get object distance in centimetres if direction == fwd and distance < 15: # If the distance is less than 15cm then direction = back_away (rev) # reverse the car if driving forward movement = motor_drive (direction) # Otherwise get the car moving (if needs be)
https://www.raspberrypi.org/forums/viewtopic.php?p=553749
CC-MAIN-2019-30
refinedweb
2,468
56.18
While the user experience on tempalias.com is already really streamlined, compared to other services that encode the expiration settings and sometimes even the target) into the email address (and are thus exploitable and in some cases requiring you to have an account with them), it loses in that, when you have to register on some site, you will have to open the tempalias.com website in its own window and then manually create the alias. Wouldn’t it be nice if this worked without having to visit the site? This video is showing how I want this to work and how the bookmarklet branch on the github project page is already working: The workflow will be that you create your first (and probably only) alias manually. In the confirmation screen, you will be presented with a bookmarklet that you can drag to your bookmark bar and that will generate more aliases like the one just generated. This works independently of cookies or user accounts, so it would even work across browsers if you are synchronizing bookmarks between machines. The actual bookmarklet is just a very small stub that will contain all the configuration for alias creation (so the actual bookmarklet will be the minified version of this file here). The bookmarklet, when executed will add a script tag to the page that actually does the heavy lifting. The script that’s running in the video above tries really hard to be a good citizen as it’s run in the context of a third party webpage beyond my control: - it doesn’t pollute the global namespace. It has to add one function, window.$__tempalias_com, so it doesn’t reload all the script if you click the bookmark button multiple times. - while it depends on jQuery (I’m not doing this in pure DOM), it tries really hard to be a good citizen: - if jQuery 1.4.2 is already used on the site, it uses that. - if any other jQuery version is installed, it loads 1.4.2 but restores window.jQuery to what it was before. - if no jQuery is installed, it loads 1.4.2 - In all cases, it calls jQuery.noConflict if $ is bound to anything. - All DOM manipulation uses really unique class names and event namespaces While implementing, I noticed that you can’t unbind live events with just their name, so $().die(‘.ta’) didn’t work an I had to provide all events I’m live-binding to. I’m using live here because the bubbling up delegation model works better in a case where there might be many matching elements on any particular page. Now the next step will be to add some design to the whole thing and then it can go live.
https://blog.pilif.me/2010/04/26/tempalias-com-bookmarklet-work/
CC-MAIN-2018-47
refinedweb
461
69.82
Dražen's humble introduction to What's IPython notebook? A web interface for a scientific Python shell. Started by Fernando Perez to bring modern, open source tools to the research community. Main features: a=range(5) a [0, 1, 2, 3, 4] a.remove(3) a.remove? a [0, 1, 2, 4] def divide(): a = 1 b = 4 c = b / (a-1) divide() --------------------------------------------------------------------------- ZeroDivisionError Traceback (most recent call last) <ipython-input-5-a8416233c05c> in <module>() 4 c = b / (a-1) 5 ----> 6 divide() <ipython-input-5-a8416233c05c> in divide() 2 a = 1 3 b = 4 ----> 4 c = b / (a-1) 5 6 divide() ZeroDivisionError: integer division or modulo by zero pwd ! open unpause_action.pdf %magicfor some extra info) %timeit a*3 You can easily install it on your server to have a consistent environment sudo apt-get install ipython The ultimate lab notebook! Inside it you can use cool features such as: Can you believe it? I used to code in Java! private static int maxValue(char[] chars) { int max = chars[0]; for (int ktr = 0; ktr < chars.length; ktr++) { if (chars[ktr] > max) { max = chars[ktr]; } } return max; } $E = mc^2 \ne \sum_{i \in N}e^i + \int_0^\infty e^{-x}dx$ from IPython.lib.display import YouTubeVideo YouTubeVideo('HaS4NXxL5Qc') x = linspace(0, 2*pi) y = sin(x) plot(x,y) show() ! head data/kaggle/wind_forecast/train.csv date,wp1,wp2,wp3,wp4,wp5,wp6,wp7 2009070100,0.045,0.233,0.494,0.105,0.056,0.118,0.051 2009070101,0.085,0.249,0.257,0.105,0.066,0.066,0.051 2009070102,0.02,0.175,0.178,0.033,0.015,0.026,0 2009070103,0.06,0.085,0.109,0.022,0.01,0.013,0 2009070104,0.045,0.032,0.079,0.039,0.01,0,0 2009070105,0.035,0.011,0.099,0.066,0.015,0.013,0 2009070106,0.005,0,0.069,0.105,0.015,0.079,0 2009070107,0,0.011,0,0.017,0.025,0.013,0.025 2009070108,0,0.016,0,0.017,0.046,0,0 import pandas as pd def format_timestamp(raw): return '%s %s:00' % (raw[:-2], raw[-2:]) wind = pd.read_csv('data/kaggle/wind_forecast/train.csv', parse_dates=['date'], index_col=['date'], converters={'date':format_timestamp}) wind <class 'pandas.core.frame.DataFrame'> DatetimeIndex: 18757 entries, 2009-07-01 00:00:00 to 2012-06-26 12:00:00 Data columns: wp1 18757 non-null values wp2 18757 non-null values wp3 18757 non-null values wp4 18757 non-null values wp5 18757 non-null values wp6 18757 non-null values wp7 18757 non-null values dtypes: float64(7) wind.index <class 'pandas.tseries.index.DatetimeIndex'> [2009-07-01 00:00:00, ..., 2012-06-26 12:00:00] Length: 18757, Freq: None, Timezone: None a = '2009-07-01' b = '2009-07-03' wind[a:b].plot() ylabel('normalized wind power') <matplotlib.text.Text at 0x10ddab450> wind['wp1'][a:b].plot(color='red', label='raw power') pd.ewma(wind['wp1'], span=100)[a:b].plot(color='blue', label='smoothed') legend() <matplotlib.legend.Legend at 0x10ea42f90> from sympy import * v = symbols('v') integrate(log(v), v) v*log(v) - v Installing everything in Ubuntu is easy sudo apt-get install ipython-notebook In another OS it might give you some headache - better to install a Python distribution such as Enthought or Python(x,y) Take a stroll to the cheese shop for the necessary packages sudo pip install ipython[notebook] If you're not using IPython, you're doing something wrong. -- [Wes McKinney](), creator of Pandas and author of "Python for Data Analysis"
http://nbviewer.jupyter.org/gist/metakermit/5792121
CC-MAIN-2018-51
refinedweb
614
56.05
This action might not be possible to undo. Are you sure you want to continue? April 16, 2007 Paul Hudak Yale University paul.hudak@yale.edu John Hughes Chalmers University rjmh@cs.chalmers.se Simon Peyton Jones Microsoft Research simonpj@microsoft.com Philip Wadler University of Edinburgh wadler@inf.ed.ac.uk Abstract This paper describes the history of Haskell, including its genesis and principles, technical contributions, implementations and tools, and applications and impact. 1. Introduction In September of 1987 a meeting was held at the conference on Functional Programming Languages and Computer Architecture. These opening words in the Preface of the first Haskell Report, Version 1.0 dated 1 April 1990, say quite a bit about the history of Haskell. They establish the motivation for designing Haskell (the need for a common language), the nature of the language to be designed (non-strict, purely functional), and the process by which it was to be designed (by committee). Part I of this paper describes genesis and principles: how Haskell came to be. We describe the developments leading up to Haskell and its early history (Section 2) and the processes and principles that guided its evolution (Section 3). Part II describes Haskell’s technical contributions: what Haskell is. We pay particular attention to aspects of the language and its evo- lution that are distinctive in themselves, or that developed in unexpected or surprising ways. We reflect on five areas: syntax (Section 4); algebraic data types (Section 5); the type system, and type classes in particular (Section 6); monads and input/output (Section 7); and support for programming in the large, such as modules and packages, and the foreign-function interface (Section 8). Part III describes implementations and tools: what has been built for the users of Haskell. We describe the various implementations of Haskell, including GHC, hbc, hugs, nhc, and Yale Haskell (Section 9), and tools for profiling and debugging (Section 10). Part IV describes applications and impact: what has been built by the users of Haskell. The language has been used for a bewildering variety of applications, and in Section 11 we reflect on the distinctive aspects of some of these applications, so far as we can discern them. We conclude with a section that assesses the impact of Haskell on various communities of users, such as education, opensource, companies, and other language designers (Section 12). Our goal throughout is to tell the story, including who was involved and what inspired them: the paper is supposed to be a history rather than a technical description or a tutorial. We have tried to describe the evolution of Haskell in an evenhanded way, but we have also sought to convey some of the excitement and enthusiasm of the process by including anecdotes and personal reflections. Inevitably, this desire for vividness means that our account will be skewed towards the meetings and conversations in which we personally participated. However, we are conscious that many, many people have contributed to Haskell. The size and quality of the Haskell community, its breadth and its depth, are both the indicator of Haskell’s success and its cause. One inevitable shortcoming is a lack of comprehensiveness. Haskell is now more than 15 years old and has been a seedbed for an immense amount of creative energy. We cannot hope to do justice to all of it here, but we take this opportunity to salute all those who have contributed to what has turned out to be a wild ride.. Third ACM SIGPLAN History of Programming Languages Conference (HOPL-III) San Diego, CA Copyright c 2007 ACM . . . $5.00 Part I Lisp, and showed soundness of their evaluator with respect to a denotational semantics. • David Turner (at St. Andrews and Kent) introduced a series Genesis and Principles 2. The genesis of Haskell In 1978 John Backus delivered his Turing Award lecture, “Can programming be liberated from the von Neumann style?” (Backus, 1978a), which positioned functional programming as a radical attack on the whole programming enterprise, from hardware architecture upwards. This prominent endorsement from a giant in the field—Backus led the team that developed Fortran, and invented Backus Naur Form (BNF)—put functional programming on the map in a new way, as a practical programming tool rather than a mathematical curiosity. Even at that stage, functional programming languages had a long history, beginning with John McCarthy’s invention of Lisp in the late 1950s (McCarthy, 1960). In the 1960s, Peter Landin and Christopher Strachey identified the fundamental importance of the lambda calculus for modelling programming languages and laid the foundations of both operational semantics, through abstract machines (Landin, 1964), and denotational semantics (Strachey, 1964). A few years later Strachey’s collaboration with Dana Scott put denotational semantics on firm mathematical foundations underpinned by Scott’s domain theory (Scott and Strachey, 1971; Scott, 1976). In the early ’70s, Rod Burstall and John Darlington were doing program transformation in a first-order functional language with function definition by pattern matching (Burstall and Darlington, 1977). Over the same period David Turner, a former student of Strachey, developed SASL (Turner, 1976), a pure higher-order functional language with lexically scoped variables— a sugared lambda calculus derived from the applicative subset of Landin’s ISWIM (Landin, 1966)—that incorporated Burstall and Darlington’s ideas on pattern matching into an executable programming language. In the late ’70s, Gerry Sussman and Guy Steele developed Scheme, a dialect of Lisp that adhered more closely to the lambda calculus by implementing lexical scoping (Sussman and Steele, 1975; Steele, 1978). At more or less the same time, Robin Milner invented ML as a meta-language for the theorem prover LCF at Edinburgh (Gordon et al., 1979). Milner’s polymorphic type system for ML would prove to be particularly influential (Milner, 1978; Damas and Milner, 1982). Both Scheme and ML were strict (callby-value) languages and, although they contained imperative features, they did much to promote the functional programming style and in particular the use of higher-order functions. 2.1 The call of laziness Then, in the late ’70s and early ’80s, something new happened. A series of seminal publications ignited an explosion of interest in the idea of lazy (or non-strict, or call-by-need) functional languages as a vehicle for writing serious programs. Lazy evaluation appears to have been invented independently three times. • Dan Friedman and David Wise (both at Indiana) published of influential languages: SASL (St Andrews Static Language) (Turner, 1976), which was initially designed as a strict language in 1972 but became lazy in 1976, and KRC (Kent Recursive Calculator) (Turner, 1982). Turner showed the elegance of programming with lazy evaluation, and in particular the use of lazy lists to emulate many kinds of behaviours (Turner, 1981; Turner, 1982). SASL was even used at Burroughs to develop an entire operating system—almost certainly the first exercise of pure, lazy, functional programming “in the large”. At the same time, there was a symbiotic effort on exciting new ways to implement lazy languages. In particular: • In software, a variety of techniques based on graph reduction were being explored, and in particular Turner’s inspirationally elegant use of SK combinators (Turner, 1979b; Turner, 1979a). (Turner’s work was based on Haskell Curry’s combinatory calculus (Curry and Feys, 1958), a variable-less version of Alonzo Church’s lambda calculus (Church, 1941).) • Another potent ingredient was the possibility that all this would lead to a radically different non-von Neumann hardware architectures. Several serious projects were underway (or were getting underway) to build dataflow and graph reduction machines of various sorts, including the Id project at MIT (Arvind and Nikhil, 1987), the Rediflow project at Utah (Keller et al., 1979), the SK combinator machine SKIM at Cambridge (Stoye et al., 1984), the Manchester dataflow machine (Watson and Gurd, 1982), the ALICE parallel reduction machine at Imperial (Darlington and Reeve, 1981), the Burroughs NORMA combinator machine (Scheevel, 1986), and the DDM dataflow machine at Utah (Davis, 1977). Much (but not all) of this architecturally oriented work turned out to be a dead end, when it was later discovered that good compilers for stock architecture could outperform specialised architecture. But at the time it was all radical and exciting. Several significant meetings took place in the early ’80s that lent additional impetus to the field. In August 1980, the first Lisp conference took place in Stanford, California. Presentations included Rod Burstall, Dave MacQueen, and Don Sannella on Hope, the language that introduced algebraic data types (Burstall et al., 1980). In July 1981, Peter Henderson, John Darlington, and David Turner ran an Advanced Course on Functional Programming and its Applications, in Newcastle (Darlington et al., 1982). All the big names were there: attendees included Gerry Sussman, Gary Lindstrom, David Park, Manfred Broy, Joe Stoy, and Edsger Dijkstra. (Hughes and Peyton Jones attended as students.) Dijkstra was characteristically unimpressed—he wrote “On the whole I could not avoid some feelings of deep disappointment. I still believe that the topic deserves a much more adequate treatment; quite a lot we were exposed to was definitely not up to par.” (Dijkstra, 1981)—but for many attendees it was a watershed. In September 1981, the first conference on Functional Programming Languages and Computer Architecture (FPCA)—note the title!—took place in Portsmouth, New Hampshire. Here Turner gave his influential paper on “The semantic elegance of applicative languages” (Turner, 1981). (Wadler also presented his first conference paper.) FPCA became a key biennial conference in the field. In September 1982, the second Lisp conference, now renamed Lisp and Functional Programming (LFP), took place in Pittsburgh, “Cons should not evaluate its arguments” (Friedman and Wise, 1976), which took on lazy evaluation from a Lisp perspective. • Peter Henderson (at Newcastle) and James H. Morris Jr. (at Xerox PARC) published “A lazy evaluator” (Henderson and Morris, 1976). They cite Vuillemin (Vuillemin, 1974) and Wadsworth (Wadsworth, 1971) as responsible for the origins of the idea, but popularised the idea in POPL and made one other important contribution, the name. They also used a variant of Pennsylvania. Presentations included Peter Henderson on functional geometry (Henderson, 1982) and an invited talk by Turner on programming with infinite data structures. (It also saw the first published papers of Hudak, Hughes, and Peyton Jones.) Special guests at this conference included Church and Curry. The after-dinner talk was given by Barkley Rosser, and received two ovations in the middle, once when he presented the proof of Curry’s paradox, relating it to the Y combinator, and once when he presented a new proof of the Church-Rosser theorem. LFP became the other key biennial conference. (In 1996, FPCA merged with LFP to become the annual International Conference on Functional Programming, ICFP, which remains the key conference in the field to the present day.) In August 1987, Ham Richards of the University of Texas and David Turner organised an international school on Declarative Programming in Austin, Texas, as part of the UT “Year of Programming”. Speakers included: Samson Abramsky, John Backus, Richard Bird, Peter Buneman, Robert Cartwright, Simon Thompson, David Turner, and Hughes. A major part of the school was a course in lazy functional programming, with practical classes using Miranda. All of this led to a tremendous sense of excitement. The simplicity and elegance of functional programming captivated the present authors, and many other researchers with them. Lazy evaluation— with its direct connection to the pure, call-by-name lambda calculus, the remarkable possibility of representing and manipulating infinite data structures, and addictively simple and beautiful implementation techniques—was like a drug. (An anonymous reviewer supplied the following: “An interesting sidelight is that the Friedman and Wise paper inspired Sussman and Steele to examine lazy evaluation in Scheme, and for a time they weighed whether to make the revised version of Scheme call-byname or call-by-value. They eventually chose to retain the original call-by-value design, reasoning that it seemed to be much easier to simulate call-by-name in a call-by-value language (using lambdaexpressions as thunks) than to simulate call-by-value in a call-byname language (which requires a separate evaluation-forcing mechanism). Whatever we might think of that reasoning, we can only speculate on how different the academic programming-language landscape might be today had they made the opposite decision.”) 2.2 A tower of Babel As a result of all this activity, by the mid-1980s there were a number of researchers, including the authors, who were keenly interested in both design and implementation techniques for pure, lazy languages. In fact, many of us had independently designed our own lazy languages and were busily building our own implementations for them. We were each writing papers about our efforts, in which we first had to describe our languages before we could describe our implementation techniques. Languages that contributed to this lazy Tower of Babel include: • Miranda, a successor to SASL and KRC, designed and imple- used to the idea that laziness meant graph reduction, and graph reduction meant interpretation.) • Orwell, a lazy language developed by Wadler, influenced by KRC and Miranda, and OL, a later variant of Orwell. Bird and Wadler co-authored an influential book on functional programming (Bird and Wadler, 1988), which avoided the “Tower of Babel” by using a more mathematical notation close to both Miranda and Orwell. • Alfl, designed by Hudak, whose group at Yale developed a combinator-based interpreter for Alfl as well as a compiler based on techniques developed for Scheme and for T (a dialect of Scheme) (Hudak, 1984b; Hudak, 1984a). • Id, a non-strict dataflow language developed at MIT by Arvind and Nikhil, whose target was a dataflow machine that they were building. • Clean, a lazy language based explicitly on graph reduction, developed at Nijmegen by Rinus Plasmeijer and his colleagues (Brus et al., 1987). • Ponder, a language designed by Jon Fairbairn, with an impred- icative higher-rank type system and lexically scoped type variables that was used to write an operating system for SKIM (Fairbairn, 1985; Fairbairn, 1982). • Daisy, a lazy dialect of Lisp, developed at Indiana by Cordelia Hall, John O’Donnell, and their colleagues (Hall and O’Donnell, 1985). With the notable exception of Miranda (see Section 3.8), all of these were essentially single-site languages, and each individually lacked critical mass in terms of language-design effort, implementations, and users. Furthermore, although each had lots of interesting ideas, there were few reasons to claim that one language was demonstrably superior to any of the others. On the contrary, we felt that they were all roughly the same, bar the syntax, and we started to wonder why we didn’t have a single, common language that we could all benefit from. At this time, both the Scheme and ML communities had developed their own standards. The Scheme community had major loci in MIT, Indiana, and Yale, and had just issued its ‘revised revised’ report (Rees and Clinger, 1986) (subsequent revisions would lead to the ‘revised5 ’ report (Kelsey et al., 1998)). Robin Milner had issued a ‘proposal for Standard ML’ (Milner, 1984) (which would later evolve into the definitive Definition of Standard ML (Milner and Tofte, 1990; Milner et al., 1997)), and Appel and MacQueen had released a new high-quality compiler for it (Appel and MacQueen, 1987). 2.3 The birth of Haskell By 1987, the situation was akin to a supercooled solution—all that was needed was a random event to precipitate crystallisation. That event happened in the fall of ’87, when Peyton Jones stopped at Yale to see Hudak on his way to the 1987 Functional Programming and Computer Architecture Conference (FPCA) in Portland, Oregon. After discussing the situation, Peyton Jones and Hudak decided to initiate a meeting during FPCA, to garner interest in designing a new, common functional language. Wadler also stopped at Yale on the way to FPCA, and also endorsed the idea of a meeting. The FPCA meeting thus marked the beginning of the Haskell design process, although we had no name for the language and very few technical discussions or design decisions occurred. In fact, a key point that came out of that meeting was that the easiest way to move forward was to begin with an existing language, and evolve mented by David Turner using SK combinator reduction. While SASL and KRC were untyped, Miranda added strong polymorphic typing and type inference, ideas that had proven very successful in ML. • Lazy ML (LML), pioneered at Chalmers by Augustsson and Johnsson, and taken up at University College London by Peyton Jones. This effort included the influential development of the Gmachine, which showed that one could compile lazy functional programs to rather efficient code (Johnsson, 1984; Augustsson, 1984). (Although it is obvious in retrospect, we had become it in whatever direction suited us. Of all the lazy languages under development, David Turner’s Miranda was by far the most mature. It was pure, well designed, fulfilled many of our goals, had a robust implementation as a product of Turner’s company, Research Software Ltd, and was running at 120 sites. Turner was not present at the meeting, so we concluded that the first action item of the committee would be to ask Turner if he would allow us to adopt Miranda as the starting point for our new language. After a brief and cordial interchange, Turner declined. His goals were different from ours. We wanted a language that could be used, among other purposes, for research into language features; in particular, we sought the freedom for anyone to extend or modify the language, and to build and distribute an implementation. Turner, by contrast, was strongly committed to maintaining a single language standard, with complete portability of programs within the Miranda community. He did not want there to be multiple dialects of Miranda in circulation and asked that we make our new language sufficiently distinct from Miranda that the two would not be confused. Turner also declined an invitation to join the new design committee. For better or worse, this was an important fork in the road. Although it meant that we had to work through all the minutiae of a new language design, rather than starting from an already welldeveloped basis, it allowed us the freedom to contemplate more radical approaches to many aspects of the language design. For example, if we had started from Miranda it seems unlikely that we would have developed type classes (see Section 6.1). Nevertheless, Haskell owes a considerable debt to Miranda, both for general inspiration and specific language elements that we freely adopted where they fitted into our emerging design. We discuss the relationship between Haskell and Miranda further in Section 3.8. Once we knew for sure that Turner would not allow us to use Miranda, an insanely active email discussion quickly ensued, using the mailing list fplangc@cs.ucl.ac.uk, hosted at the University College London, where Peyton Jones was a faculty member. The email list name came from the fact that originally we called ourselves the “FPLang Committee,” since we had no name for the language. It wasn’t until after we named the language (Section 2.4) that we started calling ourselves the “Haskell Committee.” 2.4 The first meetings The Yale Meeting The first physical meeting (after the impromptu FPCA meeting) was held at Yale, January 9–12, 1988, where Hudak was an Associate Professor. The first order of business was to establish the following goals for the language: usable as a basis for further language research. 5. It should be based on ideas that enjoy a wide consensus. 6. It should reduce unnecessary diversity in functional programming languages. More specifically, we initially agreed to base it on an existing language, namely OL. The last two goals reflected the fact that we intended the language to be quite conservative, rather than to break new ground. Although matters turned out rather differently, we intended to do little more than embody the current consensus of ideas and to unite our disparate groups behind a single design. As we shall see, not all of these goals were realised. We abandoned the idea of basing Haskell explicitly on OL very early; we violated the goal of embodying only well-tried ideas, notably by the inclusion of type classes; and we never developed a formal semantics. We discuss the way in which these changes took place in Section 3. Directly from the minutes of the meeting, here is the committee process that we agreed upon: 1. Decide topics we want to discuss, and assign “lead person” to each topic. 2. Lead person begins discussion by summarising the issues for his topic. • In particular, begin with a description of how OL does it. • OL will be the default if no clearly better solution exists. 3. We should encourage breaks, side discussions, and literature research if necessary. 4. Some issues will not be resolved! But in such cases we should establish action items for their eventual resolution. 5. It may seem silly, but we should not adjourn this meeting until at least one thing is resolved: a name for the language! 6. Attitude will be important: a spirit of cooperation and compromise. We return later to further discussion of the committee design process, in Section 3.5. A list of all people who served on the Haskell Committee appears in Section 14. Choosing a Name The fifth item above was important, since a small but important moment in any language’s evolution is the moment it is named. At the Yale meeting we used the following process (suggested by Wadler) for choosing the name. Anyone could propose one or more names for the language, which were all written on a blackboard. At the end of this process, the following names appeared: Semla, Haskell, Vivaldi, Mozart, CFL (Common Functional Language), Funl 88, Semlor, Candle (Common Applicative Notation for Denoting Lambda Expressions), Fun, David, Nice, Light, ML Nouveau (or Miranda Nouveau, or LML Nouveau, or ...), Mirabelle, Concord, LL, Slim, Meet, Leval, Curry, Frege, Peano, Ease, Portland, and Haskell B Curry. After considerable discussion about the various names, each person was then free to cross out a name that he disliked. When we were done, there was one name left. That name was “Curry,” in honour of the mathematician and logician Haskell B. Curry, whose work had led, variously and indirectly, to our presence in that room. That night, two of us realised that we would be left with a lot of curry puns (aside from the spice, and the thought of currying favour, the one that truly horrified us was Tim Curry—TIM was Jon Fairbairn’s abstract machine, and Tim Curry was famous for playing the lead in the Rocky Horror Picture Show). So the next day, after some further discussion, we settled on “Haskell” as the name for the new language. Only later did we realise that this was too easily confused with Pascal or Hassle! Hudak and Wise were asked to write to Curry’s widow, Virginia Curry, to ask if she would mind our naming the language after her husband. Hudak later visited Mrs. Curry at her home and listened to stories about people who had stayed there (such as Church and Kleene). Mrs. Curry came to his talk (which was about Haskell, of course) at Penn State, and although she didn’t understand a word of what he was saying, she was very gracious. Her parting remark was “You know, Haskell actually never liked the name Haskell.” The Glasgow Meeting Email discussions continued fervently after the Yale Meeting, but it took a second meeting to resolve many of the open issues. That meeting was held April 6–9, 1988 at the University of Glasgow, whose functional programming group was beginning a period of rapid growth. It was at this meeting that many key decisions were made. It was also agreed at this meeting that Hudak and Wadler would be the editors of the first Haskell Report. The name of the report, “Report on the Programming Language Haskell, A Non-strict, Purely Functional Language,” was inspired in part by the “Report on the Algorithmic Language Scheme,” which in turn was modelled after the “Report on the Algorithmic Language Algol.” IFIP WG2.8 Meetings The ’80s were an exciting time to be doing functional programming research. One indication of that excitement was the establishment, due largely to the effort of John Williams (long-time collaborator with John Backus at IBM Almaden), of IFIP Working Group 2.8 on Functional Programming. This not only helped to bring legitimacy to the field, it also provided a convenient venue for talking about Haskell and for piggy-backing Haskell Committee meetings before or after WG2.8 meetings. The first two WG2.8 meetings were held in Glasgow, Scotland, July 11– 15, 1988, and in Mystic, CT, USA, May 1–5, 1989 (Mystic is about 30 minutes from Yale). Figure 1 was taken at the 1992 meeting of WG2.8 in Oxford. 2.5 Refining the design After the initial flurry of face-to-face meetings, there followed fifteen years of detailed language design and development, coordinated entirely by electronic mail. Here is a brief time-line of how Haskell developed: September 1987. Initial meeting at FPCA, Portland, Oregon. December 1987. Subgroup meeting at University College London. January 1988. A multi-day meeting at Yale University. April 1988. A multi-day meeting at the University of Glasgow. July 1988. The first IFIP WG2.8 meeting, in Glasgow. May 1989. The second IFIP WG2.8 meeting, in Mystic, CT. 1 April 1990. The Haskell version 1.0 Report was published (125 pages), edited by Hudak and Wadler. At the same time, the Haskell mailing list was started, open to all. The closed fplangc mailing list continued for committee discussions, but increasingly debate took place on the public Haskell mailing list. Members of the committee became increasingly uncomfortable with the “us-and-them” overtones of having both public and private mailing lists, and by April 1991 the fplangc list fell into disuse. All further discussion about Haskell took place in public, but decisions were still made by the committee. August 1991. The Haskell version 1.1 Report was published (153 pages), edited by Hudak, Peyton Jones, and Wadler. This was mainly a “tidy-up” release, but it included let expressions and operator sections for the first time. March 1992. The Haskell version 1.2 Report was published (164 pages), edited by Hudak, Peyton Jones, and Wadler, introducing only minor changes to Haskell 1.1. Two months later, in May 1992, it appeared in SIGPLAN Notices, accompanied by a “Gentle introduction to Haskell” written by Hudak and Fasel. We are very grateful to the SIGPLAN chair Stu Feldman, and the Notices editor Dick Wexelblat, for their willingness to publish such an enormous document. It gave Haskell both visibility and credibility. 1994. Haskell gained Internet presence when John Peterson registered the haskell.org domain name and set up a server and website at Yale. (Hudak’s group at Yale continues to maintain the haskell.org server to this day.) May 1996. The Haskell version 1.3 Report was published, edited by Hammond and Peterson. In terms of technical changes, Haskell 1.3 was the most significant release of Haskell after 1.0. In particular: • A Library Report was added, reflecting the fact that pro- grams can hardly be portable unless they can rely on standard libraries. • Monadic I/O made its first appearance, including “do” syn- tax (Section 7), and the I/O semantics in the Appendix was dropped. • Type classes were generalised to higher kinds—so-called “constructor classes” (see Section 6). • Algebraic data types were extended in several ways: new- types, strictness annotations, and named fields. April 1997. The Haskell version 1.4 report was published (139 + 73 pages), edited by Peterson and Hammond. This was a tidy-up of the 1.3 report; the only significant change is that list comprehensions were generalised to arbitrary monads, a decision that was reversed two years later. February 1999 The Haskell 98 Report: Language and Libraries was published (150 + 89 pages), edited by Peyton Jones and Hughes. As we describe in Section 3.7, this was a very significant moment because it represented a commitment to stability. List comprehensions reverted to just lists. 1999–2002 In 1999 the Haskell Committee per se ceased to exist. Peyton Jones took on sole editorship, with the intention of collecting and fixing typographical errors. Decisions were no longer limited to a small committee; now anyone reading the Haskell mailing list could participate. However, as Haskell became more widely used (partly because of the existence of the Haskell 98 standard), many small flaws emerged in the language design, and many ambiguities in the Report were discovered. Peyton Jones’s role evolved to that of Benign Dictator of Linguistic Minutiae. December 2002 The Revised Haskell 98 Report: Language and Libraries was published (260 pages), edited by Peyton Jones. Cambridge University Press generously published the Report as a book, while agreeing that the entire text could still be available online and be freely usable in that form by anyone. Their flexibility in agreeing to publish a book under such unusual terms was extraordinarily helpful to the Haskell community, and defused a tricky debate about freedom and intellectual property. It is remarkable that it took four years from the first publication of Haskell 98 to “shake down” the specification, even though Haskell was already at least eight years old when Haskell 98 came out. Language design is a slow process! Figure 2 gives the Haskell time-line in graphical form1 . Many of the implementations, libraries, and tools mentioned in the figure are discussed later in the paper. 1 This figure was kindly prepared by Bernie Pope and Don Stewart. Paul Hudak. Corrado Boehm. Most of them are detailed on the Haskell website at haskell. and that he was not only resigning from the committee. Lennart Augustsson. and also released on April 1. Several of the responses to Partain’s well-written hoax were equally funny. Nikhil wrote: “Recently. Richard (Corky) Cartwright Figure 1. It was mostly an accident that it appeared on April Fool’s Day—a date had to be chosen. Chris Hankin (moustache). but the seed had been planted for many more to follow. and here is a summary of the more interesting ones: 1. Dick Kieburtz. he sent an email message to the Haskell Committee saying that it was all too much for him. Geoffrey Burn. and David Wise immediately phoned Hudak to plead with him to reconsider his decision. in which he wrote: “Recently Haskell was used in an experiment here at Yale in the Medical School. Geraint Jones (glasses). Warren Burton. 1992 2. Luca Cardelli. Many members of the committee bought into the story. 1990.Back row Next row Front standing row Seated On floor John Launchbury. Richard Bird. a local hospital suffered many malpractice suits due to faulty software in their X-ray machine. but the release did lead to a number of subsequent April Fool’s jokes. Rex Page. John Hughes. John Williams. Its technically detailed and very serious tone made it highly believable. Sebastian Hunt. Oxford. Alex Aiken (mostly hidden). One was by Hudak. Mary Sheeran. they decided to rewrite the code in Haskell for more reliability.6 Was Haskell a joke? The first edition of the Haskell Report was published on April 1. Of course Haskell was no joke.org/humor. . On April 1 a year or two later. What got it all started was a rather frantic year of Haskell development in which Hudak’s role as editor of the Report was especially stressful. Joe Fasel. Patrick O’Keefe (glasses). Mrs Williams. On April 1. Jack Dennis (beard). In the six months that it was in operation. So. Will Partain wrote a brilliant announcement about an extension to Haskell called Haskerl that combined the best ideas in Haskell with the best ideas in Perl. Neil Jones. 1993.8. Chris Clack. Dave MacQueen (beard). he was also quitting Yale to pursue a career in music. the hospital estimates that probably a dozen lives were saved because the program was far more robust than the C program. Colin Runciman (moustache) Philip Wadler (big beard).” In response to this. Of course it was just an April Fool’s joke. It was used to replace a C program that controlled a heart-lung machine. John O’Donnell. which often crashed and killed the patients. Joe Stoy (red shirt). David Turner (red tie) Mario Coppo. Members and guests of IFIP Working Group 2. David Lester Karen MacQueen. Dorothy Peyton Jones Simon Peyton Jones. and the release was close enough to April 1 to justify using that date. Mrs Boehm. 2. Figure 2. Haskell timeline . Lacking side effects. SML used overloading for the built-in numeric operators. called num. says that LINQ is directly inspired by the monad comprehensions in Haskell.) Laziness has its costs. As we discuss in Section 10. Dually. Miranda avoided this problem by having only a single numeric type.” programs. John Peterson wrote a bogus press release in which it was announced that because Sun Microsystems had sued Microsoft over the use of Java. If one wanted to define. Once we were committed to a lazy language. This is convenient and flexible but sacrifices some of the advantages of static typing – for example. Purity is a big bet. Whether a pure language (with monadic effects) is ultimately the best way to write programs is still an open question. f really is a function in the mathematical sense: every call (f 3) will return the same value. Ironically. On April 1. But many of the features in C# were pioneered by Haskell and other functional languages. 3. Haskell is a language with a non-strict semantics. say. As a result. 1988). as we discuss in more detail in Section 7. but it certainly is a radical and elegant attack on the challenge of programming. and its attractions were becoming better understood. a principal designer of LINQ.. (Hughes first presented it as his interview talk when applying for a position at Oxford in 1984.” so we follow popular usage by describing Haskell as lazy. 3. strict languages have dabbled with laziness (Wadler et al. Type classes were introduced to the Haskell Committee by Wadler in a message sent to the fplangc mailing list dated 24 February 1988. but rather that laziness kept us pure. therefore. it is perhaps type classes that are now regarded as Haskell’s most distinctive characteristic. By the mid-eighties. Erik Meijer. the strict/lazy divide has become much less an all-or-nothing decision. Hughes’s paper “Why functional programming matters” captured these in an influential manifesto for lazy programming. Microsoft did indeed respond to Java by backing another language. and processes In this section we reflect on the principles that underlay our thinking. 1998. with pervasive consequences. was able to unravel Enron’s seedy and shaky financial network. whether functional or not. Nevertheless the term “laziness” is more pungent and evocative than “non-strict. which we now regard as one of Haskell’s main contributions to the world. if a function f has type Int -> Int you can be sure that f will not read or write any mutable variables. an event that Peterson knew nothing about at the time. and it circulated informally before finally being published in 1989 (Hughes. The converse is not true. square in terms of multiplication.” 3. Why? Because in a call-by-value language. a pure language.” The article describes how Peyton Jones. but it was C# rather than Haskell. the prevalence of these space leaks led us to add some strict features to Haskell. and coincided with the early stages of Haskell’s design. This made it hard to define new numeric operations in terms of old. but it is notable that in practice most pure programming languages are also lazy. Unrestricted side effects are undoubtedly very convenient. type classes were motivated by the narrow problem of overloading of numeric operators and equality. 3. and it was that combination of power and beauty that motivated the designers. nor will it perform any input/output. Subsequent events have made Peterson’s jape even more prophetic. the temptation to allow unrestricted side effects inside a “function” is almost irresistible.3 Haskell has type classes Although laziness was what brought Haskell’s designers together. Technically. perhaps the biggest single benefit of laziness is not laziness per se.“Malpractice suits have now dropped to zero. so that no term is evaluated twice. This cost is a significant but constant factor. 2002. These problems had been solved in completely different ways in Miranda and SML. principles.1 Haskell is lazy Laziness was undoubtedly the single theme that united the various groups that contributed to Haskell’s design. Microsoft had decided to adopt Haskell as its primary software development language. Haskell’s input/output was initially painfully clumsy. Initially. a pure one was inescapable. So in the end. As a result.4) is type-correct. say integers and floats. In short. 3. Goals.. Necessity being the mother of invention. The reason is that they haven’t taken any new X-rays (‘we’re still compiling the Standard Prelude’). this embarrassment ultimately led to the invention of monadic I/O. A much more important problem is this: it is very hard for even experienced programmers to predict the space behaviour of lazy . and processes that led to them. On April 1. Peyton Jones is quoted as saying: “It’s really very simple. Haskell is. not long after this press release.” in contrast with the call-by-value mechanism of languages like Lisp and ML. and was understood at the time Haskell was designed. Peyton Jones announced his move from Glasgow to Microsoft Research in Cambridge. using his research on formally valuating financial contracts using Haskell (Peyton Jones et al. so that some terms may not be evaluated. and thereby motivated a great deal of productive work on monads and encapsulated state.2 Haskell is pure An immediate consequence of laziness is that evaluation order is demand-driven. we have bottom. lazy evaluation is simply one implementation technique for a non-strict language. For example. the big choices that we made. therefore. with automatic conversion of int to float when required. If I write a contract that says its value is derived from a stock price and the worth of the stock depends solely on the contract. then one had to define a different version for each numeric type. and the practitioners of each recognise the value of the other. Enron had created a complicated series of contracts that ultimately had no value at all.2. Peterson wrote another bogus but entertaining and plausible article entitled “Computer Scientist Gets to the ‘Bottom’ of Financial Scandal. resolved at the point of call. such as seq and strict data types (as had been done in SASL and Miranda earlier). which was a union of unbounded-size integers and double-precision floats. When referring specifically to implementation techniques we will use the term “call-by-need. Call-by-need is usually less efficient than call-by-value. 1989). even though in most languages the modulus operator mod only makes sense for integer moduli. it becomes more or less impossible to reliably perform input/output or other side effects as the result of a function call. and there can be much more than a constant factor at stake. In retrospect. because of the extra bookkeeping required to delay evaluation until a term is required. in Miranda the expression (mod 8 3. there was almost a decade of experience of lazy functional programming in practice. 4. 2000). notably polymorphic types and LINQ (Language Integrated Query). and to overwrite a term with its value. which was a source of considerable embarrassment. thanks to Haskell’s purity. there may be more proofs of correctness properties and program transfor mations in Haskell than any other language. Indeed. In a memorable letter to the Haskell Committee. even if we didn’t write it all down formally. But that is the fact of the matter. Why not? Certainly not because of a conscious choice by the Haskell Committee. we always found it a little hard to admit that a language as principled as Haskell aspires to be has no formal definition. How did this come about? In reflecting on this question we identified several factors that contributed: • The initial situation. and the Definition of Standard ML (Milner and Tofte. Subsequent papers describe a good part of Haskell. (under the right conditions) the conclusions drawn are valid even so (Danielsson et al. Rather. a violation of the abstraction barrier). in practice the static semantics of Haskell (i. No one undertook the work. it is instructive to reflect on the somewhat accidental nature of such a fundamental and farreaching aspect of the Haskell language. • Mathematical elegance was extremely important to us.4 Haskell has no formal semantics One of our explicit goals was to produce a language that had a formally defined type system and semantics. recursively defined values.” Yet. 1989. and it is not without its advantages. We elaborate on some of the details and consequences of the typeclass approach in Section 6. once a program type-checks. but the latter is never itself formally specified. Combined with appropriate induction (and co-induction) principles. 3. the dynamic semantics of Haskell is relatively simple. one would have to pass in an equality-testing function as an extra argument. It had far-reaching consequences that dramatically exceeded our initial reason for adopting it in the first place. systematic and modular than any of the alternatives. especially its type system (Faxen. the semantics of its type system) is where most of the complexity lies. for all its shortcomings Haskell is often described as “beautiful” or “elegant”—even “cool”—which are hardly words one would usually associate with committee designs. but it was he who had the key insight that overloading should be reflected in the type of the function. Tony Hoare wistfully remarked that Haskell was “probably doomed to succeed. 1988). (To define this function. Little did we know what we were letting ourselves in for! Wadler conceived of type classes in a conversation with Joe Fasel after one of the Haskell meetings. Our individual goals were well aligned. with little debate. Wadler misunderstood what Fasel had in mind. Milner et al. and type classes were born! Wadler’s student Steven Blott helped to formulate the type rules. vision of what we were trying to achieve. in direct contradiction to our implicit goal of embodying a tried-and-tested consensus. because the costs of producing fully formal specifications of any proposed change are heavy.SML also originally used overloading for equality. In particular. function calls. .) Miranda simply gave equality a polymorphic type. Indeed. A similar idea was formulated independently by Stefan Kaes (Kaes. It was adopted. We were strongly motivated by mathematical techniques in programming language design. so. and sometimes results in small differences between different compilers. it is a powerful reasoning method in practice. introducing a notion of a “class” of types that possessed a given set of operations (such as numeric operations or equality). and conventional wisdom would say that a committee language will be full of warts and awkward compromises. 1991). and part of the training that every good Haskell programmer receives. not function types or abstract types).and η-reduction). Such reasoning was especially useful in reasoning about “bottom” (which denotes error or non-termination and occurs frequently in a lazy language in pattern matching. there is little concern about the static semantics. and proved the system sound. yet amazingly enough. Blott. The type-class solution was attractive to us because it seemed more principled.. It was a happy coincidence of timing that Wadler and Blott happened to produce this key idea at just the moment when the language design was still in flux. 2006)! Nevertheless.e. but introduced special “equality type variables” (written ’’a instead of ’a) that ranged only over types for which equality was defined (that is. and we began with a strong shared. it just never seemed to be the most urgent task. and coherent for his doctoral dissertation (Wadler and Blott. We were inspired by our brothers and sisters in the ML community. complete. A later version of SML included polymorphic equality. 1997) had a place of honour on our shelves. we resorted to denotational semantics to discuss design options. it was adopted by acclamation.5 Haskell is a committee language Haskell is a language designed by committee. was very favourable. Type classes provided a uniform solution to both of these problems.. formal semantics or no formal semantics. 1990. Nevertheless. Fasel had in mind a different idea. They generalised the notion of equality type variables from SML. The theoretical basis for equational reasoning derives from the standard reduction rules in the lambda calculus (β. 2002). along with those for primitive operations (so-called δ-rules). as if we all knew what the semantics of Haskell should be. and so on). Perhaps more importantly. We all needed Haskell. Many debates were punctuated by cries of “does it have a compositional semantics?” or “what does the domain look like?” This semi-formal approach certainly made it more difficult for ad hoc language features to creep in. we never achieved this goal. As a result. The consequences of not having a formal static semantics is perhaps a challenge for compiler writers. But for the user. as it turns out. despite its lack of a formally specified semantics! Such proofs usually ignore the fact that some of the basic steps used—such as η-reduction in Haskell—would not actually preserve a fully formal semantics even if there was one. so one could not define the polymorphic function that took a list and a value and returned true if the value was equal to some element of the list. if somewhat fuzzy. Equational reasoning in Haskell is part of the culture. Meanwhile. the dynamic semantics of Haskell is captured very elegantly for the average programmer through “equational reasoning”—much simpler to apply than a formal denotational or operational semantics. the absence of a formal language definition does allow the language to evolve more easily. and by themselves discourage changes. Parts of the language (such as the semantics of pattern matching) are defined by a translation into a small “core language”. 3. and little need to reason formally about it. at many times during the design of Haskell. but there is no one document that describes the whole thing. described above in Section 2. The Haskell Report follows the usual tradition of language definitions: it uses carefully worded English language. and in practice the language users and implementers seemed to manage perfectly well without it. but this made equality well defined on function types (it raised an error at run time) and on abstract types (it compared their underlying representation for equality. Fortunately. who had shown that it was possible to give a complete formal definition of a language. despite its rather radical and unproven nature. the committee wholeheartedly embraced superficial complexity. Many matters 3. The first method is far more difficult. At the time he wrote. We are urged to design for ‘big’ programs. of users. and fully justified.” In the end. as Haskell started to become popular. He also was the custodian of the Report.” and language implementers committed themselves to continuing to support Haskell 98 indefinitely. and was responsible for embodying the group’s conclusion in it.” However. a rich collection of libraries. “There are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies. the emphasis was firmly on evolution. user groups. We are urged to return to the mind-numbing syntax of Lisp (a language that held back the pursuit of functional programming for over a decade).” On the other hand. Because much of what is proposed is half-baked. The fact that Haskell has. We hope that extensions or variants of the language may appear. managed the tension between these two strands of development is perhaps due to an accidental virtue: Haskell has not become too successful. the language has simultaneously served as a highly effective laboratory in which to explore advanced language design ideas. we adopted an idea that complicated everything but was just too good to miss.6). We summarise these developments in Section 8.. The Syntax Czar was our mechanism for bringing such debates to an end.2) and the loss of parametricity. for both teaching and real applications. Haskell has continued to evolve apace. The reader will have to judge the resulting balance. the committee evolved a simple and obvious solution: we simply named a particular instance of the language “Haskell 98. and the language becomes bogged down in standards. In just one case. We made no attempt to discourage variants of Haskell other than Haskell 98. on the contrary. the result is likely to be a mess. In contrast. exceptions. and see which ones survived. while its free-spirited children are free to term themselves “Haskell. we eschewed deep complexity. The Editor could not make binding decisions. The preface of every version of the Haskell Report states: “The committee hopes that Haskell can serve as a basis for future research in language design.2) and extensible records (Section 5. despite the cost in expressiveness—for example. we also really wanted Haskell to be a useful language. we explicitly encouraged the further development of the language. including numerous proposals for language features. it has had to grapple with the challenges of scale and complexity with which any real-world language is faced. • At each moment in the design process. 1997). But rather than having a committee to choose and bless particular ones.3). while using the language for teaching and applications requires stability. ease of proof. much to our loss. It did.6 Haskell is a big language A major source of tension both within and between members of the committee was the competition between beauty and utility. • At the same time. retrogressive. In response to this pressure. concurrency. in two quite different ways. 3. and the other way is to make it so complicated that there are no obvious deficiencies. lead Richard Bird to resign from the committee in mid1988. thus far. it seemed to us that the best thing to do was to get out of the way.4). . In short. and legacy issues. such as a foreign-function interface. in contradiction to our original inclinations (Section 4. incorporating experimental features. the Haskell community is small enough. is that you get too many users. It was also a huge relief to be able to call the task finished and to file our enormous mail archives safely away. this dilemma never led to open warfare. as Haskell has become a mature language with thousands that were discussed extensively by email were only resolved at one of these meetings. Although very real. because constructs that are ‘aesthetic’ for small programs will lose their attractiveness when the scale is increased. but was responsible for driving debates to a conclusion. as Hoare so memorably put it. • At each moment in the design process. We are urged to allow large where-clauses with deeply nested structures. let a thousand flowers bloom. • First. that it usually not only absorbs language changes but positively welcomes them: it’s like throwing red meat to hyenas. it seems we are urged to throw away the one feature of functional programming that distinguishes it from the conventional kind and may ensure its survival into the 21st century: susceptibility to formal proof and construction. but even in retrospect we feel that the elegant core of purely functional programming has survived remarkably unscathed. Everyone always says that far too much time is devoted to discussing syntax—but many of the same people will fight to the death for their preferred symbol for lambda. and agile enough. and even baroque. the syntax supports many ways of expressing the same thing. and surjective pairing due to seq (see Section 10. We discuss a number of examples in Section 6. elegant language. For example. one member of the committee (not necessarily the Editor) served as the Syntax Czar. “On the evidence of much of the material and comments submitted to fplang. and questions about what our plans were. “I want to write a book about Haskell. These ideas surface both in papers— witness the number of research papers that take Haskell as their base language—and in Haskell implementations. currying. and much else besides. There was (and continues to be) a tremendous amount of innovation and activity in the Haskell community. We regarded Haskell 98 as a reasonably conservative design. there is a severe danger that the principles of simplicity. especially in the area of type systems and meta-programming.• We held several multi-day face-to-face meetings. In other places. we started to get complaints about changes in the language. example. but I can’t do that if the language keeps changing” is a typical. The (informal) standardisation of Haskell 98 was an important turning point for another reason: it was the moment that the Haskell Committee disbanded.7 Haskell and Haskell 98 The goal of using Haskell for research demands evolution. type classes.” In the absence of a language committee. we avoided parametrised modules (Section 8. however. by that time multi-parameter type classes were being widely used. for example. but Haskell 98 only has single-parameter type classes (Peyton Jones et al. On the one hand we passionately wanted to design a simple. At the beginning. and elegance will be overthrown. The nomenclature encourages the idea that “Haskell 98” is a stable variant of the language. such as that of Java. The trouble with runaway success. If we had to pick places where real compromises were made. That has led to a range of practically oriented features and resources. they would be the monomorphism restriction (see Section 6. The Czar was empowered to make binding decisions about syntactic matters (only). one or two members of the committee served as The Editor. the Haskell Committee worked very hard—meaning it spent endless hours—on designing (and arguing about) the syntax of Haskell. This is at first sight surprising. added monadic I/O (Section 7. we found that syntax design could be not only fun. By the mid-1990s. capitalisation of type constructors as well as data constructors. As required to safeguard his trademark. It wasn’t so much that we were boldly bucking the trend. and the absence of a Windows version increasingly worked against it. as this paper describes. and semantics to be “where the action was. Hugs gave Haskell a fast interactive interface similar to that which Research Software supplied for Miranda (and Hugs ran under both Unix and Windows).. being the user interface of a language. academic licences were cheaper than commercial ones. and where clauses. This led to friction between Oxford University and Research Software over the possible distribution of Wadler’s language Orwell. Although Miranda initially had the better implementation. but rather that. but neither were free. And Haskell had important new ideas. the formal semantics of programming languages. Syntax The phrase “syntax is not important” is often heard in discussions about programming languages. the use of Miranda in several large projects (Major and Turcotte. from university-based implementation efforts? Would the additional constraints of an existing design have precluded the creative and sometimes anarchic ferment that has characterised the Haskell community? How different could history have been? Miranda was certainly no failure. the last textbook in English to use Miranda was published in 1995.4). writing pair types as (num. Turner conceived Miranda to carry lazy functional programming. Haskell implementations improved more rapidly—it was hard for a small company to keep up. Miranda ran only under Unix. Miranda is still in use today: it is still taught in some institutions. Miranda’s proprietary status did not enjoy universal support in the academic community. and the details of the layout rule. a nice interactive user interface. including: placement of guards on the left of “=” in a definition. with Hindley-Milner typing (Milner. etc. and the naming of many standard functions. and emphasising the importance of. In response. Haskell’s design was. and a variety of textbooks (four altogether. Miranda has largely been displaced by Haskell. of which the first was particularly influential (Bird and Wadler. Turner’s efforts made a permanent and valuable contribution to the development of interest in the subject in general. and incorporated many innovations to the core Hindley-Milner type system. We also found that syntax. Moreover. **.8 Haskell and Miranda At the time Haskell was born. but an obsession. by far the most mature and widely used non-strict functional language was Miranda. elegant language design with a wellsupported implementation. either commercially or scientifically. while Haskell was produced by a group of universities with public funds and available free to academic and commercial users alike. especially pattern matching. which was adopted in many universities and undoubtedly helped encourage the spread of functional programming in university curricula. In the end. At the time. how user-defined operators are distinguished (x $op y in Miranda vs. Turner raised no objections to Haskell. As a result. for better or worse. Examples of the latter include: the equational style of function definitions. paving the way for Haskell a few years later. static typing) and in their syntactic look and feel.bool) rather than the int*bool of ML.2). because it can be hard to displace a well-established incumbent. Haskell was a much more practical choice for real programming than Miranda. Miranda’s licence conditions at that time required the licence holder to seek permission before distributing an implementation of Miranda or a language whose design was substantially copied from Miranda. there are many similarities between the two languages. or that the phrase “syntax is important” was a new retro-phrase that became part of our discourse. 1988)). Research Software Limited. different syntax for data type declarations. More fundamentally. as some urged him to do? Would the mid ’80s have seen a standard lazy functional language. There is no doubt that some of our most heated debates were over syntax. capitalisation of data constructors. while Moore’s law made Haskell’s slow compilers acceptably fast and the code they generated even faster. Beyond academia. and by the early 1990s Miranda was installed (although not necessarily taught) at 250 universities and around 50 companies in 20 countries. the notation for lists and list comprehensions. purely functional language with a Hindley-Milner type system and algebraic data types—and that was precisely the kind of language that Haskell aspired to be. x ‘op‘ y in Haskell). Turner always footnoted the first occurrence of Miranda in his papers to state it was a trademark of Research Software Limited. 1991. Many programming language researchers considered syntax to be the trivial part of language design. lexically distinguished user-defined infix operators. rather than suffer. 1993) demonstrated the industrial potential of a lazy functional language. Miranda had a well supported implementation. into the commercial domain. laziness. algebraic types. Page and Moe. therefore. First released in 1985. guards. It contributed a small. the use of a layout rule. and the implementations for Linux and Solaris (now free) continue to be downloaded. rather than Miranda’s *. some early Haskell presentations included a footnote ”Haskell is not a trademark”. which was a relatively new field in itself. one of the most pleasing consequences of our effort has been comments heard . Miranda was a product of David Turner’s company. using the module system instead (Section 5.3). was it worth it? Although not an explicit goal. partly because there was so much interest at the time in developing the theory behind. The tale raises a tantalising “what if” question. which he founded in 1983. strongly influenced by Miranda. It was rapidly taken up by both academic and commercial licences. despite Haskell’s clear debt to Miranda. Part II Technical Contributions 4. One indication of that is the publication of textbooks: while Haskell books continue to appear regularly.3. However. Today. especially type classes (Section 6). could become very personal. What if David Turner had placed Miranda in the public domain. Haskell did not adopt Miranda’s abstract data types. both in their basic approach (purity. with subsequent releases in 1987 and 1989. Miranda was the fullest expression of a non-strict. use of alphanumeric identifiers for type variables. There are notable differences from Miranda too.” Despite this. not semantics. In fact. but the economics worked against Miranda: Research Software was a small company seeking a return on its capital. higher order. 1978). a richer syntax for expressions (Section 4. supported by the research community and with a company backing it up? Could Research Software have found a business model that enabled it to benefit. in the 1980s this phrase was heard more often than it is today. by themselves. the function is viewed as taking a single argument. one then has a first-class functional value. including declarations of precedence and associativity. 1986).2. the left argument. this simple rule is not adhered to by @-patterns. including some ideas considered and ultimately rejected. There is still the need to separate declarations of various kinds. Haskell. One advantage of currying is that it is often more compact: f x y contains three fewer lexemes than f(x.1— among others to Bird. Function application is denoted by juxtaposition and associates to the left. but we couldn’t bring ourselves to force users to write something like minus 42 or ~42 for the more conventional -42. Infix operators The Haskell Committee wanted expressions to look as much like mathematics as possible. supports both curried and uncurried definitions: 2 The hyp :: Float -> Float -> Float hyp x y = sqrt (x*x + y*y) hyp :: (Float. which bind more tightly than anything. the following equivalences hold: (+) = \x y -> x+y (x+) = \y -> x+y (+y) = \x -> x+y Being able to partially apply infix operators is consistent with being able to partially apply curried functions. although the idea goes back to Christopher Strachey’s CPL (Barron et al. 3 This is in contrast to the Scheme designers. . Further historical details. Haskell evolved a fairly complex layout rule— complex enough that it was formally specified for the first time in the Haskell 98 Report. Thus f x + g y never needs parentheses. Achieving all this was fairly conventional. This design decision proved to be a good one. first class. who consistently used prefix application of functions and binary operators (for example.3]. The layout rules needed to be very simple. One reason we thought this was important is that we expected people to write programs that generated Haskell programs. like many other languages based on lambda calculus.2]. a problem made apparent by considering the expression f + x. or the right argument—and by surrounding the result in parentheses. or the function + applied to two arguments? The solution to this problem was to use a generalised notion of sections. in practice we expected that programmers would also want to write fairly large function definitions—and it would be a shame if layout got in the way.) Sections Although a commitment to infix operators was made quite early. Currying Following a tradition going back to Frege. in our case by using explicit curly braces and semicolons instead. was chosen to resemble lambda expressions. Nevertheless. so this was a happy solution to our problem. Float) -> Float hyp (x. A section is a partial application of an infix operator to no arguments. This tradition was honed by Moses Sch¨ nfinkel and Haskell Curry and came to be o called currying.2 Functions and function application There are lots of ways to define functions in Haskell—after all. although @-patterns are not used extensively enough to cause major problems. (Sadly. However. a function of two arguments may be represented as a function of one argument that itself returns a function of one argument. to square each number in a list we write map square [1.” For some reason. may be found in Hudak’s Computing Surveys article (Hudak. \x -> exp. However.1 Layout Most imperative languages use a semicolon to separate sequential commands. and thus from day one we bought into the idea that Haskell would have infix operators. Thus. So Haskell’s layout rules are considerably more lenient than Miranda’s in this respect. We were familiar with the idea. after a short adjustment period. Anonymous functions The syntax for anonymous functions. in the form of the “offside rule” from our use of Turner’s languages SASL (Turner. Does this mean the function f applied to two arguments. For example. Prefix operators Haskell has only one prefix operator: arithmetic negation. since the backslash was the closest single ASCII character to the Greek letter λ.3 It was also important to us that infix operators be definable by the user. and Turner. we provided a way for the user to override implicit layout selectively.. So there was considerable concern about the fact that infix operators were not. short function definitions. a notation that first appeared in David Wile’s dissertation (Wile. the notion of sequencing is completely absent. 1973) and was then disseminated via IFIP WG2. but we also defined the following simple relationship between infix application and conventional function application: the former always binds less tightly than the latter. regardless of what infix operator is used.y) = sqrt (x*x + y*y) In the latter. Like Miranda. this was probably a mistake.y). as it contributes to the readability of programs. the dearth of prefix operators makes it easier for readers to parse expressions. 1963). Although we felt that good programming style involved writing small. otherwise users would object. there was also the feeling that all values in Haskell should be “first class”—especially functions. Miranda. in supporting larger function definitions with less enforced indentation. (+ x y)). instead of adopting mathematical convention. We ended up with a design that differed from our most immediate inspiration. who adopted it in his work. many people think that Haskell programs look nice. however. Influenced by these constraints and a desire to “do what the programmer expects”.[3]]. 1989). The Haskell Committee in fact did not want any prefix operators. who introduced it into Miranda. most users find it easy to adopt a programming style that falls within the layout rules. Why is that? In this section we give historical perspectives on many of the syntactic language features that we think contribute to this impression.many times over the years that “Haskell is a pretty language. 1976) and Miranda (Turner. which is a pair of numbers. while to square each number in a list of lists we write map (map square) [[1. Exploiting the physical layout of the program text is a simple and elegant way to avoid syntactic clutter. 1966). same is true of Miranda users. This leads to concise and powerful code. f x y is parsed (f x) y. In a language without side effects. 4. and it was also featured in ISWIM (Landin. 4. and we thought it would be easier to generate explicit separators than layout. “->” was used instead of a period in order to reserve the period for function composition. imperative baggage. and we explored many variations. it is a functional language—but the ways are simple and all fit together in a sensible manner. and rarely resort to overriding them2 . but the feeling of the Haskell Committee was that we should avoid the semicolon and its sequential. For example. For example. replacing rest with filter p xs. as well as the ability to use “words” as infix operators. For example. As an example of overlap. and provided full syntactic support for both styles. and keeping it low was a priority of the Haskell Committee. Different constructs have different nuances. . In contrast. and Miranda) of a single colon : for the list “cons” operator. in the same lexical scope. in the expression style a function is built up by composing expressions together to make bigger expressions.Declaration style filter p [] = [] filter p (x:xs) | p x = x : rest | otherwise = rest where rest = filter p xs -. One might argue that the code would be less cluttered (in both cases) if one eliminated the let or where. Could we design a mechanism to convert an ordinary function into an infix operator? Our simple solution was to enclose a function identifier in backquotes. both guards and conditionals.) Once we had sections. So the question arises. and overlapped when context was sufficient to distinguish their meaning. which in Haskell must start with a colon. since whenever the name Foo appears. and infix data constructors to be mutually exclusive. It is certainly true that the additional syntactic sugar makes the language seem more elaborate. a data constructor. and that we now regard as a strength of the language. x ‘f‘ y is the same as f x y. We wanted the user to have as much freedom as possible. we abandoned the underlying assumption. capitalised names can. but as mentioned earlier the committee decided not to buck convention in its treatment of negation. KRC.” An underlying assumption was that if possible there should be “just one way to do something. where subtract is a predefined function in Haskell. So. both pattern-matching definitions and case expressions—not only in the same program but sometimes in the same function definition. but it is a superficial sort of complexity. but once we had done so. In the end. Vector is the name of the data type. we felt that list membership. So we carefully defined a set of lexemes for each namespace that were orthogonal when they needed to be.4 Declaration style vs. We adopted from Miranda the convention that data constructors are capitalised while variables are not. by the way. and a module. We liked the generality that this afforded. Java has 50. but it is one that the present authors believe was a fine choice. This is a relatively low number (Erlang has 28. was a hotly contested issue (ML does the opposite) and remains controversial to this day. it became clear that there were two different styles in which functional programs could be written: “declaration style” and “expression style”. easily explained by purely syntactic transformations. 4 The example is a little contrived. The expression style dominates in other functional languages. so far as possible. and Scheme. As an example of orthogonality. normal data constructors. a small contingent of the Haskell Committee argued that shadowing of variables should not be allowed. refer to a type constructor. in the end. and real programmers do in practice employ both let and where. we designed normal variables. This “problem” with sections was viewed more as a problem with prefix operators. having both let and where would be redundant and confusing. including the guards). 1968). x $elem xs.) As a final comment. what is the meaning of (-42)? The answer is negative 42! In order to get the function \x-> x-42 one must write either \x-> x-42. For example. say. 4. Also.Expression style filter = \p -> \xs -> case xs of [] -> [] (x:xs) -> let rest = filter p xs in if (p x) then x : rest else rest The declaration style attempts. and in particular a way to convert infix operators into ordinary functional values. Haskell allowed shadowing. we then asked ourselves why we couldn’t go the other way. each of which uses pattern matching and/or guards to identify the cases it covers. Haskell has 21 reserved keywords that cannot be used as names for values or types. Miranda used a similar notation. we tried hard to avoid keywords (such as “as”) that might otherwise be useful variable names. to define a function by multiple equations. and the name of the single data constructor of that type. namely negation. ML. such as Lisp. or (subtract 42). was more readable when written as x ‘elem‘ xs rather than elem x xs. for example. C++ has 63—and Miranda has only 10). while avoiding any form of ambiguity. because introducing a shadowed name might accidentally capture a variable bound in an outer scope. it is clear from context to which entity it is referring. it is quite common to declare a single-constructor data type like this: data Vector = Vector Float Float Here. This may seem like a classic committee decision. taken from Art Evans’ PAL (Evans. But outlawing shadowing is inconsistent with alpha renaming—it means that you must know the bound names of the inner scope in order to choose a name for use in an outer scope. For example. here is the filter function written in both styles4 : filter :: (a -> Bool) -> [a] -> [a] -. (The choice of “:” for cons and “::” for type signatures. infix operators. OCaml has 48.(Sections did introduce one problem though: Recall that Haskell has only one prefix operator. we engaged in furious debate about which style was “better. Each style is characterised by a set of syntactic constructs: Declaration style where clause Function arguments on left hand side Pattern matching in function definitions Guards on function definitions Expression-style let expression Lambda abstraction case expression if expression 4. The declaration style was heavily emphasised in Turner’s languages KRC (which introduced guards for the first time) and Miranda (which introduced a where clause scoping over several guarded equations. It took some while to identify the stylistic choice as we have done here. expression style As our discussions evolved. and added a similar convention for infix constructors.3 Namespaces and keywords Namespaces were a point of considerable discussion in the Haskell Committee. The latter convention was chosen for consistency with our use (adopted from SASL.” so that. Lines that were not comments were indicated by a greater-than sign > to the left. and there are plans to add them to Javascript as array comprehensions. the Haskell Committee did not buy into the idea that programmers should write (or feel forced to write) short function definitions.6 Comments Comments provoked much discussion among the committee. Still. proposed reversing the usual comment convention: lines of code.1 Algebraic types Here is a simple declaration of an algebraic data type and a function accepting an argument of the type that illustrates the basic features of algebraic data types in Haskell. = gcd x (y-x). and in a place more suggestive of the evaluation order (which builds the right operational intuitions). The inclusion of basic algebraic types was straightforward. and placing the guard on the far right of a long definition seemed like a bad idea. so that the same file could serve both as source for a typeset paper and as an executable program. they are present in Erlang and more recently have been added to Python. (Literate comments also were later adopted by Miranda. Turner put this notation to effective use in his paper “The semantic elegance of applicative languages” (Turner.5 List comprehensions List comprehensions provide a very convenient notation for maps. 1977). syntax is discussed half as much as lexical syntax. we moved them to the left-hand side of the definition (see filter and f above). and infix operators and layout provoked more discussion than comments. Originally. we viewed our design as an improvement over conventional mathematical notation. First.. 1985). and scopes over the guards as well as the right-hand sides of the declarations. The list comprehension notation was first suggested by John Darlington when he was a student of Rod Burstall. [ x*x | x <. new types. For obvious reasons.[1. and lexical syntax is discussed half as much as the syntax of comments. 1977) and Burstall. where code was marked by \begin{code} and \end{code} as it is in Latex. 1997) and Miranda (Turner. Because of this. while longer comments begin with {. but not in SML or Scheme.f x ] applies a function f to each element of a list xs.and end with -}. For example. 1981). these noncomment indicators came to be called ‘Bird tracks’. However.and end with a newline. rather than lines of comment. and Cartesian products. it accurately reflected that committee members held strong views on low-level details. list comprehensions seem to be more popular in lazy languages. if x=y if x>y otherwise 4. Haskell adopted from Miranda the idea that a where clause is attached to a declaration.. Miranda placed guards on the far right-hand side of equations. Notice that each element x chosen from xs is used to generate a new list (f x) for the second generator. which had the added benefit of placing the guard right next to the patterns on formal parameters (which logically made more sense). They were absent from the original ML (Gordon et al. The longer form was designed to make it easy to comment out segments of code. inspired by Knuth’s work on “literate programming” (Knuth. not an expression. and concatMap :: (a -> [b]) -> [a] -> [b] concatMap f xs = [ y | x <. as can be seen in the second definition of filter near the beginning of this subsection. data Maybe a = Nothing | Just a mapMaybe :: (a->b) -> Maybe a -> Maybe b mapMaybe f (Just x) = Just (f x) mapMaybe f Nothing = Nothing . abstract types. MacQueen. Wadler introduced the name “list comprehension” in his paper “How to replace failure by a list of successes” (Wadler.xs ] returns the squares of the numbers in the list xs. However. 1979) and KRC (Turner. tuples. as mentioned earlier in the discussion of layout. 5. The notation was popularised—and generalised to lazy lists—by David Turner’s use of it in KRC.. filters. y <. 1982). and [ f | f <. 1984). Haskell supported two commenting styles. Data types and pattern matching Data types and pattern matching are fundamental to most modern functional languages (with the notable exception of Scheme).n]. This was an exaggeration: a review of the mail archives shows that well over half of the discussion concerned semantics. For some reason. Later. but appeared in their successors Standard ML (Milner et al. In contrast. The style of writing functional programs as a sequence of equations with pattern matching over algebraic types goes back at least to Burstall’s work on structural induction (Burstall. n+k patterns. Haskell later supported a second style of literate comment. should be the ones requiring a special mark. and views. Note also that xp is defined only in the second clause—but that is fine since the bindings in the where clause are lazy. and Sannella’s Hope (Burstall et al. Second. where it was called a “ZF expression” (named after Zermelo-Fraenkel set theory). including code containing comments. n ‘mod‘ f == 0 ] returns a list of the factors of n. this was either a typical committee decision. Haskell added support for a third convention.Two small but important matters concern guards. and his work with his student Darlington on program transformation (Burstall and Darlington. xps is used in a guard as well as in the binding for xp. literate comments. for example they are found in Miranda and Haskell. Algebraic types as a programming language feature first appeared in Burstall’s NPL (Burstall. and concatenates the resulting lists. = gcd (x-y) y. in Haskell one can write: firstSat :: (a->Bool) -> [a] -> Maybe a firstSat p xs | null xps = Nothing | otherwise = Just xp where xps = filter p xs xp = head xps Here.xs. 1982). a let binding is attached to an expression. thus resembling common notation used in mathematics. So.. For example. thus: gcd x y = x. and can be nested.) Bird. records. 1986). and Wadler later formulated a law to describe how effort was allotted to various topics: semantics is discussed half as much as syntax. or a valid response to a disparate set of needs. Equations with conditional guards were introduced by Turner in KRC (Turner. Depending on your view. which first appeared in OL at the suggestion of Richard Bird. 1969). 5. 4. but interesting issues arose for pattern matching. Short comments begin with a double dash -. 1980). 5. a branch would take one argument which was itself a tuple of two trees. and examined with top and isempty. this time defining a recursive data type of trees: data Tree a = Leaf a | Branch (Tree a) (Tree a) size :: Tree a -> Int size (Leaf x) = 1 size (Branch t u) = size t + size u + 1 Haskell took from Miranda the notion of defining algebraic types as a ‘sum of products’. In the end. potentially avoiding nontermination or an error in a match further to the right. to build a value of Maybe type. . and u). because laziness means that whether one chooses to first match against a variable (doesn’t force evaluation) or a constructor (does force evaluation) can change the semantics of a program. KRC. and Miranda. instead of a special construct. The use of pattern matching against algebraic data types greatly increases readability. while an unlifted semantics treats them as the same value. to decompose a value of Maybe type. left-to-right matching was simple to implement. pop. outside of this module a stack can only be built from the operations push. where each alternative is a product of zero or more fields. In general. It might have been useful to permit a sum of zero alternatives. and a branch contains a left and right subtree (a product with two fields). but in exchange an extra construct is avoided. In the above. and Miranda. empty. in particular. Haskell’s solution is somewhat cluttered by the Stk constructors. This uniform rule was unusual. and in Miranda type variables were written as a sequence of one or more asterisks (e. 1992). and offered greater expressiveness compared to the other alternatives. but at the time the value of such a type was not appreciated.g.. SML. Haskell further extended this rule to apply to type constructors (like Tree) and type variables (like a)... usually written ⊥. • Uniform patterns. Hope. which would be a completely empty type. and there is no ambiguity about whether a given subexpression is a Stack or a list. abstract data types were supported by a special language construct. fit nicely with guards. but distinguished everywhere else.The data declaration declares Maybe to be a data type. Haskell also took from Miranda the rule that constructor names always begin with a capital. these choices were made for Haskell as well. an algebraic type specifies a sum of one or more alternatives..g. Moreover in SASL. the module system is used to support data abstraction. matching against equations is in order from top to bottom. t. ⊥) distinct values? In the jargon of denotational semantics. matching proceeds to the next equation. In Haskell. pop. 1979). Here is an example: module Stack( Stack. Eventually.2 Pattern matching The semantics of pattern matching in lazy languages is more complex than in strict languages. matching is from left to right within each left-hand-side—which is important in a lazy language. 1989) for more details).3 Abstract types In Miranda. In contrast. One constructs an abstract data type by introducing an algebraic type. a tree is either a leaf or a branch (a sum with two alternatives). a value that belongs to every type. as introduced by Huet and Levy (Huet Top-to-bottom. top. KRC. In SASL. in the equivalent definition of a tree. with two data constructors Nothing and Just. The values of the Maybe type take one of two forms: either Nothing or (Just x). The Show instance for Stack can be different from the Show instance for lists. 1987). as described by Wadler in Chapter 5 of Pey- ton Jones’s textbook (Peyton Jones. 5. In Standard ML. it is common to use lower case for both. a lifted tuple semantics distinguishes the two values. In Standard ML type variables were distinguished by starting with a tick (e. since as soon as a non-matching pattern is found. Both are illustrated in the definition of mapMaybe. making it easy to distinguish constructors (like Leaf and Branch) from variables (like x. and Levy. it was thought better to adopt the more widely used top-to-bottom design than to choose something that programmers might find limiting. 5. with the first matching equation being used. Here is another example. Data constructors can be used both in pattern-matching. But the other alternatives had a semantics in which the order of equations did not matter. if a pattern consists of a single identifier it can be hard to tell whether this is a variable (which will match anything) or a constructor with no arguments (which matches only that constructor). abstype: abstype stack * == [*] with push :: * -> stack * -> stack * pop :: stack * -> * empty :: stack * top :: stack * -> * isEmpty :: stack * -> bool push x xs = x:xs pop (x:xs) = xs empty = [] top (x:xs) = x isEmpty xs = xs = [] Here the types stack * and [*] are synonyms within the definitions of the named functions. There is an interesting choice to be made about the semantics of tuples: are ⊥ and (⊥. which aids equational reasoning (see (Hudak. after considering at length and rejecting some other possibilities: • Tightest match. It was unclear to us how to achieve this effect with abstype. isEmpty ) where data Stack a = Stk [a] push x (Stk xs) = Stk (x:xs) pop (Stk (x:xs)) = Stk xs empty = Stk [] top (Stk (x:xs)) = x isEmpty (Stk xs) = null xs Since the constructor for the data type Stack is hidden (the export list would say Stack(Stk) if it were exposed). and empty. tree ’a). The most important point is that Haskell’s solution allows one to give a different instance to a type-class for the abstract type than for its representation: instance Show Stack where show s = . tree *). and in an expression. Hope and Standard ML separated sums (algebraic types) and products (tuple types). push. and then exporting the type but hiding its constructors.4 Tuples and irrefutable patterns An expression that diverges (or calls Haskell’s error function) is considered to have the value “bottom”. whether or not the program terminates. a leaf contains a value (a trivial product with only one field). and the types of the operations can be inferred if desired.. as used in Hope+ (Field et al. • Sequential equations. just as with tuples. of which the best documented is TRex (Gaster and Jones. so the committee eventually adopted a minimalist design originally suggested by Mark Jones: record syntax in Haskell 1. Patterns of the form n+k were suggested for Haskell by Wadler.3—because parallel evaluation would be required to implement seq on unlifted tuples. which influenced us considerably. h evaluates its second argument only if b is True. It was unfortunate that the Haskell definition of Stack given above forced the representation of stacks to be not quite isomorphic to lists. along with ingenious ways of encoding records using type classes (Kiselyov et al. Lastly.7). concatenation. Instead.y) = if b then x+y else 0 The tilde “~” makes matching lazy. but f (⊥. In the end.7 n+k patterns An algebraic type isomorphic to the natural numbers can be defined as follows: data Nat = Zero | Succ Nat This definition has the advantage that one can use pattern matching in definitions. Now one could avoid this problem by replacing the data declaration in Stack above with the following declaration. who first saw them in G¨ del’s incompleteness proof (G¨ del.Int) -> Int g b ~(x. o o the core of which is a proof-checker for logic. update. 1931). Miranda’s design identified ⊥ with (⊥. In this case. in the form of tilde-patterns.In an implementation.y) = True If this pattern match evaluates f’s argument then f ⊥ = ⊥. row polymorphism and/or subtyping). ⊥). However. Furthermore. pattern matching in let and where clauses is always lazy. 5.5 Newtype The same choice described above for tuples arose for any algebraic type with one constructor. in a somewhat uneasy compromise. but the disadvantage that the unary representation implied in the definition is far less efficient than the built-in representation of integers. so that pattern matching always induces evaluation. This extra complexity seemed particularly undesirable as we became aware that type classes could be used to encode at least some of the power of records. as lifting added a new bottom value ⊥ distinct from Stk ⊥. For a start. There are a huge number of record systems. and polymorphism. should singleconstructor data types. The only way in which they might be distinguished is by pattern matching.3 onwards there was also a second way to introduce a new algebraic type with a single constructor and a single component. (The n+k pattern feature can be considered a special case of a view (Wadler. which was already complicated enough. we also reintroduced lazy pattern-matching.8) combined with convenient syntax.g. the unlifted form of tuples is essentially incompatible with seq—another controversial feature of the language. why were they omitted? The strongest reason seems to have been that there was no obvious “right” design. thereby distinguishing the two values. the user pressure for named fields in data structures was strong. Furthermore. One can instead consider this definition to be equivalent to f t = True where x = fst t y = snd t in which case f ⊥ = True and the two values are indistinguishable. in this example. discussed in Section 10. we decided to make both tuples and algebraic data types have a lifted semantics. so that g can also be written: g x pr = if b then x+y else 0 where (x.3 design was under way.) All of this works uniformly when there is more than one constructor in the data type: h :: Bool -> Maybe Int -> Int h b ~(Just x) = if b then x else 0 Here again. 5. 1987) (see Section 5. Given that records are extremely useful in practice. with an unlifted semantics. This apparently arcane semantic point became a subject of great controversy in the Haskell Committee. that is. for example: f (x. Haskell provides so-called n+k patterns that provide the benefits of pattern matching without the loss of efficiency..0.y) is performed only if x or y is demanded. From Haskell 1. there was a choice as to whether or not the semantics should be lifted. offering named fields. when b is True. and the space leaks that might result. By the time the Haskell 1. newtype Stack a = Stk [a] We can view this as a way to define a new type isomorphic to an existing one. variously supporting record extension. it was decided that algebraic types with a single constructor should have a lifted semantics. thus: g :: Bool -> (Int. the two values will be represented differently. All of them have a complicating effect on the type system (e.b) -> c ∼ a -> b -> c = But there were a number of difficulties. 5. From Haskell 1. such as data Pair a b = Pair a b share the same properties as tuples. New record proposals continue to appear regularly on the Haskell mailing list. Neither record-polymorphic operations nor subtyping are supported. ⊥) = True. in 1993. 1996) (Section 6.) Here is an example: fib fib fib fib :: Int 0 1 (n+2) -> = = = Int 1 1 fib n + fib (n+1) The pattern n+k only matches a value m if m ≥ k. and if it succeeds it binds n to m − k. The main motivation for introducing this had to do with abstract data types. so that the pattern match for (x. with a semantic discontinuity induced by adding a second constructor? We were also concerned about the efficiency of this lazy form of pattern matching. this identification made currying an exact isomorphism: (a.6 Records One of the most obvious omissions from early versions of Haskell was the absence of records. but under the unlifted semantics they must be indistinguishable to the programmer.y) = pr (This difference in the semantics of pattern matching between let/where and case/λ can perhaps be considered a wart on the language design—certainly it complicates the language description. This minimal design has left the field open for more sophisticated proposals.. coded using recursive equations in a style that would seem not unfamiliar to users . 2004).3 (and subsequently) is simply syntactic sugar for equivalent operation on regular algebraic data types. at the appropriate instance type. This led to complications in export lists and derived type classes. 1977). in fact. higher-rank types. 1998b) provides an excellent review of these. There is some talk of including views or similar features in Haskell′ . At the time views were removed. by Wadler and Blott (Wadler and Blott.8 Views Wadler had noticed there was a tension between the convenience of pattern matching and the advantages of data abstraction. The rest of this section summarises the historical development of the main ideas in Haskell’s type system. type classes began to be generalised in a variety of interesting and surprising ways. This seemingly innocuous bit of syntax provoked a great deal of controversy. Some users considered n+k patterns essential. and by April 1989 Wadler was arguing that the language could be simplified by removing views. which provides an implementation for each of the class’s methods. 6. 1989). principled solution to a relatively small problem (operator overloading for numeric operations and equality). In Haskell we may write class Eq a where (==) :: a -> a -> Bool (/=) :: a -> a -> Bool instance Eq Int where i1 == i2 = eqInt i1 i2 i1 /= i2 = not (i1 == i2) instance (Eq a) => Eq [a] where [] == [] = True (x:xs) == (y:ys) = (x == y) && (xs == ys) xs /= ys = not (xs == ys) member :: Eq a => a -> [a] -> member x [] = member x (y:ys) | x==y = | otherwise = Bool False True member x ys In the instance for Eq Int. An entirely unforeseen development—perhaps encouraged by type classes—is that Haskell has become a kind of laboratory in which numerous type-system extensions have been designed. They were originally proposed early in the design process. 1997). because they allowed function definition by cases over the natural numbers (as in fib above).of Haskell. and Chris Okasaki (Okasaki. But views never made it back into the language nor appeared among the many extensions available in some implementations. Even syntactic niceties resulted: n + 1 = 7 is a (function) definition of +. The type signature for member uses a form of bounded quantification: it declares that member has type a -> [a] -> Bool. and was based on the notion that the constructors and views exported by a module should be indistinguishable. n+k patterns stayed. one can perfectly well apply fib to matrices! This gave rise to a substantial increase in the complexity of pattern matching. Indeed. for example. namely (==) and (/=)) and their types. a successor to Haskell now under discussion. Several variations on this initial proposal have been suggested. and then permits constructors of the second type to appear in patterns that match against the first (Wadler. Peyton Jones wanted to add views to an experimental extension of Haskell. Here is the translation of the above code: data Eq a = MkEq (a->a->Bool) (a->a->Bool) eq (MkEq e _) = e ne (MkEq _ n) = n dEqInt :: Eq Int dEqInt = MkEq eqInt (\x y -> not (eqInt x y)) dEqList :: Eq a -> Eq [a] dEqList d = MkEq el (\x y -> not (el x y)) where el [] [] = True el (x:xs) (y:ys) = eq d x y && el xs ys el _ _ = False member :: Eq a -> a -> [a] -> member d x [] member d x (y:ys) | eq d x y | otherwise Bool = False = True = member d x ys 6. A particularly attractive feature of type classes is that they can be translated into so-called “dictionary-passing style” by a typedirected transformation.1 Type classes The basic idea of type classes is simple enough. A class declaration specifies the methods of the class (just two in this case. allowed to specify a less general type. worse was to come: since in Haskell the numeric literals (0. Wadler said he would agree to their removal only if some other feature went (we no longer remember which). and a detailed proposal to include views in Haskell 1. and suggested views as a programming language feature that lessens this tension. but they are unlikely to be included as they do not satisfy the criterion of being “tried and true”. implemented. for any type a that is an instance of the class Eq. and applied. One of the very few bits of horse-trading in the design of Haskell occurred when Hudak. Haskell as a type-system laboratory Aside from laziness. 1996). Consider equality. 1 etc) were overloaded. we assume that eqInt is a primitive function defining equality at type Int. where the second must be algebraic. generic programming. 1987). some of them summarised in a 1997 paper “Type classes: exploring the design space” (Peyton Jones et al. then Editor of the Report. They were earlier incorporated into Darlington’s NPL (Burstall and Darlington.. lexically scoped type variables. A view specifies an isomorphism between two data types. Examples include polymorphic recursion. But others worried that the Int type did not. these complications led to the majority of the Haskell Committee suggesting that n+k patterns be removed. higher-kinded quantification. while (n + 1) = 7 is a (pattern) definition of n—so apparently redundant brackets change the meaning completely! Indeed.3 was put forward by Burton and others (Burton et al. type classes are undoubtedly Haskell’s most distinctive feature. A type is made into an instance of the class using an instance declaration. as a . and (partially at Wadler’s instigation) into Miranda. which now had to invoke overloaded comparison and arithmetic operations.. The original design of Haskell included views. denote the natural numbers. tried to convince Wadler to agree to remove n+k patterns. In the end. template meta-programming. such as Int -> Int above. As time went on. beginning with type classes. In Haskell. and more besides. it seemed only consistent that fib’s type should be fib :: Num a => a -> a although the programmer is. as always. 5. they were immediately applied to support the following main groups of operations: equality (Eq) and ordering (Ord). or Float. and the types of the functions involved are as follows: fromInteger :: Num a => Integer -> a negate :: Num a => a -> a show :: Show a => a -> String Again the expression is ambiguous. Type classes were extremely serendipitous: they were invented at exactly the right moment to catch the imagination of the Haskell Committee. The programmer may then say which types to use by adding a type signature.. and array indexing (Ix). Floating. sometimes rejecting the un-annotated program seems unacceptably pedantic. because it is not clear whether the computation should be done at type Int. This rule is clumsy but conservative: it tries to avoid making an arbitrary choice in all but a few tightly constrained situations. Stated briefly. because more is happening “behind the scenes”. The functions eq and ne select the equality and inequality method from this dictionary. no truly satisfactory alternative has evolved. the rule forces len to be used at the same type at both its occurrences. or a Float.) Following much debate. The monomorphism restriction is manifestly a wart on the language. When at least one of the ambiguous constraints is numeric but all the constraints involve only classes from the Standard Prelude. (This was admittedly with a very simple compiler. . such as Ring and Monoid) and pragmatism (which suggested fewer). it is more difficult for the programmer to reason about what is going to happen. that is. then the constrained type variable is defaultable.2 The monomorphism restriction A major source of controversy in the early stages was the so-called “monomorphism restriction. But in all this time. the committee adopted the now-notorious monomorphism restriction. len) where len = genericLength xs It looks as if len should be computed only once. as we discuss in the rest of this section. In fact. but the difficulty is there is nothing to specify the type of the intermediate subexpression (read s). The rather daunting collection of type classes used to categorise the numeric operations reflected a slightly uneasy compromise between algebraic purity (which suggested many more classes. namely ambiguity. His argument was motivated by a program he had written that ran exponentially slower than he expected. it seems too conservative for Haskell interpreters. Should read parse an Int from s.” Suppose that genericLength has this overloaded type: genericLength :: Num a => [b] -> a Now consider this definition: f xs = (len. but one can understand how the program will execute without considering the types. Fractional. and these types are tried. thus: f :: String -> String f s = show (read s :: Int) However. we compromised by adding an ad hoc rule for choosing a particular default type. but it can actually be computed twice. consider the expression (show (negate 4)) In Haskell. an instance declaration translates to a function that takes some dictionaries and returns a more complicated one. corresponding to the Eq a constraint in its original type.1) provides a flag: -fno-monomorphism-restriction to suppress the restriction altogether. len becomes a function that is called once for each occurrence of len. they led to a wildly richer set of opportunities than their initial purpose. Once type classes were adopted as part of the language design. it says that a definition that does not look like a function (i.. in order. and the fact that the very first release of Haskell had thirteen type classes in its standard library indicates how rapidly they became pervasive. Finally. But beyond that. while read does does the reverse for any type in class Read. The programmer can supply an explicit type signature for len if polymorphic behaviour is required. For example. or indeed any other numeric type. and furthermore there is more reason to expect numeric operations to behave in similar ways for different types than there is for non-numeric operations. Consider the following classic example: show :: Show a => a -> String read :: Read a => String -> a f :: String -> String f s = show (read s) Here. the literal 4 is short for (fromInteger (4::Integer)). RealFrac and RealFloat). The member function takes a dictionary parameter of type Eq a. a record of its methods. Type classes have proved to be a very powerful and convenient mechanism but. Section 9. It seems to bite every new Haskell programmer by giving rise to an unexpected or obscure error message. and performs the membership test by extracting the equality method from this dictionary using eq. numeric operations (Num. Programs like this are said to be ambiguous and are rejected by the compiler. when desugared with the dictionarypassing translation. and the choice affects the semantics of the program. The Glasgow Haskell Compiler (GHC. which declares a dictionary for Eq.3 Ambiguity and type defaulting We rapidly discovered a second source of difficulty with type classes. So f appears well-typed. Hughes argued strongly that it was unacceptable to silently duplicate computation in this way. In most statically typed languages.The class declaration translates to a data type declaration. enumerations (Enum). converting values to and from strings (Read and Show). the type system checks consistency. has no arguments on the left-hand side) should be monomorphic in any overloaded type variables. There has been much discussion of alternatives. until one satisfies all the constraints.e. In this example. dEqList takes a dictionary for Eq a and returns a dictionary for Eq [a]. Notably. Not so in Haskell: the dynamic semantics of the program necessarily depends on the way that its type-class overloading is resolved by the type checker. or even a value of type Maybe Int? There is nothing to say which should be chosen. 6. 6. each of which might used at a different type. Performing numerical calculations on constants is one of the very first things a Haskell programmer does. The programmer may specify a list of types in a special top-level default declaration. After much debate. but we were reluctant to make performance differences as big as this dependent on compiler optimisations. Why? Because we can infer the type len :: (Num a) => a. For example. show converts a value of any type in class Show to a String. Real. which solves the performance problem. Integral. However. Jones. so if m has kind *->*. Here we allow the programmer to add numbers of different types. unanticipated development in the type-class story came when Mark Jones. 6. As time went on. Going beyond that would be an unforced step into the dark.6 Functional dependencies The trouble with multi-parameter type classes is that it is very easy to write ambiguous types. by treating higher-kinded type constructors as uninterpreted functions and not allowing lambda at the type level. higherkinded quantification is a simple. . 1993) shows that ordinary first-order unification suffices.. Alas.. 6... However.. In retrospect. but that intent is implied only by the absence of an instance declaration such as instance Add Int Int Float where . choosing the result type based on the input types. they also observed that type classes might types just as types classify values. Coerce a b holds whenever a is a subtype of b). for example... The fact that type classes so directly supported monads made monads far more accessible and popular.. higher-kinded polymorphism has independent utility: it is entirely possible.5 Multi-parameter type classes While Wadler and Blott’s initial proposal focused on type classes with a single parameter. an idea he called constructor classes (Jones. and we were anxious about questions of overlap. The programmer intended that if the arguments of (+) are both Int then so is the result. The Haskell Committee was resistant to including them. even trivial programs have ambiguous types. 1991. such as: data ListFunctor f a = Nil | Cons a (f a) Furthermore. Mark Jones published “Type classes with functional dependencies”. They gave the following example: class Coerce a b where coerce :: a -> b instance Coerce Int Float where coerce = convertIntToFloat Whereas a single-parameter type class can be viewed as a predicate over types (for example.. The kind * is pronounced “type”. it does not matter. instantiating return’s type (a -> m a) with m=Maybe gives the type (a -> Maybe a). and decidability of type inference. or what? Of course. 1993). and dually. For example. and useful generalisation of the conventional Hindley-Milner typing discipline (Milner. 6. then at Yale. Are we trying to show a list of Char or a list of Int. confluence.consider the expression (show []). the type variable m has kind5 *->*. Chen et al.3 Report. since the result is the same in all cases. user pressure grew to adopt multiparameter type classes. and they solved the problem we initially addressed (overloading equality and numeric operations). then m is a type-level function mapping one type to another. The most immediate and persuasive application of this idea was to monads (discussed in Section 7). and occasionally very useful. Jones’s paper appeared in 1993. Jones’s paper (Jones. However. the order of type parameters in a data-type declaration can matter— but it has an excellent power-to-weight ratio. and GHC adopted them in 1997 (version 3. As a result. consider: n = x + y where x and y have type Int. thus: class Monad m where return :: a -> m a (>>=) :: m a -> (a -> m b) -> m b Here. multi-parameter type classes did not really come into their own until the advent of functional dependencies. this declaration makes the Maybe type an instance of Monad by instantiating m with Maybe. 1992). elegant. For example. Haskell 98 retained the single-parameter restriction. the usefulness of monadic I/O ensured the adoption of higher-kinded polymorphism. which was published in 1996. Multi-parameter type classes were discussed in several early papers on type classes (Jones. 1978). 1975). All this was solidified into the Haskell 1.00).. For example. Bird and Paterson. In 2000. 2000). GHC therefore relaxes the defaulting rules further for its interactive version GHCi. consider the following attempt to generalise the Num class: class Add a b r where (+) :: a -> b -> r instance instance instance instance Add Add Add Add Int Int Float Float Int Float Int Float Int Float Float Float where where where where . thus: class Add a b r | a b -> r where . 1992. the same year that monads became popular for I/O (Section 7). a multi-parameter class can be viewed a relation between types (for example. 1999). While it was easy to define coerce as above.. The solution is a little ad hoc—for example.. . Type inference for a system involving higher kinds seems at first to require higher-order unification. .. and that is indeed the type of the return function in the instance declaration. 5 Kinds classify be generalised to multiple parameters. suggested parameterising a class over a type constructor instead of over a type. it was less clear when type inference would make it usable in practice. which has kind *->*: data Maybe a = Nothing | Just a instance Monad Maybe where return x = Just x Nothing >>= k = Nothing Just x >>= k = k x So. however. which solves the problem (Jones. 1999. which is both much harder than traditional first-order unification and lacks most general unifiers (Huet. resolving the ambiguity. and they were implemented in Jones’s language Gofer (see Section 9.4 Higher-kinded polymorphism The first major.3) in its first 1991 release. one may need functions quantified over higherkinded type variables to process nested data types (Okasaki. however. to declare data types parameterised over higher kinds. so that the Monad class can be instantiated at a type constructor. but there is no way for the type system to know that. Eq a holds whenever a is a type for which equality is defined). The difficulty is that the compiler has no way to figure out the type of n. We felt that single-parameter type classes were already a big step beyond our initial conservative design goals. . The idea is to borrow a technique from the database community and declare an explicit functional dependency between the parameters of a class. The “a b -> r” says that fixing a and b should fix r. 2005a). The type qualification in this case is a collection of lacks predicates. especially when combined with other extensions such as local universal and existential quantification (Section 6. Chakravarty et al. Just as each type-class constraint corresponds to a runtime argument (a dictionary). 1995).. By liberalising other Haskell 98 restrictions on the form of instance declarations (and perhaps thereby risking non-termination in the type checker). It also led to a series of papers suggesting more direct ways of expressing such programs (Neubauer et al. 1996). but we give citations for the interested reader to follow up. Gaster. Furthermore. Then each function in the library must take the page width as an extra argument. 1991b. 2004) for another fascinating approach to the problem of distributing configuration information such as the page width.a → (a → Int) → T. a reference to the page width itself is signalled by ?pw.7). This simple but powerful idea was later formalised by Odersky and L¨ ufer (L¨ ufer and Odersky. Meijer. but (as usual with Haskell) that did not stop them from being implemented and widely used. 2005. and returns an Int. The system can accommodate a full complement of polymorphic operations: selection. Implicit parameters arrange that this parameter passing happens implicitly. The witness for the predicate (r\l) is the offset in r at which a field labelled l would be inserted.g. 2001. note the occurrence of a in the argument type but not the result. and in turn pass it to the functions it calls: pretty :: Int -> Doc -> String pretty pw doc = if width doc > pw then pretty2 pw doc else pretty3 pw doc These extra parameters are quite tiresome. extensible records called TRex (Gaster and Jones.7 Beyond type classes As if all this were not enough. plus other fields described by the row-variable r. 2002. so each lacks predicate is also witnessed by a runtime argument. but the integration with a more general framework of qualified types is particularly elegant. rather like dictionary passing. as shown in the definition of f. extension. Suppose you want to write a pretty-printing library that is parameterised by the page width. The selector #x selects the x field from its argument. 1991a) and in his implementation of Hope+ that this pattern could be expressed with almost no new language complexity. (Hallgren. 1998). 2002.. With his student Benedict Gaster he developed a second instance of the qualified-type idea. especially when they are only passed on unchanged. One way of understanding implicit parameters is that they allow the programmer to make selective use of dynamic (rather than lexical) scoping. a system of polymorphic. and the calls to pretty2 and pretty3 no longer pass an explicit pw parameter (it is passed implicitly instead).. Thus f receives extra arguments that tell it where to find the fields it needs. update. as well as much traffic on the Haskell mailing list). A value of type T is a package of a value of some (existentially quantified) type τ . r\y) => Rec (x::Int. 6. 1997. thus: pretty :: (?pw::Int) => Doc -> String pretty doc = if width doc > ?pw then pretty2 doc else pretty3 doc The explicit parameter turns into an implicit-parameter type constraint. Implicit parameters A third instantiation of the qualified-type framework. 1994). These applications have pushed functional dependencies well beyond their motivating application. 1998). restriction.. For example. (See (Kiselyov and Shan. 1996. Shields. space precludes a proper treatment of any of them. 1985). 2000. a This extension was first implemented in hbc and is now a widely used extension of Haskell 98: every current Haskell implementation supports the extension. 2001. Despite their apparent simplicity. Perry showed in his dissertation (Perry... 2000). Haskell has turned out to be a setting in which advanced type systems can be explored and applied. Sum is a three-parameter class with no operations. even leaving type classes aside. so (#x p) is what would more traditionally be written p. 2007). Perry.) . Jones’s original paper gave only an informal description of functional dependencies. in logic-programming style. Sulzmann et al. Neubauer et al. 2001). so-called “implicit parameters”. y::Int | r) would be ill formed. Extensible records Mark Jones showed that type classes were an example of a more general framework he called qualified types (Jones. thus: f :: (r\x. the reader may find a detailed comparison in Gaster’s dissertation (Gaster. MkT a (a->Int) f :: T -> Int f (MkT x g) = g x Here the constructor MkT has type ∀a. The lacks predicate (r\x.x. Shields and Peyton Jones. L¨ ufer also described how a a a to integrate existentials with Haskell type classes (L¨ ufer.. Efforts to understand and formalise the design space are still in progress (Glynn et al. it turned out that one could write arbitrary computations at the type level. 2004). L¨ mmel and a Peyton Jones. in GHC one can say this: data T = forall a. was developed by Lewis. For example: data Z = Z data S a = S a class Sum a b r | a b -> r instance Sum Z b b instance Sum a b r => Sum (S a) b (S r) Here. This realisation gave rise to an entire cottage industry of type-level programming that shows no sign of abating (e. and Launchbury (Lewis et al. McBride. and field renaming (although not concatenation). r\y) says that r should range only over rows that do not have an x or y field— otherwise the argument type Rec (x::Int. 1994). The relation Sum ta tb tc holds if the type tc is the Peano representation (at the type level) of the sum of ta and tb. 2005b.. y::Int | r) -> Int f p = (#x p) + (#y p) The type should be read as follows: f takes an argument record with an x and y fields. The combination of multi-parameter classes and functional dependencies turned out to allow computation at the type level. Existential data constructors A useful programming pattern is to package up a value with functions over that value and existentially quantify the package (Mitchell and Plotkin. The rest of this section gives a series of examples.. functional dependencies have turned out to be extremely tricky in detail. type classes have spawned numerous variants and extensions (Peyton Jones et al. The idea of passing extra arguments to record-polymorphic functions is not new (Ohori. simply by allowing a data constructor to mention type variables in its arguments that do not appear in its result. Kiselyov et al. and a function of type τ → Int.But that was not all.. The package can be unpacked with ordinary pattern matching. Chakravarty et al. b) .g. maintaining invariants in data structures (e. 2004. It is hard to infer the type of such a function. Here is an example inspired by (Okasaki. Type inference for GADTs is somewhat tricky. and so the program is rejected. and sometimes turns out to be essential. L¨ mmel a and Peyton Jones.etc.. and extensible records are all instances (Jones. they were barely discussed.Square matrix: -. so that “a” is bound by prefix and used in the signature for xcons.. 2002. 2003). In another unforeseen development. the pattern match gives type as well as value information. 1995). implicit parameters. it is not long before one encounters the need to abstract over a polymorphic function. 2007). Lexically scoped type variables In Haskell 98. the omission of lexically scoped type variables was a mistake. 2002). So Haskell 98 allows polymorphic recursion when (and only when) the programmer explicitly specifies the type signature of the function. 1994. consider this function: eval :: Term a -> a eval (Lit i) = i eval (Pair a b) = (eval a. Hence the first argument to sq_index is a polymorphic function. Generic programming A generic function behaves in a uniform way on arguments of any data types. which would be unthinkable in the Standard ML community. See (Hinze et al. In a function that performs pattern matching on Term. 2006. Shields and Peyton Jones. it is sometimes impossible to write a type signature for a function. 2004). 2000.. another cottage industry sprang up offering examples of their usefulness in practice (Baars and Swierstra. Some notable attempts have been made to bring order to this chaos. 2004). For example: prefix :: a -> [[a]] -> [[a]] prefix x yss = map xcons yss where xcons :: [a] -> [a] -. 1997. because polymorphic recursion and (more recently) higher-rank types absolutely require type signatures. Peyton Jones et al.. on the contrary. 2006) for a recent survey of o this active research area. The idea is to allow a data constructor’s return type to be specified directly: data Term a where Lit :: Int -> Term Int Pair :: Term a -> Term b -> Term (a. in a rather ad hoc manner. because type signatures are always closed. Int -> v a -> a) -> Int -> Int -> Sq v a -> a sq_index index i j m = index i (index j m) The function index is used inside sq_index at two different types. of which type classes. Higher-rank types Once one starts to use polymorphic recursion. 2002). once implemented. 2000). ones that require a a more specific language extension. L¨ mmel and Peyton Jones. Sulzmann. some kind of lexically scoped type variables are required. Generalised algebraic data types GADTs are a simple but farreaching generalisation of ordinary algebraic data types (Section 5). security constraints). he wrote sq_index :: (forall a . 2001). and modelling objects (Hinze. 2003). Sheard and Pasalic. 6.BAD! xcons ys = x : ys The type signature for xcons is treated by Haskell 98 as specifying the type ∀a. including generic programming. 1999). 2005). This anarchy. This idea is very well known in the typetheory community (Dybjer. 1997. More recently. including: approaches that use pure Haskell 98 (Hinze. There are no great technical difficulties here. Cheney and Hinze. for example when using nested data types (Bird and Paterson. it must have been built with a Lit constructor. and programs are sometimes reduced to experiments to see what will and will not be acceptable to the compiler.g.[a] → [a].8 Summary Haskell’s type system has developed extremely anarchically. such as Generic Haskell (L¨ h et al. and whole new language designs. though. who played a prominent role in several of these developments. scoped type variables were not omitted after fierce debate.. Many of the new features described above were sketched. . Meijer and Claessen. Hinze. implemented. Xi et al. 2003. so a must be Int. modelling programming languages. If the argument matches Lit.. eval b) . For example.A vector of vectors 2003. while having a few typespecific cases. and applied well before they were formalised. and e support for GADTs was added to GHC in 2005. However.. Hinze. although there is an interesting space of design choices (Milner et al.. we simply never realised how important type signatures would prove to be. expressing constraints in domainspecific embedded languages (e. 1997). 1999): type Sq v a = v (v a) -. 1997). The strength is that the design space is explored much more quickly. 2003. but a few explicit type annotations from the programmer (such as that for sq_index above) transform the type inference problem into an easy one (Peyton Jones et al. This innovation is extremely simple to describe and implement. 1995). developed a theory of qualified types.Polymorphic recursion This feature allows a function to be used polymorphically in its own definition. An example might be a function that capitalises all the strings that are in a big data structure: the generic behaviour is to traverse the structure. Sheard. At that time there were two main motivations: one was to allow data constructors with polymorphic fields.. The weakness is that the end result is extremely complex. 1991).. and derivable type classes (Hinze and Peyton Jones. and sq_index has a so-called rank-2 type. GHC supports a form of type-safe metaprogramming (Sheard and Peyton Jones. In retrospect. so it must be polymorphic. but easy to check that the definition is well typed. given the type signature of the function. 2003. 2003. Its advent in the world of programming languages (under various names) is more recent. but is now becoming better understood (Pottier and R´ gis-Gianas. Mark Jones.. 2002. In the absence of any type annotations. Karl-Filip Faxen wrote a static semantics for the whole of Haskell 98 (Faxen. 2007). and tricky corners are often (but not always!) exposed. such as PolyP (Jansson and Jeuring. but it seems to have many applications. To fix the problem. and the other was to allow the runST function to be defined (Launchbury and Peyton Jones.. while the type-specific case is for strings. and hence we may return i (an Int) in the right-hand side. red-black trees). Template meta-programming Inspired by the template metaprogramming of C++ and the staged type system of MetaML (Taha and Sheard. and GHC’s implementation has become much more systematic and general (Peyton Jones et al. Interestingly. Higher-rank types were first implemented in GHC in 2000. Jones. higher-rank types make type inference undecidable. has both strengths and weaknesses. 2004). ones that require higher-rank types (L¨ mmel and Peyton a Jones. Haskell has served as the host language for a remarkable variety of experiments in generic programming. in which the state of the world is passed around and updated. because when Haskell was born a “monad” was an obscure feature of category theory whose implications for programming were largely unrecognised. Stream-based I/O Using the stream-based model of purely functional I/O. The following reasons seem to us to have been important. it was possible to completely model stream I/O with continuations.0: data Request = | | | | ReadFile WriteFile AppendFile DeleteFile . 1999). comparing their expressiveness.. disbanded itself in 1999 (Section 3. The authors of the present paper have the sense that we are still awaiting a unifying insight that will not only explain but also simplify the chaotic world of type classes. and rather geeky.0 Report. In working out the details of these approaches. • The two most widely used implementations (GHC. Both were understood well enough theoretically. But there were also nontechnical factors at work: • The Haskell Committee encouraged innovation right from the beginning and. and monads on the other. Note the reliance on lazy patterns (indicated by ~) to assure that the response is not “looked at” prior to the generation of the request. type Name = String As an example. it is worth asking why Haskell has proved so friendly a host language for type-system innovation. We did not want to lose expressive power just because we were “pure. data Response = | | | Success Str String Failure IOError . without throwing the baby out with the bath water.3 in 1996. that prompts the user for the name of a file. 1994). In this section we describe the symbiotic evolution of Haskell’s support for input/output on the one hand. • Polymorphic recursion was in the language. user base. • Type classes. echoes the filename as typed by the user. examples include extensible records and implicit parameters.0 Report.” since interfacing to the real world was an important pragmatic concern. it was defined in terms of what response the OS generated for each request. used by both Ponder and Miranda. 1996). and giving translations between them during these deliberations (Hudak and Sundaresh. and both were certainly pure. much as one would pass around and update any other data structure in a pure functional language. On the technical side: • The purity of the language removed a significant technical ob- we first defined I/O in terms of streams.. A suitably rich set of Requests and Responses yields a suitably expressive I/O system. An abstract specification of this behaviour was defined in the Appendix of the Haskell 1. we realised that in fact they were functionally equivalent—that is. Monads and input/output Aside from type classes (discussed in Section 6). which is a different way to formalise the system (Jones. The I/O system was defined entirely in terms of how the operating system interpreted a program having the above type—that is. taken from the Haskell 1.. This opens the door to features for which unaided inference is infeasible. and even breaking changes are accepted. 2000. Thus in the Haskell 1. a program is represented as a value of type: type Behaviour = [Response] -> [Request] The idea is that a program generates a Request to the operating system. however. • Haskell has a smallish. and Hudak and his student Sundaresh wrote a report describing them. 7. New fea- tures are welcomed. far from exercising control over the language. namely dealing with mutable state. and the operating system reacts with some Response. and follow in Section 7. Barendsen and Smetsers. Sulzmann et al. Monads were not in the original Haskell design. 2007). 2006)..0 Report. monads are one of the most distinctive language design features in Haskell. including the subtleties of functional dependencies (Glynn et al. Name Name String Name String Name stacle to many type-system innovations. It is worth mentioning that a third model for I/O was also discussed.. (The Clean designers eventually solved this problem through the use of “uniqueness types” (Achten and Plasmeijer. but also included a completely equivalent design based on continuations. Lazy evaluation allows a program to generate a request prior to processing any responses. The Haskell Committee was resolute in its decision to keep the language pure—meaning no side effects— so the design of the I/O system was an important issue. provided a rich (albeit rather complex) framework into which a number of innovations fitted neatly. With this treatment of I/O there was no need for any specialpurpose I/O syntax or I/O constructs. These works do indeed nail down some of the details. Meanwhile. 7. by giving a definition of the operating system as a function that took as input an initial state and a collection of Haskell programs and used a single nondeterministic merge operator to capture the parallel evaluation of the multiple Haskell programs. and vice versa. 1989).2 with the monadic model of I/O that was adopted for Haskell 1. and their generalisation to qualified types (Jones. Here is a partial definition of the Request and Response data types as defined in Haskell 1. In this section we give a detailed account of the streambased and continuation-based models of I/O. Our greatest fear was that Haskell would be viewed as a toy language because we did a poor job addressing this important capability.a paper giving the complete code for a Haskell 98 type inference engine. but the result is still dauntingly complicated. 1995.7). Figure 3 presents a program. and then looks up and displays the contents of the file on the standard output. At the time. Hugs) both had teams that encouraged experimentation. a program was still represented as a value of type Behaviour. so the idea that every legal program should typecheck without type annotations (a tenet of ML) had already been abandoned. Continuation-based I/O Using the continuation-based model of I/O. all three designs were considered.. both seemed to offer considerable expressiveness. because we saw no easy way to ensure “single-threaded” access to the world state.) In any case. but instead of having the user manipulate the requests and responses directly. This “world-passing” model was never a serious contender for Haskell.1 Streams and continuations The story begins with I/O. a collection of transactions were defined that cap- . Martin Sulzmann and his colleagues have applied the theory of constraint-handling rules to give a rich framework to reason about type classes (Sulzmann. the two leading contenders for a solution to this problem were streams and continuations. you would probably introduce a reference cell that contains a count. the example given earlier in stream-based I/O can be rewritten as shown in Figure 4. The code uses the standard failure continuation. where r is the result type of the continuation. For instance. In ML. where the definition of streams in terms of continuations is attributed to Peyton Jones). Moggi. monads give quite a bit of freedom in how one defines the operators return and >>=. . type Behaviour type FailCont type StrCont = = = [Response] -> [Request] IOError -> Behaviour String -> Behaviour One can define this transaction in terms of streams as follows. and errors are easily introduced by misspelling one of the names used to pass the current count in to or out of a function application. while ML fixes a single built-in notion of computation and sequencing. data Exc e a = Exception e | OK a • A continuation monad accepts a continuation. A monad consists of a type constructor M and a pair of functions. which return a sequence of values. So the corresponding transaction readFile name accepted two continuations.1. Conversely. where s is the state type. type ST s a = s -> (a. to thread a counter through a program we might take s to be integer. as opposed to the expected constant space and linear time. 7. But a denotational semantics can be viewed as an interpreter written in a functional language. one for failure and one for success. 1992b). However. The use of a function called let reflects the fact that let expressions were not in Haskell 1. 1992a. This performs the computation indicated by m. say that you want to write a program to rename every occurrence of a bound variable in a data structure representing a lambda expression. where e is the type of the error message. and increment this count each time a fresh name is required. For this reason. Transactions were just functions. Then the expression m >>= (\x -> n) has type M b. you would probably arrange that each function that must generate fresh names accepts an old value of the counter and returns an updated value of the counter. Here M a is Cont r a. 1991). where s is the state type.0! (They appeared in Haskell 1. In Haskell. Wadler. such as ReadFile) there corresponded a transaction (a function. In a similar way. since the flow of control was more localised. Haskell 1. type Cont r a = (a -> r) -> r • A list monad can be used to model nondeterministic computa- tions. if it succeeded.tured the effect of each request/response pair in a continuationpassing style. Using a state transformer monad would let you hide all the “plumbing. the definition of streams in terms of continuations was inefficient. the success continuation would be applied to the contents of the file. the failure continuation would be applied to the error message. For each request (a constructor. abort. Wadler recognised that the technique Moggi had used to structure semantics could be fruitfully applied to structure other functional programs (Wadler. Eugenio Moggi published at LICS a paper on the use of monads from category theory to describe features of programming languages. which is just the type of lists of values of type a. and an auxiliary function let. 1989. Above we take streams as primitive and define continuations in terms of them.s) A state transformer is a function that takes the old state (of type s) and returns a value (of type a) and the new state (of type s). but the computation never changes the state. the pattern matching required by stream-based I/O forces the reader’s focus to jump back and forth between the patterns (representing the responses) and the requests. and n has type b instead of M b. binds the value returned to x. In effect. Here M a is Exc e a. readFile :: Name -> FailCont -> StrCont -> Behaviour readFile name fail succ ~(resp:resps) = = ReadFile name : case resp of Str val -> succ val resps Failure msg -> fail msg resps If the transaction failed. and continuations in terms of them. Wadler used monads to express the same programming language features that Moggi used monads to describe. In particular. return and >>= (sometimes pronounced “bind”). Here M a is List a. even though continuations were considered easier to use for most purposes. m has type a instead of M a. The request ReadFile name induced either a failure response “Failure msg” or success response “Str contents” (see above). It is analogous to the expression let x = m in n in a language with side effects such as ML. Here are a few examples of the notions of side effects that one can define with monads: • A state transformer is used to thread state through a program. type SR s a = s -> a • An exception monad either returns a value or raises an excep- tion. with some cleverness it is also possible to take continuations as primitive and define streams in terms of them (see (Hudak and Sundaresh. so there is no chance to misspell the associated names. Say that m is an expression of type M a and n is an expression of type M b with a free variable x of type a. It accepts a state that the computation may depend upon. systematising the treatment of diverse features such as state and exceptions. and performs the computation indicated by n.) Although the two examples look somewhat similar.” The monad itself would be responsible for passing counter values. In 1989. Here are their types: return :: a -> M a (>>=) :: M a -> (a -> M b) -> M b One should read “M a” as the type of a computation that returns a value of type a (and perhaps performs some side effects).0 defined streams as primitive. Further. the continuation style was preferred by most programmers. 1989). Using this style of I/O. Moggi used monads to modularise the structure of a denotational semantics. Here M a is SR s a. it is straightforward to define each of the continuation-based transactions in terms of the stream-based model of I/O. This requires some way to generate a fresh name every time a bound variable is encountered. except that the types do not indicate the presence of the effects: in the ML version. For example. such as readFile). • A state reader is a simplified state transformer. requiring linear space and quadratic time in terms of the number of requests issued. which immediately attracted a great deal of attention (Moggi. This is straightforward but tedious. Here M a is ST s a. lacking reference cells.2 Monads We now pause the story of I/O while we bring monads onto the scene. readFile name appendChan stdout contents) (appendChan stdout "can’t open file") Figure 6. ReadChan stdin.readChan stdin let (name : _) = lines userInput appendChan stdout name catch (do contents <. Continuation I/O main :: IO () main = appendChan stdout "enter filename\n" >> readChan stdin >>= \userInput -> let (name : _) = lines userInput in appendChan stdout name >> catch (readFile name >>= \contents -> appendChan stdout contents) (appendChan stdout "can’t open file") Figure 5. Stream-based I/O main :: Behaviour main = appendChan stdout "enter filename\n" abort ( readChan stdin abort (\userInput -> letE (lines userInput) (\(name : _) -> appendChan stdout name abort ( readFile name fail (\contents -> appendChan stdout contents abort done))))) where fail ioerr = appendChan stdout "can’t open file" abort done abort :: FailCont abort err resps = [] letE letE x k :: a -> (a -> b) -> b = k x Figure 4. ReadFile name. Monadic I/O main :: IO () main = do appendChan stdout "enter filename\n" userInput <.main :: Behaviour main ~(Success : ~((Str userInput) : ~(Success : ~(r4 : _)))) = [ AppendChan stdout "enter filename\n". AppendChan stdout name. AppendChan stdout (case r4 of Str contents -> contents Failure ioerr -> "can’t open file") ] where (name : _) = lines userInput Figure 3. Monadic I/O using do notation . but if it fails. when performed.3 adopted Jones’s “donotation. It turned out that this pattern can be directly expressed in Haskell. A monad is a kind of “programming pattern”.. GHC does not actually pass the world around. 1993. might perform input and output before delivering a value of type a. so monad comprehensions were removed in Haskell 98. 1993). These laws guarantee that composition of functions with side effects is associative and has an identity (Wadler. This happy conjunction of monads and type classes gave the two a symbiotic relationship: each made the other much more attractive. The computation (catch m h) runs computation m. Here M a is Parser a. The Liang. Hudak.String)] Each of the above monads has corresponding definitions of return and >>=.” which was itself derived from John Launchbury’s paper on lazy imperative programming (Launchbury. used when reporting an error). because we can now write useful monadic combinators that will work for any monad. u . performs m. and a state reader monad (to pass around the current program location. when performed. So return x is the trivial computation of type IO a (where x::a) that performs no input or output and returns the value x. >>. Each of the monads above has definitions of return and >>= that satisfy these laws. and catch. the latter law is this: return x >>= f = fx Two different forms of syntactic sugar for monads appeared in Haskell at different times. Monads are often used in combination. each consisting of the value parsed and the remaining unparsed string. rewritten using Haskell’s do-notation. The input is the string to be parsed. which we discuss next. in Figure 6. if it succeeds. (>>=). catch and so on. Indeed. then its result is the result of the catch. using a type class. it is also possible to implement the IO monad in a completely different style. applies k to the result to yield a computation. which it then performs. modular way. Liang et al. and the result is list of possible parses. For example. 1998).type List a = [a] • A parser monad can be used to model parsers. thus: type IO a = FailCont -> SuccCont a -> Behaviour (The reader may like to write implementations of return. Figure 5 shows our example program rewritten using monads in two forms. but specialised for the IO monad. World) An IO computation is a function that (logically) takes the state of the world. The operator (>>) is sequential composition when we want to discard the result of the first computation: (>>) :: IO a -> IO b -> IO b m >> n = m >>= \ _ -> n The Haskell IO monad also supports exceptions. as we saw earlier in Section 6. This was one of the examples that motivated a flurry of extensions to type classes (see Section 6) and to the development of the monad tranformer library. Haskell 1. That concrete expression has direct practical utility. he and others at Glasgow soon realised that monads provided an ideal framework for I/O. For example: sequence :: Monad m => [m a] -> m [a] sequence [] = return [] sequence (m:ms) = m >>= \x -> sequence ms >>= \ xs -> return (x:xs) The intellectual reuse of the idea of a monad is directly reflected in actual code reuse in Haskell. monads do not compose in a nice. Harrison and Kamin. It can be implemented using continuations. 2002).3 Monadic I/O Although Wadler’s development of Moggi’s ideas was not directed towards the question of input/output. without any recourse to a stream of requests and responses. (>>=) is sequential composition. The key idea is to treat a value of type IO a as a “computation” that. but it required type class extensions supported only in Gofer (an early Haskell interpreter—see Section 9). using this definition of IO. as this example suggests. Of course. It makes use of the monad operators >>=. For example. 1995. throwing exception e. Despite the utility of monad transformers. and by abstracting one level further one can build monad transformers in Haskell (Steele. reads the file and returns its contents as a String. although Haskell provides no mechanism to ensure this. It can be viewed as a combination of the state transformer monad (where the state is the string being parsed) and the list monad (to return each possible parse in turn). The implementation in GHC uses the following one: type IO a = World -> (a. Subsequently. There are three laws that these definitions should satisfy in order to be a true monad in the sense defined by category theory. The same example program is shown once more. Similarly. and generalising comprehensions to monads meant that errors in ordinary list comprehensions could be difficult for novices to understand. in practice some Haskell programmers use the monadic types and programming patterns in situations where the monad laws do not hold. 7. readFile can be given the type readFile :: Name -> IO String So readFile is a function that takes a Name and returns a computation that. (m >>= k) is a computation that. 1994. Haskell 1. Indeed. For example. instead. when performed. since the comprehension notation was proposed before do-notation! Most users preferred the do-notation. 1992b). Monads turned out to be very helpful in structuring quite a few functional programs. and Jones paper was the first to show that a modular interpreter could be written in Haskell using monad transformers. This notation makes (the monadic parts of) Haskell programs appear much more imperative! Haskell’s input/output interface is specified monadically. The first two are exactly as described in the previous section. type Parser a = String -> [(a.4: class Monad m where return :: a -> m a (>>=) :: m a -> (a -> m b) -> m b The Monad class gives concrete expression to the mathematical idea that any type constructor that has suitably typed unit and bind operators is a monad. an exception monad (to indicate an error if some type failed to unify). GHC’s type checker uses a monad that combines a state transformer (representing the current substitution used by the unifier). a research problem that is still open (Jones and Duponcheel.) However. there are whole Haskell libraries of monadic functions that work for any monad. the exception is caught and passed to h. 1990a)—an interesting reversal. return. and returns a modified world as well as the return value.4 supported “monad comprehensions” as well as do-notation (Wadler. L¨ th and Ghani. offering two new primitives: ioError :: IOError -> IO a catch :: IO a -> (IOError -> IO a) -> IO a The computation (ioError e) fails. with a solid guarantee that no side effects could accidentally leak.7): runST :: (forall s. each of which can perform I/O by itself (so that the language semantics becomes. the big advantage is conceptual. and occasionally with some good reason. The reader may find a tutorial introduction to the IO monad. 2001). there is no safe way to escape from the IO monad... It is much easier to think abstractly in terms of computations than concretely in terms of the details of failure and success continuations.” to ensure proper sequencing of actions in the presence of lazy evaluation. Exceptions were built into the IO monad from the start—see the use of catch above—but Haskell originally only supported a single exception mechanism in purely functional code. it is difficult to build big.4 Subsequent developments Once the IO monad was established. Since this style of programming was probably going to be fairly common. The types are more compact. in the continuation model we had readFile :: Name -> FailCont -> StrCont -> Behaviour The type is cluttered with success and failure continuations (which must be passed by the programmer) and fails to show that the result is a String. Although these two code fragments have a somewhat imperative feel because of the way they are laid out. Concurrent Haskell (Peyton Jones et al. using these monadic operations: newIORef :: a -> IO (IORef a) readIORef :: IORef a -> IO a writeIORef :: IORef a -> a -> IO () An exciting and entirely unexpected development was Launchbury and Peyton Jones’s discovery that imperative computations could be securely encapsulated inside a pure function. and makes it easy to change them in future. blushing slightly. by design. 1999).. a problem that is endemic to all concurrent programming technology. with a rank-2 type (Section 6. 1991). Note in the continuation example in Figure 4 the plethora of parentheses that tend to pile up as lambda expressions become nested. 2001). The monadic approach rapidly dominated earlier models. The IO monad provides a way to achieve this goal without giving up the simple. ST s a) -> a A proof based on parametricity ensures that no references can “leak” from one encapsulated computation to another (Launchbury and Peyton Jones. nondeterministic). The monad abstracts away from these details. Software transactional memory is a recent and apparently very promising new approach to this problem. which was specified as bringing the entire program to a halt. Mutable state. which safely encapsulates an imperative computation.. namely the function error. as well as pattern-match failures (which also call error). 1993). 7. In retrospect it is worth asking whether this same (or similar) syntactic device could have been used to make stream or continuation-based I/O look more natural. although doing so would be somewhat pedantic. yet was heartily adopted by the Haskell Committee. The trouble with MVars is that programs built using them are not composable.. 1997). mutable locations called MVars. 1993). For the first time this offered the ability to implement a function using an imperative algorithm. together with various further developments in (Peyton Jones. 2005). Transactional memory. Some of the main ones are listed below. and the Haskell 98 Random library uses the IO monad as a source of such seeds. which might want to catch. correct programs by gluing small correct subprograms together. Note the striking similarity of this code to the monadic code in Figure 5. and reused to support encapsulated continuations (Dybvig et al. it was rapidly developed in various ways that were not part of Haskell 98 (Peyton Jones. This syntax seriously blurred the line between purely functional programs and imperative programs. it was really the advent of do-notation—not monads themselves—that made Haskell programs look more like conventional imperative programs (for better or worse). It can be made even more similar by defining a suitable catch function. therefore provide: . calls to error. Random numbers need a seed.it passes a dummy “token. The idea was to parameterise a state monad with a type parameter s that “infected” the references that could be generated in that monad: newSTRef :: a -> ST s (STRef s a) readSTRef :: STRef s a -> ST s a writeSTRef :: STRef s a -> a -> ST s () The encapsulation was performed by a single constant. deterministic semantics of purely functional code (Peyton Jones et al. For example. UnsafePerformIO Almost everyone who starts using Haskell eventually asks “how do I get out of the IO monad?” Alas. the types of IO computations could be polymorphic: readIORef :: IORef a -> IO a writeIORef :: IORef a -> a -> IO () These types cannot be written with a fixed Request and Response type. whose order and interleaving is immaterial. unlike runST. so that the continuation example could be written as follows: main :: Behaviour main = appendChan stdout "enter filename\n" >>> readChan stdin >>> \ userInput -> let (name : _) = lines userInput in appendChan stdout name >>> readFile name fail (\ contents -> appendChan stdout contents abort done) where fail ioerr = appendChan stdout "can’t open file" abort done where f >>> x = f abort x. This behaviour is rather inflexible for real applications. Furthermore. the Haskell Committee decided quite late in the design process to change the precedence rules for lambda in the context of infix operators. Threads can communicate with each other using synchronised. and more informative. All Haskell implementations. Syntax matters An interesting syntactic issue is worth pointing out in the context of the development of Haskell’s I/O system. 1996) extends the IO monad with the ability to fork lightweight threads. 1995). From the very beginning it was clear that the IO monad could also support mutable locations and arrays (Peyton Jones and Wadler. runST. 2005). that is. and recover from. That does not stop programmers from wanting to do it. The idea was subsequently extended to accommodate block-structured regions (Launchbury and Sabry. However. and performs input and output as actual side effects! Peyton Jones and Wadler dubbed the result “imperative functional programming” (Peyton Jones and Wadler. such as printing debug messages. and one that fits particularly beautifully into Haskell (Harris et al. which were themselves inspired by the Mstructures of Id (Barth et al. they cannot both use the module name Map. We therefore content ourselves with a brief overview here. This had the great merit of simplicity and clarity—for example. 2000. or never. Data. Haskell 1. instead interface files were regarded as a possible artifact of separate compilation. even so. more and more attention has been paid to areas that received short shrift from the original designers of the language. especially when a module re-exports entities defined in one of its imports. Marlow. arrow programming is very much easier if syntactic support is provided (Paterson. As a result. and hence.. And so it proved: although transactional memory had a ten-year history in imperative settings. but in a more general setting. some tricky corners remained unexplored for several years (Diatchki et al. and polytypic programming (Jansson and Jeuring. . and this is something that we believe may ultimately be seen as one of Haskell’s main impacts on mainstream programming6 ..4 completely abandoned interfaces as a formal part of the language. 2003). and vice versa. 2005) for details). a well-specified Appendix to the Haskell 98 Report that contained precise advice regarding the implementation of a variety of language extensions. for example. and its use amounts to a promise by the programmer that it does not matter whether the I/O is performed once. The FFI Addendum effort was led by Manuel Chakravarty in the period 2001–2003.1 Hierarchical module names As Haskell became more widely used. In any case. in very rough order of first appearance. 1997). 2003). and so on. and that its relative order with other I/O is immaterial. Haskell sadly lacks a formally checked language in which a programmer can advertise the interface that the module supports. These areas are of enormous practical importance.g. As in the case of monads (only more so). In parallel with. reactive programming (Hudak et al. 2001). This motivated an effort led by Malcolm Wallace to specify an extension to Haskell that would allow multi-component hierarchical module names (e. The difficulty was that these mechanisms tended to be implementation-specific. Paterson. the module system is specified completely separately from the type system— but. Haskell’s crude effect system (the IO monad) means that almost all memory operations belong to purely functional computations. and C2Hs (Chakravarty. A great deal of discussion took place about the syntax and semantics of interfaces. 1999a) among others. nothing more and nothing less. and Hugs had an extensibility mechanism that made it 6 “Effects” is shorthand for “side effects”. and this syntax is treated directly by the type checker. every module was specified by an interface as well as an implementation.” “the exercise was seen as valuable”) because it was different in kind to the original development of the Haskell language. Originally proposed by Hughes in 1998 (Hughes.. The exercise was open to all. every read and write to a mutable location must be logged in some way. Underlying all these developments is the realisation that being explicit about effects is extremely useful. In fact. the sophisticated ML module system was becoming well established. In the end. 1998). or perhaps there was a tacit agreement that the combination of type classes and ML modules was a bridge too far. and finally resulted in the 30-page publication of Version 1. This exercise was seen as so valuable that the idea of “Blessed Addenda” emerged. In versions 1. when Harris. Once the IO monad was established. This design constituted the second “Blessed Adden- 8. 1999). and symbiotic with. for example. Somewhat less obviously. An effort gradually emerged to specify an implementation-independent way for Haskell to call C procedures. Haskell in middle age As Haskell has become more widely used for real applications. We have used passive verbs in describing this process (“an effort emerged.3 of the language..2 Modules and packages Haskell’s module system emerged with surprisingly little debate. the fact that the module name space was completely flat became increasingly irksome. this debate never really happened. and one might have anticipated a vigorous debate about whether to adopt it for Haskell. 8. H/Direct (Finne et al. examples include Green Card (Nordin et al. At the time.. whether one can deduce from an interface which module ultimately defines an entity. we eventually converged on a very simple design: the module system is a namespace control mechanism. who often use it unnecessarily.1 The Foreign Function Interface One feature that very many applications need is the ability to call procedures written in some other language from Haskell.0–1. 2001). This so-called Foreign Function Interface (FFI) treats C as a lowest common denominator: once you can call C you can call practically anything else. this standardisation effort were a number of pre-processing tools designed to ease the labour of writing all the foreign import declarations required for a large binding. issues such as the duplication of information between interfaces and implementations. it is possible to use unsafePerformIO to completely subvert the type system: cast :: a -> b cast x = unsafePerformIO (do writeIORef r x readIORef r ) where r :: IORef a r = unsafePerformIO (newIORef (error "urk")) It should probably have an even longer name. but they have evolved more recently and are still in flux. a tension between what a compiler might want in an interface and what a programmer might want to write. Perhaps no member of the committee was sufficiently familiar with ML’s module system to advocate it. and preferably vice versa. so we have less historical perspective on them. arrows have found a string of applications in graphical user interfaces (Courtney and Elliott.unsafePerformIO :: IO a -> a As its name implies. a variety of ad hoc mechanisms rapidly appeared. but depended critically on the willingness of one person (in this case Manuel Chakravarty) to drive the process and act as Editor for the specification.. A good example is the development of transactional memory. to discourage its use by beginners.Map). using a design largely borrowed from Java. 2002). That makes Haskell a very natural setting for experiments with transactional memory. 8.0 in 2003. many times. Arrows are an abstract view of computation with the same flavour as monads. do not need to be logged. GHC’s very first release allowed the inclusion of literal C code in monadic procedures. if there are two collection libraries. by construction. possible to expose C functions as Haskell primitives. it is not safe. 8. Herlilhy and Peyton Jones transposed it into the Haskell setting they immediately stumbled on two powerful new composition operators (retry and orElse) that had lain hidden until then (see (Harris et al.2. In an implementation of transactional memory. and has survived unchanged since. but not deep) to process the many constructors of the syntax tree.” including its documentation. and type inference) is performed on this data type. the full module system. Hugs. and much more besides.org/cabal . Perhaps this was a good choice.10) was made in December 1992. and a novel system for space and time profiling (Sansom and Peyton Jones. This approach contrasts with the more popular method of first removing syntactic sugar. as soon as the initial language design was fixed. After type checking. 140 are language definition while only 100 define the libraries. but not of distribution. The only part that was shared with the prototype was the parser. Nevertheless. and made various other changes to the language. All processing that can generate error messages (notably resolving lexical scopes. The initial Haskell Report included an Appendix defining the Standard Prelude. based around Haskell language implementations. licencing information. In 2004. Isaac Jones took up the challenge of leading an effort to specify and implement a system called Cabal that supports the construction and distribution of Haskell packages8 . but by Haskell 1. include files. Of the 240 pages of the Haskell 98 Language and Libraries Report. alongside the language definition. the Haskell Committee introduced the monomorphism restriction. This prototype started to work in June 1989. by a team consisting initially of Cordelia Hall. Implementations Haskell is a big language. and we discuss their development in this section. and the compiler used a reasonable amount of that memory (upwards of 2Mbytes!). A subsequent release (July 1993) added a strictness analyser. The final Core program is transformed in 8. 9. and the added complexity of type classes meant the compiler was quite a lot bigger and slower than the base LML compiler. bootstrapped via the prototype compiler. This version of GHC already supported several extensions to Haskell: monadic I/O (which only made it officially into Haskell in 1996). build information.0 and “Candidate” status7 . but the larger Haskell prelude stressed the LML prelude mechanism quite badly. Partly through experience with this compiler. it certainly avoids a technically complicated area. details about dependencies on other packages. but it was another 18 months before the first full release (version 0. Will Partain.dum.3 (May 1996) the volume of standard library code had grown to the extent that it was given a separate companion Library Report. This is not the place to describe these tools. It was reasonably robust (with occasional spectacular failures). The first version of GHC was written in LML by Kevin Hammond. Part III Implementations and Tools 9. several implementations are available.2. a Cabal package server that enables people to find and download Cabal packages. The libraries defined as part of Haskell 98 were still fairly modest in scope.3 Summary The result of all this evolution is a module system distinguished by its modesty. and an informal library evolution mechanism began. GHC was begun in January 1989 at the University of Glasgow.1 The Glasgow Haskell Compiler Probably the most fully featured Haskell compiler today is the Glasgow Haskell Compiler (GHC).org/definition 8. an open-source project with a liberal BSD-style licence. GHC and nhc teams began in 2001 to work together on a common. mutable arrays. driven by user desire for cross-implementation compatibility. unboxed data types (Peyton Jones and Launchbury. the program is desugared into an explicitly typed intermediate language called simply “Core” and then processed by a long sequence of Core-to-Core analyses and optimising transformations. but has the huge advantage that the error messages could report exactly what the programmer wrote. The GHC approach required us to write a great deal of code (broad.2.0 including views (later removed). 8.” consisting of a single page that never moved beyond version 0. an effort that continues to this day. There were quite a few grumbles about this: most people had 4–8Mbyte workstations at that time. A big difference from the prototype is that GHC uses a very large data type in its front end that accurately reflects the full glory of Haskell’s syntax. David Himmelstrup implemented Hackage. GHC proper was begun in the autumn of 1989. type classes. and it is quite a lot of work to implement. 8. open-source set of libraries that could be shipped with each of their compilers. but the historical perspective is interesting: it has taken more than fifteen years for Haskell to gain enough momentum that these distribution and discovery mechanisms have become important. just as Peyton Jones arrived in Glasgow to join the burgeoning functional programming group there. as a glance at the literature on ML modules will confirm. It was designed from the ground up as a complete implementation of Haskell in Haskell. Initially. removed views. and was essentially a new front end to the Chalmers LML compiler.3 Libraries It did not take long for the importance of well-specified and wellimplemented libraries to become apparent. Developers want to distribute a related group of modules as a “package. and nhc. and binary I/O as well as both streams and continuations. Subsequently. But real applications need much richer libraries. The prototype compiler implemented essentially all of Haskell 1. it was swiftly implemented by GHC. None of this was part of the Haskell language design. the Hugs. and Peyton Jones. The first beta release was on 1 April 1991 (the date was no accident). It does about as little as it is possible for a language to do and still call itself a practical programming tool. which at that stage was still written in Yacc and C. 7 Packaging and distribution Modules form a reasonable unit of program construction. Nevertheless. GHC began to distribute a bundle of libraries called hslibs but. and only then processing a much smaller language. 1995). 1991). the deriving mechanism. catching compiler bugs this way is vastly cheaper than generating incorrect code. themselves implemented in a functional language. 1994).99. At the end of August I had a mostly complete implementation of Haskell. 1990. 2002). we added a “Core Lint” typechecker that checked that the output of each pass remained well-typed. although at the time we thought it was such a simple idea that we did not think it worth publishing. first served.. so I ended up spending most of July and August coding. Over time. and vice versa. Several years later. mainly for that reason..Phil. This makes GHC a dauntingly complex beast to understand and modify and. Both of these features were subsequently adopted in Haskell 98. Gofer was an interpreter. Speaking of the Prelude I think it’s worth pointing out that Joe Fasel’s prelude code must be about the oldest Haskell code in existence. 1989) and a regular topic of both conversation and speculation on the Haskell mailing list at the time. Marlow et al. before being translated into C or machine code. to work with multiple parameter type classes. In August 1991. This was way before there were any good papers about how it was supposed to be done. It was hard to do. and he began to use Gofer as a testbed for his experiments. 1990). “For various reasons Truv´ couldn’t help in the coding of the e compiler. studies—indeed. We initially based Core on the lambda calculus but then. 1996. the same idea was used independently by Morrisett. Over the fifteen years of its life so far. but it provides a surprisingly strong consistency check—many. all we needed to do was to add data types.” 9. let-expressions. many of which are now in Haskell 98 (e. Augustsson writes: “During the spring of 1990 I was eagerly awaiting the first Haskell compiler. running it. concurrency (Peyton Jones et al. the author of Gofer and Hugs. “After the first release hbc became a test bed for various extensions and new features and it lived an active life for over five years. then a D. however. After he left Yale in the summer of 1994. my head filled with Haskell to the brim.the Spineless Tagless G-machine (STG) language (Peyton Jones. “The testing of the compiler at the time of release was really minimal. Jones also developed a Gofer-to-C compiler. perhaps most bugs in the optimiser produce type-incorrect code. Gofer included the first implementation of multi-parameter type classes. adding support for constructor classes (Section 6. “Concerning the implementation. 1993). and much more besides. Jones wrote Gofer as a side project to his D. GHC has grown a huge number of features. Jones undertook a major rewrite of the Gofer code base. The implementation had everything from the report (except for File operations) and also several extensions. But first come. it was supposed to come from Glasgow and be based on the LML compiler. At the same time. transactional memory (Harris et al. a researcher at Chalmers University whose programming productivity beggars belief. implemented in C. although it took us a remarkably long time to recognise this fact. I only remember two problematic areas: modules and type checking. student at the University of Oxford. It supports dozens of language extensions (notably in the type system). and he used this as a basis for the first “dictionary-free” implementation of type classes. wondering how to decorate it with types.. he reports that he did not dare tell his thesis adviser about Gofer until it was essentially finished—to learn more about the implementation of functional programming languages.2 hbc The hbc compiler was written by Lennart Augustsson. namely Girard’s System F ω (Girard. 2005). GHC appears to be the first compiler to use System F as a typed intermediate language. 2004). This consistency checking turned out to be one of the biggest benefits of a typed intermediate language. getting a segmentation fault. as originally suggested by Wadler and Blott (Wadler and Blott. Furthermore. But since the compiler was written in LML it was more or less doomed to dwindle. Shortly afterwards. except as a small section in (Peyton Jones et al. But the big stumbling block was the type checking. Moving to take a post-doctoral post at Yale in 1992. development of the core GHC functionality remains with Peyton Jones and Simon Marlow. Harper and Tarditi at Carnegie Mellon in their TIL compiler (Tarditi et al. 1992). . released an entirely different implementation called Gofer (short for “GOod For Equational Reasoning”).g. After talking to Glasgow people at the LISP & Functional Programming conference in Nice in late June of 1990 Staffan Truv´ and I decided that instead e of waiting even longer we would write our own Haskell compiler based on the LML compiler. this check will always succeed. debugging it with gdb. however. an interactive read/eval/print interface (GHCi). I decided that hbc would be a cool name for the compiler since it is Haskell Curry’s initials. And I waited and waited.1) that was designed to minimise the construction of dictionaries at run time. but it could compile the Standard Prelude—and the Prelude uses a lot of Haskell features. and small enough to fit on a single (360KB) floppy disk. and requiring a good deal of memory and disk space.3 Gofer and Hugs9 GHC and hbc were both fully fledged compilers. Core Lint often nails the error immediately. By modifying the interpreter’s back end.. The Core language is extremely small — its data type has only a dozen constructors in total — which makes it easy to write a Core-to-Core transformation or analysis pass. 0. and case expressions. sometimes in an almost trance-like state. developed on an 8MHz 8086 PC with 640KB of memory. Gofer also adopted an interesting variant of Wadler and Blott’s dictionary-passing translation (Section 6. this resulted in small but significant differences between the Haskell and Gofer type systems. Template Haskell (Sheard and Peyton Jones. support for packages. was on August 21. understanding type classes became a central theme of Jones’ dissertation work (Jones.. who both moved to Microsoft Research in 1997. using techniques from partial evaluation to specialise away the results of the dictionary-passing translation. Mark Jones. (I later learnt that this is the name the Glasgow people wanted for their compiler too. operator sections). so that some Haskell programs would not work in Gofer. and gradually tracing the problem back to its original cause. 1996).) “The first release. For example. They understood its significance much better than the GHC team. The export/import of names in modules were different in those days (renaming) and there were many conditions to check to make sure a module was valid. to more closely track the Haskell 9 The material in this section was largely written by Mark Jones.. and the whole approach of type-directed compilation subsequently became extremely influential. Jones continued to develop and maintain Gofer. we realised in 1992 that a ready-made basis lay to hand.4) in 1992–93 and producing the first implementation of the do-notation in 1994. If the compiler is correct.Phil. 9. and large parts of it are still unchanged! The prelude code was also remarkably un-buggy for code that had never been compiled (or even type checked) before hbc came along. and to provide more accurate principal types. 3c. 9.0 and striving to further close the gap with Haskell. Bloss et al.4 nhc The original nhc was developed by Niklas R¨ jemo when he was o a PhD student at Chalmers (Rojemo. Hugs 1. this was the first version of Hugs to support the Haskell 98 standard. Aggressive in-lining was able to generate code competitive with other languages (Hartel et al. T was then abandoned in favour of Common Lisp to address performance and portability issues. This resulted in what became known as Yale Haskell. . R¨ jemo came to York as a post-doctoral o researcher where. Various maintainers and contributors have worked on Hugs during this period. both Research Scientists at Yale.3 features such as monadic I/O. In addition to fixing bugs. 1992. The compiler used a dual-entry point approach to allow very efficient first-order function calls. in collaboration with Colin Runciman. was completed in January 1998. it seemed easier to compile Haskell into Scheme or T. existentials. a continuing strand of work relates to space efficiency (Wallace and Runciman. 1996). Microsoft’s . a post-doc at York working on functional programming for embedded systems. The main development work was mostly complete by the time Jones started work at the University of Nottingham in October 1994. However. derived instances. including Haskell-style type classes. working from Hugs 1. Hugs 1. To help achieve this space-efficiency he made use during development of the first-generation heap-profiling tools—which had previously been developed at York and used to reveal space-leaks in hbc (Runciman and Wakeling. In 2006. Yale Haskell used strictness analysis and type information to compile the strict part of Haskell into very efficient Lisp code.2. was one of the key results of this effort (Kranz et al. His motivation from the start was to have a space-efficient compiler (Rojemo. for development and free distribution (with due acknowledgements). Moreover. as both Jones and Reid moved on to other universities at around that time (Jones to OGI and Reid to Utah). Briefly christened “Hg” (short for Haskell-gofer).. Several MS and PhD theses grew out of this work. including Jones and Reid.3 in August 1996 that provided support for new Haskell 1. Alastair Reid began modifying Hugs to support the Haskell module system. The nhc system has been host to various further experiments. Hugs development has proceeded at a more gentle pace since the first release of Hugs 98. It had been a confusing time for Hugs users (and developers!) while there were multiple versions of Hugs under development at the same time. these developers have added support for new features including implicit parameters. with roughly one new formal release each year. Bloss. the labelled field syntax. Jeff Lewis. which merged the features of the previous Yale and Nottingham releases into a single system. The CMU lisp compiler was able to generate very good numeric code from Lisp with appropriate type annotations. at Yale. These features were considered too experimental for Hugs 1. 1998). Specifically he wanted to bootstrap it on his personal machine which had around 2Mbytes main memory. scoped type variables.. 1996a). an optimising compiler for T. Sigbjorn Finne. adding standard foreign-function interface and libraries. were the primary implementers of Yale Haskell. The first joint release. rather than port the ideas to a stand-alone Haskell compiler. 1986. Bloss et al. functional dependencies. benefiting in part from the stability provided by the standardisation of Haskell 98. 1995 with the greeting “Hugs on Valentine’s Day!” The first release of Hugs supported almost all of the features of Haskell 1. and more recently the development of the Hat tools for tracing programs (Wallace et al. leading to an independent release of Hugs 1. 1988. he devised more advanced heap-profiling methods. The most prominent missing feature was the Haskell module system. yhc. 1986). as the name suggests. was started to re-engineer nhc. and also to restore the support for multi-parameter type classes that had been eliminated in the transition from Gofer to Hugs. the new system soon acquired the name “Hugs” (for “the Haskell User’s Gofer System”). Ross Paterson. and then use a Scheme compiler as a back end. the York Haskell Compiler project. however. The results of Reid’s work appeared for the first time in the Yale Hugs0 release in June 1996. For example. Unicode characters.. stream-based I/O. The Orbit compiler. and strictness annotations. defaults. Runciman and Wakeling. and he hoped that Hugs would not only appease the critics but also help to put his newly founded research group in Nottingham onto the functional programming map. and bignum arithmetic. prior to the development of Haskell. Jones and Reid had started to talk about combining their efforts into a single system. In fact Hugs 98 was also the last of the Nottingham and Yale releases of Hugs. had also been working on a significant overhaul of the Hugs type checker to include experimental support for advanced type system features including rank-2 polymorphism. hierarchical module names. 1995a). there was an active research project at Yale involving Scheme and a dialect of Scheme called T. supervised mostly by Hudak. But Hugs development has certainly not stood still. as well as Peterson.4.0 would parse but otherwise ignore module headers and import declarations. 2004. Even before the release of these two different versions of Hugs.NET. Andy Gill. overloaded numeric literals. Kranz et al. 1988b. as well as adding user interface enhancements such as import chasing. In addition. Jones had continued his own independent development of Hugs.standard. Always enjoying the opportunity for a pun. it was only natural to apply Scheme compilation techniques in an implementation of Haskell. leading to a still more spaceefficient version (Rjemo and Runciman. a full prelude. John Peterson and Sandra Loosemore. So once Hudak became actively involved in the design of Haskell. This problem was finally addressed with the release of Hugs 98 in March 1999. and Dimitry Golubovsky. which was the last version of Hugs to be released without support for Haskell modules.. and a greatly expanded collection of libraries. polymorphic recursion. an enhanced foreign function interface. the T compiler was no longer being maintained and had problems with compilation speed. Meanwhile. Yale Haskell performed various optimisations intended to reduce the overhead of lazy evaluation (Hudak and Young. and extensible records.. Jones. Meanwhile. Malcolm Wallace. its name reflecting the fact that the Haskell standard had also moved on to a new version by that time.4 and were released independently as Hugs 1. and used them to find residual space-inefficiencies in nhc. newtype declarations. Johan Nordlander. To achieve reasonable performance. and making various improvements (Wallace. 1998). albeit at a reduced level. Unfortunately. 2001). 1993).5 Yale Haskell In the 1980s. became the principal keeper and developer of nhc—he has since released a series of distributed versions. 9. 1988a. Young. 1995b) that could be bootstrapped in a much smaller memory space than required by systems such as hbc and GHC. Jones worked to complete the first release of the system so that he could announce it on February 14. Because of this link.. tracking Haskell 98. When R¨ jemo left York around 1996 he handed nhc over to Runcio man’s group. 1988). the limitations of basing a Haskell compiler on a Common Lisp back-end caught up with the project. Yale Haskell was the first implementation to support both compiled and interpreted code in the same program (straightforward. in the last five years several new Haskell implementation projects have been started. and Andrei de A Formiga 10 . The major difficulty was finding a sensible way to assign costs. Haskell provides many such functions.. Notable examples include the Haskell Refactorer (Li et al. and a flexible foreign function interface for both C and Common Lisp. enable and disable various optimisers. GHC programs were soon running two to three times faster than Yale Haskell programs. A scratch pad was a logical extension of a module in which additional function and value definitions could be added. At the beginning of the 1990s. inlining pragmas.. developed by Leif Frenzel. 2003). of assigning costs to functions and procedures. was also explored by Maessen’s Eager Haskell (Maessen. It is focused on aggressive optimisation using whole-program analysis. 9. since Lisp systems had been doing that for years). jhc uses flow analysis to support a defunctionalised representation of thunks.7 Programming Environments Until recently. 2003). 2003b. specialising over-loaded functions. Patrick Sansom and Peyton Jones began working on profiling Haskell. with the notable exception of Yale Haskell. and M-structures. little attention has been paid by Haskell implementers to the programming environment. That is now beginning to change. 2005). the Yale Haskell implementation was abandoned circa 1995. The idea of evaluating Haskell eagerly rather than lazily (while retaining non-strict semantics). run dialogues.net . UHC and EHC. the GHC Visual Studio plug-in (Visual Haskell). Although early on Yale Haskell was competitive with GHC and other compilers. The resulting language was called pH (short for “parallel Haskell”). developed by John Meacham. while retaining Id’s eager. either manually (to reflect the programmer’s intuitive decomposition into tasks) or automatically. works poorly for higher-order functions such as map. a compilation to run in the background as the editing of a source file continued in emacs in the foreground. 10. or the contents of a snapshot of memory at any particular time. Likewise. compile modules. the underlying Lisp system allowed the Yale effort to focus attention on the the Haskell programming environment. Utrecht is also host to two other Haskell compiler projects. and formed the basis of Nikhil and Arvind’s textbook on implicit parallel programming (Nikhil and Arvind. Thiago Arrais. and thus served as an excellent test bed for new ideas. Ultimately. polymorphic recursion. The conventional approach. strictness annotations. which can be extremely efficient.cs. only to discover that the information obtained was too hard to interpret to be useful.6 Other Haskell compilers One of the original inspirations for Haskell was the MIT dataflow project. Thus a new approach to assigning costs was needed. Heeren et al. when one logical task is implemented by a combination of higher-order functions. But knowing that map consumes 20% of execution time is little help to the programmer—we need to know instead which occurrence of map stands for a large fraction of the time. Every thunk introduced an extra level of indirection (a Lisp cons cell) that was unnecessary in the other Haskell implementations. thus allowing. For this reason. for example. based at Utrecht. We have all tried adding side-effecting print calls to record a trace of execution. As a result.nl/wiki/ Center/ResearchProjects). 10. in addition to the lack of funding to pursue further research in this direction. It also had a very nice emacsbased programming environment in which simple two-keystroke commands could be used to evaluate expressions. Yale Haskell also supported many Haskell language extensions at the time. and run a tutorial on Haskell. based on nhc but with an entirely new back end. and the EclipseFP plug-in for Haskell. Commands could even be queued. but whose evaluation did not result in recompilation of the module. In 1993 Arvind and his colleagues decided to adopt Haskell’s syntax and type system. The Helium compiler. 2001). Helium. turn specific compiler diagnostics on and off. The new idea Sansom and Peyton Jones introduced was to label the source code with cost centres.. and it had begun to seem that Haskell was such a dauntingly large language that no further implementations would emerge. there was a factor of 3 to 5 in lazy code that could not be overcome due to the limitations of the Lisp back end. whose programming language was called Id. The imperative nature of Lisp prevented many other optimisations that could be done in a Haskell-specific garbage collector and memory manager. parallel evaluation order. there was no real hope of making Yale Haskell run any faster without replacing the back-end and runtime system. or printing a backtrace of the stack on errors. jhc is a new compiler. The York Haskell Compiler. UHC and EHC (. is a new compiler for Haskell 98. developed by Krasimir Angelov and Simon Marlow (Angelov and Marlow. but on a uniprocessor. Optimisations such as reusing the storage in a thunk to hold the result after evaluation were impossible with the Common Lisp runtime system. dynamic types.sourceforge. This whole-program approach allows a completely different approach to implementing type classes. 1999). I-structures. Another nice feature of Yale Haskell was a “scratch pad” that could be automatically created for any module. 10. Based on early work by Johnsson and Boquist (Boquist. Worse. mutually recursive modules. then the time devoted to the task is divided among these functions in a way that disguises the time spent on the task itself. 2002) and Ennals’s optimistic evaluation (Ennals and Peyton Jones. beginning in the early 1990s. These extensions included monads. without using dictionary-passing. Profiling and debugging One of the disadvantages of lazy evaluation is that operational aspects such as evaluation order. yhc. is focused especially on teaching. However.Although performance was an important aspect of the Yale compiler. which are designed to be reusable in many different contexts and for many different tasks—so these functions feature prominently in time profiles. and on giving high-quality type error messages (Heeren et al. Developing successful profiling and debugging tools for Haskell has taken considerable research. conventional profiling and debugging methods are hard to apply. can vary between executions of the same code. led by Arvind. depending on the demands the context makes on its result. All the compilers described so far were projects begun in the early or mid ’90s. While performance within the strict subset of Haskell was comparable with other systems.1 Time profiling 9. 2003a). are not easily predictable from the source code—and indeed. It had been known for some time that lazy programs could sometimes exhibit astonishingly poor space behaviour—so-called space leaks. The reason was that many functions were “nearly.The profiling tool they built then assigned time and space costs to one of these cost centres. Both seq and strict components of data structures were already present in Miranda for the same reasons (Turner. but also allowed the compiler to optimise the representation of the data type in some cases. On the contrary. seq is not definable in the lambda calculus. seq weakens . which evaluates its first argument. by profiling the contents of the heap. and then returns its second: seq x y = ⊥. 1995). this was not the main reason for introducing it into Haskell. seq was primarily introduced to improve the speed of Haskell programs! By 1996. strict. Runciman and Wakeling reduced the peak space requirements of a clausification program for propositional logic by two orders of magnitude. making the assignment of costs to cost centres precise. which not only made many more functions strict. but the results of strictness analysis were not always as good as we hoped. Using these constructs. Hughes was very concerned that Haskell’s version of seq should support space debugging well.” but not quite. which had already been optimised using their earlier tools. Although seq was not introduced into Haskell primarily to fix space leaks. 1996a). who had by o this time joined Runciman at the University of York. 1984. The next step was thus to extend the heap profiler to provide direct information about object lifetimes. y. Runciman and Wakeling developed a profiler that could display a graph of heap contents over time.3 Controlling evaluation order In 1996.3 introduced two features that give the programmer better control over evaluation order: • the standard function seq. Runciman and R¨ jemo were able to improve the peak space requirements of o their clausify program to less than 1K—three orders of magnitude better than the original version. This program was in LML. But adding seq to Haskell was controversial because of its negative effect on semantic properties. was never used at all (void) (Rjemo and Runciman. in particular speeding up GHC itself by a factor of two. which could explain why data was not garbage collected by showing which objects pointed at the data of interest (Rjemo and Runciman. The detailed information now available enabled lazy programmers to make dramatic improvements to space efficiency: as the first case study. This semantics was published at POPL in 1995. when he was working on his heap profiler and Hughes had a program with particularly stubborn space leaks—the two spent much time working together to track them down. 1993). 10. or. how can one ensure that cost assignments are independent of evaluation order (which the programmer should not need to be aware of)? These questions are hard enough to answer that Sansom and Peyton Jones felt the need to develop a formal cost semantics. sometimes dramatically shortening the lifetimes of data structures. the top-level constructor of the data. or even combinations of the two (for example. if large numbers of them end up with long lifetimes. and so the strictness analyser was forced to (safely) classify them as non-strict. while seq (\x -> ⊥) 0 does not)—a distinction that Jon Fairbairn. but there was no practical way of finding the causes of space leaks in large programs. whose elements are evaluated before the list is constructed. 1996b). and is the only way to distinguish \x -> ⊥ from ⊥ (since seq ⊥ 0 goes into a loop. after their last use?” With information at this level of detail. a programmer can move selected computations earlier. “show the allocating functions of all the cons cells in the heap over the entire program run”). Moreover. 1985). as in: data SList a = SNil | SCons !a !(SList a) where the exclamation points denote strict fields. and thus here define a type of strict lists.2 Space profiling Sansom and Peyton Jones focused on profiling time costs. and indeed seq had been used to fix space leaks in lazy programs since the early 1980s (Scheevel. Programmers expect that costs should be assigned to the closest enclosing cost centre—but should this be the closest lexically enclosing or the closest dynamically enclosing cost centre? (Surprisingly. This step was taken by Runciman and R¨ jemo (the author of nhc). Since Haskell programs allocate objects very fast. thus aggregating all the costs for one logical task into one count (Sansom and Peyton Jones. 1983). Combina- tions of these forms made it possible for programmers to get answers to very specific questions about space use. and time and again a carefully placed seq proved critical to plugging a leak. Haskell 1. Runciman had spent a sabbatical at Chalmers in 1993. then the peak space requirements can be very high indeed. and that is why lazy evaluation contributes to space leaks. Hughes and Runciman were by this time well aware of its importance for this purpose. In particular. Indeed. A further extension introduced retainer profiling. where the cost of using a value the first time can be much greater than the cost of using it subsequently. Hughes. along with the selective introduction of strictness to partially fix them. Assigning costs to explicitly labelled cost centres is much more subtle than it sounds. but a prototype profiling tool was already in use with GHC in 1992. was dead set against making. They also achieved a factor-of-two improvement in the nhc compiler itself. from 1. 10. introducing a seq at a carefully chosen point is a very common way of fixing a space leak. 1995). the availability of a profiler led rapidly to faster Haskell programs.3 megabytes to only 10K (Runciman and Wakeling. Today. we understood the importance of using strictness analysis to recognise strict functions. such as “what kind of objects point at cons cells allocated by function foo. Strictness analysers were particularly poor at analysing data types. and the visualisation tool they wrote to display heap profiles is still in use to this day. By abstracting away from evaluation order. lazy evaluation also abstracts away from object lifetimes. the programmer could help the strictness analyser deliver better results. but interestingly. but at the same time Colin Runciman and David Wakeling were working on space. in particular. Programmers who cannot predict— and indeed do not think about—evaluation order also cannot predict which data structures will live for a long time. By introducing calls of seq. would never be used again (drag). hence the introduction of strictness annotations in data type declarations.) In a language with first-class functions. in order to invoke them using call-by-value rather than the more expensive call-by-need. Runciman and Wakeling’s original profiler worked for LML. the best answer is the closest lexically enclosing one (Sansom and Peyton Jones. indeed. if x =⊥ otherwise • strictness annotations in data definitions. classified by the function that allocated the data. Not surprisingly. should the cost of evaluating a function necessarily be assigned to the same cost centre as the costs of calling the function? In a call-by-need implementation. the problem was discussed in Hughes’s dissertation in 1984. but it was rapidly adopted by Haskell compilers. The new profiler could show how much of the heap contained data that was not yet needed (lag). which already had seq. a -> b -> b. parametricity was by this time not just a nice bonus. We have sacrificed parametricity in the interests of programming agility and (sometimes dramatic) optimisations. but even semantics has its price. 10. on the other hand. It’s worth noting that making programs stricter is not the only way to fix space leaks in Haskell.3 introduced a class class Eval a strict :: seq :: strict f x where (a->b) -> a -> b a -> b -> b = x ‘seq‘ f x tion required type signatures on these particular definitions (Section 6. making heavy use of polymorphism in the different layers. was very concerned that seq should be applicable to values of any type—even type variables—so that space leaks could be fixed even in polymorphic code. a Unfortunately. 1990b) had proven too expensive for daily use. and that can in turn be guaranteed by giving build a rank-2 type (Section 6. and build g = g (:) [] which constructs one. The extreme sensitivity of Haskell’s space use to evaluation order is a two-edged sword. as contexts of the form Eval a =>. two of Hughes’s students implemented a TCP/IP stack in Haskell. GHC still uses short-cut deforestation. profiled. Launchbury argued forcefully that parametricity was too important to give up.8. thus warning the programmer and the compiler that parametricity properties in that type variable were restricted. for this very reason. To avoid such problems. delaying the construction of a long list until just before it was needed. we are a little ashamed that reasoning about space use in Haskell is so intractable. it holds only if g has a sufficiently polymorphic type. each insertion of a seq became a nightmare. but because Haskell’s monomorphism restric- . The proof relies on the parametrictity properties of g’s type. it is very hard for programmers to anticipate their program’s space behaviour and place calls of seq correctly when the program is first written. But whenever they inserted a call of seq on a type variable. then returns its second—but it is not at all uncommon for the printing of the first argument to trigger another call of trace before the printing is complete.4 Debugging and tracing Haskell’s rather unpredictable evaluation order also made conventional approaches to tracing and debugging difficult to apply. and the major space leaks found. Since space debugging is to some extent a question of trial and error. space performance can be improved dramatically by very small changes in just the right place— without changing the overall structure of the program. Deforestation is an important optimisation for programs written in the “listful” style that Haskell encourages. but because making the necessary corrections was simply too heavyweight. with the suspect operations as its members. with the property that foldr k z (build g) = g k z (the “foldr/build rule”) (Gill et al. because seq does not satisfy the parametricity property for its type ∀a. more sophisticated debuggers aim to abstract away from the evaluation order. On the other hand. 1989) in a way that has recently been precisely characterised by Patricia Johann and Janis Voigtl¨ nder (Johann and a Voigtl¨ nder. Thus. Their code turned out to contain serious space leaks. This elegant use of parametricity to guarantee a sophisticated program transformation was cast into doubt by seq.2). this equation does not hold foldr ⊥ 0 (build seq) = seq ⊥ 0 Haskell’s designers love semantics. which consumes a list. Hearing Runciman describe the first heap profiler at a meeting of Working Group 2. 1993). but the justification for an important compiler optimisation. Inspired by the Fox project at CMU.b. Hughes. Object lifetimes can be shortened by moving their last use earlier—or by creating them later. the limitations of this solution soon became apparent. 2004). Yet Haskell encourages programmers—even forces them—to forget space optimisation until after the code is written.the parametricity property that polymorphic functions enjoy. the students needed to insert and remove calls of seq time and time again. However. given sufficiently good profiling information.3 was to make seq an overloaded function. Haskell 1. seq is a simple polymorphic function that can be inserted or removed freely to fix space leaks. Thus short-cut deforestation remained sound. without changing the types of enclosing functions. and neither do polymorphic functions that use it. the first optimisation Runciman and Wakeling made was to make the program more lazy. requiring repeated compilations to find affected type signatures and manual correction of each one. leading to very garbled output. It turns out that the foldr/build rule is not true for any function g. This experience provided ammunition for the eventual removal of class Eval in Haskell 98. and at that point puts powerful tools at the programmer’s disposal to fix them. The solution adopted for Haskell 1. namely deforestation—the transformation of programs to eliminate intermediate data structures. rather than a polymorphic one. However. Maybe this is nothing to be ashamed of. Moreover. This would not have mattered if the type signatures were inferred by the compiler—but the students had written them explicitly in their code. they had done so not from choice. These two goals are virtually incompatible.. but it is unsound—for example. his translation used only one third as much space as the lazy original— but Runciman and Wakeling’s first optimisation made the nowlazier program twice as efficient as Peter Lee’s version. which they attempted to fix using seq. Instead. Applying this rewrite rule from left to right eliminates an intermediate list very cheaply. On the one hand. GHC used shortcut deforestation. This would weaken Wadler’s “free theorems” in Haskell (Wadler. the type signatures of very many functions changed as a consequence of a single seq. As a result. Tiny changes—the addition or removal of a seq in one place—can dramatically change space requirements.7). while space leaks could be fixed at any type. but Wadler’s original transformation algorithm (Wadler. As designers who believe in reasoning. In the end they were forced to conclude that fixing their space leaks was simply not feasible in the time available to complete the project—not because they were hard to find. today. Peter Lee decided to translate the code into ML to discover the effect of introducing strictness everywhere. which depends on two combinators: foldr. Sure enough. The point of the Eval class was to record uses of seq in the types of polymorphic functions. In their famous case study.3 intended. thus weakening the parametricity property that it should satisfy. But often. the type signature of the enclosing function changed to require an Eval instance for that variable—just as the designers of Haskell 1. after all. programmers were not allowed to define their own instances of this class—which might not have been strict (!)—instead its instances were derived automatically. Most Haskell implementations provide a “function” trace :: String -> a -> a that prints its first argument as a side-effect. 4. In contrast to trace. For example. all of whose children behaved correctly. single-stepping. then this itself might trigger faults or loops that would otherwise not have been a problem at all! Henrik Nilsson solved this problem in 1993 (Nilsson and Fritzson. and by the Hat tools described next. 2000). At the end of program execution.org/hat/. but users indicate explicitly which information should be collected by inserting calls of observe :: String -> a -> a in the program to be debugged. data structures that provided all the necessary information for postmortem algorithmic debugging. This transformation includes the libraries (which are often large and use language extensions). Runciman realised that. finally identifying a call with an incorrect result. and asks whether the result is correct. Nilsson and Sparud’s tools are no longer extant. The debugger presents function calls from a faulty run to the user. Programmers can thus ask “Why did we call f with these arguments?” as well as inspect the evaluation of the call itself.. 10. only 3% of respondents named Hat as one of the “most useful tools and libraries. testing tools have been more successful. 1997). the Haskell Object Observation Debugger. and usable with any Haskell 98 compiler. developing efficient methods to build “evaluation dependence trees” (Nilsson and Sparud. it is known whether or not each value was required—if it was.3 Observational debugging A more lightweight idea was pursued by Andy Gill. This “post mortem” approach abstracts nicely from evaluation order. and if it wasn’t. in order to display them in questions to the user. all the collected values are printed. algorithmic debugging. the same trace could be used to support several different kinds of debugging (Wallace et al. 1983). As in Nilsson and Sparud’s work.” In 1996. Since 2001. HOOD can even observe function values. HOOD is also a post-mortem debugger. takes the initiative to explore the program’s behaviour. because they are not needed until later. with a little generalisation. This was the origin of the new Hat project.1 Algorithmic debugging One way to do so is via algorithmic debugging (Shapiro. For example. observational debugging. it seems well suited to lazy programs. in 2002 Hat became a separate tool. as it is used in practice. The most widely used is QuickCheck. observe prints nothing when it is called—it just collects the value of its second argument. Although Nilsson’s debugger did not handle Haskell. tagged with the first. This is probably because Haskell. or even insert bugs into his own code while his back was turned. in 1999–2000 (Gill. Initially usable only with nhc..haskell.4.10. Indeed. 2005). Thus the programmer can observe the collection of values that appeared at a program point. In this 11 In a web survey we conducted. and imposes a substantial performance penalty on the running program.4. has remained a moving target: new extensions appear frequently. which has developed a new tracer for Haskell 98 and a variety of trace browsing tools. then the value was irrelevant to the bug anyway. Since algorithmic debugging just depends on the input-output behaviour of functions. who developed HOOD. together with their arguments and results. rather than the user. the key to Hat’s implementation is an ingenious. but the ideas are being pursued by Bernie Pope in his algorithmic debugger Buddha for Haskell 98 (Pope. HOOD leaves locating the bug to the programmer.2 Debugging via redex trails become a regular part of programming for most users 11 . for the sheer joy of tracking them down with Hat! The Hat suite are currently the most widely used debugging tools for Haskell. and even test coverage measurement. Runciman has regularly invited colleagues to send him their bugs. showing us how much of the input was needed to produce the given result. 1997). by waiting until execution was complete before starting algorithmic debugging. and has been used by all Haskell debuggers since. Sparud joined Colin Runciman’s group at the University of York to begin working on redex trails.1] >>>>>>> Observations <<<<<< nats (0 : 1 : _) This actually provides useful information about lazy evaluation. The QuickCheck user can test that it does just by evaluating quickCheck prop_reverse in a Haskell interpreter. developed by Koen Claessen and Hughes. 10. the function definition prop_reverse :: [Integer] -> [Integer] -> Bool prop_reverse xs ys = reverse (xs++ys) == reverse ys++reverse xs expresses a relationship between reverse and ++ that should always hold. which is often enough to find bugs. However. Observe> take 2 (observe "nats" [0. Furthermore. then its value is now known and can be used in a question. there are trace browsers supporting redex-trail debugging. 10. another form of program trace which supports stepping backwards through the execution (Sparud and Runciman. Jan Sparud was meanwhile developing one that did. systematic source-to-source transformation of the entire program. 2001). and then invoking these functions on random data. Nilsson and Sparud then collaborated to combine and scale up their work. When execution is complete. Today. QuickCheck is based on a cool idea that turned out to work very well in practice.5 Testing tools While debugging tools have not yet really reached the Haskell mainstream. But there is a difficulty—the values of function arguments and (parts of their) results are often not computed until long after the function call is complete. the debugger proceeds to the calls made from the faulty one (its “children”). displaying them as a table of observed arguments and results—the same information that an algorithmic debugger would use to track down the bug location. they have not . but despite their power and flexibility. with values with the same tag gathered together. together with several more specific tools for tracking down particular kinds of problem in the trace—see. values that were collected but never evaluated are displayed as a dummy value “_”. Hat was long restricted to Haskell 98 programs only—a subset to which few serious users restrict themselves. by transforming Haskell program source code to collect debugging information while computing its result. and so it is hard for a language-aware tool such as Hat to keep up. 1994). working by source-to-source transformation. namely that programs can be tested against specifications by formulating specifications as boolean functions that should always return True. This is then reported as the location of the bug. an approach in which the debugger.]) [0. If they were computed early by an algorithmic debugger. in an algorithmic debugger for a small lazy language called Freja. If not. The Haskell language thus had a profound influence on QuickCheck’s design.org/story/2001/7/31/0102/11014. 11. QuickCheck spotted a corner case. returning all possible parses. but in reality orderedList is a test data generator. 2002). The class system is used to associate a test data generator with each type. it is maintained by Angelov. of any types. the Edison library of efficient data structures. for the effort of writing a simple property. but rather provides ways to define test cases. Haskell also has the usual complement of parser and lexer generators. so it is worthwhile to consider how well we have achieved this goal. returning a String. SQLite. The HSQL library interfaces to a variety of databases. QuickCheck is widely used in the Haskell community and is one of the tools that has been adopted by Haskell programmers in industry. To make this work for larger-than-toy examples. Perhaps QuickCheck has succeeded in part because of who Haskell programmers are: given the question “What is more fun. For example. including Marlow’s Haddock tool. . HUnit supports more traditional unit testing: it does not generate test cases. including MySQL. but the key idea is this: a combinator library offers functions (the combinators) that combine functions together to make bigger functions. QuickCheck supports this via an abstract data type of “generators. The good news is that there are far too many interesting applications of Haskell to enumerate in this paper. but its impact was huge. QuickCheck was first released in 1999 and was included in the GHC and Hugs distributions from July 2000. 11. but when properties fail then QuickCheck displays a counter example. organised using type classes.kuro5hin. and run tests automatically with a summary of the results. Dean Herington released HUnit (Herington. an early paper that made the design of combinator libraries a central theme was Hughes’s paper “The design of a pretty-printing library” (Hughes. structure them into a hierarchy. making it easily accessible to most users. Marlow’s Happy was designed to be similar to yacc and generated LALR parsers. which has also acquired a 12 See dedicated following.. with a follow-up article on testing monadic code in 2002 (Claessen and Hughes. provides multiple implementations of sequences and collections.” which conceptually represent sets of values (together with a probability distribution). with quotable quotes such as “QuickCheck to the rescue!” and “Not so fast. 1998a) and maintained by Robert Dockins. In this section we discuss some of the more interesting applications and real-world impacts. Thus. and Oracle. . many that I wouldn’t have found without a massive investment in test cases. In 2002. and it did so quickly and easily. Some early success stories came from the annual ICFP programming contests: Tom Moertel (“Team Functional Beer”) wrote an account12 of his entry in 2001. and Haskell’s syntactic sugar for monads is exploited to make generators easy to write. with an emphasis on successes attributable to specific language characteristics.case testing succeeds. even appearing in job ads from Galois Connections and Aetion Technologies. but also a good example of applying some of Haskell’s unique features. 1995). a test framework inspired by the JUnit framework for Java. which forAll invokes to generate a value for xs. Part IV Applications and Impact A language does not have to have a direct impact on the real world to hold a prominent place in the history of programming languages. In this paper a “smart document” was an abstract type that can be thought of like this: type Doc = Int -> String That is. Postgres. Applications Some of the most important applications of Haskell were originally developed as libraries. programmers can test a very large number of cases. .1 Combinator libraries One of the earliest success stories of Haskell was the development of so-called combinator libraries. For example. then so much the better! QuickCheck is not only a useful tool. the programmer could write prop_insert :: Integer -> Bool prop_insert x = forAll orderedList (\xs -> ordered (insert x xs)) We read the first line as quantification over the set of ordered lists. 2002). impact on the real world was an important goal of the Haskell Committee. which work with ambiguous grammars. being the available width of the paper. From now on. The abstract data type of generators is a monad. with the average category itself containing a score of entries. A port to Erlang has been used to find unexpected errors in a pre-release version of an Ericsson Media Gateway (Arts et al. (“Happy” is a “dyslexic acronym” for Yet Another Haskell Parser. but many more are available. . Parser combinator libraries are discussed later in this section. It defines a domainspecific language of testable properties. The bad news is that Haskell is still not a mainstream language used by the masses! Nevertheless. QuickCheck provides a library of combinators to make such generators easy to define. For example. ODBC. One of the most interesting examples is due to Christian Lindig. What is a combinator library? The reader will search in vain for a definition of this heavily used term. Documentation of Haskell programs is supported by several systems. and find counter examples very quickly. 2005). to test that insertion into an ordered list preserves ordering. On the other hand.org) lists more than a score of categories. The Haskell standard includes a modest selection of libraries. programmers need to be able to control the random generation. and to overload the quickCheck function so that it can test properties with any number of arguments. in the classic Haskell tradition. I’m a QuickCheck man! Today. who found bugs in production-quality C compilers’ calling conventions by generating random C programs in a manner inspired by QuickCheck (Lindig. For example. A first paper appeared in 2000 (Claessen and Hughes. and lays itself out in a suitable fashion. a document takes an Int. Algol was never used substantially in the real world.” concluding QuickCheck found these problems and more. 2006). testing code or writing formal specifications?” many Haskell users would choose the latter—if you can test code by writing formal specifications. This design has been emulated in many other languages. 2000). The Haskell web site (haskell.) Paul Callaghan recently extended Happy to produce Generalised LR parsers. originated by Okasaki (Okasaki. QuickCheck is not the only testing tool for Haskell. there are certain niches where Haskell has fared well. Nevertheless. where t is the type of value returned by the parser. Like many ingenious programming techniques. type system. document layout in the case of prettyprinting). oneOrMore. a function. 1998). .2 Domain-specific embedded languages A common theme among many successful Haskell applications is the idea of writing a library that turns Haskell into a domainspecific embedded language (DSEL). Usually. XML processing (Wallace and Runciman. parens term. and many others. recursive definitions like this are generally not allowed. 2002). and doing so allows an extraordinarily direct transcription of BNF into executable code. 2003). thanks to lazy evaluation. 1998). the BNF f loat ::= sign? digit+ (′ . thereby cluttering the code and (much more importantly) wrecking the abstraction (Syme. GUIs. By “embedded language” we mean that the domain-specific language is simply an extension of Haskell itself. this trade-off is a theme of Hughes’s paper. Hudak. there seem to be two main factors. returning zero or more depleted input strings. 1999). modules and so on.that can be printed. DSLs in Haskell are described in more detail in Section 11. however. 11.optional sign digs <. in Haskell.. and that is true of many other libraries mentioned above. control. 1998). at least conceptually.’ <*> oneOrMore digit) The combinators optional. written by Daan Leijen. A parser may be thought of as a function: type Parser = String -> [String] That is. 2005). Wadler. 1996). First. A parser of this kind is only a recogniser that succeeds or fails. and oneOrMore :: Parser a -> Parser [a]. Examples include pretty printing (Hughes. hardware design. this one goes back to Burge’s astonishing book Recursive Programming Techniques (Burge. laziness makes it extremely easy to write combinator libraries with unusual control flow. Now we can write the float parser using do-notation. 2000). What makes Haskell such a natural fit for combinator libraries? Aside from higher-order functions and data abstraction. 1996a. a non-embedded DSL can be implemented by writing a conventional parser. 1999).2. There are dozens of papers about cunning variants of parser combinators. packrat parsing (Ford.’ oneOrMore digit ) return (mkFloat mb_sgn digs mb_frac) where optional :: Parser a -> Parser (Maybe a). For example.2 Other combinator libraries In a way. a term first coined by Hudak (Hudak. generic programming (L¨ mmel and Peyton Jones. much (although emphatically not all) of the power of macros is available through ordinary function definitions. such as this recursive parser for terms: term :: Parser Term term = choice [ float. 11. In contrast. Such DSELs have appeared in a diverse set of application areas. type checker. and interpreter (or compiler) for the language. database queries (Leijen and Meijer. the most complete and widely used library is probably Parsec. 2004). both concerning laziness. Second. and lexical analysis (Chakravarty. variable.1. vision. Now it is easy to define a library of combinators that combine parsers together to make bigger parsers... oneOrMore :: Parser -> Parser (<*>) :: Parser -> Parser -> Parser It is easy for the programmer to make new parser combinators by combining existing ones. but it was probably Wadler’s paper “How to replace failure by a list of successes” (Wadler.1 Parser combinators The interested reader may find the short tutorial by Hutton and Meijer helpful (Hutton and Meijer. 1999b). such as embedding Prolog and parallel parsing. including graphics. Another productive way to think of a combinator library is as a domain-specific language (DSL) for describing values of a particular type (for example. 11. In practice. and operators are defined that combine these abstract functions into larger ones of the same kind. one wants a parser to return a value as well. depending on how many ways the parse could succeed. function definition mechanism. a data type is defined whose essential nature is often. integer. While a Doc can be thought of as a function. 1985) that brought it wider attention. The final program is then “executed” by decomposing these larger pieces and applying the embedded functions in a suitable manner. XML processing.1.oneOrMore digit mb_frac <. where Lisp macros are used to design “new” languages. sharing its syntax. parsing permutation phrases (Baars et al. 1975). 2003). Haskell is very well suited to such ap- One of the most fertile applications for combinator libraries has undoubtedly been parser combinators.. one would have to eta-expand the definition. ] In call-by-value languages. 2004). although he did not use the word “combinator” and described the work as “folklore”. 2003). Now a library of combinators can be defined such as: above :: Doc -> Doc -> Doc beside :: Doc -> Doc -> Doc sep :: [Doc] -> Doc The function sep lays the subdocuments out beside each other if there is room. indeed. . robotics. 1995. a embedding Prolog in Haskell (Spivey and Seres.′ digit+ )? might translate to this Haskell code: float :: Parser float = optional sign <*> oneOrMore digit <*> optional (lit ’. with dozens of combinator libraries appearing in widely different areas. one can write recursive combinators without fuss. The type of parsers is parameterised to Parser t. combinator libraries do not embody anything fundamentally new. Failure is represented by the empty list of results. including error-correcting parsers (Swierstra and Duponcheel. Instead. music. financial contracts (Peyton Jones et al. scripting. The phrase “embedded language” is commonly used in the Lisp community. The “domain-specific” part is just the new data types and functions offered by a library. Even in Wadler’s original listof-successes paper. or above each other if not. laziness plays a central role. and (<*>) combine parsers to make bigger parsers: optional. the idea has been extremely influential. like this: float :: Parser Float float = do mb_sgn <. a requirement that dovetails precisely with Haskell’s notion of a monad (Section 7). parallel parsing (Claessen. animation. synchronous programming (Scholz. Typically. and more. a Parser takes a string and attempts to parse it. it may not be implemented as a function.optional (do lit ’. 2004. Peterson et al. 2002b). 11. 2001. which stood for “functional reactive animation” (Elliott and Hudak. formal semantics. 1994. the formal semantics of FRP. instead.2.proaches as well. Elliott. One was a small combinator library for manipulating XML. 2002. either continuations or monads (the two approaches are quite similar). with a library and toolset called HaXml (Wallace and Runciman.. this idea can be supported using concepts available in functional languages. The key idea in Fran is the notion of a behaviour. the data-binding approach captures more precise types but is less flexible. Another key idea in Fran is the notion of an infinite stream of events. semideclarative modelling of 3D animations (Elliott et al. TBAG was implemented entirely in C++. Collaborations with Hudak at Yale on design issues. Each program writes an HTML form.. Wan. Schechter et al.. Conal Elliott. The traditional approach to implementing a web application requires breaking the logic into one separate program for each interaction between the client and the web server. Below is a collection of examples. Sage.. 2001.. pulse is a time-varying image value. Elliott’s group released in 1995 a DSL called ActiveVRML that was more declarative than TBAG. consider this Fran expression: pulse :: Behavior Image pulse = circle (sin time) In Fran.. then working at Sun Microsystems. 1997). that captured in a uniform way much of the same functionality provided by the XPath language at the core of XSLT (and later XQuery). Once at Microsoft. which later became Yahoo Stores (Graham. respectively: instance Num (Behavior a) where Beh f + Beh g = Beh (\t -> f t + g t) instance Floating (Behaviour a) where sin (Beh f) = Beh (\t -> sin (f t)) Thinking of behaviours as functions is perhaps the easiest way to reason about Fran programs. 2003). and implementation techniques led in 1998 to a language that they called Fran. Researchers at Brown have more recently ported the basic ideas of FRP into a Scheme environment called “Father Time” (Cooper and Krishnamurthi. including both mobile and humanoid robots (Peterson et al. 1999b). such as Seaside and RIFE. that improves both the modularity and performance of previous implementations (Hudak et al. and most of these languages still face the same trade-offs. 1999). but of course behaviours are abstract.7). 2000) (discussed further in Section 11. These efforts included: the application of FRP to real-world physical systems. and thus can be implemented in other ways. real-time variants of FRP targeted for real-time embedded systems (Wan et al. The same approach was independently discovered by Christian Queinnec (Queinnec. developed a DSL called TBAG for constraint-based. Various “switching” combinators provide the connection between behaviours and events—i. Independently.” This work. 2006). For example.. 1999a. Although largely declarative. just as with combinator libraries described earlier.3). 11. and the use of Yampa in the design of a 3D first-person shooter game called Frag in 2005 (Cheong. and vice versa. 2001). In particular. Using this representation. Hughes’s approach was further developed by Peter Thiemann in the WASH system for Haskell. a classic DSEL.e. A good way to understand behaviours is via the following data type definition: newtype Behavior a = Beh (Time -> a) type Time = Float That is. it is better to invert this view. 1994). 1999). . 2000). was extremely influential. one does not need to invent a completely new language for the purpose. and the connection between them (Wan and Hudak. in seconds. 2004). a first-class data type that represents a time-varying value.2. a behaviour in Fran is really just a function from time to values. Wan et al. both denotational and operational. an approach based on a generalisation of monads called arrows was discovered by Hughes (Hughes. and the responses to this form become the input to the next program in the series. It turns out that the approach using arrows or monads is closely related to the continuation approach (since continuations arise as a special case of monads or arrows). The continuation approach has since been adopted in a number of web frameworks widely used by developers. 2000) (Section 6. 2002).. Haskell has been particularly successful for domain-specific embedded languages. Hudak’s research group and others began a flurry of research strands which they collectively referred to as functional reactive programming. Since many Fran behaviours are numeric. the development of an arrowbased version of FRP called Yampa in 2002. Paul Graham used a continuation-based approach as the basis for one of the first commercial applications for building web stores. and was in fact based on an ML-like syntax (Elliott. the value time used in the pulse example would be defined as: time :: Behaviour Time time = Beh (\t -> t) i. 2005). Courtney. 1997. or FRP.e. describing a circle whose radius is the sine of the time.1 Functional Reactive Programming In the early 1990s. who revised it to use monads in place of arrows (Thiemann. However. and began collaborating with several people in the Haskell community on implementing ActiveVRML in Haskell. and this approach was first taken by the domain-specific language MAWL (Atkins et al. The other was a data-binding approach (implemented as a pre-processor) that mapped XML data onto Haskell data structures. the identity function. and instead to write a single program containing calls to a primitive that takes an HTML form as argument and returns the responses as the result. Haskell’s Num and Floating classes (for example) allow one to specify how to add two behaviours or take the sine of a behaviour. The two approaches have complementary strengths: the combinator library is flexible but all XML data has the same type. It was about that time that Elliott also became interested in Haskell. 2000) and further developed by Matthias Felleisen and others in PLT Scheme (Graunke et al. the use of FRP and Yampa in the design of graphical user interfaces (Courtney and Elliott. However. between the continuous and the discrete—thus making Fran-like languages suitable for socalled “hybrid systems.2 XML and web-scripting languages Demonstrating the ease with which Haskell can support domainspecific languages.. Both approaches are still common in many other languages that process XML. The success of his work resulted in Microsoft hiring Elliot and a few of his colleagues into the graphics group at Microsoft Research. They actually provided two approaches to XML processing.. 1996). Arguably. since the program began executing. Wallace and Runciman were one of the first to extend an existing programming language with features for XML programming. Haskell was also one of the first languages to support what has become one of the standard approaches to implementing web applications. 2002a). to be used almost interchangeably. 1984). generated important parts of an FPGA design—in most cases without anyone outside Xilinx being aware that Haskell was involved! Singh tells an amusing anecdote from these years: on one occasion. which is frustrating given that the only reason for the monad is to distinguish sharing from duplication! Lava has been used to teach VLSI design to electrical engineering students. Consider the following code fragment: . delivered to Xilinx customers as compiled programs that.Most of this work has been done in languages (Scheme. for example. Thiemann also introduced a sophisticated use of type classes to ensure that HTML or XML used in such applications satisfies the regular expression types imposed by the document type declarations (DTD) used in XML (Thiemann.. which are perfectly distinguishable in Haskell.nand a b y <.. these two fragments should be indistinguishable. 2005). in both the earlier Fort´ and current IDV systems. musical ornamentation and embellishment (legato. 1995). Clever use of the class system enables signals-of-lists and lists-of-signals. This is the recommended “Haskellish” approach—yet adopting a monadic syntax uniformly imposes quite a heavy cost on Lava users. because higher-order functions are ideal for expressing the regular structure of many circuits. 1983. etc. verify. Primitive values corresponding to notes and rests are combined using combinators for sequential and parallel composition to form larger musical values. Claessen used unsafePerformIO to implement “observable sharing”. and sent the result back to Singh the next day. the struggle to teach monadic Lava syntax to non-Haskell users became too much.) are treated by an object-oriented approach to musical instruments to provide flexible degrees of interpretation. One of the first to do so was John O’Donnell. clearly. for which he won the ACM Distinguished Dissertation award in 1984 (Johnson. This was one of the first successful industrial applications of Haskell: Singh was able to generate highly efficient and reconfigurable cores for accelerating applications such as Adobe Photoshop (Singh and Slous.. Lazy functional languages have a long history of use for describing and modelling synchronous hardware. though. Lava used a “circuit monad” to make the difference observable: do x <.2). 1998). Intel’s largescale formal verification work is based on a lazy language. 1984). Higher-order functions for capturing regular circuit structure were pioneered by Mary Sheeran in her language µFP (Sheeran. but the tools are still implemented in Haskell. Capturing sharing proved to be particularly tricky...2 (Barton.. Sheeran. and Mary Sheeran et al. for two fundamental reasons: first. Launchbury and his group used Haskell to describe microprocessor architectures in the Hawk system (Matthews et al. 1978b). 1995). developed Lava (Bjesse et al. Net-lists generated from these two descriptions should therefore be different—yet according to Haskell’s intended semantics. lazy functional programming has had an important impact on industrial hardware design. In addition. For years thereafter. Despite its unsafe implementation. Smalltalk. the designer intends to model a single NAND-gate whose output signal is shared by x and y. For a while. Here it seems clear that the designer intends to model two separate NAND-gates. and Lava in particular.3 Hardware design languages let x = nand a b y = nand a b in .2. Singh mailed his code to Peyton Jones at Microsoft Research. The first version of Haskore was written in the mid ’90s by Hudak and his students at Yale. which can simulate. It was not long before Haskell too was applied to this domain. inspired by Backus’ FP (Backus. 1998). without a profusion of zips and unzips. who was able to compile it with the development version of GHC. crescendo. and second. and so on. When Singh told his manager. 1999). because lazy streams provide a natural model for discrete time-varying signals. theoremprover input.. But what about let x = nand a b y = x in . Via this and other work. 11.. Using lazy streams dates to Steve Johnson’s work in the early eighties. A retrospective on the development of the field. and aside from the standard distribution at Yale. and Dave Barton was later invited to join the Haskell Committee as a result. Thiemann’s work has shown that the same approach works with a static type system that can guarantee that the type of information returned by the form matches the type of information that the application expects. can be found in Sheeran’s JUCS paper (Sheeran. Singh used Lava to develop specialised core generators. given appropriate parameters. Hudak.2. and thus Lava has both tested Haskell’s ability to embed other languages to the limit.nand a b y <. and in the end. 11. Hudak. Over the years it has matured in a number of different ways.. a proprietary hardware description language closely based on Haskell (see Section 12. versus do x <. When Satnam Singh moved to Xilinx in California. Another was Dave Barton at Intermetrics. who proposed MHDL (Microwave Hardware Description Language) based on Haskell 1. 2003). 1998). Sandburst was e founded by Arvind to exploit Bluespec. and generate net-lists for the circuits described.4. “You mean to say you got 24-hour support from Microsoft?” Lava in particular exercised Haskell’s ability to embed domain specific languages to the limit. he took Lava with him and added the ability to generate FPGA layouts for Xilinx chips from Lava descriptions.. Now. This was one of the earliest signs of industrial interest in Haskell. and contributed a new mechanism to extend its power. allowing Lava to use the first syntax above. A little later. making simulation of functional models very easy.4 Computer music Haskore is a computer music library written in Haskell that allows expressing high-level musical concepts in a purely declarative way (Hudak et al.return x . a system for describing regular circuits in particular. 1996b. Both Hawk and Lava are examples of domain-specific languages embedded in Haskell. Ruby) without static typing. whose Hydra hardware description language is embedded in Haskell (O’Donnell. but still to distinguish sharing from duplication when generating net-lists. the manager exclaimed incredulously.. The language is now being marketed (with a System Verilog front end) by a spin-off company called Bluespec. 1996. a bug in GHC prevented his latest core generator from compiling.nand a b . observable sharing turns out to have a rather tractable theory (Claessen and Sands. filters. 1992) (unlike the fragmented approach to GUIs taken by Haskell). Monads and arrows are flexible mechanisms for combining operations in ways that reflect the semantics of the intended domain. 6. 2001.5 Summary transformers (Carlsson and Hallgren. 2000). Despite the elegance and innovative nature of these GUIs. reflects this simple structure. be generalized to other forms of time-varying media (Hudak. and each system lacked the full range of widgets. none of them broke through to become the GUI toolkit of choice for a critical mass of Haskell programmers. The search for an elegant. Thomas Hallgren.2. devel- oped Haggis. none of them. 2. A notable example is the Clean graphical I/O library. are combined in a signal-processing-like manner. and implemented only part of the full interface. and they all remained singlesite implementations with a handful of users. and the shape of the network could change dynamically. event-loop-based interaction model of mainstream programming languages.2. “What is the right way to interact with a GUI in a purely declarative setting?” This question led to several quite unusual GUI systems: • The Fudgets system was developed by Magnus Carlsson and oped FranTk (Sage. Gtk2Hs. each button might have a thread dedicated to listening for clicks on that button. And Haskell libraries for XML processing share a lot in common with parsing and layout. a so-called “binding. There was no central event loop: instead each stream processor processed its own individual stream of events. Haskell’s purity. at Chalmers University in Sweden. their authors often developed quite sophisticated Haskell wrapper libraries that present a somewhat higher-level interface to the programmer. such as Gtk2Hs and WxHaskell. • Based on ideas in Fran (see section 11. by the direct route of interfacing to some widely available GUI toolkit library. Each processor had a visual appearance. Meanwhile.. in which oscillators. which replaced the event loop with extremely lightweight concurrency. developing a fully featured GUI is a huge task. It is easy to see why. Although many other computer music languages preceded Haskore. that programmers have come to expect. in fact..3 Graphical user interfaces Once Haskell had a sensible I/O system (Section 7). Rather than adopt the imperative. 11. lazy evaluation. declarative GUI toolkit remains open. The underlying GUI toolkit for Clean was the Macintosh. TkGofer. instruments—in a declarative way. early bindings were often somewhat compiler specific. Gtk+Hs) and WxWidgets (WxHaskell). 5. usable. Meurig Sage devel- Why has Haskell been so successful in the DSEL arena? After all. and higher-order functions are the key features that make possible this elegant design. which combined the best ideas in Fran with those of the GUI toolkit Tk. 2000). together with an event loop and call-backs. are generated automatically by transforming the machine-readable descriptions of the library API into the Haskell 98 standard FFI. The stress was on widget composition.g. Much later. then a research student at Glasgow. the Clean I/O library was ported to Haskell (Achten and Peyton Jones. and snazzy appearance.e. for example. and is actively used for computer music composition and education. First. The requirements of Haggis directly drove the development of Concurrent Haskell (Peyton Jones et al. the quest for purity always led to programming inconvenience in one form or another. TclHaskell. For example. as well as being connected to other stream processors. GTK (e. which formed an integral part of the Clean system from a very early stage (Achten et al.1). and uses arrows to “wire together” GUI components in a data-flow-like style. As a direct result. invariably based on imperative widget creation and modification. but we may identify the following ways in which Haskell is a particularly friendly host language for a DSEL: 1. so that complex widgets could be made by composing together simpler ones (Finne and Peyton Jones.” Early efforts included an interface to Tcl/Tk called swish (Sinclair. which is just a meta-language for contextfree grammars. etc. envelope generators. One of the more recent additions to the system is the ability to specify musical sounds—i. Infix syntax allows one to emulate infix operators that are common in other domains. More recent bindings. Courtney. Lazy evaluation allows writing recursive definitions in the new language that are well defined in the DSEL. they sought to answer the question. but with many similarities to Fudgets. Fruit is purely declarative.g. but would not terminate in a strict language. or stream . • Antony Courtney took a more declarative approach based en- tirely on FRP and Yampa. perhaps surprisingly. 2004). 1996). 4. a parser combinator library can be viewed as a DSEL for BNF. the pragmatists were not idle. Type classes permit overloading of many standard operations (such as those for arithmetic) on many nonstandard types (such as the Behaviour type above). especially as the libraries involved have huge interfaces.Henning Thielemann maintains an open-source Darcs repository (Section 12. 1992). They treated the GUI as a network of “stream processors”. many languages provide the ability to define new data types together with operations over them. Over-loaded numeric literals allow one to use numbers in new domains without tagging or coercing them in awkward ways. but Clean allows the user to specify the interface by means of a data structure containing call-back functions. 11. 1993). 3. These bindings all necessarily adopt the interaction model of the underlying toolkit. The reader will also note that there is not much difference in concept between the combinator libraries described earlier and DSELs. Higher-order functions allow encoding nonstandard behaviours and also provide the glue to combine operations. • Sigbjorn Finne. 1995). People interested in this area rapidly split into two groups: the idealists and the pragmatists. Nevertheless. 2004). and a DSEL is little more than that! No single feature seems dominant. The idealists took a radical approach. It is probably only for historical reasons that one project might use the term “combinator library” and another the term “DSL” (or “DSEL”). Second. HTk) and bindings to other tool kits such as OpenGL (HOpenGL).3) to support further development. in a system that he called Fruit (Courtney and Elliott. including an imperative model of call-backs. and thus with combinator libraries. These efforts were hampered by the absence of a well defined foreignfunction interface for Haskell. Haskore is based on a very simple declarative model of music with nice algebraic properties that can. Haskore has been used as the basis of a number of computer music projects.. but there were many subsequent variants (e. They just wanted to get the job done. and an interface to X windows (the Yale Haskell project).. the next obvious question was how to drive a graphical user interface (GUI). A system description and an analysis of the MUC-6 results were written by Callaghan (Callaghan. 2000) uses multimedia applications (such as graphics. interactive interface. but initially there was a dearth both of textbooks and of robust implementations suitable for teaching..1 Education One of the explicit goals of Haskell’s designers was to create a language suitable for teaching. In 2002. particularly for the compilation and partial evaluation aspects (of grammars). written in Haskell and related languages. 11. Both problems were soon addressed. even using multiple languages simultaneously. including financial information analysers and information extraction tools for Darpa’s “Message Understanding Conference Competitions” (MUC-6 and MUC-7). Simon Thompson published a Haskell version of his Craft of Functional Programming textbook. At the turn of the millennium. This was followed the next year by Fethi Rabhi and Guy Lapalme’s algorithms text Algorithms: A functional programming approach. in which many aspects of Haskell were invaluable in development. the same description specifies both how to parse concrete syntax into abstract syntax. and low-level IO operations. At its core was a semantic network containing some 90. Indeed. protected execution of user binaries. and robotics) as an underlying theme (see Section 11. This book (revised in 1998) has become the top-selling book on Haskell. Gibbons and de Moor edited The Fun of Programming. The release of Gofer in 1991 made an “almost Haskell” system available with a fast. GF allows users to describe a 13 This precise abstract syntax together with one or more concrete syntaxes. and discusses new tools and libraries that are emerging. The main GF system is written in Haskell and the whole system is open-source software (under a GPL licence). Laziness was essential in handling the explosion of syntactic ambiguity resulting from a large grammar. It uses a monad to provide access to the Intel IA32 architecture. In 1996. easing the construction of new applications. The latter involved processing original Wall Street Journal articles. Fragments of semantic net could also be rendered back to English or Spanish. 2004) is a language for defining grammars based on type theory. textbooks teaching more advanced techniques began to appear. 11. 2005). and in the same year Okasaki published the first textbook to use Haskell to teach another subject—Purely Functional Data Structures. to perform tasks such as identifying key job changes in businesses and summarising articles. intended for teaching functional programming to first-year students. In 1995. The system used multiple DSELs (Section 11. using Haskell. Translator and Analyzer) was developed by Garigliano and colleagues at the University of Durham (UK) between 1986 and 2000. a later project. In this section we survey some of these groups of users and briefly assess Haskell’s impact on other programming languages. LOLITA was an early example of a substantial application written in a functional language: it consisted of around 50. verifying mathematical proof texts and software specifications.To this day.000 interlinked concepts. implemented by Sebastian Carlier and Jeremy Bobbio (Carlier and Bobbio. The first Haskell texts were quite introductory in nature.4 Operating Systems An early operating system for Haskell was hOp. Bird revised Introduction to Functional Programming. Haskell was chosen as a suitable language for this kind of system. We highlight two substantial applications that make significant use of Haskell. Many reusable “resource grammars” are available. Text could be parsed and analysed then incorporated into the semantic net. where it could be reasoned about (Long and Garigliano. The first Haskell book—Tony Davie’s An Introduction to Functional Programming Systems Using Haskell—appeared in 1992. 12. music. The language is the focal point of an active and still-growing user community. a micro-kernel based on the runtime system of GHC. The impact of Haskell Haskell has been used in education. implemented a system in which the kernel. A unique aspect of this book is its use of DSELs (for animation. including virtual memory management. when Hugs was released. and interactive dialogue systems. 2004). in 1998. 1998).2) for semantic and pragmatic processing and for generation of natural language text from the semantic net. and how to linearise the abstract syntax into concrete syntax. Several applications were built using the system. far ahead of its closest competitor in Amazon’s sales rankings. section is based on material contributed by Paul Callaghan. Also important was the ability to work with complex abstractions and to prototype new analysis algorithms quickly. House. window system. Building on hOp. Linguistic Interactor. it is widely regarded as being more suitable for an advanced course. LOLITA was one of a small number of systems worldwide to compete in all sections of the tasks. Richard Frost (Frost. almost as soon as the language was defined. It was designed as a general-purpose tool for processing unrestricted text that could be the basis of a wide variety of applications.5 Natural language processing13 Haskell has been used successfully in the development of a variety of natural language processing systems and tools. 2006) gives a comprehensive review of relevant work in Haskell and related languages. despite various putative standardisation efforts. 1993). Object-based. Although often suggested for first-year teaching. and music) to teach Haskell idioms in novel ways that go well beyond earlier books.2). and all device drivers are written in Haskell (Hallgren et al. and it was much used with semantic ambiguity too. which had first appeared as a Miranda textbook a year earlier. Hudak’s Haskell School of Expression (Hudak. 12. The GF system has many applications. Monads and type classes are extensively used in the implementation. an advanced book on Haskell programming with contributions by many authors. and by companies. dedicated to Richard Bird and intended as a follow-up to his text. by the open-source community. The Grammatical Framework (GF) (Ranta. multi-lingual authoring. It is also a complex and demanding application. . An editing mode allows incremental construction of well formed texts. The arrival of Haskell 98 gave textbooks another boost. Durham’s LOLITA system (Large-scale. including highquality translation. although WxHaskell (another side project of the indefatigable Daan Leijen) has perhaps captured the majority of the pragmatist market. the Haskell community periodically agonises over the absence of a single standard Haskell GUI. and new texts continue to appear. developed by Ranta and colleagues at Chalmers University. so that ambiguity at various levels was unavoidable and significant. Yet no one approach has garnered enough support to become the design. Haskell finally had an implementation perfect for teaching—which students could also install and use on their PCs at home. good for teaching. such as Graham Hutton’s 2006 book Programming in Haskell. it was being taught to undergraduates at Oxford and Yale. communication in controlled language. LOLITA was designed to handle unrestricted text. animation. Lacking such a standard is undoubtedly an inhibiting factor on Haskell’s development.000 lines of Haskell (with around 6000 lines of C). Rex Page carried out a careful three-year study. which has rapidly become popular. with total student numbers of 1. which suggests that the use of Haskell in teaching is currently seeing rapid growth. there has been an excellent series of International Summer Schools on Advanced Functional Programming. The years in which Haskell courses are taught are shown in this table: Year 1st undergrad 2nd undergrad 3rd undergrad 4–5th undergrad Postgrad %ge 20% 23% 25% 16% 12% To try to form an impression of the use of Haskell in university education today. exploiting Haskell’s mathematical look and feel. a simplified version of Haskell. Helium. we are in dire need of a book on FP that not only presents the purely functional aspects. 12. We make no claim that our survey is complete. We received responses from 48 courses of this type. • Haskell input/output is not well covered by current textbooks: “my impression was that students are mostly interested in things which Simon Peyton Jones addressed in his paper ‘Tackling the Awkward Squad’ (Peyton Jones. and found that students in the latter group became significantly more effective programmers (Page. and Portugal (5%).” As mentioned earlier.000–10. but it was quite extensive: 126 teachers responded. 1998. Both take advantage of well-known strengths of the language—symbolic computation and its mathematical flavour. Haskell was among the first two programming languages only in 35% of cases (15% as first language. most textbooks on Haskell programming are intended for that purpose. the UK (19%). asked only for approximate student numbers. Most Haskell courses are aimed at experienced programmers seeing the language for the first time: 85% of respondents taught students with prior programming experience. We found that—even at Universities that teach Haskell—Java was the first language taught in 47% of cases. 1996. with 3–700 students. • Simple loop programs can be harder for students to grasp when expressed using recursion. Sweden (8%). but also comprehensively covers issues discussed in that paper. Cordelia Hall and John O’Donnell published the first textbook taking this approach in 2000—Discrete Mathematics Using a Computer. How does Haskell measure up in teaching? Some observations we received were: • Both respondents and their students are generally happy with the choice of language—“Even though I am not a FL researcher. for the purpose of teaching FP.1 A survey of Haskell in higher education Haskell is used to teach nine compilers courses. we carried out a web survey of courses taught in the 2005–2006 academic year. only 28 of the courses in our survey were aimed at beginners (i. Australia (7%). • Array processing and algorithms using in-place update are messier in Haskell. Recently (in 2004) Doets and van Eijck have published another textbook in this vein.” It remains to be seen how successful it will be. sometimes leading to puzzling error messages for students. and distributed and parallel programming—revealing a surprising variety in the subjects where Haskell appears.” • Haskell attracts good students—“The students who take the Haskell track are invariably among the best computer science students I have taught.” • Fundamental concepts such as types and recursion are ham- mered home early. on the assumption that basic programming will teach at least two languages. I enjoy teaching the course more than most of my other courses and students also seem to like the course. quantum computing. and also the most commonly taught second language (in 22% of cases). 25% of these courses began using Haskell only in the last two years (since 2004). 2001). Enthusiasts have long argued that functional languages are ideally suited to teaching introductory programming. I think.300–2. There have been five such summer schools to date. 2003). and one course in each of domain-specific languages. • The class system causes minor irritations. The countries from which we received most responses were the USA (22%). However.700 students annually. There were 25 such courses. or a group taught using Hall and O’Donnell’s text. is being developed at Utrecht specifically for teaching—the first release was in 2002. 2–4. there is currently no Haskell-based textbook aimed at this market—an opportunity. . A typical comment from respondees was that the course was intended to teach “a different style of programming” from the objectoriented paradigm that otherwise predominates. together they teach Haskell to 5. held in 1995. beginners’ courses did account for the largest single group of students to study Haskell. • Students can tackle more ambitious and interesting problems earlier than they could using a language like Java. Helium lacks classes. perhaps? 14 We This illustrates once again that the majority of courses are taught at more advanced levels. from 89 universities in 22 countries. For the more advanced students. 20% as second language).e. computer music. or assuming no previous programming experience). and indeed. We also asked respondents which programming languages students learn first and second at their Universities. at which projects involving Haskell have always had a significant presence.1. Germany (11%). The Haskell Road to Logic. and 2004.900 per year. but only 23% taught students who already knew Haskell. Maths and Programming. Four other more advanced programming courses (with 3–700 students) can be said to have a similar aim. The most common courses taught using Haskell are explicitly intended to teach functional programming per se (or sometimes declarative programming). taught in the first year. hence the wide range of possibilities.000 students every year14 . It is also used to teach six courses in theoretical computer science (2–400 students). with 800–1. in which students were randomly assigned to a group taught discrete mathematics in the conventional way. there are two courses in hardware description (50–100 students). 2002. but then it also lacks textbooks and the ability to “tackle the awkward squad. Surprisingly. because each such course is taken by more students on average than later courses are.Another trend is to teach discrete mathematics and logic using Haskell as a medium of instruction. which enables it to give clearer error messages. The third large group of courses we found were programming language courses—ranging from comparative programming languages through formal semantics. Finally.000 every year. Surprisingly. 12. rather than Haskell in particular. indicate fairly convincingly the superiority of Haskell in this particular experiment. an area where Haskell could have had significant impact. Things started well for Galois.S. While many Haskell programmers work for companies. at least partly because it is a ferociously difficult language to implement. Tang started her project on 1 February 2005. He came across Haskell and. 1996). they usually have an uphill battle to persuade their management to take Haskell seriously. three to Haskell. We invited four companies that use Haskell regularly to write about their experience. as Roundy puts it. 2005). built prototypes for this software component. with repositories exchanging updates by means of patches. 12 Lisp.S. Most notably. 12. Darcs considers each user to have a fully fledged repository. including the development of a “common prototyping language. “after working on it for a while I had an essentially solid mass of bugs” (Stosberg. and jumped from there to Pierce’s book Types and Programming Languages (Pierce. a three-day programming sprint that has been held every year since 1998.” to help in the design phase of large software systems. 2 SML. a company that began with the idea of finding clients for whom they could build great solutions simply by using the power of Haskell. 2005). Riding on that wave of wisdom.” Ten different programmers. including Hudak’s effort at Yale. It addresses the same challenges as the well-established incumbents such as CVS and Subversion. The community was simply not ready to adopt such a radical programming language. Pugs makes heavy use of parser combinators (to support a dynamically changeable parser) and several more sophisticated Haskell idioms. the Galois engineers shifted to ML. government. Darpa had christened Ada as the standard programming language to be used for future software development contracts with the U. three have gone to OCaml.. This rather democratic architecture (similar to that of Arch) seems very attractive to the open-source community. These contests have been open to anyone. Mark Jones.S.. The book suggests implementing a toy language as an exercise. developed its very own Programming Contest. 1993). and one to Cilk (Blumofe et al. so Tang picked Perl 6. attracted 48 entries. Their lightly edited responses constitute the rest of this section. At the time there were no implementations of Perl 6. or whatever. Purely Functional. began to wonder: can we do something with functional languages. and has numerous technical advantages as well (Roundy.7) and delimited continuations (Dybvig et al. then a professor in the functional programming research group at the Oregon Graduate Institute. and with Haskell in particular? He founded Galois Connections Inc. No recommendations were made to use Haskell in any kind of government software development. then a Research Scientist at Yale. but its data model is very different. One of these involved building a code translator for test program for chip testing equipment. the workshops for Commercial Users of Functional Programming. one to C++. 9 Scheme. Darcs and Pugs. Nevertheless. The first ICFP Programming Contest. run by Olin Shivers in 1998. including GADTs (Section 6. About ten years earlier.000 lines of literate Haskell (thus including the source for the 100-page manual). Sadly. 20 Haskell. 2005). software maintenance costs. One attempt at such a study was an exercise sponsored by Darpa (the U. 2002).4 Companies using Haskell In the commercial world. but the user-group mailing list has 350 members. Much of this reluctance is associated with functional programming in general. spent a month learning Haskell. 1 Mercury. was the primary Haskell programmer in the experiment. only 67 were functional (24 OCaml. rewrote Darcs in Haskell. described in (Carlson et al. Roundy reports that some developers now are learning Haskell specifically in order to contribute to Darcs. and it is common to receive entries written in C and other imperative languages. Defense Advanced Research Projects Agency) in the early 1990s. 1 Erlang).1 Galois Connections15 The late ’90s were the heady days of Internet companies and ridiculous valuations. However. it is very difficult to conduct a rigorous study to substantiate such claims. they then commissioned a program called ProtoTech to develop software prototyping technology. program development time. Initial contracts came from the U.4. The results. functional languages dominate the winners: of the first prizes awarded in the eight years of the Contest so far. the functional programming community. government for building a domain-specific language for cryptography. the Naval Surface Warfare Center (NSWC) conducted an experiment to see which of many languages—some new (such as Haskell) and some old (such as Ada and C++)—could best be used to prototype a “geometric region server. through ICFP. A year later there were 200 developers contributing to it. . although informal and partly subjective and too lengthy to describe in detail here. for example. Rather than thinking in terms of a master repository of which users take copies. Darpa’s ProtoTech program funded lots of interesting programming language research.. She came across Darcs. soon to be followed by contracts with local industry. to leverage the power of the ML C-Kit 15 This section is based on material contributed by John Launchbury of Galois Connections. in addition to pretty much every functional language in common use. Potential problems associated with standardisation efforts notwithstanding.000 lines of Haskell (including comments) (Tang. Toward the end of the ProtoTech Program. using nine different programming languages.2 Haskell and software productivity Occasionally we hear anecdotes about Haskell providing an “orderof-magnitude” reduction in code size. witness. 2005). Darcs was originally written in C++ but. The company tagline reflected this: Galois Connections. perhaps amazingly (considering this number) the compiler is only 18. not even in the context of prototyping. In the last few years two open-source projects. 12. The contest has grown substantially since then. and the Darcs home page lists nearly 60 projects that use Darcs. after a few experiments in 2002. Haskell still plays only a minor role. At just this time Launchbury. with a peak of 230 entries in 2004—more teams (let alone team members) than conference participants! In every year only a minority of the entries are in functional languages. of the 230 entries. although the climate is beginning to change. for example in 2004. Darcs is an open-source revision-control system written in Haskell by the physicist David Roundy (Roundy. 2005). Four years later. the source code is still a relatively compact 28. Because this was a C-based problem. 12. It is impossible to say how many people use Darcs.3 Open source: Darcs and Pugs One of the turning points in a language’s evolution is when people start to learn it because of the applications that are written in it rather than because they are interested in the language itself. One of these programmers was Audrey Tang. nothing of substance ever came from this experiment. have started to have that effect for Haskell. for any language. held annually at ICFP since 2004. In recent years there have been a few other informal efforts at running experiments of this sort. Inc. although Aetion has been able to hire largely when they needed to. especially when the cycle of development and testing is spread thin in space and time. and it has the added benefit that transition to compiled programs is trivial.4. detecting and configuring hardware is impossible to test fully in the lab. It is basically Haskell with some extra syntactic constructs for the term rewriting system (TRS) that describes what the hardware does. The idioms for expressing systems programming are not quite as compact as in languages such as Perl. though some employees had some experience with ML and Lisp. so that they are able to maintain the product if Aetion is no longer willing or able to do so. 2003 by Arvind (MIT). A problem that Aetion has not yet encountered. a WebDAV server with audit trails and logging. Haskell’s interpreters provide sufficient interactivity for constructing programs quickly. their experience was extremely positive. In a few months. and (limited) arithmetic is allowed (and statically checked) at the type level. arithmetic can be performed on these numeric types. 18 This section was contributed by Mark Carroll of Aetion.library. both for its own networks and for the public Internet. There were business challenges. From a language perspective. even with minimal project management overhead. awk. section was contributed by Clifford Beshers of Linspire. and Aetion’s rare use of such an agreeable programming language promotes employee retention. For example: bundle :: Bit[n] -> Bit[m] -> Bit[n+m] Here. is that a customer may object to the use of Haskell because of its unfamiliarity. tools for generating FPGA layouts from Cryptol. Use of Haskell has also helped the company to hire good programmers: it takes some intelligence to learn and use Haskell. a high-assurance compiler for the ASN. so efficiency in prototyping is very important. manufactures an industry standards-based electronic design automation (EDA) toolset that is intended to raise the level of abstraction for hardware design while retaining the ability to automatically synthesise high-quality register-transfer code without compromising speed. there is an initial period of difficulty while one learns what sorts of bugs evoke which incomprehensible error messages. no one at Aetion was an experienced Haskell programmer. The name Bluespec comes from a hardware description language by the same name. because of its rich static type system. Overall. its libraries are expanding to cover the necessary diversity with truly reusable algorithms. Market focus is needed. The company specialises in artificial intelligence software for decision support. They found that Haskell allows them to write succinct but readable code for rapid prototypes.4 Linspire18 Linspire makes a Linux distribution targeted for the consumer market. Bluespec is really a two-level language. but this is an active area of research and the other language benefits outweigh this lack. and the abstraction and non-interference properties of functional languages meant that productivity was very high. they find they can concentrate on the problem at hand without being distracted by all the attendant programming boilerplate and housekeeping. Static type-checking has proved invaluable. In this environment Haskell provided something more than simple productivity.) 12. Bluespec. The full power of Haskell is available at compile time. but they have kind Nat. and they now use Haskell for all their software development except for GUIs (where they use Java). As Haskell is a very high-level language. it is much more common to use a combination of several shells and script languages (such as bash. sed.4.S. as equations over the category of complete partial orders. Python). government already had major concerns. with special emphasis on information assurance. The type system has been extended with types of numeric kind.3 Aetion17 Aetion Technologies LLC is a company with some nine employees. In principle. It also appeared to present significant opportunity for introducing highly innovative approaches. . In this domain. each with its own syntax. the results are often fragile and fraught with ad hoc conventions. They chose Haskell. (Customers sometimes ask the company to place source code in escrow. based in Columbus. It has to resell its capabilities from the ground up on every sale. While not as specialised.2 Bluespec16 12. n and m are type variables. Problems that are not solved directly by the shell are handed off to a bewildering array of tools.1 data-description language. In 2001 Aetion was about to begin a significant new software development project. capabilities and shortcomings. at least. the specification becomes the program. 12. Bluespec’s design was heavily influenced by Haskell. but almost all Haskell language constructs are eliminated by a partial evaluator to get down to the basic TRS that the hardware can execute. however: a “can do anything” business doesn’t get known for doing anything. a comprehensive code translation tool was built and kept so precisely to a compressed code-delivery schedule that the client was amazed. there were no surprises here: compilers and other code translation are natural applications for functional languages. Ohio. Aetion does a lot of research and invention. a domain-specific language for specifying cryptographic algorithms. catching many errors that might have otherwise occurred in the field. and a wiki for providing collaboration across distinct security levels. open-source compilers. The core OS team settled in 2006 on Haskell as the preferred choice for systems programming. which is a key enabling technology for the company. Using the class system. The main difficulty that Aetion encountered concerns efficiency: how to construct software that uses both strict and lazy evaluation well. USA. 16 This section was contributed by Rishiyur Nikhil of Bluespec. Even if it were possible to collect all the 17 This Founded in June. However. Their purpose is to give accurate types to things like bit vectors (instead of using lists where the sizes cannot be checked by the type checker). And. and its active research community. Haskell programs can be viewed as executable mathematics. but fears. power or area. Galois selected a focus area of high-confidence software. the pool of candidates with good Haskell programming skills is certainly small. Examples of Haskell projects at Galois include: development tools for Cryptol. At the time. a non-blocking cross-domain file system suitable for fielding in systems with multiple independent levels of security (MILS). a debugging environment for a governmentgrade programmable crypto-coprocessor. This is an unusual choice. For example. Also. Perl. This was seen as a growth area and one in which the U. Haskell has comparable versatility but promotes much greater uniformity. Because of referential transparency.4. The language. For example: The Haskell Workshops. resources and tools. He organised a panel of 20 mentors. relying on the fact that most users are developers aware of its weak points. Planet Haskell is a site for Haskell bloggers23 .” Five issues have already appeared. 1995 2000 2005 Figure 7. but the Haskell community has been addressing this aggressively. but less formal than a journal article. started by AnttiJuhani Kaijanaho in 2006. The #haskell IRC channel first appeared in the late 1990s. Subsequent workshops were held in 1997 and 1999.org/communities/ 20 The Haskell community A language that is over 15 years old might be expected to be entering its twilight years. but really got going in early 2001 with the help of Shae Erisson (aka shapr)20 . At Linspire.haskell. Its use is growing strongly and appears for the first time to show signs of breaking out of its specialist-geeky niche. In November 2001 Claus Reinke edited the first edition of the Haskell Communities and Activities Report19 . and plenty of informative pages on the Haskell Wiki. first published on the 2nd of August21 . but the November 2005 edition (edited by Andres L¨ h) lists 96 authors and runs to over o 60 pages. Darcs patch tracking. carried out by Paolo Martini.haskell. Martini spearheaded a much larger Haskell participation in the 2006 Summer of Code.Reader aims to fit in there. as a one-day forum to discuss channel . Haskell offers the promise of annealing a stronger whole.org 24. It has grown extremely rapidly. Runtime efficiency can be a problem. Since there is no Haskell Committee (Section 3.7). with forms that seem natural. libraries and culture lead to solutions that feel like minimal surfaces: simple expressions that comprise significant complexity. Learning Haskell is not a trivial task. a web publication that first appeared in March 2005. with upward of 2. Open source software remains somewhat brittle.000 participants over a full year. But there’s not much between the two extremes. Growth of the “hard-core” Haskell community The Haskell Weekly News. In particular. led by a broad range of people including some outside the academic/research community. established haskell.” The Monad Reader. The #haskell channel has spawned a particularly successful software client called lambdabot (written in Haskell. papers. Perhaps surprisingly. with many articles by practitioners.haskell. and occasionally are.org/hwn 22. a theorem prover. a biannual newsletter that reports on what projects are going on in the Haskell community. there are typically 200 people logged into the channel at any moment. the recent development of the Data.org/hawiki/TheMonadReader 23. Explicitly segregating these formats into separate data types prevented the mysterious errors that always seem to propagate through shell programs when the format changes. after which it became an annual institution. more formal than a Wiki page. Haskell appears to be in a particularly vibrant phase at the time of writing. Another recent initiative to help a wider audience learn about Haskell is Shae Erisson’s The Monad Reader22 . dictionary lookup. the Haskell workshop is the only forum at which corporate decisions can be. recurring in problem after problem. The first Haskell Workshop was held in conjunction with ICFP in 1995. and attracted an astonishing 114 project proposals. at the time of writing. discussion.org/trac/summer-of-code 500 400 300 200 100 1990 various components. and more besides. In 2005. of course) whose many plugins include language translation.600 the language and the research ideas it had begun to spawn. It now has a refereed proceedings published by ACM and a steady attendance of 60-90 participants. illustrated with useful code fragments. The idea really caught on: the first edition listed 19 authors and consisted of 20 pages. The Haskell Communities and Activities Report (HCAR). The HWN covers new releases. the time to assemble and test all the possible combinations is prohibitive. a “Darcs corner. The last five years have seen a variety of new community initiatives. The Monad. John Goerzen decided to help people cope with the rising volume of mailing list activity by distributing a weekly summary of the most important points—the Haskell Weekly News. and included just one Haskell project. To substantiate our gut feel. The Google Summer of Code ran for the first time in 2005. reducing memory requirements by a factor of ten and increasing speed to be comparable with the standard command cat.” and quotes-of-the-week— the latter typically being “in” jokes such as “Haskell separates Church and state. but the economy of expression and the resulting readability seem to provide a calm inside the storm.complete. taken.ByteString library fills the most important gap. Fired by his experience. 12. of which nine were ultimately funded24 . searching for Haskell functions. Linspire recently converted a parser to use this module.org as a mentoring organisation. though. Another example is that Linspire’s tools must handle legacy data formats. we carried out an 19. It seems clear from all this that the last five years has seen particularly rapid growth. The first issue declared: “There are plenty of academic papers about Haskell.org/haskellwiki/IRC 21. and so are not representative of Haskell users in general.. Clean has adopted type classes from Haskell. as a survey of the “hard core” of the community. 1987). 1996). times.informal survey of the Haskell community via the Haskell mailing list. The majority of our respondents discovered the language by other means. Haskell is still most firmly established in academia. so our goal of designing a language suitable for applications has also been fulfilled. at 346 square kilometres. As its name indicates. so our goals of designing a language suitable for teaching and research have certainly been fulfilled. though: respondents’ median age was 27. a language for constraint programming built on top of Mercury. Haskell enthusiasts are truly “one in a million. We asked respondees when they first learnt Haskell. and it has also adopted the course. The country with the most Haskell enthusiasts is the United States (115). 50% were using Haskell as part of their studies and 40% for research projects. 1992). like Miranda and Haskell.. In particular. Both languages have a syntax influenced by Haskell and use monads for inputoutput. Curry is a language for functional-logic programming (Hanus et al. and a further quarter employed in a university. and Brazil.000 people. 1999). 27 Most Haskell textbooks are aimed at introductory programming courses. only 48% of respondees learned Haskell as part of a university course. Russia. and when an instance is declared one must prove that the instance satisfies those properties (for example. the “Haskell Caf´ ” list. While the “official” Haskell mailing list has seen relatively flat traffic. bringing together researchers working on functionallogic languages in the same way that Haskell brought together researchers working on lazy languages. It is significant as the first example of integrating the full power of dependent types into a programming language. when we consider the proportion of Haskell enthusiasts in the general population. while in the Netherlands and Portugal they have 1. started explicitly in October 2000 e as a forum for beginners’ questions and informal discussions. Finland.000 square kilometres each. where the bars show the total number of respondees who had learnt Haskell by the year in question. Portugal comes second. But 22% of respondents work in industry (evenly divided between large and small companies). and 25% were 23 or younger. Hal. so we could estimate how the size of the community has changed over the years25 .000 inhabitants. The Haskell Caf´ is most active in the winters: warm weather seems to e discourage discussion of functional programming26 ! Our survey also revealed a great deal about who the hard-core Haskell programmers are. Layout is significant in Python. Four out of five hard-core Haskell users were already experienced programmers by the time they learnt the language. although its type system differs in fundamental ways. Clearly the community has been enjoying much stronger growth since 1999. it is intended as a sort of successor to Haskell. Mercury is a language for logic programming with declared types and modes (Somogyi et al. this omits users who learnt Haskell but then stopped using it before our survey. Haskellers have a little more room. The picture changes. In the UK. and Sweden all have around one Haskeller per 300. This is the year that the Haskell 98 standard was published—the year that Haskell took the step from a frequently changing vehicle for research to a language with a guarantee of long-term stability. especially its adoption of type classes. In Singapore. Traditional “hotbeds of functional programming” come lower down: 25 Of the UK is in fourth place (49). 22% are using Haskell for open-source projects. some 25% were 35 or over. the results are interesting. and it bears a strong resemblance to both of these (Brus et al. which are also applications. and many other countries. that plus. so we content ourselves with mentioning similarities. then Scandinavia— Iceland. 1998). Further indications of rapid growth come from mailing list activity. In many cases it is hard to ascertain whether there is a causal relationship between the features of a particular language and those of Haskell. Isabelle is a theorem-proving system that makes extensive use of type classes to structure proofs (Paulson. 26 This may explain its relative popularity in Scandinavia. If we look instead at the density of Haskell enthusiasts per unit of land mass. Surprisingly. Now the Cayman Islands top the chart. with one Haskeller for every 2. It is tempting to conclude that this is cause and effect. and Sweden in sixth (29). Cayenne is explicitly based on Haskell. with one Haskell enthusiast per 44. 2004). Clearly.. Younger users do predominate. Nevertheless. though. despite the efforts that have been made to promote Haskell for teaching introductory programming 27 . 2002). with one in 116. When a type class is declared one associates with it the laws obeyed by the operations in a class (for example.” The United States falls between Bulgaria and Belgium. Interestingly.000. and obtained almost 600 responses from 40 countries. Other countries with 20 or more respondees were the Netherlands (42) and Australia (25). Cayenne is a functional language with fully fledged dependent types.6 Influence on other languages Haskell has influenced several other programming languages. Clean is a lazy functional programming language. has seen traffic grow by a factor of six between 2002 and 2005. and the youngest just 16! It is sobering to realise that Haskell was conceived before its youngest users.. Half of our respondents were students. and negation form a ring). and 10% of respondents are using Haskell for product development. . It is curious that France has only six. then the Cayman Islands are positively crowded: each Haskeller has only 262 square kilometres to program in. 12. Escher is another language for functional-logic programming (Lloyd.. given the importance we usually attach to university teaching for technology transfer. Perhaps open-source projects are less constrained in the choice of programming language than industrial projects are. 1995).500. but instead of using monads for input-output it uses an approach based on uniqueness (or linear) types (Achten et al. whereas Germany has 85—perhaps French functional programmers prefer OCaml. One lesson is that Haskell is a programming language for the whole family—the oldest respondent was 80 years old. The results are shown in Figure 7. our respondees belong to a self-selected group who are sufficiently enthusiastic about the language itself to follow discussion on the list. Python is a dynamically typed language for scripting (van Rossum. Other countries offer significantly more space—over a million square kilometres each in India. closely followed by Portugal (91) and Germany (85).000 inhabitants. it is clear from the responses that the majority of students currently being taught Haskell did not reply. Only 10% of respondents learnt Haskell as their first programming language (and 7% as their second). that the integers are a ring). designed and implemented by Lennart Augustsson (Augustsson. 1995). It is influenced by Haskell in a number of ways. uses type classes in innovative ways to permit use of multiple constraint solvers (de la Banda et al. 1998).org . covering the extensions that are heavily used in industry. Data. A new committee has been formed to design the new language.debian. and impossible for teachers to know which extensions they should teach. Anyone can upload new versions of the benchmark programs to improve their favourite language’s ranking. Here they are. Back in the early ’80s. We expect this trend to continue. is growing strongly. But we can already see convergence. with their affiliations during the lifetime of the committee. The LINQ (Language INtegrated Query) features of C# 3. the purely functional community has learnt both the merit of effects. mainstream languages are adopting more and more declarative constructs: comprehensions. on the 10th of February 2006 Haskell and GHC occupied the first place on the list! Although the shootout makes no pretence to be a scientific comparison. Odersky. So many extensions have appeared since the latter was defined that few real programs adhere to the standard nowadays. It includes for comprehensions that are similar to monad comprehensions. driven especially by the goad of parallelism. The members of the Haskell Committee played a particularly important role. We interpret these as signs that. it is awkward for users to say exactly what language their application is written in. Scala. and concise code. with points awarded for speed. Jon Fairbairn (University of Cambridge). by sticking remorselessly to the discipline of purity. many of us were consciously making a longterm bet that principled control of effects would ultimately turn out to be important. Richard Bird (University of Oxford). At the 2005 Haskell Workshop. of which Wadler is a codesigner (Bracha et al. Haskell will be no more than a distant memory. A new standard. It is becoming more and more suitable for real-world applications. Chalmers University). University of Nottingham. Purely functional programming is not necessarily the Right Way to write programs. The type system is based on GJ. many people have contributed to the language design beyond those mentioned in our paper. At the other end. 14. We believe the most important legacy of Haskell will be how it influences the languages that succeed it. Thomas Johnsson (Chalmers University). Ralf Hinze (University of Bonn). and early in 2006 the Haskell community began doing just that. Acknowledgements The Haskell community is open and vibrant. When the new standard is complete. For example. database query expressions. iterators. 13. and identifying those who served as Editor for some iteration of the language: Arvind (MIT). Whether that bet will truly pay off remains to be seen. is planned to adopt list comprehensions from Python. beyond our instinctive attraction to the discipline. Paul Hudak [editor] (Yale University). Maria Guzman (Yale University). particularly in its purely functional form. One lighthearted sign of that is Haskell’s ranking in the Great Computer Language Shootout28 .0 are based on monad comprehensions from Haskell. so it is tested by QuickCheck properties. Lennart Augustsson (Chalmers University). One day.alioth. first-class functions. and promoted by Miranda and Haskell). and had no input/output. eighteen years after it was christened. as is agreement between corresponding bytestring and String operations. The use of bounded types in that system is closely related to type classes in Haskell. Java. providing the same interface but running between one and two orders of magnitude faster. At one end. 2006). and many. Launchbury called for the definition of “Industrial Haskell” to succeed Haskell 98. The shootout is a benchmarking web site where over thirty language implementations compete on eighteen different benchmarks. but they were also laughably impractical: they were slow. and the Haskell community. it will give Haskell a form that is tempered by real-world use. Andy Gordon (University of Cambridge). Dick Kieburtz (Oregon Graduate. this does show that competitive performance is now achievable in Haskell—the inferiority complex over performance that Haskell users have suffered for so long seems now misplaced. is a radical and principled attack on the challenge of writing programs that work. In turn. The generic type system introduced in Java 5 is based on the Hindley-Milner type system (introduced in ML. John Hughes [editor] (University of Glasgow. 28 See so that loop fusion is performed when bytestring functions are composed. which punishes unrestricted effects cruelly.7 Current developments Haskell is currently undergoing a new revision. Stewart and Leshchinskiy) represents strings as byte vectors rather than lists of characters. Things are very different now! We believe that Haskell has contributed to that progress. But we believe that.ByteString (by Coutts. and at least one way to tame them. Oregon Graduate Institute). 2000). a member of the Haskell Committee. We hope and expect to see this continue. 2004. Conclusion Functional programming. As a result. the ideas and techniques that it nurtured will prove to have been of enduring value through their influence on languages of the future. It achieves this partly thanks to an efficient representation. another dynamically typed language for scripting. The correctness of the rewrite rules is crucial. will solve these problems—for the time being at least. Kevin Hammond [editor] (University of Glasgow). despite the dominance of effects-by-default in mainstream languages. while still small in absolute terms. 12. Javascript. Much energy has been spent recently on performance. Dave Barton (Mitre Corp). and by building a critical mass of interest and research effort behind a single language. but also by using GHC’s rewrite rules to program the compiler’s optimiser. and the Haskell community is heavily engaged in public debate on the features to be included or excluded. Nevertheless. Scala is a statically typed programming language that attempts to integrate features of functional and object-oriented programming (Odersky et al. despite a rather poor initial placement.. however. This is a great example of using Haskell’s advanced features to achieve good performance and reliability without compromising elegance. C# and Visual Basic. Mark Jones (Yale University. purely functional languages might have been radical and elegant. Joseph Fasel (Los Alamos National Laboratory). Warren Burton (Simon Fraser University). It was precisely this quirky elegance that attracted many of us to the field. took lots of memory. Their inclusion is due largely to the efforts of Erik Meijer. Part of the reason for this lies in the efficient new libraries that the growing community is developing. Haskell is maturing. difficult for tool builders to know which extensions they should support. and more besides.list comprehension notation. To everyone’s amazement. and view bounds and implicit parameters that are similar to type classes. appropriately named Haskell′ (Haskell-prime). but called array comprehensions instead. when that day comes. and they were inspired by his previous attempts to apply Haskell to build web applications (Meijer. memory efficiency..0 and Visual Basic 9. Brian Boutel (Victoria University of Wellington). Augustsson. Introduction to Functional Programming. M-structures: extending a parallel. editor. A. we thank the program committee and referees of HOPL III. E. Springer.. Graham Hutton. In Mohnen. G. D. Cilk: An efficient multithreaded runtime system. K. Prentice Hall. C. Bloss. Jon Fairbairn. Ralf L¨ mmel. Paolo Martini. (2004). (1987). Springer Verlag LNCS. Eindhoven. In (ICFP02. (1988). Hughes.. Alastair Reid (University of Glasgow. pages 174–184. Don Stewart. Bjesse. and Zhou. E. Clifford Beshers. Philip Wadler [editor] (University of Glasgow). Typing dynamic typing. A standard ML compiler. We also thank those who commented on a draft of this paper. Alberto Ruiz. P. An optimizing compiler for a modern functional language. Arts. Rishiyur Nikhil (MIT). Martin Sulzmann. Mark Carroll. Boquist. ACM. Journal of Functional Programming. 6:579–612. D. (2000). D. (1999). A. (1998). P. Barendsen.. Baars.. In Proceedings of ACM Workshop on Haskell. R. D. Springer Verlag.. A. Michael Vanier. and Paterson. (1988b). 5(1):81–110.. S. and Koopman. Boston. Chalmers University of Tech- References Achten.. Oregon Graduate Institute).. P. Buxton. D. Porting the Clean Object I/O library to Haskell. 2002). Erik Meijer (Utrecht University). Bloss. pages 239–250. J. In Proc PARLE (Paral- . Lennart Augustsson. John Kraemer. Greg Michaelson. J. pages 194–213... Tallinn. Mark Jones. Jared Updike. Portland. David Roundy. Colin Runciman (University of York). Yale University). 1984). Department of Computing Science. and Wadler. 9(1):77–91. and Peyton Jones. and Swierstra. Simon Thompson. Ben Moseley. J. and MacQueen. Kevin Glynn. Rishiyur Nikhil. (1995). L. K. Joe Stoy. Proceedings of the 12th International Workshop on the Implementation of Functional Languages. non-strict functional language with state. Bloss. We thank them very much for their input. Bernie Pope. Paul Callahan. and David Turner. pages 538–568. H. Achten. Benjamin Pierce. The main features of cpl. R. Mike Reeve (Imperial College). 37(1):55–69. Journal of Functional Programming. Can programming be liberated from the von Neumann style? A functional style and its algebra of programs.. Achten. Journal of Parallel and Distributed Computing. P. (1996).. P. (1988). Barth. A.. De Bruijn notation as a nested datatype. Michael Cartmell. J. (1963). Blott. Randall. Neil Mitchell. Ketil Malde. R. Christian Sievers. Appel. Executing a program on the MIT tagged-token dataflow architecture. Ganesh Sittampalam. (1991). Microsoft Research Ltd). Andre Pang. and Plasmeijer. (1999). D. Greg Restall. PhD thesis. Mawl: A domain-specific language for form-based services. R.. C. Richard Bird. Tallinn. Simon Peyton Jones [editor] (University of Glasgow. Colin Runciman. L.Institute). Kuszmaul. and Singh. Dave Bayer. S. 1(2):147–164. Lh. ACM Conference on Functional Programming and Computer Architecture (FPCA’91). Advanced modeling features of MHDL.. Rex Page. P. Oregon. Sheeran. Johannes Waldmann. (1984). In Proceedings of International Conference on Electronic Hardware Description Languages. and Arvind (1991). David Wise (Indiana University). Hartley. John Peterson [editor] (Yale University). Yale University. K.. volume 523 of Lecture Notes in Computer Science. J. Highlevel specification of I/O in functional languages. Andres Sicard. Jeremy Gibbons. In (LFP84. David Turner. Kostis Sagonas. J. 25(3):334–346. lel Languages and Architectures. T. Lisp and Symbolic Computation: An International Journal. Mark Jones. C.. M. We would also like to give our particular thanks to Bernie Pope and Don Stewart. Visual Haskell: a full-featured Haskell development environment. Warren Burton. B. Department of Computer Science. The Computer Journal. D. The Computer Journal. Nikhil. Backus. C.. Aachen (IFL’00).. and Swierstra. 14:635–646. S. A. P. Springer Verlag. G. 31(6):152–161. (1978a). Some sections of this paper are based directly on material contributed by Lennart Augustsson. Russell O’Connor. J. Backus. Angelov. and Strachey. (2006). L. (2002). and Marc van Woerkom. Uniqueness typing for functional languages with graph rewriting semantics. Hudak. Janis Voigtl¨ nder. PhD thesis. Path Analysis: Using Order-of-Evaluation Information to Optimize Lazy Functional Languages. Bird. Peter Stuckey. Audrey Tang. U. K. (1978b). (1995). Johan Jeuring. Hudak. or contributed their recollections: Thiaggo Arrais. In (Launchbury and Sansom.. Bird. 1998). In Hughes... 1992). Journal of Functional Programming. Conor McBride. Code optimizations for lazy evaluation. and Cox. (1988a). pages 218–227. Barron. 6(2):134–143. John Peterson. S. pages 157–166... editor. (1987). Can programming be liberated from the von Neumann style? Communications of the ACM. A. Kevin Hammond. Parsing pemutation phrases. Chris Okasaki. R. Glasgow University. Claessen.. The ins and outs of clean I/O. LNCS 274. In Trinder. Mitchell a Wand. Jevgeni Kabanov. Type Classes. and Wiger. (1996). Michael Maa honey. ACM SIGPLAN Erlang Workshop. Josef Svenningsson. D. Portland. John Launchbury.. Johansson. selected papers. Susan Eisenbach. Barton. Eric Willigers. and Smetsers. Joerg. Paul Callaghan. F. and Marlow. U. and Young. Leiserson. Thomas Johnsson. pages 1–17. and Young. who prepared the time line given in Figure 2. P. A. Alistair Bayley. John Launchbury (University of Glasgow. Communications of the ACM. Blumofe. (1992). M. Bruns. 21(8):613–41. IEEE Transactions on Software Engineering. PhD thesis. Y. P. Nixon. Atkins. Will Partain. Testing telecoms software with quviq quickcheck. R. Baars. (1998). and Jonathan Young (Yale University). In Kahn. Tony Field. T. P. (1999). Proceedings of the Conference on Functional Programming and Computer Architecture... 21(8). editors. Evan Martin. Estonia. Jan-Willem Maessen. Europe) Conference.. Code Optimisation Techniques for Lazy Functional Languages. (2005). R. Arvind and Nikhil. Cayenne — a language with dependent types. ACM SIGPLAN. number 2011 in Lecture Notes in Computer Science. Finally. Robert Dockins.. Denis Moskvin. van Groningen.. Augustsson. and Plasmeijer. Malcolm Wallace. E. J. James Bostock. Mathematical Structures in Computer Science. S. In International Conference on Functional Programming. In (ICFP98. S. M. R.. Lava: Hardware design in haskell. editor. Ball. A compiler for lazy ML. Modelling User Interfaces in a Functional Language. and Feys. M. Advanced Course on Functional Programming and its Applications. CUCIS TR2003-1901. (1975). In Conference Record of the 1980 LISP Conference. In Proc. pages 364–384. P. The Netherlands. Clean — a language for functional graph rewriting. (1999b). Peyton Jones. and Hallgren. Burstall. pages 41–48. (1977). Sansom. ACM Press. In Proceedings of the Sixth International Symposium on Functional and Logic Programming. K. (1987). and Milner. In Koopman. (2006). Department of Computer Science. and Jones. Burstall... Claessen. In Conference Record of the 9th Annual ACM Symposium on Principles of Programming Languages. Henderson. Associated types with class. 41(1):206–217. de la Banda. Infotech. The Computer Journal. PhD thesis. hop. In ACM SIGPLAN International Conference on Functional Programming (ICFP’05). L. Callaghan. 2000). Demoen. A transformation system for developing recursive programs. P..org/development/views. (2004). Yale University. Courtney. (2006). Chakravarty. Jansson. HOPE: An experimental applicative language... G. Journal of Functional Programming. (1941). . (2005). M.. and Reeve. H. LNCS 274. Chakravarty. H. Meijer. A. C. M.nology. In ACM Symposium on Principles of Programming Languages (POPL’05). (1998). In (FPCA93. Making the future safe for the past: Adding genericity to the Java programming language. Chakravarty. K. P. PhD thesis. J.. Dijkstra working note EWD798. (1977). and Bobbio. Lecture Notes in Computer Science. Principal type-schemes for functional programs. Department of Computer Science. Springer Verlag. An experiment using Haskell to prototype “geometric region servers” for Navy command and control. Research Report 1031. (1982). E. B. Vancouver. and Hinze. and Krishnamurthi. (1981).. J. B. T. pages 183–200. Darlington. SpringerVerlag. and Hughes. W. In Kahn.. Carlsson.... 1. and Darlington. K. T. Courtney.. P. R. University of Durham. (1998). Lazy lexing is fast.. In 15th European Symposium on Programming.. University of Utah. Estonia. (2004). Cambridge University Press. T. Darlington. A. Springer Verlag.. Cornell University. Tallinn. Cheong. R. P. pages 207–12. (2002). Cooper. M.. P.. Recursive Programming Techniques. A formal specification of the Haskell 98 module system. K... and Hallgren. Lecture Notes in Computer Science. R.. of the 2001 Haskell Workshop. In Thiagarajan. (1999). In Middeldorp. Languages. Damas. M. R. T. editors. K. Burge. Keller. Burstall. Danielsson. and Odersky. Fast and loose reasoning is morally correct. A. and Wadler. ACM. pages 136–143. Fudgets — a graphical user interface in a lazy functional language.. (2005a). First-class phantom types. Marriott. and Peyton Jones. D. C. S. Bracha. Proving properties of programs by structural induction. Views: An extension to Haskell pattern matching. pages 66–76. Keller. editors. M. MacQueen. (1993). Cheney. Technical Report UUCS-77-113. G. ACM Symposium on Object Oriented Programming: Systems. Carlier. NorthHolland. Thompson. and Gibbons. New York. P. editor (2002).. Pittsburgh. A. P. R.. S. Claessen.... and Elliott. QuickCheck: a lightweight tool for random testing of Haskell programs. 6. In Chambers. and Wadler. Addison Wesley. Stoutamire. J. Burstall. Portsmouth. I. Brus. M. pages 62–73..W. Advances in Computing Science (ASIAN’99). Chakravarty. D. Genuinely functional user interfaces. Davis. 5th Asian Computing Science Conference.. Fourth Fuji International Symposium on Functional and Logic Programming. Trip report E. Burton. Parametric type classes. (1980). and Plasmeijer. J. C -> Haskell: yet another interfacing tool. Sweden.. M. W. (1992). Vol. Proceedings of the 2002 Haskell Workshop. J. and Sato. and Clack. R. M. Curry. Carlson. Department of Computer Science. Springer Verlag. (1999a). (1993). D. (2003). D. (1981). D. M. (1996). M. P.. P. (2004). 24(1):44–67. J. editor. G. An Evaluation of LOLITA and Related Natural Language Processing Systems. ACM. Newcastle. BC. G. Church. W.. Yale University. M. van Leer. E. J. R. In (ICFP00. M. Jones. To the gates of HAL: a HAL tutorial. Springer Verlag LNCS 2441. P. Observable sharing for functional circuit description. Hudak. Lochem.. and Applications (OOPSLA).. M. N. G. In (Chakravarty. Embedding dynamic dataflow in a call-by-value language. Hughes. M. M. S. Dijkstra. volume 3924 of LNCS. A. Dijkstra.. The architecture of ddm1: a recursively structured data driven machine. M. pages 321–330. A. Claessen. New Hampshire. pages 41– 69. (1969). and Stuckey. S. In The Software Revolution. In Proc Conference on Functional Programming Languages and Computer Architecture. and Marlow. International Workshop on Implementing Functional Languages (IFL’99). and Sannella. (2002).html. and Sands.. Springer Verlag. J. and Hughes. Parallel parsing processes. Undergraduate thesis. (1982). (1977). (2005b). 14:741–757. T. Chen. (1958). Diatchki. 2002). ACM Press. Testing monadic code with QuickCheck. Claessen. Associated type synonyms. The calculi of lambda-conversion. Functional Programming and 3D Games. University of New South Wales. (2001).. JACM. 2002). and Yap. Functional Programming Languages and Computer Architecture. Hudak. Design considerations for a functional programming language. Odersky. Amsterdam. S.. number 1868 in Lecture Notes in Computer Science. (2000). Annals of Mathematics Studies. G. 1993). J. Combinatory Logic. C. pages 170–181. K. In Proceedings of ACM Conference on Lisp and Functional Programming. editors. Chakravarty. (2002). S. 19-25 July 1981. In (Chakravarty. SIGPLAN Not. pages 268–279. and Turner. van Eckelen.. ALICE — a multiprocessor reduction machine for the parallel evaluation of applicative languages. editor. R. . In Proc Joint CS/CE Winter Meeting. 38(4). Addison-Wesley. pages 257–281. and Wentworth. R. Technical Report MSR-TR-96-05. Modular compilers based on monad transformers. B. In (ICFP03. Varberg. Baltimore. M. (2000). USENIX. (1990). 2002). In ICFP ’05: Proceedings of the Tenth ACM SIGPLAN International Conference on Functional Programming. Microsoft Research. pages 285–296.. P. Peyton Jones. J. Krishnamurthi. and Programming. Technical Report Doc 92/13. (1995). pages 116–128. (1976). J. M. Haskell01 (2001). Ponder and its type system. . NY. Ford. pages 263–273. Elliott. Harris. Uber formal unentscheidbare s¨ tze der principia o a mathematica und verwandter Systeme I. A. Article No. 1967). Gordon. S. CONS should not evaluate its arguments. H/Direct: a binary foreign language interface for Haskell. and While. T. and Peyton Jones. P. A short cut to deforestation. Pages 596–616 of (van Heijenoort. pages 36–47. Gill. linear time. P. (2003). Proceedings of the 2001 Haskell Workshop. Florence. M. Hallgren. Milner. Springer Verlag LNCS 78. T. Dybvig. In ACM SIGPLAN International Conference on Functional Programming (ICFP’98). J. ACM. A brief introduction to activevrml. C. (1997).. Imperial College. S. P. pages 122–136. pages 421– 434. E. Gaster.. (2004). S. P. In (ICFP02. To appear in the Journal of Functional Programming. D. R. S. Friedman. (1992). Inductive sets and families in Martin-L¨ f’s o type theory. Elliott. (2005). C. Gill. A static semantics for Haskell. Dept of Computing. and Sabry. FPCA95 (1995). ¨ G¨ del. K. Beating the averages. and Plotkin. Department of Computer Science. S. R. Optimistic evaluation: an adaptive evaluation strategy for non-strict programs. ACM Press. In ACM Conference on Functional Programming and Computer Architecture (FPCA’93). Technical Report 75. R. In Proceedings of the ILPS ’95 Postconference Workshop on Visions for the Future of Logic Programming. G. G. Sweden. Faxen... H. (1985). Functional reactive animation. ACM Conference on Functional Programming and Computer Architecture (FPCA’93). Launchbury. Elliott. A. S. animated 3d graphics applications. C. In Proc International Conference on Computer Languages. Journal of Functional Programming.. (2005). ISBN 0-89791-595-X. Hall.. Cambridge University Press. D. ACM Press. R. ACM Conference on Functional Programming and Computer Architecture (FPCA’95). (1997). S. C. W.. J. Chalmers Univerity. Marlow. Tbag: A high level framework for interactive. A. M. S. Bauman. volume 34(1) of ACM SIGPLAN Notices. In Proc ACM Symposium on Language Issues and Programming Environments. Girard. Packrat parsing: simple.. (1994).. Hanus. (2001). Technical Report TR-31. and Peyton Jones. Edinburgh LCF. Evans. USA. editor.. powerful. Feeley. and Hudak. Logical Foundations of Functional Programming... G.. Composable memory transactions. M. (1998). A monadic framework for delimited continuations. Cambridge University Computer Lab. In First Workshop on Rule-Based Constraint Reasoning and Programming. Design and implementation of a simple typed language based on the lambda-calculus. R.. FPCA93 (1993). (1931). and O’Donnell. Fairbairn. Hartel. P... ACM. S. Pseudoknot: a float-intensive benchmark for functional compilers. L. P. 6(4). Cophenhagen. K. Cophenhagen. Alt. In International Conference on Functional Programming. B. (2001). Variants. and Wise. L. (1979). and Wadsworth. D. G.. Augustsson. In Proceedings of the first conference on Domain-Specific Languages. Seattle. 38:173–198. Curry: A truly functional logic language. (1993). 2003). Springer Verlag LNCS 2028. In Proceedings 10th European Symposium on Programming. Pal—a language designed for teaching programming linguistics. ACM SIGGRAPH. Languages. Frost. A principled approach to operating system construction in Haskell. and Abi-Ezzi. and Herlihy. (1985). Schechter. University of Not- tingham. Jones. The semantics and implementation of various best-fit pattern matching schemes for functional languages. ACM. ACM Computing Surveys. T. Stuckey. Field. Monatshefte f¨ r u Mathematik und Physik. pages 122–131. Ennals.. Harrison. Debugging Haskell by observing intermediate data structures. Finne. Leijen. C. Records. In Proceedings of SIGGRAPH ’94. S. Fun with functional dependencies. Gaster. In Proc 5th Eurographics Workshop on Programming Paradigms in Graphics. Yeung. and Peyton Jones. P. La Jolla.. In Hackers and Painters. J.. Technical Report TR-963. (2005). (1982). Hallgren. Realization of natural-language interfaces using lazy functional programming. J.-F. Graunke. K. Maastricht. (1991). P. Finne.-Y.. B. Kuchen. ACM SIGPLAN. (1998). Type classes and constraint handling rules. (1968). Fairbairn. S. (1996). (2006). Journal of Functional Programming. M. (1995). A polymorphic type system for extensible records and variants. P. University of Nottingham. and Moreno-Navarro. University of Cambridge Computer Laboratory. In ACM Symposium on Principles and Practice of Parallel Programming (PPoPP’05). In Proceedings ACM National Conference.. and Felleisen. and Jones. M. C. ACM Press.. O’Reilly. lazy.. Elliott... pages 153–162. In Haskell Workshop. editors. (2002). Programming the web with high-level programming languages.. PhD thesis. P. and Kamin. Peyton Jones. Automata. Composing Haggis. A. Department of Computer Science. V. Leslie. California. In Huet. Hoeven. (2000). A. In Huet. New York. pages 223–232. Hunt. Logical Frameworks. (1996). Modeling interactive 3D and multimedia animation with an embedded language. and Tolmach. The system F of variable types: fifteen years later. 11. R. Glynn. Meijer. Debugging in a side-effectfree programming environment. S.. 12(4&5). (1996).Dybjer. (2002). S. A. D. Graham... and Sulzmann. and Qualified Types.... M. Weis. and Peyton Jones. (1998). M. K. sourceforge. On the expressiveness of purely-functional I/O systems.net. P. for learning Haskell. Jansson. Functional geometry. (2003). Yale University. ACM SIGPLAN International Conference on Functional Programming (ICFP’03). volume 1576 of Lecture Notes in Computer Science. Utah. ACM Computing Surveys. Springer Verlag. R. Nottingham University Department of Computer Science Technical Report NOTTCS-TR-00-1. and application of functional programming languages. Hughes. P. J. (1984a). S. (1995). Hinze. G. Derivable type classes. E. Baltimore. Jeuring. and functional reactive programming.D. In Hutton. 32(2):98–107.. Paris. ICFP02 (2002). The Fun of Programming. In Jeuring. ICFP98 (1998). Uppsala. Springer-Verlag. In Proceedings of PADL’04: 6th International Workshop on Practical Aspects of Declarative Languages. Hudak. evolution. LNCS. and Levy. Second Edition. P. of Computer Science. Manufacturing datatypes. Free theorems in the presence a of seq. Modular domain specific languages and tools. (1983). Report 359. Courtney. volume 2638 of Lecture Notes in Computer Science. Jansson.. Springer Verlag. thesis. P. B. LNCS 925. Research Report YALEU/DCS/RR-317. Polytypic compact printing and parsing. In Jeuring. Dept. (1982). Hudak. IEEE Computer Society. Charleston. Makucevich. Palgrave. J. Journal of Functional Programming. and Lh. J. Springer-Verlag. O. Springer Verlag LNCS. Comparing approaches to generic programming in Haskell... ACM Computing Surveys. ACM. J. New York. Monadic parsing in Haskell. ACM. pages 134–142.. R. O. Nilsson. A lazy evaluator. G. (2000). ACM Press. Hughes. (2003). R. Polymorphic temporal media. J. Scripting the type inference process. Call by need computations in nonambiguous linear term-rewriting systems.. Higher-order strictness analysis in untyped lambda calculus. and Peyton Jones. Proceedings of the 2000 Haskell Workshop. (2001). 21(3):359–411. Leijen. and Jeuring. In (ICFP03. New York. Fun with phantom types. (2006). and Jones. Journal of Functional Programming. and Meijer. robots. J. editors. pages 179–187. Haskore music tutorial. Distributed applicative processing systems – project goals. Hage. In European Symposium on Programming. Hudak. Research Report YALEU/DCS/RR-322. (2000). Research Report YALEU/DCS/RR-665. The Haskell School of Expression – Learning Functional Programming Through Multimedia. ACM SIGPLAN International Conference on Functional Programming (ICFP’00). ALFL Reference Manual and Programmer’s Guide. P. Hinze.. A new approach to generic functional programming. E. Heeren. Henderson. and Peterson. Conception. Hudak. D. The Computer Journal. pages 97–109. editors. Dept. ICFP03 (2003). J. pages 245– 262. J. Herington. and Swierstra. B. pages 119–132. Huet. P. ACM. Hudak. R. T. J. ACM SIGPLAN International Conference on Functional Programming (ICFP’98). (1998). ACM. S. ACM. (1996b). Hinze. of Computer Science. The design of a pretty-printing library. Hudak. Hudak. and de Moor. Gadde. pages 38–68. Helium. and Jeuring. LNCS 1129. PolyP — a polytypic programming language extension. Hudak. In ACM Symposium on Principles of Programming Languages (POPL’04). Hudak. Snowbird. ACM. editor. Yale University. In Gibbons. In Generic Programming. Huet. Hinze. (1986). Hughes. Hudak. D. Hinze. In ACM Symposium on Principles of Programming Languages. Cambridge University Press. (2003a). Pittsburgh. The Fun of Programming. and Whong. (2002). ICFP99 (1999). INRIA. 4th International School. Advanced Functional Programming. In Second International School on Advanced Functional Programming. (1989). 2003). and de Moor. volume 34(1) of ACM SIGPLAN Notices.. In ACM Sigplan 2003 Haskell Workshop. (1976). A. Ph. J. J. Why functional programming matters. and Young. (1996). Johann. Snowbird. P. pages 273–287. (2003b). J.Haskell04 (2004). Arrows. P. P. and Morris. (2004). In ACM SIGPLAN International Conference on Functional Programming (ICFP’04). Building domain-specific embedded languages. (2003). Haskore music notation – an algebra of music.. P. (2000).. ACM. G. P. (1997). (1975). Springer-Verlag. P. In Gibbons.. A. Yale University. pages 53–96. Sweden. Science of Computer Programming.. Advanced Functional Programming. R. P. R. (2004). Hudak. A unification algorithm for typed lambdacalculus. and Voigtl¨ nder. (2004). ACM SIGPLAN International Conference on Functional Programming (ICFP’02). 1:22–58. and Meijer. pages 95–103. S. J. In Proceedings of Fifth International Conference on Software Reuse. Hudak. Snowbird. P. (2000). A. (1979). P. 6(3):465–483. pages 62 – 71. and Sundaresh.. ACM. ACM SIGPLAN International Conference on Functional Programming (ICFP’99). 1. Advanced Lectures. Describing and interpreting music in Haskell. pages 99–110. P. Theoretical Computer Science. J. In Proc ACM Symposium on Lisp and Functional Programming. Hughes. Hutton. J. P. Generalising monads to arrows. Oxford University. . (1989). ACM. Hunit home page. 37:67–111. editors. (1996a). In In Proceedings of 3rd International Conference on Principles of Programming Languages (POPL’76). editors.. H. (1998). Heeren. Paris.. Journal of Functional Programming. R.. (1999). G. S.. pages 470–482. motivation and status report. ICFP00 (2000). In (POPL00. ACM. Hudak. and van IJzendoorn. chapter 4. P. pages 3–14. 8:437–444. Palgrave. (1984b). Generics for the masses. Henderson. R. 28A.. Montreal. Department of Computer Science. Hinze. Proceedings of ACM Workshop on Haskell. P. B. J. Montreal. In 24th ACM Symposium on Principles of Programming Languages (POPL’97). (1989). 2000). Utah. The Design and Implementation of Programming Languages. Programming Research Group. (1964). (1992). Composing monads. Copenhagen (available as YALEU/DCS/RR-968. J. pages 96– 107. and Adams. W. M. Philbin. N. K. R. In AFIPS Conference Proceedings. M. G. (1999). Oxford University. and Ghani. (1995). (2003). ACM Distinguished Dissertation. R. number 1782 in Lecture Notes in Computer Science. L. C. In Jeuring. pages 33–44. (1966). 8(4):293–342. editors (1992). (2004). New Orleans. Springer Verlag. Technical Report YALEU/DCS/RR-1004.nl/pub/RUU/CS/ techreps/CS-1999/1999-28. Lmmel. Literate programming. pages 333–343. (1996). Nancy. Proceedings of the 2003 Haskell Workshop. Ellis Horwood. P. D. Germany. Journal of Functional and Logic Programming. In AADEBUG. and Peyton Jones..pdf. Synthesis of Digital Designs from Recursive Equations. and Shan. or.. J. Liang. D. (2003). 2002). Philbin. and Sansom. J. pages 227–238.. ACM. In European Symposium on Programming (ESOP’00). 2004). Kelsey. (1984). Typing Haskell in Haskell. 6(4):308–320. ACM Transactions on Programming Languages and Systems. Programming Research Group. R. P. pages 109–122. ACM Symposium on Lisp and Functional Programming (LFP’84). Oxford. L¨ ufer. pages 141–152. Amsterdam. S. France. ACM Conference on Functional Programming and Computer Architecture (FPCA’85). and Jones. (1991). W.. Launchbury. Keller. Johnsson.. Jones. Kaes. 39(4). Jones. Hudak. L¨ ufer. J. R. In Proc SIGPLAN Symposium on Compiler Construction. Lewis. (1998). (1997). Kiselyov. J. T.. Glasgow 1992. . In ACM SIGPLAN International Conference on Functional Programming (ICFP’05). (1999). In (POPL00. N. (1993). ACM.. S.. Jones. K. 1999). pages pp46–56. Yale University). Hudak. In SIGPLAN ’86 Symposium on Compiler Construction. Cambridge University Press. Clinger. and Thompson.Johnson. 21. Orbit: an optimizing compiler for Scheme. D. J. Monadic state: Axiomatization and type safety. (1995). 20 Years of the ACM SIGPLAN Conference on Programming Language Design and Implementation (1979–1999): A Selection. Yale Univesrity. P. Hudak. J. 9(3):157–166. (1979). Rees. and Odersky. R. S. (1986). A. N.. MIT Press.. E. 33(9):26– 76. Monad transformers and modular interpreters. Jones. (1994). Li. J. M. Kranz. Kesley. Domain-specific embedded compilers. and Jeuring. Launchbury. R. pages 219–233. SIGPLAN Notices. C.. Leijen. (2005). M. 6(3):485–517. Strongly typed heterogeneous collections. Montreal. A system of constructor classes: overloading and implicit higher-order polymorphism. Lindig. (2003). Lindstrom. In Proc ACM Sigplan Workshop on State in Programming Languages. Workshops in Computing. Meijer. pages 613–622. D. O. 27(2):97–111. State in Haskell. M. In Proc 2nd Conference on Domain-Specific Languages (DSL’99). type classes reflect the values of types. M. Scrap your boilera plate with class: Extensible generic functions. Computer Journal. Jones. S. (1988). Qualified Types: Theory and Practice. Reinke.. Revised5 report on the algorithmic language Scheme. and Schupke. volume 201 of Lecture Notes in Computer Science. A theory of qualified types. H. O. D. M... number 582 in Lecture Notes in Computer Science.. The next 700 programming languages. editor. A. Simplifying and improving qualified types.. 7. Dependency-style o Generic Haskell. and Peyton Jones. In European Symposium on Programming (ESOP’92).. and Garigliano. In (Haskell04. 2004). Retrospective on: Orbit: an optimizing compiler for Scheme. and Duponcheel. Landin. L¨ mmel. Landin. Springer-Verlag. (1994). L¨ th. (1993). ACM SIGPLAN Notices. Jones.. Lazy imperative programming. Communications of the ACM. J. In Proceedings of the 2nd European Symposium on Programming. 16(5):1411–1430. L¨ mmel. M. Shields. Type classes with functional dependencies. J. Estonia. Journal of a Functional Programming. (2002). Launchbury. Jouannaud. Polymorphic type inference a and abstract data types. (2000). J. J. (2004). July 1986. Lisp and Symbolic Computation. The mechanical evaluation of expressions. Rees.uu. Springer Verlag. Implicit configurations... and Patil. (1994). J. Jones. and Meijer. ACM.. K. Computer Journal. S. P. E. Functional Programming. (1995). Random testing of C calling conventions. Scrap your boilerplate: a a practical approach to generic programming. Kranz. (1999). Available at. (1993). S. (2004). Uppsala. Lloyd. 1993). editor (1985). Tool support for refactoring functional programs. A loosely coupled applicative multiprocessing system. Parametric overloading in polymorphic programming languages.. Efficient compilation of lazy evaluation. (1984). pages 3–12. R. ACM. and Launchbury. Launchbury. In (FPCA93.-P. Implicit parameters: dynamic scoping with static types. J.cs. In ACM SIGPLAN International Workshop on Types in Language Design and Implementation (TLDI’03).. Published as SIGPLAN Notices Vol. Clarke. M... Type inference for qualified types. pages 26–37. ACM. Rennes. In (FPCA95. and Adams. Jones. ACM Press. Type classes with existential types. K. No. In (Haskell04. Long.. In (ICFP03. Knuth. L¨ h. R. 2003). Berlin. D. 2000). In (Meijer. (2000). C. Reasoning by Analogy and Causality (A Model and Application). Programming in an integrated functional and logic language. M. In (ICFP02. pages 133–144. Composing monads using coprodu ucts. Tallinn. In ACM SIGPLAN International Conference on Functional Programming (ICFP’97). J. and Sabry. and Rees.. 1995). (1984). M. S. P. (2005).. LFP84 (1984). PRGTR-10-91. Kelsey. Kiselyov. In 22nd ACM Symposium on Principles of Programming Languages (POPL’95). Springer Verlag. J. P. S. France. and Peyton Jones. Nikhil. In Hoare. In Logic in Computer Science. A language for declarative robotic programming. Thiemann. M. and Turcotte. Marlow. An extended type system supporting polymorphism. and de Moor. C. NATO ASI Series. Peterson. pages 47–96. overloading and inference. J.. (2001). J. Micheloud. M. pages 233–244. ACM Transactions on Programming Languages and Systems. pages 37–51. Paulson. Haskell Workshop. Paterson. (1995). R. Cambridge University Press. EPFL Lausanne. A theory of type polymorphism in programming. editors. Maneth. A functional notation for functional dependencies. L. and Fritzson.. The design and implementation of Mondrian. L. Neubauer. Cambridge. (2002). In ACM Symposium on Principles of Programming Languages (POPL’02). Marktoberdorf Summer School 2000. In (FPCA93. S. In The Haskell Workshop. The Definition of Standard ML (Revised). pages 201– 222. (2001). S. (1985). (1987).. Peyton Jones. M. M. J. Imperial College. J. pages 90–101.0 and 2. An overview of the Scala programming language. S. and Sperber. IOS Press. R. A new notation for arrows. Page. (1998b). R. pages 79–86. B. 33(1):29–49. Pittsburgh. R. 3(4):184–195. pages 229– 240. (1999). abstract data types.. Harper. In (Haskell01. McCarthy. Proceedings of the 1999 Haskell Workshop. (1997). Hager. (1994). Netherlands.. 10(1):1–18. P. H. Stenman. G. (1998a). Milner. C. (1989). Hudak. D. Odersky. Prentice Hall. M.. In International Conference on Functional Programming. In Launchbury. R. S. 2001). J. In International Conference on Computer Languages.. B. (1991). 2003). . London. In ACM Symposium on LISP and Functional Programming. Thiemann. number UU-CS-1999-28 in Technical Reports. and foreign-language calls in Haskell. (2004). Matthews. O’Donnell. Abstract types have existential type.. Microprocessor specification in Hawk. volume 1022 of LNCS. Neubauer. (2002). Journal of Functional Programming. and Zenger. Journal of Automated Reasoning. Technical Report IC/2004/640. (2002). Nilsson. (2006). Moggi.pdf. Peterson. and MacQueen. In International Conference on Robotics and Automation. Eager Haskell: Resource-bounded execution yields efficient iteration.. Major. 13(3). Ohori. M. SIGPLAN. editors. Green Card: a foreign-language interface for Haskell. JCSS. The Fun of Programming.. 1999). The evaluation dependence tree as a basis for lazy functional debugging. exceptions. Peyton Jones. 4(2):121–150. A. IEEE. P. Meijer. Functional logic overloading. Organizing numerical theories using axiomatic type classes. pages 57– 68. Experience with a large scientific application in a functional language. Cremet. P. Morgan Kaufman. MIT Press... Okasaki. R. Arrows and computation. Gasbichler. M. C. Page. J. In Symposium on Functional Programming Languages in Education. Peyton Jones. editor. S. T. (1991a). R.uu.Maessen. E. Perry. Cook. Baltimore. Automated Software Engineering. and Reid.. (1997). G.nl/pub/RUU/CS/techreps/ CS-1999/1999-28. In (ICFP03. California. Milner. M. R... The Definition of Standard ML. 4(3):337–370. E. Nordin. (1997). (1984). Available at. Purely functional data structures. 253:1255–1260. Implicit Parallel Programming in pH. In Proceedings of Haskell Workshop.. Views for Standard ML. Server side web scripting in Haskell. (1995). A proposal for Standard ML. From transistors to computer architecture: teaching functional circuit specification in Hydra. Milner. 12(4&5):375– 392. In Gibbons. pages 184–197.D. M. R. Odersky. P. 93:55–92. C. A.0. Maryland. 17:844–895. Technical report. and Sperber. (2004). ACM. Utah. In (ICFP99. Engineering Theories of Software Construction.. In Twelfth Annual ACM Symposium on Principles of Programming Languages (POPL’85). thesis. and Arvind (2001). (2001). Emir.. N. (1991b). J. F. (2003). M... Moggi.. J. The combination of symbolic and numerical computation for three-dimensional modelling of RNA. N. pages 28–35. and Claessen. and Thaller. O. Software is discrete mathematics. editor (1999).. and Elliott. Recursive functions of symbolic expressions and their computation by machine. S. (1960). Milner. C. Okasaki. M. Extending the Haskell Foreign Function Interface with concurrency. and Moe. (1999b)... Schinz.. Amsterdam. Springer-Verlag. and Steinbrueggen. Journal of Functional Programming. ACM Press.. Meijer. Massachusetts. MIT Press. Communications of the ACM. V. 1993). Information and Computation. Computational lambda calculus and monads. Haskell Workshop. In First International Workshop on Practical Aspects of Declarative Languages. (1990).. M. J. B. Broy. Journal of Functional Programming.. Mihaylov. A polymorphic record calculus and its compilation. M. P. W. Snowbird. SCIENCE. and Plotkin. Palgrave. E. Ph. S.. In Launchbury.cs. Mitchell. E. Peyton Jones. N. Perry. J.. Okasaki. (1978). (1998).. Changes between Scala version 1.-W. C. Part I. Gasbichler. J. Portland. E. Nilsson.. K. and Launchbury. Lambda in motion: Controlling robots with Haskell. In Proc 15th Australian Computer Science Conference. McBride.. R. and Hudak. and Sparud. E. The Implementation of Functional Programming Languages. Algorithmic debugging for lazy functional languages. Tofte. (2000). (1999a). (1991). R. Meijer.. In ACM SIGPLAN Workshop on ML. The original Lisp paper. concurrency. editor. Amsterdam. H. The Implementation of Practical Functional Programming Languages. (1997). From fast exponentiation to square matrices: an adventure in types. Notions of computation and monads. (1993). Tackling the awkward squad: monadic input/output. M. S. J. Altherr.. and Tofte. (2004). Faking it: Simulating dependent types in Haskell. P. Paterson. EPFL Lausanne. (2003). Lag. a language for VLSI design. S. Florida. In (ICFP98. T. S.. Pottier. and Peyton Jones. Microsoft Research. M. and use: heap profiling and space-efficient compilation revisited. Type classes: an exploration of the design space. (1993). Queinnec. Department of Computing Science. pages 355–366. R. µFP. (1983). (1996). C.. In Hughes. Pierce. T. SIAM Journal on Computing. and Runciman. Wobbly types: type inference for generalised algebraic data types. Composing contracts: an adventure in financial engineering. (1995a). Oxford University. (2002). S. G. (2007). (1994). D. 2(2):127–202. Types and Programming Languages. N. Burroughs Corporation Austin Research Center. Eber. M. Highlights from nhc: a space-efficient Haskell compiler. A. The influence of browsers on evaluators or.. Tartu.. S. Yeung. S. In Vene. In Symp. M. Sheard. In ACM Symposium on Principles of Programming Languages (POPL’06). volume 523 of Lecture Notes in Computer Science. In 23rd ACM Symposium on Principles of Programming Languages (POPL’96). Data types as lattices. Y. Peyton Jones. C. (1993). E. pages 212–219. In (Launchbury and Sansom.. 21:37–79. Template metaprogramming for Haskell. St Petersburg Beach. N. A semantics for imprecise exceptions. Rjemo. F.. T. Time and space profiling for non-strict. Programming Research Group. and Pasalic. S. Journal of Functional Programming. pages 249–257. (1983). S. Advanced Functional Programming. Peyton Jones.. 2000). August 14-21. MIT Press. In (ICFP00. (1992). Runciman. S. In Chakravarty. (1997). 5(3):522–587. L. Marlow. DTI/SERC. Springer Verlag. C. Meta-programming with builtin type equality. (2005). Shapiro. K. Functional 3D graphics in C++ — with an object-oriented. D. pages 203–214. Elliott. G. editors. (2000).. ACM.. D. B. B. M. Sheeran. and Finne. In Proc ACM Conference on Lisp and Functional Programming. Ranta. M. 2004. J. (2004). and Seward. Amsterdam. T. ACM Press. P. (2004). Garbage Collection and Memory Efficiency in Lazy Functional Languages. thesis. Sheeran.. (1996b). Boston. pages 25–36. Languages and Applicatioons (OOPSLA’04). Keele. Washburn. Revised Lectures. higher-order functional languages. Scheevel. Hardware design and functional programming: a perfect match. (1995b). Reid. S. Schechter. In Proceedings of Joint Framework for Information Technology Technical Conference. S. Rojemo. Pope. Hall. In International Conference on Functional Programming. C. Peyton Jones.. Peyton Jones. In 22nd ACM Symposium on Principles of Programming Languages (POPL’95). and Shields. pages 71–84.. (2000). The Glasgow Haskell Compiler: a technical overview. void.. P.. Imperative streams – a monadic combinator library for synchronous programming. (1995). editor. Hammond. (1971). R. (2002). E. Cork. S. Chalmers University. 17:1–82. D. M. Proceedings of the 2002 Haskell Workshop. Scheevel. and Launchbury. Scholz. Heap profiling of lazy functional programs. (1991). Peyton Jones. Sheard. pages 295–308. V. and Abi-Ezzi. S.. S.. 1995).. In (FPCA95. Philadelphia. 5th International School. (1986). Estonia. NORMA — a graph reduction processor. C. Peyton Jones.. Haskell workshop. Revised report on the algorithmic language scheme. J. (2004). In Proceedings of the Fourth International Workshop on Logical Frameworks and Meta-languages (LFM’04). 14(2):145–189. and Meijer. Heap profiling a lazy functional compiler. (1999). Journal of Functional Programming. J. pages 34–41. (1992). In Launchbury. Journal of Functional Programming. Imperative functional programming. drag. pages 636–666. and Wakeling. N. J. Stratified type inference for e generalized algebraic data types. In 20th ACM Symposium on Principles of Programming Languages (POPL’93)..-M. In ACM Conference on Programming Languages Design and Implementation (PLDI’99). Rees. and Wadler. AFP 2004. Declarative debugging with Buddha. (1996a). Pittsburgh. Gordon. ACM. and Peyton Jones. pages 280–292.. Peyton Jones. and Henderson. Journal of Functional Programming. NORMA SASL manual. µFP — An Algebraic VLSI Design Language. F. Darcs home page. C. Roundy. Hoare. (2000). Montreal. Runciman.. POPL00 (2000). (1984). Springer. and Uustalu. Sheard. 27th ACM Symposium on Principles of Programming Languages (POPL’00). Oxford University. and Wadler. Atlanta. Sansom. Eurographics. C. Languages of the future. In Proceedings of the 1994 Eurographics Object-Oriented Graphics Workshop. (1998). MIT Press. Rojemo. S. (1984). C. Unboxed values as first class citizens. Jones. editor. ACM SIGPLAN Notices. Sheeran.D. (2004). editor. (1976). (2005). (1993). N. 1998). volume 3622 of Lecture Notes in Computer Science.. Journal of Universal Computer Sci- . In ACM SIGPLAN International Conference on Functional Programming (ICFP’00). ACM. ACM Conference on Functional Programming and Computer Architecture (FPCA’91).. Technical report. D. Peyton Jones. W. D. J. Algorithmic Debugging. Springer.. Implementing lazy functional languages on stock hardware: The spineless tagless G-machine. P. and Runciman. Concurrent Haskell. In ACM SIGPLAN International Conference on Functional Programming (ICFP’96). FranTk: a declarative GUI language for Haskell. Journal of Functional Programming. (1986). ACM Press. Partain. multiple dispatching implementation. Towards a mathematical semantics for computer languages. 6(4). ACM Press. Programming Research Group.Peyton Jones. continuations to program web servers. Sage. S. A.. M. A. C. 1992). Ph. Weirich. and Strachey. Grammatical framework. PhD thesis. (2006). ACM Press.darcs. In ACM Conference on Object Oriented Programming Systems. Rjemo.. Peyton Jones. M. ACM. New dimensions in heap profiling. Scott. and Weirich. S. and R´ gis-Gianas. Scott. Vytiniotis.. Practical type inference for arbitrary-rank types. S. on LISP and Functional Programming. Boston. (2005). and Clinger. M. W. E. ACM Press. PRG-6.net. E. 3(2):217–246. and Wakeling. Charleston. (1998). and Peyton Jones. Software Practice and Experience. Philadelphia. D. Tang.. and Sheard. Springer Verlag. Somogyi. Technical Report AI-TR-474.. In Proc ACM Conference on Lisp and Functional Programming. (2003). van Heijenoort. and Steele. Initialising mutually-referential abstract objects: the value recursion challenge. G. (1986). The semantic elegance of applicative languages. Steele. In International Symposium on Programming Languages Implementations. ACM. M. C. Journal of Functional Programming. P. A. Turner. Italy. W. 44(2):267–270.nus. pages 85–92. Rabbit: A compiler for Scheme. and Stuckey. and Slous. Sulzmann. In (Jouannaud. Sulzmann.. In Darlington. Springer Verlag. Some practical methods for rapid combinator reduction. Turner. (2005). 1985). L. and Runciman. M. Technical Singh.jucs. In ACM Conference on Programming Languages Design and Implementation (PLDI’96). Morrisett. (1996). The Fun of Programming. pages 472–492. (2005).. (1985). Venice. L. 9. CUP. . Journal of Functional Programming. Journal of Computer and System Sciences. 21(12):158–166. M. In Taha. Deterministic. G. Vuillemin. (2002b). Springer Verlag LNCS 2257. Extracting programs from type class proofs. In Proceedings of the 1981 Conference on Functional Programming Languages and Computer Architecture. A typed representation for HTML and XML documents in Haskell. Steele. (1976).comp. The execution algorithm of Mercury. An overview of Miranda. Scheme — an interpreter for extended lambda calculus. editors. pages 252–257. Amsterdam. (1985). D. In International Symposium on Principles and Practice of Declarative Programming (PPDP’06). Multi-stage programming with explicit annotations. Turner. (1981). (2003). In (Jouannaud. Comprehending monads. Technical report. van Rossum. In Practical Applications of Declarative Languages. D. pages 181–192. Miranda: A non-strict functional language with polymorphic types. From Frege to Godel. Estonia. Spivey. Python reference manual. volume 32 of SIGPLAN Notices. M. Henderson. Charleston. P.. Wadler. J. and Turner. Journal of Symbolic Logic. P. Thiemann. MIT. and Duponcheel. A Sourcebook in Mathematical Logic. S. 1992). P. (1979a). Sulzmann. T..edu. Tallinn. Addison Wesley. 1984). pages 198–220.. Object-oriented style overloading for Haskell. D. (1975). Turner. (1984). and Conway. Z. Building interpreters by composing monads. pages 192–208. Wash/cgi: Server-side web scripting with sessions and typed. Duck. P. Palgrave. (1992). 12(5):435–468. S. Accelerating Adobe Photoshop with reconfigurable logic. pages 113–128. Views: a way for pattern matching to cohabit with data abstraction. (1989). J. Understanding functional dependencies via constraint handling rules.. In Benton.ence. pages 5–26. Munich. compositional forms. MIT. Olympia. (1996).. In 21st ACM Symposium on Principles of Programming Languages (POPL’94). M. A. Towards a formal semantics. (1997). P. W. pages 159–166. G. D. A. Functional Programming and its Applications. Wadler. (2002). Strachey. Syme. T. Amsterdam. (1979b). In Formal Language Description Languages for Computer Programming. Tarditi. J. Cambridge. Jr. D. (1967). Turner. Clarke. North Holland. (2006). (2005). CWI. 17:83–130.. Lexically scoped type variables. Henderson. pages 291–308. C. J. Fourth International Conference on Functional Programming and Computer Architecture. Stone. ACM. pages 177–200. How to replace failure by a list of successes. (1996).. Florence. Theoretical Computer Science. D. Recursion equations as a programming language. S. (1997). Sussman. and Peyton Jones. D. SIGPLAN Notices.org. Peyton Jones. Shields. Sparud. Correct and optimal placement of recursion in a simple programming language. The SASL language manual. and Programs (PLILP’97). Stoye. O.. This and other materials on Miranda are available at. D. In ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation (PEPM ’97). (1964). R. Wadler. Harper. and Norman. Shields. Cheng. Washington. A. ACM. an efficient purely declarative logic programming language. ACM. Journal of Logic Programming. pages 203– 217. and Leroy. Pugs home page.. A Haskell programmer’s guide to Chameleon. A. A. Logics. D. Thiemann. Turner.. Swierstra. AI Memo 349. M. Deforestation: transforming programs to eliminate trees. P. (2002a). Interview with David Roundy of Darcs on source control. P. A new implementation technique for applicative languages. TIL: A type-directed optimizing compiler for ML. MA. ACM. Number 1129 in Lecture Notes in Computer Science. S.sg/ ∼sulzmann/chameleon/download/haskell. R. editors. A. A. In IEEE Symposium on FieldProgrammable Custom Computing Machines. IFIP Working Conference... Available at 11 7/hardware design and functional. Sinclair. T. S. Artificial Intelligence Laboratory. ACM. (1974). C.. S. Combinators for logic programming. editor. In Gibbons. 9:31– 49. X. G.org. P.uk. (Launchbury and Sansom. D. 11(7):1135–1158. Tracing lazy functional computations using redex trails. Report Report CS-R9525. J. (2007). pages 1– 16. In (LFP84. pages 97–108. Stosberg. P. F. (1990b). and Seres. P. University of St Andrews. London. M. and de Moor.html. editors. N. G. OSDir News. pages 184–207. Theorems for free! In MacQueen. (1987). Graphical user intefaces for Haskell. (1993). Proc ACM Workshop on ML (ML’2005). (1990a). Another algorithm for bracket abstraction.pugscode. Turner. G. G. and Lee. Harvard University Press. (2001). 73:231–248. Wadler. (1978). Nice. In Workshop on Multi-Language Infrastructure and Interoperability (BABEL’01). A.. (1982). ErrorCorrecting Combinator Parsers. Microsoft Research. In 14th ACM Symposium on Principles of Programming Languages. volume 1292 of Lecture Notes in Computer Science.. IEEE Computer Society Press.. 1985). Wadler. (1995). J. Dept. The bits between the lambdas: binary data in a lazy functional language. ACM. editors. In Proceedings of the 30th ACM SIGPLANSIGACT Symposium on Principles of Programming Languages. (2000). (1998). Albuquerque. and Blott. W. P. P. (2002). A prettier printer. C. Xi. P. Wan. (1988). Wan. IEEE Computer. (2001).. Department of Computer Science. Taha. Young. Brehm. First use of *sections*. Multipleview tracing for Haskell: a new Hat. Event-driven FRP. I. H. T. The essence of functional programming. (1971). How to make ad-hoc polymorphism less ad hoc. Florence. Z. Canada. ACM. pages 224–235.. ACM. In Workshop on Standard ML. and Runciman. (2003). Carnegie-Mellon University. 2:461–493. Z. M. and Runciman. PhD thesis. Yale University. In International Symposium on Memory Management. Wadler. Watson. Palgrave. Functional reactive programming from first principles. and de Moor. W. A Generative. of Computer Science. (1989). Taha. M.ac. and Chen. C. D. Yale University. Z. pages 242–252. (1999). . Wadler. The Fun of Programming. P. on page 30.. Baltimore. Wadler. J. BC. D. In Proceedings of the ACM SIGPLAN ’00 Conference on Programming Language Design and Implementation (PLDI). without even being odd. Comprehending monads. In 20th ACM Symposium on Principles of Programming Languages (POPL’92). Real-time FRP. Vancouver. Available at http: //www.. O. C. P... S. (2001).. and Hudak. Wallace. ACM. The Semantic Analysis of Functional Programs: Theory and Practice. C. In Gibbons. P. PhD thesis. Nested-Sequential Basis for General Purpose Programming Languages. PhD thesis. Chen. Guarded recursive datatype constructors. W. 2001).. Oxford University. Mathematical Structures in Computer Science. M. In Proc 16th ACM Symposium on Principles of Programming Languages. Wallace. (1988). Z. In Proceedings of Sixth ACM SIGPLAN International Conference on Functional Programming. (1982). In Proceedings of Fourth International Symposium on Practical Aspects of Declarative Languages. How to add laziness to a strict language. (1992a). pages 51–57. Semantics and Pragmatics of the Lambda Calculus.uk/fp/nhc98.york. In (ICFP99. and Hudak. M.. Wallace. ACM Press. Texas. and Runciman. (December 2002). (1998). In (Haskell01. C. 1999). ACM. PhD thesis. P. Haskell and XML: Generic combinators or type-based translation. and MacQueen.. Wadsworth. (1973). Italy. Department of Computer Science. Taha. and Gurd.cs. J. (2003). Wadler. A practical data flow computer. pages 148–159.Wadler. Functional Reactive Programming for Real-Time Embedded Systems. G. Wan. P. pages 1–14. Chitil. The nhc98 web pages. (1992b).. Austin. Wan. and Hudak. Wile. Wallace.
https://pt.scribd.com/document/56928328/haskel
CC-MAIN-2017-09
refinedweb
54,623
60.01
Want to find the easiest way to convert Excel to CSV in C#, VB.NET? This section just aims at introducing an easiest solution to convert Excel (XLS/XLSX) to CSV with C#, VB.NET via an Excel component. Two reasons can be attributed to call this solution the easiest: One is that the key code of Excel to CSV is only three lines; the other reason is that you can directly convert Excel to CSV without opening the original Excel file. Spire.XLS for .NET, as a .NET Excel component, enables users to fast create, read, write and modify Excel document without Microsoft Office Excel Automation. Besides, it has powerful conversion functions, such as it can easily convert excel to PDF, Excel to HTML, Excel to XML, XML to Excel, Excel to Image and Excel to CSV. By calling a class Spire.Xls.Worksheet in Spire.XLS.dll component, you can convert Excel to CSV by calling a method: public void SaveToFile(string fileName, string separator, Encoding encoding) with three parameters passed. Please preview the effective screenshot of this task first: Now before view the whole code of Excel to CSV conversion, please Download Spire.XLS for .NET. Sample Code: Excel to CSV in C#, VB.NET using Spire.Xls; namespace xls2csv { class Program { static void Main(string[] args) { Workbook workbook = new Workbook(); workbook.LoadFromFile(@"..\ExceltoCSV.xls"); Worksheet sheet = workbook.Worksheets[0]; sheet.SaveToFile("sample.csv", ",", Encoding.UTF8); } } } Sample Code: Excel to CSV in VB.NET Imports Spire.Xls Namespace xls2csv Friend Class Program Shared Sub Main(ByVal args() As String) Dim workbook As New Workbook() workbook.LoadFromFile("..\ExceltoCSV.xls ") Dim sheet As Worksheet = workbook.Worksheets(0) sheet.SaveToFile("sample.csv", ",", Encoding.UTF8) End Sub End Class End Namespace
http://www.e-iceblue.com/Knowledgebase/Spire.XLS/Program-Guide/How-to-Use-C-/VB.NET-Convert-Excel-to-CSV.html
CC-MAIN-2013-20
refinedweb
290
58.48
String Array - Java Beginners String Array From where can I get, all the functions that are needed for me to manipulate a String Array. For Example, I had a String Array ("3d4..., as to by which method can I separate the Integers from this Array of String String Array - Java Beginners this question to you before, now the problem comes if my String Array consisted... for me to manipulate a String Array. For Example, I had a String Array ("3d4..., as to by which method can I separate the Integers from this Array of String String Array - Java Beginners String Array Thanks for the help you had provided,,, Your solution did worked... I worked on the previous solution given by you and it worked.... I... again,,, and I'll come back to you , if I had other problem regarding JAVA details of my program - Java Beginners details of my program hi 1- allow the customer to order any thing...) days maybe we can give up from using array of oblect and make it as a usual array but it could make the program much longer note: i want to use the method all comment in jsp page in the browser. jsp scriptlet tag also support java comment...; // java comment /* multiple line java...all comment in jsp Defined all comment in jsp ? jsp Array sorting - Java Beginners "Arrays.sort(name)" but the problem is that my relationship with the "type" array will be lost. I need to re-order the "name" array alphabetically so that my "type...Array sorting Hello All. I need to sort one array based sorting an array of string with duplicate values - Java Beginners sorting an array of string Example to sort array string ARRAY SIZE. - Java Beginners ARRAY SIZE. Thanks, well that i know that we can use ArrayList for the elements to get stored and have used that infact in my code but i stuck where... use * operator in arrayList. Heres my code again: i have just stored all ARRAY SIZE!!! - Java Beginners ARRAY SIZE!!! Hi, My Question is to: "Read integers from the keyboard until zero is read, storing them in input order in an array A. Then copy...). The first soloution that came to my mind was to initialize the array to a very big sorting an array of string with duplicate values - Java Beginners sorting an array of string with duplicate values I have a sort method which sorts an array of strings. But if there are duplicates in the array it would not sort properly... of the list. i then copy my list in the array. the method call inside the main sorting an array of string with duplicate values - Java Beginners string with duplicate values How to check the string with duplicate values String in Java - Java Beginners ]); } } } ---------------------------------------------------- I am sending simple code of String array. If you are beginner in java... : Thanks...String in Java hiiiii i initialise a String array,str[] with a size String array sort String array sort Hi here is my code. If i run this code I am... language="java"%> <%@ page session="true"%> <% Connection... result_set=null; String route_number[]=new String[1000]; String Pass the array please.. - Java Beginners Pass the array please.. hi! i'm having problem with this programming..can anyone check where is my error.. Question: Write a program called... them in an array. When finished receiving the numbers, the program should pass Difference between documentation comment and multiline comment in java? Difference between documentation comment and multiline comment in java? Difference between documentation comment and multiline comment in java array password - Java Beginners array password i had create a GUI of encryption program that used the array password. my question is can we do the password change program? i mean we change the older password with the new password: "); Help With My Java Project? Help With My Java Project? Write a console program that repeatedly..., or Mixed). As they enter the data, assign it to a two-dimension array where... these steps: display: As Entered for each entry in the array, display the index Sorting String arrays in java - Java Beginners InterfaceStringArray { private String[] arr; //ref to array arr private int... String[size]; nElems = 0; //create array } public int getSize() { return...Sorting String arrays in java I have to make an interface... void main(String[] args) { ArrayMerge am = new ArrayMerge(); int a[] = {1 are my declared varibles right - Java Beginners are my declared varibles right here is what i have so far juswt wondering if my varibles are correct. public class Bug { public Bug(int... String currentPosition; private double Faceing; } Hi ERROR with my JAVA code - Java Beginners ERROR with my JAVA code The error came up in my main method... it wasnt there until i finished my actionListener coding so I dont know how to fix... actionPerformed(ActionEvent e) { // TODO Auto-generated method stub String String Methods - Java Beginners String Methods Greetings RoseIndia Team; Thank you for your prompt response to my question. I have one other question requiring your assistance... main(String []args){ Scanner scanner = new Scanner(System.in Array in Java Array in Java public class tn { public class State{ String s_name; int p1; int p2; } public void f... } public static void main(String[] args) throws Exception { tn What's wrong with my form? - Java Beginners What's wrong with my form? Dear expert, I'm having trouble with my enquiry form. Hope you can tell me where my mistake lies. Thanks. Here's my jsp code: Enquiries var objForm = new Object plz. answer my ques - Java Beginners plz. answer my ques I want to ask a prog. prob which would create...,IOException{ PrintWriter out = response.getWriter(); String... connection=null; ResultSet rs; String userName=new String(""); String array - Java Beginners " + maxCount); } public static void main(String [] args){ Scanner input=new Some additions for my previous question - Java Beginners Some additions for my previous question This is the question I... for the GUI. Could anyone please help?" So my other two questions are: 1...) { String value = text.getText(); int i = 0; Pattern p = Pattern.compile("[A-Z Insert a Processing Instruction and a Comment Node a Processing Node and Comment Node in a DOM document. JAXP (Java API for XML... Insert a Processing Instruction and a Comment Node  ... and data string. Here target name is "open" and data string is " Array Array Hi, Here is my code: public class Helloworld { public static void main (String [] args) { System.out.println("Hello,World"); } } Thanks array split string array split string array split string class StringSplitExample { public static void main(String[] args) { String st = "Hello...]); } } } Split String Example in Java static void Main(string[] args String Array Java String Array  ... how to make use of string array. In the java programming tutorial string, which are widly used in java program for a sequence of character. String Problem with Link To website with my Java Appliaction - Java Beginners Problem with Link To website with my Java Appliaction Hello Sir...); } public static void main(String z[]){ menu5 pro=new menu5(); pro.dis...; JButton courbtn,facbtn,admin,hebtn,visit; public void openURL(String url clarify my question - Java Beginners clarify my question Dear Sirs/Expert, Thanks for writing back. Yes. I want a page to use id of parents to do the search for the match. So... the matching tutors to the parent. I was telling you that my degree array manipulation - Java Beginners example at: manipulation We'll say that a value is "everywhere" in an array if for every pair of adjacent elements in the array, at least one of the pair how to include Timestamp into my enquiry form? - Java Beginners how to include Timestamp into my enquiry form? Dear expert, I'd like to include a timestamp function into my enquiry form. However, I am stuck at the servlet portion whereby I want to map my Timesatmp class to my update array - Java Beginners array WAP to perform a merge sort operation. Hi Friend, Please visit the following link: Hope that it will be helpful for you. Thanks anyone willing to look over my code? (java) anyone willing to look over my code? (java) package... InOrderWithOutCheats { public static void main(String[] args...); } // prevents "howManyNumbers" from being <= 0 int[] array = new int JavaScript Comment JavaScript Comment: coding to hide all those statements, which are not supposed to run. Comment make.... Example 1: <html> <head> <title>My Firt JavaScript File < Need someone to check and modify my simple code - Java Beginners Need someone to check and modify my simple code How to write... static void main(String[] args) { double grossPay = 320; double...() { public void actionPerformed(ActionEvent ae){ String value1=text1.getText On string - Java Beginners StringUtilsResources: - string -Display reverse String using string reverse method in Java I wanted to display reverse String using string.reverse() method in Java  Array - Java Beginners Array how to declare array of hindi characters in java Array in Java - Java Beginners Array in Java Please help me with the following question. Thank you. Write a program that reads numbers from the keyboard into an array of type int[]. You may assume that there will be 50 or fewer entries in the array. Your i can creat string array, with global visibility. how i can creat string array, with global visibility. Problem : Write a Java class having a String array, with global visibility. Add a method that adds a given sting to the string array. Add a method that searches can't read my xml file in java _COMMENT> //nothing ... in this row and my some java codes : private String getResults(TopDocs docs) { StringBuilder htmlFormat = new...can't read my xml file in java i've a xml file like this : < Java
http://www.roseindia.net/tutorialhelp/comment/85635
CC-MAIN-2014-10
refinedweb
1,623
65.42
This is kinda embrassing. Printed out first member of normals, and it's fine. The problem seems to be with the calcNormal function, as the values aren't quite correct. Somehow got it into my head that calcNormal couldn't possible be wrong. Printable View This is kinda embrassing. Printed out first member of normals, and it's fine. The problem seems to be with the calcNormal function, as the values aren't quite correct. Somehow got it into my head that calcNormal couldn't possible be wrong. I have done some more playing around, and the problem is actually still here, and the problem is when transferring the contents of normArray to normals. I had to make 'normals 2d', as if it is just 1d (without {0,0,0}), it will not compile at all. Both printf's should print out 0,0,-1, but they differ. So, back to square one; why is 'normals' not settings its contents to what is returned by calcNormal? Code: float normArray[3]; // If declared in main, doesn't compile. This is the least of my worries at the moment. int main(void) { GLfloat points[4][3] = { {0,0,0}, {0,1,0}, {1,1,0}, {1,0,0} }; GLfloat normals[2][3] = {{calcNormal(points[0], points[1], points[3])}, {0, 0, 0}}; printf("actual normal is %f, %f, %f\n", normals[0][0], normals[0][1], normals[0][2]); return 0; } GLfloat calcNormal(GLfloat* B1, GLfloat* B2, GLfloat* T) { GLfloat A[3] = {B2[0] - B1[0], B2[1] - B1[1], B2[2] - B1[2]}; GLfloat B[3] = {T[0] - B1[0], T[1] - B1[1], T[2] - B1[2]}; normArray[0] = (A[1]*B[2]) - (A[2]*B[1]); normArray[1] = (A[2]*B[0]) - (A[0]*B[2]); normArray[2] = (A[0]*B[1]) - (A[1]*B[0]); printf("normal is %f, %f, %f\n", normArray[0], normArray[1], normArray[2]); return normArray[0]; } Shouldn't point[0], point[1], etc., have a second dimension? It is a 2D array. It does have a second dimension. Point[0] is (0,0,0) That question confuses me, so will answer best I can. I am passing 3 arrays, of size 3, into calcNormal. calcNormal is recieving the arrays correctly, as it is correctly calculating values. I have verified this with different input arrays. The issue is in passing the values in normArray, AFTER the calculations, into normals, via a pointer. EDIT: Remember, normArray is a 1d array, not 2d, of size 3. My compiler immediately gives the error: "Illegal initialization", on that line. I'm suggesting the normals array, is not getting the values you want it to have. You have 9 ints, trying to be initialized into one row which holds only 3 ints. points[0] = 3 int's points[1] = 3 int's points[3] = 3 ints All for the 0th row of the normals array, which holds 3 ints, only. Oh, you think calcNormals returns an array of size 9? Arrays point[0], [1] and [2] are fed INTO calcNormal (called by normals[x]), which returns pointer to normArray (size 3), which normals[x] is meant to set itself to. No, I've been helping beginners too much. normArray[0] is one number, however. It's not a pointer to three numbers. Well spotted. Not fixed the code, but 1 step there. Should be &normArray[0], and function should be defined to return float*. I still can't get it to compile though, am getting 'excess elements in scalar initializer' errors. I made my numbArray global, as per your suggestion, and changed the return from calcNormal to void. No joy. The compiler won't have anything to do with initializations that involve a row of digits, being subtracted by another row of digits, and the result of that, going into one element of the array being initialized. My suggestion would be to initialize the arrays, then do the subtraction in for loops, and get those answers into variables, and then assign them to the array either with more for loops or explicit assignments. I used that to get main() to compile, like this: I'm not confident that code is correct, because I couldn't actually test it, but it does compile. That's how I'd work with the calcNormal() function.I'm not confident that code is correct, because I couldn't actually test it, but it does compile. That's how I'd work with the calcNormal() function.Code: #include <stdio.h> float normArray[3]; void calcNormal(float* B1, float* B2, float* T); int main(void) { int i; float points[4][3] = { {0,0,0}, {0,1,0}, {1,1,0}, {1,0,0} }; float normals[2][3]; calcNormal(points[0], points[1], points[3]); for(i = 0; i < 3; i++) { normals[0][i] = normArray[i]; normals[1][i] = 0; } normals[0][2] = normArray[3]; printf("actual normal is %f, %f, %f\n", normals[0][0], normals[0][1], normals[0][2]); i = getchar(); return 0; } Your way would be a great time saver, I couldn't grok the concept of it at first. :) Yeah, I began coming to the realisation that what I wanted to isn't actually possible (I'm always looking for small and elegant solutions). Got it all working with some for loops now.
https://cboard.cprogramming.com/c-programming/113902-parsing-returning-array-pointers-2-print.html
CC-MAIN-2017-04
refinedweb
892
63.9
The Developer Diaries Or, Random Thoughts on Life and Software 2012-01-16T04:16:17+00:00 Apache Roller Solaris 10 Containers Released on OpenSolaris Jordan Vaughan 2009-10-23T20:10:33+00:00 2009-10-24T03:14:16+00:00 <p>After roughly nine months of nonstop development, Jerry Jelinek <a href="">integrated the first phase of <em>solaris10</em>-branded zones</a> (a.k.a. Solaris 10 Containers) into <a href="">OpenSolaris</a> build 127 yesterday. Such zones enable users to host environments from <a href="">Solaris 10 10/09</a> and later inside <a href="">OpenSolaris zones</a>. <a href="">As mentioned in one of my earlier posts</a>, we're developing <em>solaris10</em>-branded zones so that users can consolidate their Solaris 10 production environments onto machines running OpenSolaris and take advantage of many innovative OpenSolaris technologies (such as <a href="">Crossbow</a>) within such environments.</p> <p><a href="">As Jerry mentioned in his blog</a>, this first phase delivers emulation for Solaris 10 10/09, physical-to-virtual (p2v) and virtual-to-virtual (v2v) capabilities to help users deploy Solaris 10 environments in <em>solaris10< <a href="">dtrace(1M)</a> and <a href="">mdb(1)</a> on processes running in <em>solaris10< <em>solaris10</em>-branded zones to examine processes running inside of the zones.</p> <p>If you are an OpenSolaris or Solaris 10 kernel developer, then I admonish you to read the <a href=""><em>Solaris10</em>-Branded Zone Developer Guide</a>, which explains the purpose and implementation of <em>solaris10</em>-branded zones as well as what you'll need to do to avoid breaking such zones. It's every kernel developer's responsibility to ensure that <em>solaris10</em>-branded zones will work with his/her changes to the Solaris 10 and OpenSolaris user-kernel interfaces (syscalls, ioctls, kstats, etc.).</p> <p>This project was full of surprises and challenges. One of my favorite bugs involved Solaris 10's libc's use of the x86 <code>%fs</code> segment register. Solaris 10's libc expected the x86 <code>%fs</code> register to contain a nonzero selector value in 64-bit processes (Solaris 10's <a href="">__curthread()</a> returns <code>NULL</code> if <code>%fs</code> is zero.), which was problematic because OpenSolaris' kernel cleared <code>%fs</code>. Libc has always used <code>%fs</code> to locate the current thread's <a href=""><code>ulwp_t</code></a> structure on 64-bit x86 machines. Therefore, 64-bit x86 processes running inside <em>solaris10</em>-branded zones were unable to use <a href=""><code>thr_main(3C)</code></a> and other critical libc functions as well as several common libraries, such as libdoor.</p> <p>The fix was somewhat complicated because it had to guarantee that all threads in all 64-bit x86 processes running in <em>solaris10</em>-branded zones would start with nonzero <code>%fs</code> registers. Fortunately, only two system calls modify <code>%fs</code> in Solaris 10 and OpenSolaris: <code>SYS_lwp_private</code> and <code>SYS_lwp_create</code>. <code>SYS_lwp_private</code> is a libc-private system call that's invoked once when libc initializes after a process execs (see OpenSolaris' implementation of <a href=""><code>libc_init()</code></a>) in order to configure the <code>FS</code> segment so that its base lies at the start of the single thread's <code>ulwp_t</code> structure. <code>SYS_lwp_create</code> takes a <a href=""><code>ucontext_t</code></a> structure and the address of a <code>ulwp_t</code> structure and creates a new thread for the calling process with the given thread context and an <code>FS</code> segment beginning at the start of the specified <code>ulwp_t</code> structure.</p> <p>My initial fix did the following:</p> <ol> <li>The solaris10 brand's emulation library interposed on <code>SYS_lwp_private</code> in <code>s10_lwp_private()</code>. It handed the system call to the OpenSolaris kernel untouched and afterwards invoked <code>thr_main(3C)</code> to determine whether the Solaris 10 environment's libc worked after the kernel configured <code>%fs</code>. If <code>thr_main(3C)</code> returned <code>-1</code>, then the library invoked a special <code>SYS_brand</code> system call to set <code>%fs</code> to the old nonzero Solaris 10 selector value.</li> <li>The brand's emulation library also interposed on <code>SYS_lwp_create</code> in <code>s10_lwp_create()</code> and tweaked the supplied ucontext_t structure so that the new thread started in <code>s10_lwp_create_entry_point()</code> rather than <a href=""><code>_thrp_setup()</code></a>. Of course, new threads had to execute <code>_thrp_setup()</code> eventually, so <code>s10_lwp_create()</code> stored <code>_thrp_setup()</code>'s address in a predetermined location in the new thread's stack. <code>s10_lwp_create_entry_point()</code> invoked <code>thr_main(3C)</code> to determine whether the Solaris 10 environment's libc worked when <code>%fs</code> was zero. If <code>thr_main(3C)</code> returned <code>-1</code>, then the new thread invoked the same <code>SYS_brand</code> system call invoked by <code>s10_lwp_private()</code> in order to correct <code>%fs</code>. Afterwards, the new thread read its true entry point's address (i.e., <code>_thrp_setup()</code>'s address) from the predetermined location in its stack and jumped to the true entry point.</li> <li>The <em>solaris10</em> brand's kernel module ensured that forked threads in <em>solaris10</em>-branded zones inherited their parents' <code>%fs</code> selector values. This ensured that forked threads whose parents needed <code>%fs</code> register adjustments started with correct <code>%fs</code> selector values.</li> </ol> <p>I committed the fix and was content until a test engineer working on <em>solaris10</em>-branded zones, Mengwei Jiao, reported a segfault of a 64-bit x86 test in a <em>solaris10</em>-branded zone. I immediately suspected my fix because the test was multithreaded, yet I was surprised because I thoroughly tested my fix and never encountered segfaults. Mengwei's test created and immediately canceled a thread using <a href="">pthread_create(3C)</a> and <a href="">pthread_cancel(3C)</a>. After spending hours debugging core dumps, I discovered that I forgot to consider signals while testing my fix.</p> <p>The test segfaulted because its new thread read a junk address from its stack in <code>s10_lwp_create_entry_point()</code> and jumped to it. Something clobbered the thread's stack and overwrote its true entry point's address. I noticed that the thread didn't start until its parent finished executing <code>pthread_cancel(3C)</code>, so I suspected that the delivery of the <code>SIGCANCEL</code> signal clobbered the child's stack. It turned out that the child started in <code>s10_lwp_create_entry_point()</code> as expected but immediately jumped to <a href=""><code>sigacthandler()</code></a> in libc to process the <code>SIGCANCEL</code> signal. Such behavior might have been acceptable because the thread's true entry point's address was stored deep within the thread's stack (2KB from the top of the stack) and neither <code>sigacthandler()</code> nor any of the functions it invoked consumed much stack space, but <code>sigacthandler()</code> invoked <a href=""><code>memcpy(3C)</code></a> to copy a <a href=""><code>siginfo_t</code></a> structure and the dynamic linker hadn't yet loaded <code>memcpy(3C)</code> into the library's link map. Consequently, the thread executed <code>ld.so.1</code> routines in order to load <code>memcpy(3C)</code> and fill its associated PLT entry. Eventually the thread's stack grew large enough for <code>ld.so.1</code> to clobber the thread's true entry point's address, which produced the junk address that later led to the segfault.</p> <p>My final solution eliminated the use of new threads' stacks and instead stored entry points in new threads' <code>%r14</code> registers. Libc doesn't store any special initial values in new threads' <code>%r14</code> registers, so I was free to use <code>%r14</code>. Additionally, any <a href="">System V ABI</a>-conforming functions invoked by <code>s10_lwp_create_entry_point()</code> and <code>sigacthandler()</code> had to preserve <code>%r14</code> for <code>s10_lwp_create_entry_point()</code> (<code>%r14</code> is a <em>callee-saved register</em>), so it was impossible for such functions to clobber <code>%r14</code> as seen by <code>s10_lwp_create_entry_point()</code>.</p> <p>I also renamed <code>s10_lwp_create()</code> to <a href=""><code>s10_lwp_create_correct_fs()</code></a> and used a trick that I call <em>sysent table patching</em> to ensure that the brand library only causes <code>SYS_lwp_create</code> to force new threads to start at <a href=""><code>s10_lwp_create_entry_point()</code></a> after <a href=""><code>s10_lwp_private()</code></a> determines that the Solaris 10 environment's libc can't function properly when <code>%fs</code> is zero. The brand's emulation library accesses a global array called <a href=""><code>s10_sysent_table</code></a> to fetch system call handlers. An emulation function can change a system call's entry in the array in order to change the system call's handler. The emulation library invokes <a href=""><code>s10_lwp_create()</code></a> to emulate <code>SYS_lwp_create</code> by default, which simply hands the system call to the OpenSolaris kernel untouched. If <code>s10_lwp_private()</code> determines that new threads require nonzero <code>%fs</code> selector values, then it modifies <code>s10_sysent_table</code> so that <code>s10_lwp_create_correct_fs()</code> handles <code>SYS_lwp_create</code> system calls. <code>SYS_lwp_private</code> is only invoked while a process is single-threaded, so races between <code>s10_lwp_private()</code> and <code>SYS_lwp_create</code> are impossible.</p> <p>I encourage you to <a href="">download and install the latest version of OpenSolaris</a>, update it to build 127 or later (once the builds become available), and try <em>solaris10</em>-branded zones. Jerry and I would appreciate any feedback you might have, which you can send to us via the <a href="">zones-discuss</a> discussion forum on opensolaris.org. Remember that <em>solaris10</em>-branded zones are capable of hosting production environments even though they are still being developed.</p> <p>Enjoy!</p> SVR4 Packages Available for Solaris 10 Containers Jordan Vaughan 2009-07-15T15:33:56+00:00 2009-07-15T22:33:56+00:00 <p><a href="">Jerry Jelinek</a> recently posted SPARC and x86/x64 SVr4 packages on the <a href="">Solaris 10 Containers project page</a>. The packages contain the binaries that allow administrators to create and manage Solaris 10 Containers. (<a href="">See this post</a> for information about Solaris 10 Containers.) As of this writing, the packages are synced to ONNV build 118 and should be able to manage Solaris 10 Containers running S10U7.</p> <p>Please feel free to download and use the packages. However, please note that the technology behind Solaris 10 Containers is still in development. The binaries in the packages represent the technology as it currently stands.</p> <p>Please send us any comments that you might have via the <a href="">zones discussion forum on OpenSolaris.org</a>. Any feedback you can provide regarding bugs would be especially welcome because it would help us discover behaviors that will require emulation. :)</p> Solaris 10 Containers for OpenSolaris Jordan Vaughan 2009-05-07T18:43:38+00:00 2009-05-08T04:24:06+00:00 <p><em>Branded Zones/Containers</em> is a technology that allows Solaris system administrators to virtualize non-native operating system environments within <a href="">Solaris zones</a>, a lightweight OS-level (i.e., no hypervisor) virtualization technology that creates isolated application environments. (<a href="">Look here for more details.</a>) Brands exist for <a href="">Linux on OpenSolaris</a> and <a href="">Solaris 8 and 9 on Solaris 10</a>, but not Solaris 10 on OpenSolaris...until now.</p> <p>On April 23, <a href="">Jerry Jelinek</a> announced the <a href="">development of Solaris 10 containers on OpenSolaris.org</a>).</p> <p <code>truss</code>,).</p> <p>Jerry and I prepared screencast demos of archiving, installing, booting, and working within a Solaris 10 container for the upcoming <a href="">Community One West developer conference</a>. We couldn't decide whose narration was best suited for the demo, so we submitted two versions, one featuring my voice and the other featuring Jerry's voice. <a href="">Take a look at Jerry's demo</a> if you want to see the results (though you might have to download the flash video file because it might not fit within the preview window). We are considering producing more videos or blog posts (or both) as the technology evolves.</p> <p>For more information on Solaris 10 containers and zones/containers in general and how you can contribute to both, visit the <a href="">OpenSolaris.org zones community page</a> and the <a href="">Solaris 10 Brand/Containers project page at OpenSolaris.org</a>.</p> Non-Global Zone Hostid Emulation for Everyone! Jordan Vaughan 2009-03-05T13:45:48+00:00 2009-05-07T22:07:45+00:00 <p?<br /></p> <p <code>zonecfg</code>.</p> <p>Here is an example of zone-emulated hostid in action. Suppose that you create and boot a non-global zone without using the hostid emulation feature. Here is what you might see:</p> <blockquote> <godel>. Creating list of files to copy from the global zone. Copying <9718> files to the zone. Initializing zone product registry. Determining zone package initialization order. Preparing to initialize <1442> packages on the zone. Initialized <1442> packages on zone. Zone <godel> is initialized. The file </godel> contains a log of the zone installation. # zoneadm -z godel boot # zlogin godel "zonename && hostid" godel 83405c0b # zonename && hostid global 83405c0b </godel></pre> </blockquote> <p>The system's hostid is the same within the non-global zone and the global zone. Now specify a hostid for the zone via <code>zonecfg</code>, reboot the zone, and observe the results:</p> <blockquote> <pre># zonecfg -z godel set hostid=1337833f # zoneadm -z godel reboot # zlogin godel "zonename && hostid" godel 1337833f # zonename && hostid global 83405c0b </pre> </blockquote> <p>You can specify any 32-bit hexadecimal hostid for a non-global zone except <code>0xffffffff</code> and the hostid will take effect on subsequent boots of the non-global zone. ('Boots' means boots from the global zone via <code>zoneadm</code>, as in "<code>zoneadm -z <zone_name> boot</code>" or "<code>zoneadm -z <zone_name> reboot</code>".) The hostid emulation feature is available for all native-based brands (<code>native</code>, <code>ipkg</code>, <code>cluster</code>, etc.); however, Linux containers (i.e., <code>lx</code>-branded zones) do not support hostid emulation via <code>zonecfg</code>. To emulate hostids in Linux containers, modify the <code>/etc/hostid</code> file within the container.</p> <p>Migrating non-global zones with legacy hostid-bound licensed software across physical hosts is a cakewalk. Detach the zone, migrate it to the new system, attach it, specify the source system's hostid for the now-attached zone, boot the zone, and <i>viola!</i>, the licensed software still thinks it's on the old system.</p> Bug Wars: The Phantom Pool Bug Menace Jordan Vaughan 2008-12-01T19:03:28+00:00 2008-12-02T03:03:28+00:00 <p>Most systems programmers like to swap tales about tackling tricky or annoying bugs. Now, after a month of pulling my hair out, I can share my first "bug war" story as a systems programmer.</p> <p>A somewhat long time ago, in a Sun Microsystems office not too far away, I occasionally encountered system panics of the following form while running my bug fixes through the standard zones test suite in snv_96:</p> <blockquote> <pre>assertion failed: pool->pool_ref == 0, file: ../../common/os/pool.c, line: 454</pre> </blockquote> <p><a href="">src.opensolaris.org</a> located the assertion in <code>pool_pool_destroy()</code>:</p> <blockquote> <pre>428 /\* 429 \* Destroy specified pool, and rebind all processes in it 430 \* to the default pool. 431 \*/ 432 static int 433 pool_pool_destroy(poolid_t poolid) 434 { 435 pool_t \*pool; 436 int ret; 437 438 ASSERT(pool_lock_held()); 439 440 if (poolid == POOL_DEFAULT) 441 return (EINVAL); 442 if ((pool = pool_lookup_pool_by_id(poolid)) == NULL) 443 return (ESRCH); 444 ret = pool_do_bind(pool_default, P_POOLID, poolid, POOL_BIND_ALL); 445 if (ret == 0) { 446 struct destroy_zone_arg dzarg; 447 448 dzarg.old = pool; 449 dzarg.new = pool_default; 450 mutex_enter(&cpu_lock); 451 ret = zone_walk(pool_destroy_zone_cb, &dzarg); 452 mutex_exit(&cpu_lock); 453 ASSERT(ret == 0); 454 ASSERT(pool->pool_ref == 0); 455 (void) nvlist_free(pool->pool_props); 456 id_free(pool_ids, pool->pool_id); 457 pool->pool_pset->pset_npools--; 458 list_remove(&pool_list, pool); 459 pool_count--; 460 pool_pool_mod = gethrtime(); 461 kmem_free(pool, sizeof (pool_t)); 462 } 463 return (ret); 464 } </pre> </blockquote> <p>Line 454 caused the panic, so something was referring to the dying pool. Looking at the source code further, I discerned that <code>pool_do_bind()</code> was supposed to rebind all processes within the pool specified by <code>poolid</code> to the pool to which the first function argument referred (in this case, the default pool). The zone callback invoked on like 451 only set a zone's pool and processor set associations; it didn't rebind processes. <code>pool_do_bind()</code> returned zero after completing successfully, so the problem was that <code>pool_do_bind()</code> indicated that it successfully rebound all processes from the dying pool to the default pool when in fact it sometimes did not. </p> <p> I dug around the source tree and determined that a process' pool association (as indicated by the <code>proc_t</code> structure's <code>p_pool</code> field) only changed when a new system process spawned (that is, a process without a parent spawned; see <code>newfork()</code> in uts/common/os/fork.c), a process forked (<code>cfork()</code> in uts/common/os/fork.c), a process exited (<code>proc_exit()</code> in uts/common/os/exit.c), or a process was bound to a pool via <code>pool_do_bind()</code>. </p> <p>A gentle introduction to pool rebinding is necessary before I proceed further. When a process forks or exits, it enters the <i>pool barrier</i>, which encloses operations that are sensitive to changes in the process' pool binding. (In other words, a process' pool binding should not change while the process is within the pool barrier.) The pool barrier is sandwiched between invocations of <code>pool_barrier_enter()</code> and <code>pool_barrier_exit()</code> (see uts/common/os/fork.c:211-224,229-236,299-309,525-527,668-672 and uts/common/os/exit.c:489-493,590-605).</p> <p>When a pool (call it <i>P</i>) is destroyed, <code>pool_do_bind()</code> (uts/common/os/pool.c:1239-1647) is invoked to rebind all processes within <i>P</i> to the default pool. <code>pool_do_bind()</code> creates an array of <code>proc_t</code> pointers called <code>procs</code> that can hold twice the number of active processes. <code>procs</code> will hold pointers to all processes that will be rebound to the default pool. Once <code>procs</code> is allocated, <code>pool_do_bind()</code> grabs <code>pidlock</code> and enters what I will call the <i>first loop</i>, which adds all active processes bound to <i>P</i> to <code>procs</code> (see pool.c:1359:1432). These processes are also marked with the <code>PBWAIT</code> flag (pool.c:1408), which causes them to block in <code>pool_barrier_enter()</code> and <code>pool_barrier_exit()</code>, effectively stopping them from entering or exiting the pool barrier. Once the first loop is done, <code>pool_do_bind()</code> releases <code>pidlock</code> and waits until all processes in <code>procs</code> that were within the pool barrier when marked with <code>PBWAIT</code> to block at <code>pool_barrier_exit()</code>. This guarantees that pool rebinding won't occur while the targeted processes are sensitive to pool rebinding.</p> <p>Once the thread in <code>pool_do_bind()</code> resumes execution, it enters what I will call the <i>second loop</i>, which checks if the children of the processes in <code>procs</code> should be added to <code>procs</code>. This loop catches any processes that were spawned via <code>cfork()</code> while the thread in <code>pool_do_bind()</code> waited for marked processes to block at <code>pool_barrier_exit()</code>. (Note that a newly-spawned process' LWPs are not started until the parent process exits the pool barrier.) Once the second loop completes, <code>pool_do_bind()</code> rebinds the processes in <code>procs</code> to the default pool, adjusts <i>P</i>'s reference count, and wakes all processes in <code>procs</code> that are blocked within <code>pool_barrier_enter()</code> and <code>pool_barrier_exit()</code>. (Note that <code>cfork()</code> and <code>proc_exit()</code> also adjust pool reference counts when processes fork or exit.)</p> <p>Now, back to my story. I turned to MDB to give me some clues as to what was going wrong: </p> <blockquote> <pre>> ::status debugging crash dump vmcore.4 (64-bit) from balaclava operating system: 5.11 onnv-bugfix (i86pc) panic message: assertion failed: pool->pool_ref == 0, file: ../../common/os/pool.c, line: 454 dump content: all kernel and user pages > ::panicinfo cpu 3 thread ffffff03afa72400 message assertion failed: pool->pool_ref == 0, file: ../../common/os/pool.c, line: 454 rdi fffffffffbf31690 rsi ffffff0008017988 rdx fffffffffbf311b0 rcx 1c6 r8 ffffff00080179c0 r9 20 rax 0 rbx 1c6 rbp ffffff00080179b0 r10 ffffff00080178d0 r10 ffffff00080178d0 r11 ffffff01ce469680 r12 fffffffffbf311b0 r13 fffffffffbf31018 r14 fffffffffbc5b4d8 r15 3 fsbase 0 gsbase ffffff01d243b580 ds 4b es 4b fs 0 gs 1c3 trapno 0 err 0 rip fffffffffb84be90 cs 30 rflags 246 rsp ffffff00080178c8 ss 38 gdt_hi 0 gdt_lo f00001ef idt_hi 0 idt_lo 10000fff ldt 0 task 70 cr0 8005003b cr2 feda43a8 cr3 e49a3000 cr4 6f8 </pre> </blockquote> <p>The faulting thread had an address of <code>ffffff03afa72400</code> and was on CPU 3.</p> <blockquote> <pre>> ffffff03afa72400::findstack -v stack pointer for thread ffffff03afa72400: ffffff00080178c0 ffffff00080179b0 panic+0x9c() ffffff0008017a00 assfail+0x7e(fffffffffbf31018, fffffffffbf311b0, 1c6) ffffff0008017a50 pool_pool_destroy+0x16b(47) ffffff0008017aa0 pool_destroy+0x40(2, 8067ce8, 47) ffffff0008017ca0 pool_ioctl+0xa32(a300000000, 3, 8064ca0, 102003, ffffff05de08fc48, ffffff0008017e8c) ffffff0008017ce0 cdev_ioctl+0x48(a300000000, 3, 8064ca0, 102003, ffffff05de08fc48, ffffff0008017e8c) ffffff0008017d20 spec_ioctl+0x86(ffffff03ac78f700, 3, 8064ca0, 102003, ffffff05de08fc48, ffffff0008017e8c, 0) ffffff0008017da0 fop_ioctl+0x7b(ffffff03ac78f700, 3, 8064ca0, 102003, ffffff05de08fc48, ffffff0008017e8c, 0) ffffff0008017eb0 ioctl+0x174(3, 3, 8064ca0) ffffff0008017f00 sys_syscall32+0x1fc() </pre> </blockquote> <p>As expected, the failed assertion occurred in <code>pool_pool_destroy()</code> after <code>pool_do_bind()</code> was called. Pool 0x47 (a non-default pool) was being destroyed. </p> <blockquote> <pre>> ::cpuinfo ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD PROC 0 fffffffffbc38fb0 1f 1 0 10 yes no t-0 ffffff03ad328700 ppdmgr 1 ffffff01d23a1580 1f 0 0 60 no no t-0 ffffff01e9f98000 ppdmgr 2 ffffff01d23ce580 1f 0 0 60 no no t-0 ffffff01d2a01560 mesa_vendor_sele 3 fffffffffbc40600 1b 1 0 41 no no t-0 ffffff03afa72400 pooladm </pre> </blockquote> <p>pooladm was responsible for the panic.</p> <blockquote> <pre>> pool_list::walk list | ::print -a pool_t { ffffff01ce887140 pool_id = 0 ffffff01ce887144 pool_ref = 0x5f ffffff01ce887148 pool_link = { ffffff01ce887148 list_next = 0xffffff01d0fb0b48 ffffff01ce887150 list_prev = pool_list+0x10 } ffffff01ce887158 pool_props = 0xffffff01d28d36d0 ffffff01ce887160 pool_pset = 0xffffff01cf500508 } { ffffff01d0fb0b40 pool_id = 0x47 ffffff01d0fb0b44 pool_ref = 0x1 ffffff01d0fb0b48 pool_link = { ffffff01d0fb0b48 list_next = pool_list+0x10 ffffff01d0fb0b50 list_prev = 0xffffff01ce887148 } ffffff01d0fb0b58 pool_props = 0xffffff01d4d759c8 ffffff01d0fb0b60 pool_pset = 0xffffff01cf500508 } </pre> </blockquote> <p>There were two pools. The first was the default pool, which appeared to have been consistent when the system panicked. The second pool was being destroyed. However, its reference count was one when the assertion failed. Everything else in the guilty pool appeared to have been consistent when the system panicked.</p> <blockquote> <pre>> ::walk proc | ::print -a proc_t p_pool ! grep ffffff01d0fb0b40 ffffff07e7586fa0 p_pool = 0xffffff01d0fb0b40 > ::offsetof proc_t p_pool offsetof (proc_t, p_pool) = 0xc00 > ffffff07e7586fa0-0xc00=X e75863a0 </pre> </blockquote> <p>There was exactly one process that referred to the guilty pool: <code>ffffff07e75863a0</code>.</p> <blockquote> <pre>> ffffff07e75863a0::print proc_t p_zone p_zone = 0xffffff01edf1bf00 > ::walk zone fffffffffbfb1180 ffffff01edf1bf00 > 0xffffff01edf1bf00::print zone_t zone_name zone_name = 0xffffff01e6eec200 "jj1" > 0xffffff01edf1bf00::print zone_t zone_pool zone_pool = 0xffffff01ce887140 </pre> </blockquote> <p>The non-global zone <code>jj1</code> contained the guilty process. The global zone was the only other zone in the system. Notice that <code>jj1</code> was associated with the default pool, not the pool that was being destroyed, when the system panicked. So all but one of the processes within <code>jj1</code> and <code>jj1</code> itself were rebound to the default pool.</p> <blockquote> <pre>> ffffff07e75863a0::ptree fffffffffbc36f70 sched ffffff01d25c33a0 init ffffff07e3d2a3a0 svc.startd ffffff07e5b563a0 ppd-cache-update ffffff07e5c4f3a0 ppdmgr ffffff07e60aa3a0 ppdmgr ffffff07e70323a0 ppdmgr ffffff07e731c3a0 ppdmgr ffffff07e75863a0 ppdmgr </pre> </blockquote> <p><code>cfork()</code> forked the guilty process. It was not created via <code>newproc()</code><code></code>, for if it were, then it would not have had a parent process (<code>ffffff07e731c3a0</code>).</p> <blockquote> <pre>>⁞ ffffff07e731c3a0::print proc_t p_pool p_pool = 0xffffff01ce887140 > ffffff07e731c3a0::print proc_t p_zone p_zone = 0xffffff01edf1bf00 > ffffff07e70323a0::print proc_t p_pool p_pool = 0xffffff01ce887140 > ffffff07e70323a0::print proc_t p_zone p_zone = 0xffffff01edf1bf00 </pre> </blockquote> <p>Both the parent and grandparent of the guilty process were within zone <code>jj1</code> and both referred to the default pool when the system panicked.</p>I started to look for interleavings of code from <code>cfork()</code>, <code>proc_exit()</code>, and <code>pool_do_bind()</code> that would lead to inconsistent states, going so far as to create diagrams illustrating which locks were held (and the order in which they were acquired) at various points in the aforementioned functions, but found nothing that suggested a race condition. I struggled to understand the fork and exit code (a nontrivial task) to see if any of the invoked subroutines were generating race conditions, but did not find anything. A fellow engineer suggested three or four possible sources of the bug, including a three-way race between the aforementioned functions, but a little investigation and a few counterexamples put his theories to rest. I made no progress for at least two weeks. <p>My frustrations were about to drive me insane when I stumbled upon what I thought was the source of the bug. The problem was in the second loop, in pool.c:1491-1500:</p> <blockquote> <pre>1491 mutex_enter(&p->p_lock); 1492 /\* 1493 \* Skip processes in local zones if we're not binding 1494 \* zones to pools (P_ZONEID). Skip kernel processes also. 1495 \*/ 1496 if ((!INGLOBALZONE(p) && idtype != P_ZONEID) || 1497 p->p_flag & SSYS) { 1498 mutex_exit(&p->p_lock); 1499 continue; 1500 } </pre> </blockquote> <p>The problem was on line 1496. The first disjunct of this conditional statement made <code>pool_do_bind()</code> skip child processes that were not in the global zone. (<code>idtype == P_POOLID</code> [<code>idtype</code> is one of <code>pool_do_bind()</code>'s parameters] when a pool is being destroyed.) Therefore, if a process in a non-global zone was forking (but had not created its child's <code>proc_t</code> structure yet via <code>getproc()</code> [fork.c:907-1177, esp.1055-1067]) when <code>pool_do_bind()</code> went through the first loop, then the second loop would never have added the process' child to <code>procs</code>. Thus the child process would have remained bound to pool <i>P</i>, resulting in the failed assertion.</p> <p>Here is a sample execution that illustrates this bug (thread <i>A</i> is the thread that is destroying <i>P</i> while thread <i>B</i> is executing <code>cfork()</code>):</p> <ol> <li>A enters <code>pool_do_bind()</code>.</li> <li>B enters <code>cfork()</code>.</li> <li>B enters the pool barrier.</li> <li>B enters <code>getproc()</code>.</li> <li>B allocates and zeroes the child proc's <code>proc_t</code> structure.</li> <li>A acquires <code>pidlock</code>, adds B's proc to <code>procs</code>, and releases <code>pidlock</code> (i.e., A goes through the first loop).</li> <li>B adds the child proc to the process tree and the active process list (both of which require B to grab <code>pidlock</code>).</li> <li>B attempts to exit the pool barrier via <code>pool_barrier_exit()</code>, but <code>PBWAIT</code> is set in its <code>proc_t</code>'s <code>p_poolflag</code> field, so it wakes A and blocks, waiting for A to signal it.</li> <li>A grabs <code>pidlock</code> and examines all processes in <code>procs</code> (i.e., A goes through the second loop).</li> <li>While examining <code>procs</code>, A looks at B's child process, sees that it is not in the global zone and <code>idtype</code> is not <code>P_ZONEID</code> (it is <code>P_POOLID</code>), and consequently does not add B's child process to <code>procs</code>.</li> <li>A rebinds all processes in <code>procs</code> and decrements the old pool's (<i>P</i>'s) reference count accordingly.</li> <li>A signals (wakes) all processes in <code>procs</code>.</li> <li>B wakes up.</li> <li>B turns its child process's LWPs loose.</li> </ol> <p>The solution was simple: extend the first disjunct of the above conditional statement (pool.c:1496) with another conjunct, <code>idtype != P_POOLID</code>, so that the first disjunct reads "<code>!INGLOBALZONE(p) && idtype != P_ZONEID && idtype != P_POOLID</code>". That way emptying pools of processes (e.g., during pool destruction) will not skip new processes in non-global zones.</p> <p>I thought, "At last, I nailed it!" but my success was short-lived. The same assertion failed after a few more runs through the zones test suite. I jumped back into MDB and examined the new dump, which was the same as the old dump (see above) with two exceptions: First, the guilty process had descendants in the new dump. That meant that the guilty process was not being spawned when <code>pool_do_bind()</code> executed. If it were being spawned when <code>pool_do_bind()</code> started the first loop, then its parent process would have blocked at <code>pool_barrier_exit()</code> and the child's LWPs would not have started until <code>pool_do_bind()</code> finished executing, which would have given the child no opportunity to spawn descendants.</p> <p>Furthermore, if the child was being spawned when <code>pool_do_bind()</code> started the first loop and the child started spawning descendants between the time when the thread executing <code>pool_do_bind()</code> returned to <code>pool_pool_destroy()</code> and when the thread encountered the failed assertion, the child's descendants would have been bound to the child's pool, making the pool's reference count greater than one. But the pool's reference count was one, so the child was not being spawned. (One might claim that the descendants could have rebound themselves to other pools before the assertion was made, but that was impossible because the pool lock, which prohibited concurrent pool operations, was held while <code>pool_pool_destroy()</code> and <code>pool_do_bind()</code> were executed.)</p> <p>Second, the guilty process was executing a subroutine called by <code>relvm()</code> (which was inside the pool barrier) within <code>proc_exit()</code>. That fact led me to think that some interaction between <code>proc_exit()</code> and <code>pool_do_bind()</code> was responsible for the bug.</p> <p>Further source code analysis did not reveal anything, so I scattered over twenty static DTrace probes throughout <code>cfork()</code>, <code>proc_exit()</code>, and <code>pool_do_bind()</code> in a desperate effort to acquire more useful information. After taking a few more dumps, adjusting the probes, and parsing the DTrace buffers stored in the dumps, I acquired a vital clue: a process that was exiting (via <code>proc_exit()</code>) and had entered (but not exited) the pool barrier was not being caught by the first loop in <code>pool_do_bind()</code>. Curious, I looked closely at the code surrounding <code>pool_barrier_enter()</code> in <code>proc_exit()</code> and the first loop in <code>pool_do_bind()</code>. I noticed nothing out of the ordinary, so I thoguht, "Great, I might as well reexamine functions called by <code>proc_exit()</code> and <code>pool_do_bind()</code> that I thought were correct." So I reexamined <code>procinset()</code> (which <code>pool_do_bind()</code> used in both the first and second loops to determine if a given process was bound to the pool that was being destroyed) and saw the following (uts/common/os/procset.c):</p> <blockquote> <pre>270 /\* 271 \* procinset returns 1 if the process pointed to by pp is in the process 272 \* set specified by psp, otherwise 0 is returned. A process that is 273 \* exiting, by which we mean that its p_tlist is NULL, cannot belong 274 \* to any set; pp's p_lock must be held across the call to this function. 275 \* The caller should ensure that the process does not belong to the SYS 276 \* scheduling class. 277 \* 278 \* This function expects to be called with a valid procset_t. 279 \* The set should be checked using checkprocset() before calling 280 \* this function. 281 \*/ 282 int 283 procinset(proc_t \*pp, procset_t \*psp) 284 { 285 int loperand = 0; 286 int roperand = 0; 287 int lwplinproc = 0; 288 int lwprinproc = 0; 289 kthread_t \*tp = proctot(pp); 290 291 ASSERT(MUTEX_HELD(&pp->p_lock)); 292 293 if (tp == NULL) 294 return (0); 295 296 switch (psp->p_lidtype) { </pre> </blockquote> <p>Notice lines 293-294. If a process' thread list was <code>NULL</code>, then <code>procinset()</code> indicated failure (the process was not in the process set). Now look at the code surrounding <code>pool_barrier_enter()</code> in <code>proc_exit()</code>:</p> <blockquote> <pre>470 mutex_enter(&p->p_lock); 471 472 /\* 473 \* Clean up any DTrace probes associated with this process. 474 \*/ 475 if (p->p_dtrace_probes) { 476 ASSERT(dtrace_fasttrap_exit_ptr != NULL); 477 dtrace_fasttrap_exit_ptr(p); 478 } 479 480 while ((tmp_id = p->p_itimerid) != 0) { 481 p->p_itimerid = 0; 482 mutex_exit(&p->p_lock); 483 (void) untimeout(tmp_id); 484 mutex_enter(&p->p_lock); 485 } 486 487 lwp_cleanup(); 488 489 /\* 490 \* We are about to exit; prevent our resource associations from 491 \* being changed. 492 \*/ 493 pool_barrier_enter(); 494 495 /\* 496 \* Block the process against /proc now that we have really 497 \* acquired p->p_lock (to manipulate p_tlist at least). 498 \*/ 499 prbarrier(p); 500 501 #ifdef SUN_SRC_COMPAT 502 if (code == CLD_KILLED) 503 u.u_acflag |= AXSIG; 504 #endif 505 sigfillset(&p->p_ignore); 506 sigemptyset(&p->p_siginfo); 507 sigemptyset(&p->p_sig); 508 sigemptyset(&p->p_extsig); 509 sigemptyset(&t->t_sig); 510 sigemptyset(&t->t_extsig); 511 sigemptyset(&p->p_sigmask); 512 sigdelq(p, t, 0); 513 lwp->lwp_cursig = 0; 514 lwp->lwp_extsig = 0; 515 p->p_flag &= ~(SKILLED | SEXTKILLED); 516 if (lwp->lwp_curinfo) { 517 siginfofree(lwp->lwp_curinfo); 518 lwp->lwp_curinfo = NULL; 519 } 520 521 t->t_proc_flag |= TP_LWPEXIT; 522 ASSERT(p->p_lwpcnt == 1 && p->p_zombcnt == 0); 523 prlwpexit(t); /\* notify /proc \*/ 524 lwp_hash_out(p, t->t_tid); 525 prexit(p); 526 527 p->p_lwpcnt = 0; 528 p->p_tlist = NULL; 529 sigqfree(p); 530 term_mstate(t); 531 p->p_mterm = gethrtime(); 532 533 exec_vp = p->p_exec; 534 execdir_vp = p->p_execdir; 535 p->p_exec = NULLVP; 536 p->p_execdir = NULLVP; 537 mutex_exit(&p->p_lock); </pre> </blockquote> <p>Notice anything fishy?</p> <p><code>proc_exit()</code> set the exiting process' <code>p_tlist</code> field to <code>NULL</code> after entering the pool barrier but before releasing the process' <code>p_lock</code> (exit.c:528), which <code>pool_do_bind()</code> grabbed during the first loop before invoking <code>procinset()</code> (pool.c:1367-1378). So if a process entered the pool barrier but did not exit and another process attempted to destroy the pool, then <code>procinset()</code> would have informed the latter process that the former process was not bound to the pool that was being destroyed. Thus the thread executing <code>pool_do_bind()</code> would have skipped the exiting process, which would have remain bound to the dying pool. Hence the failed assertion.</p> <p>(It is funny that I did not notice the comment in procset.c:272-274 when I first examined <code>procinset()</code>. It would have saved me much grief.)<br /></p> <p>The following sample execution will illustrate my point. Suppose that thread <i>A</i> belongs to a process that is bound to a non-default pool <i>P</i>. Suppose further that <i>A</i> is in the middle of <code>proc_exit()</code> and that some other thread <i>B</i> (in a different process) is destroying <i>P</i> and is in the middle of <code>pool_do_bind()</code>. Then the following might happen:</p> <ol> <li>B constructs <code>procs</code> and grabs <code>pidlock</code>. (pool.c:1333-1357)</li> <li>B begins checking each process in the active process list (i.e., it starts going through the first loop). (pool.c:1359-1366)</li> <li>B is context-switched with A.</li> <li>A grabs its process' <code>p_lock</code> and enters the pool barrier. (exit.c:470-493)</li> <li>A sets its process' <code>p_tlist</code> field to <code>NULL</code>. (exit.c:528)</li> <li>A releases its process' <code>p_lock</code>. (exit.c:537)</li> <li>A is context-switched with B.</li> <li>B grabs A's process' <code>p_lock</code>. (pool.c:1367)</li> <li>B calls <code>procinset()</code> and sees a return value of zero. (pool.c:1373)</li> <li>B skips A's process and does not add it to <code>procs</code>. (pool.c:1376-1377)</li> <li>B finishes <code>pool_do_bind()</code> successfully and returns to <code>pool_pool_destroy()</code>.</li> <li>B asserts that the targeted pool's reference count is zero and fails. [pool.c:454]</li> </ol> <p>Thus A's process would not be rebound to the default pool and the assertion would fail.</p> <p>The second loop in <code>pool_do_bind()</code> did not examine the missed process (even if its parent were added to <code>procs</code> during the first loop) because the second loop also used <code>procinset()</code> to determine if child processes were bound the the targeted pool. So <code>pool_do_bind()</code> was incapable of catching an exiting process as described above.</p> <p>Further examination of <code>procinset()</code> revealed that a process' <code>p_tlist</code> field was used only when the <code>idtype</code> argument was <code>P_CID</code>. Thus the most straightforward fix was to check that <code>p_tlist != NULL</code> iff <code>idtype == P_CID</code>. I took this approach, which (in addition to a few minor changes elsewhere) worked beautifully. The bug never appeared again, even when I executed my own test that created several pools, bound one of them to a running zone, and destroyed the pools in a tight loop for days.</p> <p>Thus I found the two causes of the bug in roughly a month. You can imagine what a sigh of relief I gave when I verified my fix!</p> <p>This episode has one moral: <i>RTFC</i> (<i>Read The F\*%!@#& Comments</i>)!<br /></p> The Best Way to Learn Kernel Programming Is to Do It Yourself Jordan Vaughan 2008-07-30T16:12:02+00:00 2008-07-31T08:10:29+00:00 <p.</p> <p>Following my tradition of sticking 'KANE' into my project names in honor of <a href="" target="_blank" title="Wikipedia entry on Kane">Kane</a> from <a href="" target="_blank" title="Wikipedia entry on Command and Conquer">the Command & Conquer series of video games</a>, )<br /></p> <p>I found a couple of websites that might be helpful for amateur kernel hackers like me:</p> <ul> <li><a href="" target="_blank" title="Bona Fide OS Development News">Bona Fide OS Development News</a></li> <li><a href="" target="_blank" title="OS Development">OS Development</a></li> <li><a href="" target="_blank" title="OS Development"></a><a href="" target="_blank" title="OSRC: The Operating System Resource Center">OSRC: The Operating System Resource Center</a></li> </ul> <p>Typing "osdev" into Google search yielded a fair number of OS developer sites, including the ones above.</p> <p>I'm sure that I'll be in for a long but profitable experience. :-) </p> A Lunchware License? Jordan Vaughan 2008-07-29T16:56:18+00:00 2008-07-30T00:02:47+00:00 <div align="left"> <p <a href="">the University of Illinois/NCSA Open Source License</a>. However, I tend to favor BSD-style licenses ("permissive licenses,") for the following reasons:</p> <ol> <li>For me, "free software" means that the licensed code can be incorporated into any application or library, including proprietary projects. Proprietary software should be able to incorporate and modify free software and remain proprietary. BSD-style licenses permit such incorporation: the GPL does not.</li> <li>BSD-style licenses are pithy: the GPL is not.</li> </ol> <p>To complicate matters, I recently stumbled across <a href="">a Wikipedia article on "beerware"</a>:</p> <blockquote> <pre>/\* \*. \*/</pre> </blockquote> <p>I doubt that this license will be widely used. Whatever. :-)</p> </div>
http://blogs.oracle.com/lunchware/feed/entries/atom
CC-MAIN-2014-15
refinedweb
6,744
51.18
The QThreadPool class manages a collection of QThreads. More... #include <QThreadPool> Inherits QObject. Note: All the functions in this class are thread-safe. This class was introduced in Qt 4.4. QtConcurrent::run() or the other Qt Concurrent APIs for higher level alternatives. See also QRunnable. This property represents the number of active threads in the thread pool. Note: It is possible for this function to return a value that is greater than maxThreadCount(). See reserveThread() for more details. Access functions: See also reserveThread() and releaseThread().: This property represents the maximum number of threads used by the thread pool. Note: The thread pool will always use at least 1 thread, even if maxThreadCount limit is zero or negative. The default maxThreadCount is QThread::idealThreadCount(). Access functions: Constructs a thread pool with the given parent. Destroys the QThreadPool. This function will block until all runnables have been completed. Returns the global QThreadPool instance.. Waits for each thread to exit and removes all threads from the thread pool.
http://doc.trolltech.com/4.5-snapshot/qthreadpool.html
crawl-003
refinedweb
166
70.8
Salt Cloud runs on a module system similar to the main Salt project. The modules inside saltcloud exist in the salt/cloud/clouds directory of the salt source. There are two basic types of cloud modules. If a cloud host is supported by libcloud, then using it is the fastest route to getting a module written. The Apache Libcloud project is located at: Not every cloud host is supported by libcloud. Additionally, not every feature in a supported cloud host is necessarily supported by libcloud. In either of these cases, a module can be created which does not rely on libcloud. The following functions are required by all driver modules, whether or not they are based on libcloud. This function determines whether or not to make this cloud module available upon execution. Most often, it uses get_configured_provider() to determine if the necessary configuration has been set up. It may also check for necessary imports, to decide whether to load the module. In most cases, it will return a True or False value. If the name of the driver used does not match the filename, then that name should be returned instead of True. An example of this may be seen in the Azure module: Writing a cloud module based on libcloud has two major advantages. First of all, much of the work has already been done by the libcloud project. Second, most of the functions necessary to Salt have already been added to the Salt Cloud project. The most important function that does need to be manually written is the create() function. This is what is used to request a virtual machine to be created by the cloud host, wait for it to become available, and then (optionally) log in and install Salt on it. A good example to follow for writing a cloud driver module based on libcloud is the module provided for Linode: The basic flow of a create() function is as follows: At various points throughout this function, events may be fired on the Salt event bus. Four of these events, which are described below, are required. Other events may be added by the user, where appropriate. When the create() function is called, it is passed a data structure called vm_. This dict contains a composite of information describing the virtual machine to be created. A dict called __opts__ is also provided by Salt, which contains the options used to run Salt Cloud, as well as a set of configuration and environment variables. The first thing the create() function must do is fire an event stating that it has started the create process. This event is tagged salt/cloud/<vm name>/creating. The payload contains the names of the VM, profile, and provider. A set of kwargs is then usually created, to describe the parameters required by the cloud host to request the virtual machine. An event is then fired to state that a virtual machine is about to be requested. It is tagged as salt/cloud/<vm name>/requesting. The payload contains most or all of the parameters that will be sent to the cloud host. Any private information (such as passwords) should not be sent in the event. After a request is made, a set of deploy kwargs will be generated. These will be used to install Salt on the target machine. Windows options are supported at this point, and should be generated, even if the cloud host does not currently support Windows. This will save time in the future if the host does eventually decide to support Windows. An event is then fired to state that the deploy process is about to begin. This event is tagged salt/cloud/<vm name>/deploying. The payload for the event will contain a set of deploy kwargs, useful for debugging purposed. Any private data, including passwords and keys (including public keys) should be stripped from the deploy kwargs before the event is fired. If any Windows options have been passed in, the salt.utils.cloud.deploy_windows() function will be called. Otherwise, it will be assumed that the target is a Linux or Unix machine, and the salt.utils.cloud.deploy_script() will be called. Both of these functions will wait for the target machine to become available, then the necessary port to log in, then a successful login that can be used to install Salt. Minion configuration and keys will then be uploaded to a temporary directory on the target by the appropriate function. On a Windows target, the Windows Minion Installer will be run in silent mode. On a Linux/Unix target, a deploy script ( bootstrap-salt.sh, by default) will be run, which will auto-detect the operating system, and install Salt using its native package manager. These do not need to be handled by the developer in the cloud module. The salt.utils.cloud.validate_windows_cred() function has been extended to take the number of retries and retry_delay parameters in case a specific cloud host has a delay between providing the Windows credentials and the credentials being available for use. In their create() function, or as a a sub-function called during the creation process, developers should use the win_deploy_auth_retries and win_deploy_auth_retry_delay parameters from the provider configuration to allow the end-user the ability to customize the number of tries and delay between tries for their particular host. After the appropriate deploy function completes, a final event is fired which describes the virtual machine that has just been created. This event is tagged salt/cloud/<vm name>/created. The payload contains the names of the VM, profile, and provider. Finally, a dict (queried from the provider) which describes the new virtual machine is returned to the user. Because this data is not fired on the event bus it can, and should, return any passwords that were returned by the cloud host. In some cases (for example, Rackspace), this is the only time that the password can be queried by the user; post-creation queries may not contain password information (depending upon the host). A number of other functions are required for all cloud hosts. However, with libcloud-based modules, these are all provided for free by the libcloudfuncs library. The following two lines set up the imports: from salt.cloud.libcloudfuncs import * # pylint: disable=W0614,W0401 from salt.utils import namespaced_function And then a series of declarations will make the necessary functions available within the cloud module. get_size = namespaced_function(get_size, globals()) get_image = namespaced_function(get_image, globals()) avail_locations = namespaced_function(avail_locations, globals()) avail_images = namespaced_function(avail_images, globals()) avail_sizes = namespaced_function(avail_sizes, globals()) script = namespaced_function(script, globals()) destroy = namespaced_function(destroy, globals()) list_nodes = namespaced_function(list_nodes, globals()) list_nodes_full = namespaced_function(list_nodes_full, globals()) list_nodes_select = namespaced_function(list_nodes_select, globals()) show_instance = namespaced_function(show_instance, globals()) If necessary, these functions may be replaced by removing the appropriate declaration line, and then adding the function as normal. These functions are required for all cloud modules, and are described in detail in the next section. In some cases, using libcloud is not an option. This may be because libcloud has not yet included the necessary driver itself, or it may be that the driver that is included with libcloud does not contain all of the necessary features required by the developer. When this is the case, some or all of the functions in libcloudfuncs may be replaced. If they are all replaced, the libcloud imports should be absent from the Salt Cloud module. A good example of a non-libcloud driver is the DigitalOcean driver: create()Function¶ The create() function must be created as described in the libcloud-based module documentation. This function is only necessary for libcloud-based modules, and does not need to exist otherwise. This function is only necessary for libcloud-based modules, and does not need to exist otherwise. This function returns a list of locations available, if the cloud host uses multiple data centers. It is not necessary if the cloud host uses only one data center. It is normally called using the --list-locations option. salt-cloud --list-locations my-cloud-provider This function returns a list of images available for this cloud provider. There are not currently any known cloud providers that do not provide this functionality, though they may refer to images by a different name (for example, "templates"). It is normally called using the --list-images option. salt-cloud --list-images my-cloud-provider This function returns a list of sizes available for this cloud provider. Generally, this refers to a combination of RAM, CPU, and/or disk space. This functionality may not be present on some cloud providers. For example, the Parallels module breaks down RAM, CPU, and disk space into separate options, whereas in other providers, these options are baked into the image. It is normally called using the --list-sizes option. salt-cloud --list-sizes my-cloud-provider This function builds the deploy script to be used on the remote machine. It is likely to be moved into the salt.utils.cloud library in the near future, as it is very generic and can usually be copied wholesale from another module. An excellent example is in the Azure driver. This function irreversibly destroys a virtual machine on the cloud provider. Before doing so, it should fire an event on the Salt event bus. The tag for this event is salt/cloud/<vm name>/destroying. Once the virtual machine has been destroyed, another event is fired. The tag for that event is salt/cloud/<vm name>/destroyed. This function is normally called with the -d options: salt-cloud -d myinstance This function returns a list of nodes available on this cloud provider, using the following fields: No other fields should be returned in this function, and all of these fields should be returned, even if empty. The private_ips and public_ips fields should always be of a list type, even if empty, and the other fields should always be of a str type. This function is normally called with the -Q option: salt-cloud -Q All information available about all nodes should be returned in this function. The fields in the list_nodes() function should also be returned, even if they would not normally be provided by the cloud provider. This is because some functions both within Salt and 3rd party will break if an expected field is not present. This function is normally called with the -F option: salt-cloud -F This function returns only the fields specified in the query.selection option in /etc/salt/cloud. Because this function is so generic, all of the heavy lifting has been moved into the salt.utils.cloud library. A function to call list_nodes_select() still needs to be present. In general, the following code can be used as-is: def list_nodes_select(call=None): ''' Return a list of the VMs that are on the provider, with select fields ''' return salt.utils.cloud.list_nodes_select( list_nodes_full('function'), __opts__['query.selection'], call, ) However, depending on the cloud provider, additional variables may be required. For instance, some modules use a conn object, or may need to pass other options into list_nodes_full(). In this case, be sure to update the function appropriately: def list_nodes_select(conn=None, call=None): ''' Return a list of the VMs that are on the provider, with select fields ''' if not conn: conn = get_conn() # pylint: disable=E0602 return salt.utils.cloud.list_nodes_select( list_nodes_full(conn, 'function'), __opts__['query.selection'], call, ) This function is normally called with the -S option: salt-cloud -S This function is used to display all of the information about a single node that is available from the cloud provider. The simplest way to provide this is usually to call list_nodes_full(), and return just the data for the requested node. It is normally called as an action: salt-cloud -a show_instance myinstance Extra functionality may be added to a cloud provider in the form of an --action or a --function. Actions are performed against a cloud instance/virtual machine, and functions are performed against a cloud provider. Actions are calls that are performed against a specific instance or virtual machine. The show_instance action should be available in all cloud modules. Actions are normally called with the -a option: salt-cloud -a show_instance myinstance Actions must accept a name as a first argument, may optionally support any number of kwargs as appropriate, and must accept an argument of call, with a default of None. Before performing any other work, an action should normally verify that it has been called correctly. It may then perform the desired feature, and return useful information to the user. A basic action looks like: def show_instance(name, call=None): ''' Show the details from EC2 concerning an AMI ''' if call != 'action': raise SaltCloudSystemExit( 'The show_instance action must be called with -a or --action.' ) return _get_node(name) Please note that generic kwargs, if used, are passed through to actions as kwargs and not **kwargs. An example of this is seen in the Functions section. Functions are called that are performed against a specific cloud provider. An optional function that is often useful is show_image, which describes an image in detail. Functions are normally called with the -f option: salt-cloud -f show_image my-cloud-provider image='Ubuntu 13.10 64-bit' A function may accept any number of kwargs as appropriate, and must accept an argument of call with a default of None. Before performing any other work, a function should normally verify that it has been called correctly. It may then perform the desired feature, and return useful information to the user. A basic function looks like: def show_image(kwargs, call=None): ''' Show the details from EC2 concerning an AMI ''' if call != 'function': raise SaltCloudSystemExit( 'The show_image action must be called with -f or --function.' ) params = {'ImageId.1': kwargs['image'], 'Action': 'DescribeImages'} result = query(params) log.info(result) return result Take note that generic kwargs are passed through to functions as kwargs and not **kwargs.
https://docs.saltstack.com/en/latest/topics/cloud/cloud.html
CC-MAIN-2017-22
refinedweb
2,312
62.88
10 replies on 1 page. Most recent reply: Jan 6, 2009 12:38 AM by Michele Simionato In the first article of this series I have discussed a very serious problem of the mixin approach, i.e. the namespace overpopulation issue. The namespace overpopulation issue The overpopulation issue comes from the idea of growing functionality by adding more and more mixin classes, which is just plain wrong. It is true that you can use the idea in little frameworks with little damage, but that does not make it a good design solution. Small frameworks have a tendency to grow, and you should not start with a weak design. Some reader argued that this is not a problem of mixins per se, but a problem of bad design. That is true, but I maintain that a technique which is so easy to misuse even by expert programmers, should be regarded with suspicion, especially when there are better solutions available. Moreover, I have a few conceptual issues with mixins - as implemented in most languages - which are independent of the overpopulation problem. First of all, I think everybody agrees that the best way to solve a complex problem is to split it in smaller subproblems, by following the dividi et impera principle. The disturbing thing about mixins is that the principle is applied at the beginning (the problem is decomposed in smaller independent units) but at the end all the functionalities are added back to the client class as an undifferentiated soup of methods. Therefore a design based on mixins looks clean to the framework writer - everything is well separated in his mind - but it looks messy to the framework user - she sees methods coming from all directions without a clear separation. It is really the same situation than using the from module import * idiom, which is rightly frowned upon. I find it most unpythonic that mixins make the life of the framework writer easier, but the life of the framework reader more difficult, since the goal of Python is to make code easy to read, not easy to write. The scenario I have in mind is the usual one: a poor programmer who needs to debug an object coming from a gigantic framework which is terra incognita to her, without any documentation and with a strict deadline (do you see yourself in there?). In such conditions a framework heavily based on mixins makes things harder, since the programmer gets drowned under hundreds of methods which are properly ordered in mixin classes on the paper, but not on the battle field. There is also another conceptual issue. The idea behind mixins is that they should be used for generic functionality which can be applied to different classes (think of mixins like Persistent, Comparable, Printable, etc.). But this is exactly the same situation where you want to use generic functions. In this post of mine I actually argue that generic functions (a.k.a. multimethods) are a better solution than mixins. I also provide a very concrete example, which I think generalizes. The advantage of generic functions is that they are clearly defined outside classes, whereas the mixin approach is kind of schizophrenic: the functionality is actually defined externally, but that fact is made invisible to the final user. I am a big fan of generic functions which are already used in the Python word - print is a generic function, the comparison operators are generic functions, numpy universal functions (ufunctions) are generic functions, etc - but should be used even more. With generic functions, mixins becomes useless. A side effect is that the class namespace becomes much slimmer: for instance, in CLOS classes are used just to contain state, whereas the methods live in a separate namespace. In most languages instead, classes are used as a namespace control mechanism, performing double duty - namespace control should be the job of modules. A tipical beginner's tutorial (for instance, I recommend Using Mix-ins with Python , by Chuck Esterbrook which is very well written and very informative, even if the point of view is exactly the opposite of mine) will tell you that mixins are used to add functionality to the classes they mix in. For instance a mixin class WithLog could be used to enhance a pre-existing class C with a logging capability: class C(object): "A base class" class WithLog(object): "A mixin class" @property def log(self): return logging.getLogger(self.__class__.__name__) class C_WithLog(C, WithLog): "A mixin-enhanced class" An example of usage is the following: >>> c = C_WithLog() >>> c.log.warn("hello") >>> c = C_WithLog() >>> c.log.warn("hello") That prints WARNING:C_WithLog:hello. The usage of mixins you see here is wrong: why would you use inheritance when you need just one method? You can just import the one method you need! Generally speaking, a mixin class has sense only when you have a set of methods which belong together: if you have a single method, or a set of disconnected methods, you are much better off by defining the methods externally, in an utility module, and then by importing them in the class namespace. Of course, here I am assuming that you really want the external method to end up in the class namespace, possibly because of interface requirements, but I am not saying that this is always a good idea. You can import the method in your class as simply as that: class CWithLog(C): from utility import log # log is the property defined above This approach is very little used in Python, probably because most people coming from other languages do not know it is possible, but it is in my opinion a much clearer solution than inheritance. The problem with inheritance is that it requires a substantial cognitive load: when I see the line of code class C_WithLog(C, WithLog) I immediately I ask myself many questions: which methods are exported by C_WithLog? is there any method of C which accidentally overrides one of the methods of C_WithLog? if yes, is there any method cooperation mechanism (super) or not? what are the ancestors of C_WithLog? which methods are coming from them? are such methods overridden by some C method? is there a cooperation mechanism on C_WithLog ancestors? What's the method resolution order of the hierarchy? On the other hand, if I see from utility import log I have very little to understand and very little to worry about. The only caution in this specific example is that I will have a single logger shared by all instances of the class since logging.getLogger(self.__class__.__name__) will return always the same object. If I need different loggers with different configurations for different instances I will have to override the .log attribute on a case by case basis, or I will have to use a different strategy, such as the dependency injection pattern, i.e. I will have to pass the logger to the constructor. There are usages for mixins which are restricted in scope and not dangerous: for instance, you can use mixins for implementing the comparison interface, or the mapping interface. This is actually the approach suggested by the standard library, and by the new ABC's in Python 2.6. This is an acceptable usage: in this case there is no incontrollable growth of methods, since you are actually implementing well know interfaces - typically a few specific special methods. In order to give a practical example, let me discuss a toy application. Suppose you want to define a PictureContainer class in an application to manage pictures and photos. A PictureContainer object may contain both plain pictures (instances of a Picture class) and PictureContainer objects, recursively. From the point of view of the Python programmer it could make sense to implement such a class by using a dictionary. A Picture object will contain information such as the picture title, the picture date, and a few methods to read and write the picture on the storage (the file system, a relation database, an object database like the ZODB or the AppEngine datastore, or anything else). The first version of the PictureContainer class could be something like that: class SimplePictureContainer(object): "A wrapper around the .data dictionary, labelled by an id" def __init__(self, id, pictures_or_containers): self.id = id self.data = {} # the inner dictionary for poc in pictures_or_containers: # both pictures and containers must have an .id self.data[poc.id] = poc At this point, one realized that it is annoying to call the inner dictionary directly and that it would be nicer to expose its methods. A simple solution is to leverage on the standard library class UserDict.DictMixin which is there just for that use case. Since we are at it, we can also add the logging functionality: that means that the low-level interface (calling directly the inner dictionary methods) will not log whereas the high level interface will log: class BetterPictureContainer(SimplePictureContainer, DictMixin): from utility import log() Using DictMixin is acceptable, since However, notice that in this example the usage of DictMixin as mixin class is acceptable, but not optimal: the best solution is to use DictMixin as a base class, not as a mixin class. The core problem is that we started from a wrong desing: we wrote SimplePictureContainer when we did not know of the existence of DictMixin. Now, a posteriori, we are trying to fix the mistake by using multiple inheritance, but that it not the Rigth Thing (TM) to do. The right thing would be to change the source code of SimplePictureContainer and to derive directly from DictMixin. In the real world usually you do not have complete control of the code: you may leverage on a third party library with a design error, or simply with an old library, written when DictMixin did not exist. In such a situation you may have no way to modify the source code. Then using DictMixin and multiple inheritance is a perfectly acceptable workaround, but it is a workaround still, and it should not be traded for a clever design. Moreover, even the best examples of mixins could be replaced by generic functions: this is why I would not provide mixins, should I write a new language from scratch. Of course, in an existing language like Python, one has to follow the common idioms, so I use mixins in a few controlled cases, and I have no problems with that. For instance, one could define an EqualityMixin which defines the special methods __eq__ and __ne__, with __ne__ being the logical negation of __eq__ (Python does not do that by default). That would be a fine usage but I don't do that, I prefer to duplicate two lines of code and to write the __ne__ method explicitly, to avoid complicating my inheritance hierarchy. One should should decide when to use a mixin or not on a case by case basis, with a bias for the not. class Foo(object): logger = Logger(...) log = logger.log # expose to external consumers
http://www.artima.com/forums/flat.jsp?forum=106&thread=246483
CC-MAIN-2015-18
refinedweb
1,835
57.2
On Tue, Jul 22, 2003 at 09:39:40PM +0200, Maciej W. Rozycki wrote: > >. > > Well, the MMU of (original) 32-bit MIPS processors (i.e. R2k/R3k) is > completely different from the one in later ones, too. I suspect this is > true for the R6k as well. The exception handlers differ a bit as well, > especially considering the XTLB refill one. That probably counts as > nitpicking, though... It's also a question of taste - and that one can be discussed forever. How far do you want to factor our common code, as little as possible which was our previous approach or extremly aggressive, glibc-like. And yes, the R6000 is different. With that in mind R2000 and R4000 look like enzygotic twins ... > > Something that made sense for sparc might not make sense for mips. > > Certainly it needs to be analysed on a case by case basis, avoiding > blanket assumptions. Anyway, I still see two reasons for having at least > a separate top-level directory: > > 1. A better separation of the more straightforward 32-bit Makefile and the > more complicated 64-bit one. > >. > There is also no point in having headers in asm-mips consisting of a > single #ifdef CONFIG_MIPS32/#else/#endif conditional, where two distinct > versions should be present in asm-mips and asm-mips64, respectively. It's > easier to make a diff between such separate implementations to verify > everything's OK. Like 80% of the headers could be identical between both files without lots of trickery. The current approach is have two physical copies of these identical files.. Ralf
http://www.linux-mips.org/archives/linux-mips/2003-07/msg00128.html
CC-MAIN-2014-15
refinedweb
260
66.23
I got "numpy.dtype has the wrong size, try recompiling" in both pycharm and terminal when compiling Sci-kit learning. I've upgraded all packages(numpy, scikit to the latest), nothing works.Python version is 2.7. Please help. Appreciate! checking for nltk Traceback (most recent call last): File "startup.py", line 6, in <module> import nltk File "/Library/Python/2.7/site-packages/nltk/__init__.py", line 128, in <module> from nltk.chunk import * File "/Library/Python/2.7/site-packages/nltk/chunk/__init__.py", line 157, in <module> from nltk.chunk.api import ChunkParserI File "/Library/Python/2.7/site-packages/nltk/chunk/api.py", line 13, in <module> from nltk.parse import ParserI File "/Library/Python/2.7/site-packages/nltk/parse/__init__.py", line 79, in <module> from nltk.parse.transitionparser import TransitionParser File "/Library/Python/2.7/site-packages/nltk/parse/transitionparser.py", line 21, in <module> from sklearn.datasets import load_svmlight_file File "/Library/Python/2.7/site-packages/sklearn/__init__.py", line 57, in <module> from .base import clone File "/Library/Python/2.7/site-packages/sklearn/base.py", line 11, in <module> from .utils.fixes import signature File "/Library/Python/2.7/site-packages/sklearn/utils/__init__.py", line 10, in <module> from .murmurhash import murmurhash3_32 File "numpy.pxd", line 155, in init sklearn.utils.murmurhash (sklearn/utils/murmurhash.c:5029) ValueError: numpy.dtype has the wrong size, try recompiling The error "numpy.dtype has the wrong size, try recompiling" means that sklearn was compiled against a numpy more recent than the numpy version sklearn is now trying to import. To fix this, you need to make sure that sklearn is compiled against the version of numpy that it is now importing, or an earlier version. See ValueError: numpy.dtype has the wrong size, try recompiling for a detailed explanation. I guess from your paths that you are using the OSX system Python (the one that ships with OSX, at /usr/bin/python). Apple has modified this Python in a way that makes it pick up its own version of numpy rather than any version that you install with pip etc - see . I strongly recommend you switch to Python.org or homebrew Python to make it easier to work with packages depending on numpy.
https://codedump.io/share/bwoJZv0QpkKz/1/sklearn-quotnumpydtype-has-the-wrong-size-try-recompilingquot-in-both-pycharm-and-terminal
CC-MAIN-2017-51
refinedweb
381
71.21
Automatically Sort Python Module Imports using isort Want to share your content on python-bloggers? click here. In this tutorial we will explore how to automatically sort your Python module imports using isort library. Table of Contents Introduction As your Python projects grow, you start having more and more files and each file has more lines of code, performing more operations, and you import more dependencies. In research step we usually import the libraries one by one as they are needed, making the entire imports section unorganized and often difficult to edit quickly. In addition, when working in a team of engineers, most people have their own preferred way of structuring and ordering imports, which results in different file versions overwriting each other in the same repository. This can be easily fixed using isort, which provides a systematic way of ordering imports in your Python project. To continue following this tutorial we will need the following Python library: isort. If you don’t have it installed, please open “Command Prompt” (on Windows) and install it using the following code: pip install isort What is isort isort is a Python utility and a library that automatically sorts the Python module imports in alphabetical order while separating it into different section and by type. In addition to a CLI utility and a Python library it has plugins for many code editors like VS Code, Sublime, and more. Sample code file In order to test the capabilities of isort library we will need a sample Python file to work with. In this sample file we will mix up the order of imports and add some spacing to illustrate the difference between the unsorted and sorted files. Here is a sample unsorted Python code (main.py): import pandas import os import sys import numpy from sklearn.linear_model import LinearRegression from sklearn.linear_model import Ridge from sklearn.linear_model import ElasticNet How to sort module imports Once we have the Python file or files in a directory, it is easy to sort the module imports with isort. Open the command line or terminal and navigate to the directory with the Python file(s). Sort module imports in a single Python file If you only have one file for which you would like to sort the module imports (in our case it’s main.py), simply run: isort main.py and the reformatted sample Python file should look like this: import os import sys import numpy import pandas from sklearn.linear_model import ElasticNet, LinearRegression, Ridge Looks much better and all the module imports are sorted and organized! Sort module imports in multiple Python files If you want to sort the module imports in multiple Python files or the entire Python project, simply run: isort . isort will automatically find all the Python files and sort the module imports in all the Python files in the directory. Conclusion In this article we explored how to automatically sort the module imports in Python files using isort library. Feel free to leave comments below if you have any questions or have suggestions for some edits and check out more of my Python Programming tutorials. The post Automatically Sort Python Module Imports using isort appeared first on PyShark. Want to share your content on python-bloggers? click here.
https://python-bloggers.com/2022/09/automatically-sort-python-module-imports-using-isort/
CC-MAIN-2022-40
refinedweb
546
59.84
change in the each function Hi there when i was working on a project a problem came up and i made the following changes to resolve it. Hope it is usefull. If func is not supplied by the user the function returns a list of the elements and their corresponding indexes code: {{{ !python def each(self, func=False): """apply func on each nodes if func is not suplied return a list of the elements and it's index """ if not func: return [(i,e) for i,e in enumerate(self)] else: try: for i, element in enumerate(self): func_globals(func)['this'] = element if callback(func, i, element) == False: break finally: f_globals = func_globals(func) if 'this' in f_globals: del f_globals['this'] return self }}} seems useless since enumerate(self) do the same thing. the only point of .each() is to use a callback
https://bitbucket.org/olauzanne/pyquery/issues/26/change-in-the-each-function
CC-MAIN-2017-51
refinedweb
141
58.32
One part of my program requires that the user enters a date and this date is then checked against each product in the dictionary to see if the date the product arrived plus its shelf life causes the product to expire before or after the date entered by the user. import sys from string import * import pickle import datetime cheeseDictionary = {} userInput = ""() def saveProduct(fileName,cheeseDictionary): f = open(fileName, "w") for i in sorted(cheeseDictionary.keys()): v = cheeseDictionary[i] f.write("%s:%s:%s:%s\n" % (i, v["date"], v["life"], v["name"])) f.close() def printProduct(cheeseDictionary): print "ID"," ","Date"," ","Life(days)"," ","Name" for cheese in cheeseDictionary: print cheese," ",cheeseDictionary[cheese]["date"]," ",cheeseDictionary[cheese]["life"]," ",cheeseDictionary[cheese]["name"] def addProduct(): global cheeseDicitonary correct = 0 idInput = "" dateInput = "" lifeInput = "" nameinput = "" while correct != 1: idInput = raw_input("Please enter the ID of the cheese to be added. ") if cheeseDictionary.has_key(idInput): print ("This ID already exists. Please try again.") correct = 0 else: newID = idInput correct = 1 dateInput = raw_input("Please enter the date of the cheese to be added in the format dd/mm/yyyy. ") lifeInput = raw_input("Please enter the life of the cheese to be added in days. ") nameInput = raw_input("Please enter the name of the cheese to be added. ") cheeseDictionary[idInput] = {"date":dateInput, "life":lifeInput, "name":nameInput} def checkProduct(cheeseDictionary): dateCheck = raw_input("Please enter the date in the format dd/mm/yyyy: ") for cheese in cheeseDictionary: I know I need to change the dates store din the dictionary into the date time format but I am unsure how to do this. Thanks for any advice given :)
https://www.daniweb.com/programming/software-development/threads/356138/adding-to-a-date-checking-if-past-expiration-date
CC-MAIN-2021-31
refinedweb
263
54.02
pSelfcom.ericsson.otp.erlang.OtpSelf public class OtpSelf Represents an OTP node. It is used to connect to remote nodes or accept incoming connections from remote nodes. When the Java node will be connecting to a remote Erlang, Java or C node, it must first identify itself as a node by creating an instance of this class, after which it may connect to the remote node. When you create an instance of this class, it will bind a socket to a port so that incoming connections can be accepted. However the port number will not be made available to other nodes wishing to connect until you explicitely register with the port mapper daemon by calling publishPort(). OtpSelf self = new OtpSelf("client", "authcookie"); // identify self OtpPeer other = new OtpPeer("server"); // identify peer OtpConnection conn = self.connect(other); // connect to peer public OtpSelf(java.lang.String node) throws java.io.IOException Create a self public OtpSelf(java.lang.String node, java.lang.String cookie) throws java.io.IOException node- the name of this node. cookie- the authorization cookie that will be used by this node when it communicates with other nodes. java.io.IOException public OtpSelf(java.lang.String node, java.lang.String cookie, int port) throws java.io.IOException java.io.IOException public OtpErlangPid pid() OtpConnectionthat do not specify a sender. public boolean publishPort() throws java.io.IOException This method will fail if an Epmd process is not running on the localhost. See the Erlang documentation for information about starting Epmd. Note that once this method has been called, the node is expected to be available to accept incoming connections. For that reason you should make sure that you call accept() shortly after calling publishPort(). When you no longer intend to accept connections you should call unPublishPort(). java.io.IOException- if the port mapper could not be contacted. public void unPublishPort() public OtpConnection accept() throws java.io.IOException, OtpAuthException java.io.IOException- if a remote node attempted to connect but no common protocol was found. OtpAuthException- if a remote node attempted to connect, but was not authorized to connect. public OtpConnection connect(OtpPeer other) throws java.io.IOException, java.net.UnknownHostException, OtpAuthException other- the remote node to which you wish to connect. java.net.UnknownHostException- if the remote host could not be found. java.io.IOException- if it was not possible to connect to the remote node. OtpAuthException- if the connection was refused by the remote node.
http://www.erlang.org/doc/apps/jinterface/java/com/ericsson/otp/erlang/OtpSelf.html
CC-MAIN-2015-06
refinedweb
406
52.05
and easily define a webserver in python. Flask Homepage - BokehJS is a subset of the Bokeh project and makes it really easy to embed the visuals generated by Bokeh into a webpage. BokehJS Documentation Background – why would I need this? Bokeh is an extremely popular visualization library among python users, particularly among data analysts, and so on. To those who are already familiar with the python data ecosystem, think matplotlib on steroids. Bokeh makes it relatively easy to create stunning and meaningful data visualizations which impart meaning to your data. Especially when used in conjunction with python notebooks, it makes your research and code more accessible to a wider audience However, not everyone knows how to use Python Notebooks, or is even aware of their existence. Or, for some reason, you’d like to keep your code private. Any of these reasons would be enough for you to consider serving a Bokeh visualization via the web – a medium which most users are extremely comfortable with. So how would you serve a Bokeh graph via your typical frontend tools, like ReactJS? You could recreate the graph in some other javascript library – but your data science pals already went through the hassle of creating the visualization in the first place… Enter BokehJS BokehJS allows for easy transferring of visualizations from python to embedded JavaScript. The documentation lists a number of alternative ways to embed a graph in your webpage, in my opinion the easiest and what felt most natural to me, is to serve my graphs via a Flask server (I dont want to depend on a Bokeh server in production since DevOps are already familiar with frameworks like Flask and Django), and provide API endpoints for a ReactJS (or Angular or JQuery(?)…) frontend to get the visualizations. Bokeh catered for this approach brilliantly. The Bokeh library includes the embed module which allows you to generate a json_item: from bokeh.embed import json_item The process is really simple: - Generate a Bokeh plot in python as you normally would. Follow any documentation online and get the visualization you’d like to display - Pass the resulting Bokeh plot object to the json_item function call, which takes two arguments: the actual plot object and a string unique identifier. This unique ID can match the HTML ID of a DIV in your frontend, but this is not absolutely necessary. - Dump the result as a JSON string using the standard python JSON library dumps A simple example from beginning to end using Flask would be something similar to this: @app.route('/plot1') def plot1(): # copy/pasted from Bokeh Getting Started Guide x = linspace(-6, 6, 100) y = cos(x) p = figure(width=500, height=500, toolbar_location="below", title="Plot 1") p.circle(x, y, size=7, color="firebrick", alpha=0.5) # following above points: # + pass plot object 'p' into json_item # + wrap the result in json.dumps and return to frontend return json.dumps(json_item(p, "myplot")) That’s it on the back-end. The front-end is similarly simple: - Import the required JS libraries. You can use a CDN or npm. Note regarding NPM usage: Unfortunately BokehJS uses both relative and absolute path imports in it’s codebase. This results in some “module not available” errors when using it in boilerplate generate by a tool such as create-react-app. You can see the full details here. In order to sidestep this issue I recommend using the CDN option and simply referring to the library via the global reference (e.g. window.Bokeh) - Issue a GET request to the above endpoint (I use Axios for this) - Parse the response as JSON (Axios does this automatically for you) - Pass the JSON response to window.Bokeh.embed, which takes two arguments: the JSON object and an optional identifier specifying which DIV ID you’d like to embed the resulting object in. Using ReactJS, this can be boiled down to: handlePlot1 = () => { Axios.get("") .then(resp => window.Bokeh.embed.embed_item(resp.data, 'testPlot') ) } // in your render function <Button variant="contained" style={{margin: 10}}</div> Note the className set to ‘bk-root’ which allows BokehJS to properly style the resulting visualization. Code example A full boilerplate code example can be found here: The project puts the above into practice. It offers two plots, the first is a static but involved plot, while (best of all in my opinion) the second plot shows that BokehJS supports dynamic widgets such as sliders which allow a front-end user to explore the data (if you go down this route make sure to read the relevant documentation) Note: if you go down the CDN route like I did, make sure to include the correct <script> for JavaScript and <link> for CSS files in your index.html file
http://blog.davidvassallo.me/2019/03/11/embedding-bokeh-into-a-reactjs-app-using-bokehjs/
CC-MAIN-2019-22
refinedweb
791
50.46
This is the continuation of the initial January 2020 magazine article titled “Android Things”, which details using a new Google-backed operating system which facilitates using the GPIO pins on ODROID devices. I2C You can also use I2C on the ODROID board with Android things. You can use any I2C API provided by the Android things. The Android Things supports various sizes of data transmission, byte, word and buffered data. I ported the Weather board 2 () example to Android with Android things. Also I ported I2C display (). Like other familiar I2C devices, both of the above devices are connected with 4 wires, Voltage, Ground, I2C SDA and I2C SCL. In the examples, I connected I2C wires to the I2C-2. Most of pre-procedure are same to GPIO procedure. Add permission to the manifest, import and call the instance of PeripheralManager to the project source code. However, you do not need to get a GPIO instance. You need to call the openI2cDevice method to get the I2C device instance. ... List i2cBusList = manager.getI2cBusList(); I2cDevice device = manager.openI2cDevice(i2cBusList.get(0), I2C_DEVICE_ADDRESS); // or i2cDevice device = manager.openI2cDevice(“I2C-2”, // I2C Device Address); ...The I2C Interface names are I2C-2 and I2C-3. Each I2C interface consists of pins 3,5 and pins 27, 28. When you get the I2C bus device, you should set the I2C device address for each I2C chip. In this case, a weather board2 consists of two I2C chips. So, I created two I2C device instances. One instance is linked by 0x76 to the BME280. The chip offers temperature, pressure and humidity values. And the other instance is linked by 0x60 to the SI1132. The chip offers UV, Visible and IR values. And I2C LCD has one I2C chip, so I created one I2C instance. It linked by 0x27 for control the LCD. Like this, you should create I2C device instance for each device with their own address. Through the I2C instance, you can communicate with the device. Android things provide many methods. For reading the data from a device, it provide read, readRegBuffer, readRegByte and readRegWord method. Also for writing data to device, it provides write, writeRegBuffer, writeRegByte and writeRegWord. The Android Things official website provides a lot of information. I2C Device method reference -. By using the I2C API, I built a wrapper class for weather board2 and I2C LCD. Here is a part of the example code to read and write data with the Android things API. … private void softrst() throws IOException { device.writeRegByte(reg.RST, POWER_MODE.SOFT_RESET_CODE); } private byte getPowerMode() throws IOException { return (byte) (device.readRegByte(reg.CTRL_MEAS) & 0b11); } ...The code is part of BME280.java. First method is called to soft reset the chip and second method is called to get chip’s power mode. Each API’s first parameter is the address of the register in the chip. On the write method, second parameter is usually the data to transfer. also on the read method, second parameter is usually not exist. However, if you want to read data by buffer, you need buffer to read and the buffer is passed as a second parameter. You can test or use the project. Here is the link. Weather board2 with android things example -. I2C LCD with android things example -. Following is the Weather board 2 Hardware connection: The Weather board 2 Output Result would be like so: Following is the I2C LCD hardware connection and result: PWM The android Things also support the PWM. There are many methods to configure and control the PWM interface. You can set the PWM Frequency via setPwmFrequency. Before enabling the pin, you must set frequency via this method. Also you can set PWM duty cycle by setPwmDutyCycle between 0 and 100. Frequency and duty cycle settings can be set in both enabled and disabled state and will be remembered. Please check the Reference.. Here is the PWM testing project. In this example, you can turn on and off a PWM pin. and change duty cycle via progress bar on the Application:. Note that the voltage at GPIO pins on ODROID-N2 are all 3.3V. Reference Be the first to comment
https://magazine.odroid.com/article/using-i2c-on-odroids-with-android-things/
CC-MAIN-2020-50
refinedweb
693
68.67
About half of federal prisoners were convicted of drug crimes, according to this fact sheet from the US Sentencing Commission (USSC). In minimum security prisons, the proportion is higher, and in women's prisons, I would guess it is even higher. About 45% of federal prisoners were sentenced under mandatory minimum guidelines that sensible people would find shocking. And about a third of them had no prior criminal record, according to this report, also from the USSC. In many cases, minor drug offenders are serving sentences much longer than sentences for serious violent crimes. For a list of heart-breaking examples, see these prisoner profiles at Families Against Mandatory Minimums. Or watch this clip from Jon Oliver's Last Week Tonight: When you are done being outraged, here are a few things to do: 1) Read more about Families Against Mandatory Minimums, write about them on social media, and consider making a donation (Charity Navigator gives them an excellent rating). 2) Another excellent source of information, and another group that deserves support, is the Prison Policy Initiative. 3) Then read the rest of this article, which points out that although Kerman's observations are fundamentally correct, her sampling process is biased in an interesting way. The inspection paradoxIt turns out that Kerman is the victim not just of a criminal justice system that is out of control, but also of a statistical error called the inspection paradox. I wrote about it in Chapter 3 of Think Stats, where I called it the Class Size Paradox, using the example of average class size. If you ask students how big their classes are, the average of their responses will be higher than the actual average, often substantially higher. And if you ask them how many children are in their families, the average of their responses will be higher than the average family size. The problem is not the students, for once, but the sampling process. Large classes are overrepresented in the sample because in a large class there are more students to report a large class size. If there is a class with only one student, only one student will report that size. And similarly with the number of children, large families are overrepresented and small families underrepresented; in fact, families with no children aren't represented at all. The inspection paradox is an example of the Paradox Paradox, which is that a large majority of the things called paradoxes are not, actually, but just counter-intuitive truths. The apparent contradiction between the different averages is resolved when you realize that they are averages over different populations. One is the average in the population of classes; the other is the average in the population of student-class experiences. Neither is right or wrong, but they are useful for different things. Teachers might care about the average size of the classes they teach; students might care more about the average of the classes they take. Prison inspectionThe same effect occurs if you visit a prison. Suppose you pick a random day, choose a prisoner at random, and ask the length of her sentence. The response is more likely to be a long sentence than a short one, because a prisoner with a long sentence has a better chance of being sampled. For each sentence duration, x, suppose the fraction of convicts given that sentence is p(x). In that case the probability of observing someone with that sentence is proportional to x p(x). Now imagine a different scenario: suppose you are serving an absurdly-long prison sentence, like 55 years for a minor drug offense. During that time you see prisoners with shorter sentences come and go, and if you keep track of their sentences, you get an unbiased view of the distribution of sentence lengths. So the probability of observing someone with sentence x is just p(x). And that brings me to the question that occurred to me while I was reading Orange: what happens if you observe the system for a relatively short time, like Kerman's 11 months? Presumably the answer is somewhere between p(x) and x p(x). But where? And how does it depend on the length of the observer's sentence? UPDATE 17 August 2015: A few days after I posted the original version of this article, Jerzy Wieczorek dropped by my office. Jerzy is an Olin alum who is now a grad student in statistics at CMU, so I posed this problem to him. A few days later he emailed me the solution, which is that the probability of observing a sentence, x, during and interval, t, is proportional to x + t. Couldn't be much simpler than that! To see why, imagine a row of sentences arrange end-to-end along the number line. If you make an instantaneous observation, that's like throwing a dart at the number line. You chance of hitting a sentence with length x is (again) proportional to x. Now imagine that instead of throwing a dart, you throw a piece of spaghetti with length t. What is the chance that the spaghetti overlaps with a sentence of length x? If we say arbitrarily that the sentence runs from 0 to x, the spaghetti will overlap the sentence if the left side falls anywhere between -t and x. So the size of the target is x + t. Based on this result, here's a Python function that takes the actual PMF and returns a biased PMF as seen by someone serving a sentence with duration t: def bias_pmf(pmf, t=0): new_pmf = pmf.Copy() for x, p in pmf.Items(): new_pmf[x] *= (x + t) new_pmf.Normalize() return new_pmf This IPython notebook has the details, and here's a summary of the results. ResultsTo model the distribution of sentences, I use random values from a gamma distribution, rounded to the nearest integer. All sentences are in units of months. I chose parameters that very roughly match the histogram of sentences reported by the USSC. The following code generates a sample of sentences as observed by a series of random arrivals. The notebook explains how it works. sentences = np.random.gamma(shape=2, scale=60, size=1000).astype(int) releases = sentences.cumsum() arrivals = np.random.random_integers(1, releases[-1], 10000) prisoners = releases.searchsorted(arrivals) sample = sentences[prisoners] cdf2 = thinkstats2.Cdf(sample, label='biased') The following figure shows the actual distribution of sentences (that is, the model I chose), and the biased distribution as would be seen by random arrivals: The following function simulates the observations of a person serving a sentence of t months. Again, the notebook explains how it works: def simulate_sentence(sentences, t): counter = Counter() releases = sentences.cumsum() last_release = releases[-1] arrival = np.random.random_integers(1, max(sentences)) for i in range(arrival, last_release-t, 100): first_prisoner = releases.searchsorted(i) last_prisoner = releases.searchsorted(i+t) observed_sentences = sentences[first_prisoner:last_prisoner+1] counter.update(observed_sentences) print(sum(counter.values())) return thinkstats2.Cdf(counter, label='observed %d' % t) Here's the distribution of sentences as seen by someone serving 11 months, as Kerman did: The observed distribution is almost as biased as what would be seen by an instantaneous observer. Even after 120 months (near the average sentence), the observed distribution is substantially biased: — that many prisoners are serving long sentences that do not fit their crimes — is still valid, in my opinion.
http://allendowney.blogspot.com/2015/08/orange-is-new-stat.html
CC-MAIN-2021-39
refinedweb
1,233
53.92
Recently, I've been doing only source reviews, but since there were a lot of children who are doing their best to check the null / empty list, I thought, "Maybe I don't know it?", So I summarized it. The "work hard check" that I often pointed out in recent reviews looks like this: arrow_down: Work hard List<String> strList = new ArrayList<String>(); //Assuming strList is defined somewhere or comes in as a method argument ... if (strList == null || strList.size() == 0) { return true; } Yeah, it's right. If you can check null & size, you can avoid nullPointerException which is boring for the time being. For new employee training from April to June, if you can write this way, you will pass: white_flower: Java has a lot of useful external libraries (mostly programming languages, not just Java). The famous doco is "org.apache.commons". This time I will write smartly using org.apache.commons. Smart processing import org.apache.commons.collections4.* List<String> strList = new ArrayList<String>(); //Assuming strList is defined somewhere or comes in as a method argument ... if (CollectionUtils.isEmpty(strList)) { return true; } Notable is in the if statement: exclamation: I used "Apache Commons Collections" which is one of the external libraries. Apache Commons Collections is a library that handles Java collections conveniently, and there is `CollectionUtils.isEmpty ()` that handles null check and size 0 check together! Since it is an external file, it cannot be used just by installing Java normally: cry: You need to download and load it. In eclipse, if ```org.apache.commons.collections4. * `` `appears in the import candidates when input completion of CollectionUtils, it can be judged that it has been read. However, it is mostly introduced in most Java projects, maybe. Null / empty check of List can be written neatly by using `CollectionUtils.isEmpty ()`: pencil2: However, it must be pre-loaded with Apache Commons Collections. Recommended Posts
https://linuxtut.com/en/80423ae4a0147b40245c/
CC-MAIN-2022-33
refinedweb
313
56.76
Dojo: Using the Dojo JavaScript Library samzenpus posted more than 5 years ago | from the read-all-about-it dept. stoolpigeon writes "The number and functionality of web based applications has exploded recently. Many of these applications rely heavily on AJAX to provide a more desktop-like experience for users. As the number of people using JavaScript grew, libraries were developed to assist with commonly encountered issues. Jim Harmon's new book Dojo: Using the Dojo JavaScript Library to Build Ajax Applications aims to introduce readers to one of those libraries, the Dojo Toolkit." Keep reading for the rest of JR's review.The Dojo Toolkit, is a JavaScript library, created to increase the speed of writing JavaScript applications. It provides developers with widgets, themes, wrappers for asynchronous communication, client side storage and more. It does all this across various browsers and platforms without requiring the user to worry about differences in browsers. The book follows an interesting pattern. It begins with a five chapter tutorial. The tutorial launches immediately into taking a straight html form and using Dojo widgets to add functionality. All of the code used in the tutorial is available at the book's web site. This tutorial moves quickly, introducing a number of available widgets and giving the reader a nice feel for how Dojo integrates with html markup. What does not take place in the tutorial is the normal introductory material on just what Dojo is, how it is installed, or what it can do. I'm guessing that this will be a welcome change to those used to quickly brushing past the first chapter, or more, of any programming book. Harmon takes advantage of the fact that Dojo is available via the AOL Content Delivery Network, so the examples will work any javascript capable browser connected to the internet. He does give a quick explanation of what would need to be different to use local files. All of the introductory material that I'm use to seeing is still in the book but it does not appear until chapter ten. There Harmon covers the motivation to develop Dojo, explains the history of the project, provides a bit of information regarding the dual-licensing of Dojo. (It is available under the BSD and Academic Free Licenses.) This leads into the last seven chapters, that cover the 'deeper' material in the book. Between the tutorial and chapter ten, there are four chapters of widget documentation with examples and some explanation. Of the three sections this is the longest, though this is in part due to sometimes large sections of white space, as each widget begins on it's own page. The documentation covers each widget and provides a visual representation where applicable. There is some repetition as this section covers widgets that were used in the first section's tutorial. The third section is entitled "Dojo in Detail." It's the level of detail that marks this book as more of an overview, rather than an in-depth treatment of Dojo. Harmon is true to the title, this book is an extremely pragmatic guide to getting started with Dojo as a means of adding Ajax to applications. It is not however going to take the reader to any great depth into the toolkit. There is plenty here to get started, and enough to hit the ground running, but anyone to get really in-depth coverage of the library will be disappointed. The person who will get the most out of this book is someone with some knowledge of mark-up and programming but not to an advanced level. The developer with a lot of experience will probably be frustrated with the amount of explanation and repetition of simple material combined with the lack of depth. The reader with no programming experience may struggle, though they could keep up if they are willing to look outside the book for a few resources to get a good grasp of web technologies. They may become extremely frustrated with some of the later chapters where the code examples skip steps and leave the reader to assume what has happened in between what is shown and the output. That said, this book allows the reader to dive in quickly, get a quick overview and move immediately to making use of the Dojo Toolkit. If one is not concerned with gaining insight on every aspect of the library but would rather just get into it immediately with a little guidance, this may be just right. With this in mind, it would have been nice if the book had provided less time on documentation and more on examples and ideas for how to best use the capabilities of Dojo. It is nice to have a book that isn't so huge that it is overwhelming and difficult to find anything. But if something had to be given up to keep things compact, I'd have much rather lost things that are easy to find in the on-line documentation and subject to change as the toolkit develops. This keeps the book from being excellent, but it is still a solid introduction and primer. You can purchase Dojo: Using the Dojo JavaScript Library to Build Ajax Applications from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page. Wait, really? (0, Informative) Anonymous Coward | more than 5 years ago | (#25469877) Re:Wait, really? (0, Offtopic) boxxertrumps (1124859) | more than 5 years ago | (#25469995) Nope. Re:Wait, really? (-1, Offtopic) Anonymous Coward | more than 5 years ago | (#25470187) So I'll take the opportunity to moan about Slashdot. Every time I load a page Firefox spends an age waiting for genweb.ostg.com. Sort your bloody servers out you cheap bastards. Re:Wait, really? (0) Anonymous Coward | more than 5 years ago | (#25471141) Also get rid of the retarded shortcut keys without modifiers. I'll accidently press q or something and suddenly I've lost my place. Nobody cares? (0) Anonymous Coward | more than 5 years ago | (#25469955) Story has been up for a while and this is the first post! Re:Nobody cares? (0) Anonymous Coward | more than 5 years ago | (#25471439) book reviews? (0, Offtopic) mattwarden (699984) | more than 5 years ago | (#25470143) Why isnt this under book reviews? Old news (0) Anonymous Coward | more than 5 years ago | (#25470279) Re:Old news (0) Anonymous Coward | more than 5 years ago | (#25470571) Re:Old news (1) VeNoM0619 (1058216) | more than 5 years ago | (#25470821) Is it jquery? (-1, Flamebait) Anonymous Coward | more than 5 years ago | (#25470449) No. Then I don't care. Re:Is it jquery? (3, Informative) Bill, Shooter of Bul (629286) | more than 5 years ago | (#25470901) Re:Is it jquery? (3, Funny) hobo sapiens (893427) | more than 5 years ago | (#25471093) Well, I'll tell you one thing: IBM is a primary sponsor of the Dojo foundation. Not so for jQuery and prototype. If IBM isn't a good enough reason to stay the heck away, I don't know what is. Re:Is it jquery? (2, Insightful) yivi (236776) | more than 5 years ago | (#25471405) dojo is natively supported by the Zend Framework (from version 1.6 onwards). That may be enough on itself as a deciding factor for you... or not. Since I wanted to start with any of this Javascript libraries, the fact that ZF supported this one made my choice much easier. Dojo documentaion is slowly getting better, but it is still sorely lacking. O'Reilly has two other books for dojo: Mastering Dojo [oreilly.com] and The Definitive Guide [oreilly.com] . Re:Is it jquery? (2, Interesting) yivi (236776) | more than 5 years ago | (#25471639) I'm sorry to answer my own post, but I just wanted to add: if industry support is the thing tilting you one way or the other, maybe you should consider that jquery recently got embraced in different ways by both Microsoft and Nokia. So depending on your on your needs, it could go either way. Featurewise, I think that both are pretty solid. I.- Re:Is it jquery? (5, Informative) Anonymous Coward | more than 5 years ago | (#25471563) I think you'll be disappointed to find out that the creators of Dojo (including me), jQuery, and Prototype actually get along really well, and are starting to discuss working together to share code and concepts more formally across our toolkits. As far as advantages of Dojo, in general, we let you build complex, advanced apps with things like native vector graphics, charting, grids, etc., but can still scale down and perform for even the smallest of features or unobtrusive JS. We also lead the charge in areas like accessibility, internationalization, etc. -Dylan Documentation? (4, Insightful) Futurepower(R) (558542) | more than 5 years ago | (#25471977) It's said when EVERY user must do extra work, rather than just one writer. Correction. (1) Futurepower(R) (558542) | more than 5 years ago | (#25474457) Re:Documentation? (0) Anonymous Coward | more than 5 years ago | (#25480485) Agreed. And dojo documentation sucks big time. We chose dojo for a project based on documentation (there was a fantastic set of example 2 years ago). then, in a typical "engineer like to break things" attitude, those examples were removed, and documentation now sucks very very big time. It is getting better lately, but not that much better. Re:Is it jquery? (1) Bill, Shooter of Bul (629286) | more than 5 years ago | (#25472397) Re:Is it jquery? (0) Anonymous Coward | more than 5 years ago | (#25476373) I'd like some non-native vector graphics. Where do I get it? Re:Is it jquery? (1, Informative) Anonymous Coward | more than 5 years ago | (#25471591) Funny you say that.. i just went to the AJAX Experience conf this month in Boston. The dev'ers of the four big ones were on stage together (+ Yahoo!). Things were a bit tense once in a while, but for the most part, they were polite. Re:Is it jquery? (4, Insightful) Arkham (10779) | more than 5 years ago | (#25472027) I can't help but think that all of these JavaScript/AJAX libraries keep reinventing the wheel over and over again. How many grid widgets written in JavaScript do we really need? How many toolkits for a progress bar or a div-based dialog box have to be developed? Is one of them really that compelling over the others. Consider: [dojotoolkit.org] - DoJo Toolkit [activewidgets.com] - ActiveWidgets [prototypejs.org] - Prototype [aculo.us] - Scriptaculous [jquery.com] - jQuery [extjs.com] - Ext JS [yahoo.com] - YUI [google.com] - Google Web Toolkit (GWT) [sproutcore.com] - SproutCore Those are just the ones I have used personally. It's getting ridiculous. Personally, I like the approach GWT has, but of course that's only relevant to the java developers of the world. I'd love to see all of these "widgets" be compatible with one another. Re:Is it jquery? (1) lysergic.acid (845423) | more than 5 years ago | (#25472567) i can't help but think that all these cars keep reinventing the wheel over and over again. how many four-wheeled gas-powered automobiles do we really need? how many cars with driving wheels or airbags or cupholders have been developed? is one of them more compelling than the other? not everyone programs in the same style. not every toolkit is suited to the same programming style. it's good that web developers have a variety of toolkits to choose from. are all the different CMS packages out there reinventing the wheel? what about all the programming languages? all the operating systems? there isn't just a single approach to every problem. the toolkits that are useful will gain a large following and continue to be developed. the ones that aren't will fall into disuse and die. there's no sense in complaining about choice. Re:Is it jquery? (1) scurvyj (1158787) | more than 5 years ago | (#25477725) Re:Is it jquery? (0) Anonymous Coward | more than 5 years ago | (#25480003) "How many grid widgets written in JavaScript do we really need?" how many <insert widget name> do we need ? Just one. But one that works. I had been caught numerous times with some code that did 90% of what I needed, and then you have the choice: hacking existing code, alone or with the help of primary author -or- creating a new wheel. I ended up myself reinventing the wheel again and again because that was simply the easiest thing I could do, adding more noise to the crowded debate. Give me the browser that implements the high level widgets everyone need, and I'll be extremely happy. Ahh, ok, we need to get the standard first... Jeez... Re:Is it jquery? (1) D Ninja (825055) | more than 5 years ago | (#25473411) Close. I'd like to see an all out brawl between the supporters of Dojo, Jquery, Prototype, and any others I've neglected to remember. Obviously Dojo [wikipedia.org] would win, given the fact that it has produced hundreds of exceptional fighters in its time. Re:Is it jquery? (1) Jansingal (1098809) | more than 5 years ago | (#25477363) dojo would win hands down, no question! Problem (0) FluffyWithTeeth (890188) | more than 5 years ago | (#25470487) Of course, the real problem comes when dojo destroyers come to challenge your entire office. Seriously. We paid good money for that damned sign. Dojo vs jQuery (0) Anonymous Coward | more than 5 years ago | (#25470741) I'm just starting out with javascript. I've only done bug fixing on existing sites, and have never used a framework. What are the pros and cons of Dojo and jQuery? Re:Dojo vs jQuery (1) micheas (231635) | more than 5 years ago | (#25479407) I'm just starting out with javascript. I've only done bug fixing on existing sites, and have never used a framework. What are the pros and cons of Dojo and jQuery? Pros of Dojo and jQuery: Cons: How bout something relevant... (3, Informative) RemoWilliams84 (1348761) | more than 5 years ago | (#25470779) Re:How bout something relevant... (2, Interesting) Chatterton (228704) | more than 5 years ago | (#25471037) I use dojo to do a quick prototype of an application. You can start pretty quickly and do some pretty things. But I am blocked by the absence of an official widget to upload files and the fact that the standard input for file upload doesn't work with dojo. None of the sample code I found on the internet to do this seems to work well :( But except that big problem and some other minor ones, Dojo look very good. Re:How bout something relevant... (2, Interesting) kevin_conaway (585204) | more than 5 years ago | (#25474827) We use ExtJS and were able to do file uploads quite easily using a combination of their Ajax form submit and Commons FileUpload. If Dojo has a control to submit a form asynchronously, you should be able to pull this off. Feel free to contact me for more details Re:How bout something relevant... (1) vhogemann (797994) | more than 5 years ago | (#25480515) ExtJS uses a clever hack to allow async file upload... a hidden iFrame. So it's not really an async upload, but looks like one. I guess one could implement the same trick using Dojo with a little effort. Re:How bout something relevant... (4, Interesting) eddy_crim (216272) | more than 5 years ago | (#25471209) here here! i agree, as a coder ive not tried the other javascript front end frameworks like prototype and jquery but i have tried the server-side ajax frameworks like GWT and i don't like the way that i am detached from the JS that is actuall being generated. Dojo makes writing JS very easy but the extensibility of it make it very powerful. The other thing worth noting that may or may not be a good thing is the way Dojo is backed by IBM and used extensively in their products. Hopefully this means dojo is here to stay. Finally if you use dojo on the client with a JSON-RPC-Adapter on the server you can move your MVC view and controller onto the client and just keep a model and service layer back on the server. This opens up some interesting possibilities. Re:How bout something relevant... (0) Anonymous Coward | more than 5 years ago | (#25471463) hear hear, actually. Re:How bout something relevant... (5, Interesting) rufus t firefly (35399) | more than 5 years ago | (#25471531) I also had written a UI in dojo, starting with 0.3.x and porting forward to 0.4.x. However, their API jump to 0.9.x and then 1.x made any further porting nearly impossible. It was riddled with issues that had to be worked around by messing with undocumented properties and all sorts of other nonsense. (Check out the 0.4.x Wizard code for some examples.) Patches to fix problems weren't accepted, and the developers weren't very responsive to any criticism, saying that it would be fixed in the API incompatible next releases. I moved to GWT, and haven't regretted the move at all. Performance wise, the precompilation has made it much faster, and the code is much more maintainable in java than in javascript. There's something nice about programmatically creating a reusable UI in a sane typed programming language instead of hacking together something in Javscript. Re:How bout something relevant... (5, Interesting) djbckr (673156) | more than 5 years ago | (#25472029) Nicely documented, pretty easy to use, high performance, relatively small code footprint (for what it does). Newer versions have properly deprecated methods that makes it easy to move from version-to-version. I shudder to think about using *anything* else for this purpose. Dojo is nice and all (probably the nicest of its kind), but it's nothing compared to GWT. Re:How bout something relevant... (2, Interesting) rufus t firefly (35399) | more than 5 years ago | (#25472333) Plus, if you're really stuck on the way Dojo looks and feels, you can just use Tatami [google.com] , which allows you to use the Dojo toolkit from inside GWT. You get the extra Dojo library bloat, but it may help someone. Re:How bout something relevant... (1) MemoryDragon (544441) | more than 5 years ago | (#25479605)! Re:How bout something relevant... (1) rufus t firefly (35399) | more than 5 years ago | (#25480573)! The point is that a lot of it is duplication, since a great deal of that functionality is already covered by GWT, and all of the dojo "boilerplate" is reproduced. For example, none of the RPC stuff is used, since GWT favors its own implementation over dojo.io.bind() or its descendants. Pure GWT is faster than GWT + Dojo, but I'm not sure by exactly how much. Re:How bout something relevant... (1) MemoryDragon (544441) | more than 5 years ago | (#25482593) Problem starts if you do not want to use java or GWT... ;-) Re:How bout something relevant... (1) rufus t firefly (35399) | more than 5 years ago | (#25483477) Problem starts if you do not want to use java or GWT... ;-) It's definitely a different approach to creating web-based applications. Far more programmatic and structured than straight up Javascript toolkits. Looking at it purely with regard to maintainability and forward porting, GWT leaves just about everything else in the dust. The downside is you have to like to code everything in Java. In the end, it's all about personal preference. GWT and jQuery (1) jDeepbeep (913892) | more than 5 years ago | (#25474849) I have to say a word of agreement here; GWT blows the pants off of anything else I can find Forgive my ignorance here, but why is Google using jQuery in their Google Code site if they could have used GWT? Re:GWT and jQuery (2, Informative) Nathanbp (599369) | more than 5 years ago | (#25484203) I have to say a word of agreement here; GWT blows the pants off of anything else I can find Forgive my ignorance here, but why is Google using jQuery in their Google Code site if they could have used GWT? Google has said that many of their older websites use JavaScript of some sort instead of GWT because GWT had not yet been created when they started work on them, and they feel it would be too much work to move them to GWT. There are some examples of newer Google sites that use GWT that you can look up. Re:How bout something relevant... (1) MemoryDragon (544441) | more than 5 years ago | (#25479601) Well GWT is bound to java and generates the code on the fly, for many this is a big reason not to touch it. I never used it adittably since my work is in another domain. But talking to a guy who extensively used it basically just resulted in a confirmation of what I suspected. He said, GWT is excellent as long as you use just what GWT provides the problems begin once you have to dig into the core of the generated javascript code and if you try to alter that one. This is machine generated code and not very readable not very touchable from the outside. But comparing Dojo and other toolkits which are pure javascript to a solution like GWT, is comparing apples and oranges. Dojo is javascript only and never will leave that domain and for that it does currently the best job of all libraries out there! GWT is javascript generated code from java classes. So go figure how inappropriate the comparison is in reality! Re:How bout something relevant... (1) zuperduperman (1206922) | more than 5 years ago | (#25476175) I had a similar problem. For about a year you had to choose between Dojo 0.4 - 0.6 which had some very poor documentation but which were obsoleted by massive breaking API changes coming in 1.0. But 1.0 had no documentation and was nearly impossible to decipher and use. So there was simply no good version of dojo to use at all. Add to that the fact that the default dojo theme just looked amateur and ugly - it was a non-starter for us. We moved to YUI which is like entering a different universe to dojo - hundreds of pages of documentation, great looking default theme, examples, entire videos on how to use it. No regrets. look at you hacker... (2, Funny) rarel (697734) | more than 5 years ago | (#25470863) Re:look at you hacker... (1) Delkster (820935) | more than 5 years ago | (#25470917) "Story" tag (0) Anonymous Coward | more than 5 years ago | (#25470949) Department Of Justice?!? (0, Offtopic) supernova_hq (1014429) | more than 5 years ago | (#25470979) Re:Department Of Justice?!? (1) MillionthMonkey (240664) | more than 5 years ago | (#25477695) Yeah the DOJ JavaScript Library continually sends AJAX requests to the FBI and the NSA in the background while your page is open. I prefer Mojo over Dojo (1) zukinux (1094199) | more than 5 years ago | (#25471101) vs. Scriptaculous? (1) blackfrancis75 (911664) | more than 5 years ago | (#25471283) Wait, I'll get some popcorn.. OK - Go! Re:vs. Scriptaculous? (1) TheCycoONE (913189) | more than 5 years ago | (#25474043) Relevance? (1, Insightful) Anonymous Coward | more than 5 years ago | (#25471313) Re:Relevance? (1) trouser (149900) | more than 5 years ago | (#25475543) Is Microsoft even relevant anymore now that Vista? I recently bought 2 other related books (3, Interesting) MarkWatson (189759) | more than 5 years ago | (#25471315) I bought "Mastering Dojo" and although I have not finished it yet, I like it. I got into using Dojo a few years ago when I was experimenting with Common Lisp back end code with a REST architectural style - and a rich client Dojo web interface. Dojo is very cool. I have also used Dojo in a Rails web app and tried it with a JSP based web app (just a test, not a real project). The other related book I bought recently is "Javascript, The Good Parts" that has made me appreciate the language more. Re:I recently bought 2 other related books (0) Anonymous Coward | more than 5 years ago | (#25472763) You just wait until "Javascriptn The Bad Parts" is out... Recently looked at Dojo, but chose jQuery (2, Interesting) gbrayut (715117) | more than 5 years ago | (#25471859) Re:Recently looked at Dojo, but chose jQuery (1) Sancho (17056) | more than 5 years ago | (#25473593) That's a ditto moment. It sounds exactly like where I was about 2 months ago. I consider a framework essential for DOM traversal these days. If you want to run in multiple browsers, you'll either be writing one yourself or using one that's pre-written. There are a lot of really good frameworks out there, but I picked jQuery for the exact same reasons you did--it's no-nonsense, low-cruft, and highly extensible. Highly recommended. Re:Recently looked at Dojo, but chose jQuery (0) Anonymous Coward | more than 5 years ago | (#25476987) We ended up using an XML to JSON converter plugin since parsing arbitrary XML can be a pain to do in a cross browser world. What plug-in did you use, and what were your experiences with it? Thanks in advance. - T Cyuo F`ail It (-1) Anonymous Coward | more than 5 years ago | (#25472399) Learning curve (1) dino213b (949816) | more than 5 years ago | (#25472475) Being a practical type, I must confess that the learning curve with Dojo has been rather steep; having said that, once you get over the first major hump - it's literally all downhill from there. But, I'm not defending Dojo. Instead, I'm complimenting the book. This book appears to solve the learning curve problem by starting with a practical tutorial and then going into guts. IMO, the biggest problem with Dojo's userbase growth has been that Dojo seems to be both large and small at the same exact time, making it difficult to get oriented. One thing that developers should keep in mind is that Dojo is very scalable; performing a custom build will whittle it down from its 37+ MB source distribution (yes, graphics included) to however low you need it (in my case, couple of hundred kilobytes - smaller than some logo images out there). In my case, I've completely embraced Dojo as a reliable way to quickly produce backend systems with it - and - more recently - front ends. But that's just work. As for fun - without Dojo, I don't think that I would have put my open source project together or released it to the public. There are so many hours in a day and I don't have time to reinvent the wheel; Dojo was there for me. For some amusing interaction between Dojo and PHP (not using the Zend framework..), see the videos / screenshots from [sourceforge.net] How the heck did you do a "backend" in Dojo? (1) HighOrbit (631451) | more than 5 years ago | (#25472695) Producing a 'backend' system with a JS library must be a neat trick, since JS is client side. However, I totally agree with you about the learning curve. I couldn't make heads-or-tails of the code. Re:How the heck did you do a "backend" in Dojo? (1) dino213b (949816) | more than 5 years ago | (#25472891) Apparently you can't make heads-or-tails of terminology either. You're thinking strict code execution, while I'm thinking conceptual division. In this case, I thought it was pretty obvious that I was referring to a CMS; I'll be sure to spell things out clearly next time. And- just for the record, neat trick indeed. JavaScript can be and is executed server-side as well as client-side. The Dojo release build system works like that. See [mozilla.org] Re:How the heck did you do a "backend" in Dojo? (1) Sancho (17056) | more than 5 years ago | (#25473657) JS is client side [wikipedia.org] Just 'cause it's fun pointing out the mistakes of others, you know? Re:Learning curve (1) MemoryDragon (544441) | more than 5 years ago | (#25479621) I fully agree here, the learning curve is really steep because dojo is so extensive. It is basically to javascript what the java runtime is to java. It is a complete coverage of the entire domain of what you need in third party libs. Also I think the learning curve used to be much steeper in the past, thanks to three excellent books (I prefer the one from pragmatic programmers though) and to the improved manuals online, which unfortunately really have a load of black holes in there! As for the build size, I dont think this is an issue. If you run custom builds you can get the thing down to sizes between 50-500kbyte no matter how big the source distribution is (have in mind the source distro has several skins, a huge load of unit tests, demos etc...) which you all remove in a custom build. Sure it takes time to invest learning it, but what you get from it is impressive! Dojo is complex for complexity's sake (2, Interesting) HighOrbit (631451) | more than 5 years ago | (#25472529) For people who want to use some simple, yet powerful JS/Ajax/CSS, I'v been recommending that they check out BrainJar [brainjar.com] . Brainjar has some pretty neat stuff that is much easier to figure out, although its random stuff and not a comprehensive toolkit. But brainjar will give you some neat ideas of the things you can to with JS and CSS. Check out the windowing demo [brainjar.com] and as a plus it won't screw with your mind like Dojo. Re:Dojo is complex for complexity's sake (1) LDoggg_ (659725) | more than 5 years ago | (#25473525) Don't think of dojo as just a collection of scripts for widgets or animations like in scriptaculous. Dojo is a HUGE library that gives you the ability to pick bits and pieces where they might be helpful and in a way that hides some of the uglier browser-specific things that you don't want to deal with. For example, using connect() to add an onclick to a DOM element will put some sanity to attributes of the mousevent object that gets passed. Sure I could write a bunch of code to get around the browser differences myself, but it's pointless when there's a free library like this to do it for me. Comparison to YUI? (1) eison (56778) | more than 5 years ago | (#25472727) How does Dojo compare to Yahoo UI (YUI)? Re:Comparison to YUI? (2, Interesting) zuperduperman (1206922) | more than 5 years ago | (#25476299) YUI is extremely well documented, has a great active forum and has (quite literally) hundreds of examples - dozens for each control / feature offered. They also offer a very nice, consistent theme that goes right across the whole library and integrates with every component. I haven't looked at dojo in a while but when I did, the documentation was *horrible*. You really had to go through a lot of pain to "grok" how it worked under the hood before you would be productive (this may have gotten better). My impression however is that it is much more cutting edge than YUI - folks doing research into new techniques are far more likely to put it into dojo than any where else (certainly not YUI etc.) - however as a result it is much less stable, less consistent and less well documented. For a full end to end framework for use in developing a commercial app I prefer YUI, because every aspect of it is mature and solid and the support from Yahoo for it is amazing. On the other hand, if you're doing something cutting edge where you really want to push the limits and use new browser features or super fancy never-before-seen effects - dojo could be the best choice. Re:Comparison to YUI? (1) MemoryDragon (544441) | more than 5 years ago | (#25479645) Btw. if you need examples and documentation on dojo there probably is no better site than dojocampus.org I am way happier with this site than with the original dojo documentation! Re:Comparison to YUI? (1) MemoryDragon (544441) | more than 5 years ago | (#25479635) YUI has mainly the objective to cover widgets, it is comparable to the Dojo dijit widget set! The code itself and how the class files are done how the widgets are initialized are pretty similar. YUI however has the better documentation. While dojos has improved it is necessary to get one of the tree dojo books to really grasp everything correctly. In the end I still would choose dojo over YUI due to the fact that dojo is so much more. Around 90% of my work in javascript definitely I just use the dojo core 10% are the widgets and YUI is more widget centered! So it depends on your needs, but I think you can even mix both, I have not seen anything in YUI which would prevent it to be mixed with other libs (unlike Prototype which is the high school bully of all javascript libraries) stoolpigeon = Jim Harmon (0) Anonymous Coward | more than 5 years ago | (#25475551) stoolpigeon = Jim Harmon Hello Jim, Enjoying free publicity? Why JavaScript at all? (0) Anonymous Coward | more than 5 years ago | (#25476085) I still don't understand why anyone would want to use JavaScript for web front-end development. It's just insane. And I think I should point out that I'm not saying this because I dislike JavaScript as a language or do not have enough experience with it to know what I'm doing. As a scripting language, I like it and have used it on the server side in a few instances where we needed the ability to dynamically run code on the server without re-deploying our application. And I'm part of a team that was tasked with re-writing our entire application as a "Web 2.0" (I hate that term) application. So lest anyone think I'm just trying to say something inflammatory, my comments come from 2 years of hands-on daily usage of JavaScript and I even understand what is involved in writing a JavaScript framework that will perform in all of our target browsers (IE6+,FF2+,Safari2+). Because at the time (roughly 2 years ago) when we set out to re-write our application, none of the available JavaScript-only libraries performed up to our needs...it seems that none of them make performance a priority. Sure most of them allow you to quickly make simple pages that are pretty to look at, but once you get to something moderately complex, the page gets unacceptably slow in at least one browser (almost always IE6, but rarely the other browsers get in on the act too). They all pretty much fall prey to one of the many "gotchas" when it comes to JavaScript (excessive object creation, excessive DOM traversal, etc). So we ended up rolling our own. It wasn't the kind of thing that could be released publicly since it relies almost entirely on convention rather than strict APIs, but with a lot of work we were able to get it to perform the way we need it to. Along the way, we've written approximately 3000-4000 jsunit tests, so that should give a pretty good picture of the combined size of the framework and application. And yet even with this accomplished, it still took an excess amount of time to develop front-end functionality and there were still a ton of bugs in the UI (mostly in IE6) that we couldn't seem to catch with automated testing. And that burden on QA was eating all their time preventing them from having the time necessary to test back-end logic in a comprehensive fashion. So about 3 months ago we determined to do another evaluation of the JavaScript libraries available. And once again, none of the pure-JavaScript libraries was up to handling our test page (not our most complex, but close to it). But what we did find was that GWT has evolved to the point where we can use it and, simply put, it smokes even our own framework when it comes to performance. We had dismissed it immediately when we did our original evaluation since it made it too difficult to embed it into existing pages, but that's no longer the case. It allows us to leverage our Java expertise, since we write our back-end in Java. It allows us to write our unit tests for the front-end in JUnit and run them along side our back-end tests in our continuous integration setup. We've cludged together a means of doing the same for JSUnit, but it requires individual "worker" machines that run the tests on each browser, which makes the tests take forever to run. As we replace all our JSUnit tests with JUnit tests, the time necessary to do a continuous integration build is steadily decreasing, and faster build times lead to less time when the build is unstable. So now we're developing all new pages in GWT and slowly replacing existing pages. And while there still are bugs in the UI, the defect rate in our GWT-based pages is at least an order of magnitude less than we had using JavaScript. And when we do get bugs, they get fixed faster since hosted mode gives us a real debugging environment. And it's this experience with GWT contrasted with the experience both writing/using our own framework and doing a ton of test pages with the various frameworks that leads me to the original question...Why would anyone want to use JavaScript anymore? With all the advantages that GWT gives without any real down side, why would anyone subject themselves to the pain of developing in JavaScript? No. (0) moniker127 (1290002) | more than 5 years ago | (#25476911) Re:No. (0) Anonymous Coward | more than 5 years ago | (#25477331) I've met a few of the contributors and I doubt there's anyone over 40 in the group. And even if there was, what does that have to do with anything? If you've ever written anything more complex than "hello world", you'd know that libraries make things less complex. Re-inventing the wheel is complicated and usually a waste of time. Reusing a tested piece of code is easy. Dojo is UN-documented (1) scurvyj (1158787) | more than 5 years ago | (#25477615) Re:Dojo is UN-documented (1) MemoryDragon (544441) | more than 5 years ago | (#25479653) For a better documentation look here [dojocampus.org] Also buy one of the books. The documentation has becomed better however on the original site. I really can recommend the pragmatic programmers book however to get a full grasp! dojo seems like a neat library but... (0) Anonymous Coward | more than 5 years ago | (#25478173) it has a fatal flaw. It is built around the idea of using the dojoType attribute to declare dojo wigets. But dojo type isn't a real attribute! Modern browsers only accept it because they have to accept all sorts of garbage for html. If you are trying to serve xhtml, it will break your shits. So, I'd say it's fatal flaw is lack of decent xhtml support. Xhtml has the concept of namespaces that should make custom elements easy... so I don't know why they've left things like this. Re:dojo seems like a neat library but... (1) LDoggg_ (659725) | more than 5 years ago | (#25478379) Cross-Domain Ajax (1) cparker15 (779546) | more than 5 years ago | (#25484255) If JSONP isn't an option for you, and you need to make use of a REST endpoint on another domain (or even subdomain), see if you can get the service provider to add Dojo's XIP [dojotoolkit.org] server files to their server. 50% ? (2, Interesting) Tablizer (95088) | more than 5 years ago | (#25493901) I browsed around the web for Dojo examples, and only about half worked. Not a good sign. Some outright crashed, others half-worked with things like text in the wrong place or only deletion working but not insertion. Reminds me of Java applets a decade ago.
http://beta.slashdot.org/story/108957
CC-MAIN-2014-35
refinedweb
6,761
71.14
Custom tools aren’t a particularly well-known technology – in fact, they are the ‘barely visible’ players of the Visual Studio infrastructure. This article describes what they are, how they are used, and gives an example for programming your own. Please note that this is a tutorial for beginners, so I won’t be showing any advanced stuff here. In order to compile/run the examples, you need Visual Studio 2008 with Service Pack 1. Please note that the Service Pack is important, since prior to it coming out, VS2008 developers wishing to write custom tools got stuck in limbo due to some conflicting UUIDs. By the way, VS2005 is fine for custom tool development, though there are tiny differences in the API. You will also need the Visual Studio 2008 SDK installed. Here are some statements which describe a custom tool: ComVisible A custom tool is a file generator because its purpose is to generate files from an existing file. The original intent of the tool was to generate just one file, but by writing some custom code, you can generate several. What’s the point? Well, how about generating a data set from an XSD? A custom tool is precisely the mechanism for it. One can think of many more uses for custom tools, such as: How do we tell a file to use a custom tool? It’s simple: select the file in the solution tree, and open the Properties window (press F4). Then, type in the name of the custom tool you want to use. Here’s how it looks: As soon as you specify the custom tool, it will run using the selected file(s) as input. If you misspelled the name, or if the custom tool is broken, Visual Studio will let you know. Provided everything went well, you’ll end up with some freshly generated files! Where do these files go? Well, they go into the code-behind. In other words, they are one level below the selected item in the solution tree. Here’s an illustration: As you can see on the above screenshot, code-behind files appear just under the item for which a custom tool was specified (e.g., Neurovisual.xml). They all have the same icon with the blue arrow – I have no idea why and, to my knowledge, there is no way to change this. Now that we know where the generated files go, a good question is when. Well, the files are (re)generated every time you save the source file (the file that uses the custom tool). You can also force a re-generation by right-clicking the file and choosing Run Custom Tool: Voilà! You’ve got your generated code. The way this magic is possible is due to the extensibility API that Visual Studio provides. Specifically, it provides us with an interface – IVsSingleFileGenerator – that a custom tool must implement. However, unfortunately, implementing this interface on a public type and compiling a DLL does not make a custom tool available in Visual Studio – some extra steps need to be taken. IVsSingleFileGenerator Visual Studio needs to be told about your tool. Since VS uses the Component Object Model (COM) for extensibility, it wants to know the Globally Unique Identifier (GUID) of each of your custom tools. This means three things: Guid In the next section, we shall go through the process of creating a custom tool. Let’s make a basic custom tool – one that counts the number of lines in a file and generates a text file with that number. Here are the steps to get the tool working: Note: Some people recommend creating integration packages instead, because they can be debugged. I haven't tested this. Microsoft.VisualStudio.Shell.Interop LineCountGenerator DefaultExtension() Generate() public int DefaultExtension(out string pbstrDefaultExtension) { pbstrDefaultExtension = ".txt"; return pbstrDefaultExtension.Length; } public int Generate(string wszInputFilePath, string bstrInputFileContents, string wszDefaultNamespace, IntPtr[] rgbOutputFileContents, out uint pcbOutput, IVsGeneratorProgress pGenerateProgress) Let’s discuss each of the arguments in turn: wszInputFilePath bstrInputFileContents wszDefaultNamespace rgbOutputFileContenst IntPtr[] System.Runtime.InteropServices.AllocCoTaskMem pcbOutout rgbOutputFileContents pGenerateProgress If everything went well, you need to return VSConstants.S_OK from the function – to get this enumeration value, you’ll need to add a reference to Microsoft.VisualStudio.Shell in your project. Or, you can just return 0 (zero). VSConstants.S_OK Microsoft.VisualStudio.Shell int lineCount = bstrInputFileContents.Split('\n').Length; Now, we use the Encoding class to get the bytes to write, as well as how many there are: Encoding byte[] bytes = Encoding.UTF8.GetBytes(lineCount.ToString()); int length = bytes.Length; Having acquired the bytes, we need to write them using the COM task allocator: rgbOutputFileContents[0] = Marshal.AllocCoTaskMem(length); Marshal.Copy(bytes, 0, rgbOutputFileContents[0], length); There is no de-allocation of the memory – Visual Studio will do it for us. All that remains now is to set the number of bytes written, and return S_OK. S_OK pcbOutput = (uint)length; return VSConstants.S_OK; We’re not done yet! All we’ve got so far is the tool functionality, we haven’t added COM support yet. Let’s do it now. nguid [Guid("A4F30983-CAD7-454C-BB27-00BCEECF2A67")] public class LineCountGenerator : IVsSingleFileGenerator { ⋮ } true Visual Studio can also handle the registration for you. Just open project properties, and select the Build tab. On the bottom, you’ll see the check box to Register for COM Interop. Check it, and you won’t need to regasm your tools while developing them (you’ll still need regasm if you plan to deploy your custom tool). SOFTWARE\Microsoft\VisualStudio\visual_studio_version\Generators\{language_guid} There are two variables here: Now that we know where to place the subkey, let’s discuss what the subkey should contain. Overall, the subkey should contain the following values: The simplest was to associate the data above with the custom tool is to associate it with an attribute, so that our Line Counter custom tool would now look as follows: [Guid("A4F30983-CAD7-454C-BB27-00BCEECF2A67")] [CustomTool("LineCountGenerator", "Counts the number of lines in a file.")] public class LineCountGenerator : IVsSingleFileGenerator The CustomTool class (courtesy of Chris Stephano, see [1]) is a simple attribute class – I will not present it here (it’s in the sample code). The only thing to note is that, unfortunately, it cannot inherit from GuidAttribute, which would have made everything look even more elegant. But now, we have a problem: how to integrate all this wonderful metadata and create Registry entries from it. CustomTool GuidAttribute [ComRegisterFunction] public static void RegisterClass(Type t) { GuidAttribute guidAttribute = getGuidAttribute(t); CustomToolAttribute customToolAttribute = getCustomToolAttribute(t); using (RegistryKey key = Registry.LocalMachine.CreateSubKey( GetKeyName(CSharpCategoryGuid, customToolAttribute.Name))) { key.SetValue("", customToolAttribute.Description); key.SetValue("CLSID", "{" + guidAttribute.Value + "}"); key.SetValue("GeneratesDesignTimeSource", 1); } } [ComUnregisterFunction] public static void UnregisterClass(Type t) { CustomToolAttribute customToolAttribute = getCustomToolAttribute(t); Registry.LocalMachine.DeleteSubKey(GetKeyName( CSharpCategoryGuid, customToolAttribute.Name), false); } I won’t go into all the plumbing here – these functions use a couple of extra methods that are just utilities for getting the Registry keys built. The important thing here is that by creating these two functions, we make the custom tool self-register with Visual Studio. After compilation, there are just two steps remaining. You must register the assembly for COM interop, and place it in the Global Assembly Cache (GAC). The order of these two operations is unimportant. regasm YourCustomTool.dll and to unregister, you would call it with the /u switch: regasm /u YourCustomTool.dll gacutil /i YourCustomTool.dll Assembly removal is done with the /u switch, and you must remember to remove the .dll ending, as the tool wants the assembly display name, not the file name. gacutil /u YourCustomTool If you have set up your project to COM-register your custom tool automatically, feel free to add the call to gacutil to the post-build step. Please note, however, that you might have to specify the full path to the gacutil.exe program. Well, that pretty much covers the steps necessary to get your own custom tool working. Let’s recap the steps needed to produce a custom tool. One of the problems of the IVsSingleFileGenerator is that it only makes a single code-behind file. Sometimes, we might want to have several. Luckily, a fellow by the name of Adam Langley created a solution [2] to this problem that allows our custom tool to create several files. His example is particularly interesting – he shows a generator that takes an HTML file and adds all the images it refers to as code-behind files. For the sake of completeness, I will describe that solution briefly – feel free to read the original article if you are interested. Here’s a brief reminder about the class you need to derive your custom tools from to get multi-file generation capability. The class is called VsMultiFileGenerator<T>. This class is an enumerator, and the T generic parameter is defined by your subclass. The type can be anything: this generic parameter is mainly for you to process how you see fit. The most sensible choice is to define it as a string. VsMultiFileGenerator<T> T string The abstract methods you need to override are as follows: IEnumerator<T> GetEnumerator() string GetFileName(string element) byte[] GenerateContent(string element) byte[] GenerateSummaryContent() string GetDefaultExtension() GetDefaultExtension() I promised to explain about the ‘summary content’, so here goes. Basically, the multiple-file generator is a single-file generator that also does extra things (such as, you know, generate additional files). After it does that, however, it is forced to create at least one file the old-fashioned way to satisfy the IVsSingleFileGenerator interface contract. This isn’t always so great – for example, if you are writing an adaptive generator that does not know the types of files it will create until it’s actually executed, you’re in trouble – you’ll end up with an extra file being added to the code-behind (because you must have one with a defined extension). This is a cosmetic problem, though, and does not break functionality in any way. And, if you decide to be clever and supply VS with null data and a length of 0 (zero), you will get an error dialog box. Don’t say I didn’t warn you! To see an example use of the multi-file generator, you can take a look at the multi-file XSL transformer I wrote [3]. There are a few things that need to be mentioned with respect to custom tools. First, integration with source control doesn’t always work the way you want it to. The generated files do seem to be added to source control normally, but sometimes, you might run into situations where some people will see them and others won’t. I have no clue how the mechanics of this work – I have only seen it in SourceSafe, so I kind of hope that TFS is better at handling them. Anyways, a custom tool for XML->XSL transforms did get used on a commercial project – successfully. Just so you know. If you are wondering what the difference between a custom tool and just a plain boring VS add-in is, well, there isn’t much! In fact, add-ins are better because they do not generate the spurious ‘summary’ file. The Custom Tool mechanism is primarily designed for 1->1 file transforms that happen almost automatically (when you save, mainly). You can also program identical Save-triggered functionality into an add-in. My advice is to use custom tools for basic transforms (e.g., getting a preview of a class as it is serialized). For anything serious, it’s better to write an add-in. A custom tool doesn’t have to be specified explicitly for a file. Instead, you can associate it with a file extension. For example, Visual Studio does it for the .tt file extension. This allows any file saved with a .tt extension to be executed by the Text Templating processor (also known as T4). Making your own association is easy – when writing the information to the Registry, instead of making a subkey with the name of the custom tool (e.g., MyGenerator), specify the extension of the files that the custom tool will always be applied to (e.g., .myfile). Don’t forget the dot before the extension itself! I hope that this article has demonstrated that custom tools aren’t that difficult to program. Sure, there are a few steps that need to be taken, but I’ve described them all, so hopefully there’s nothing preventing you from writing your own custom tool if you need one. If you liked this article, please vote for it. If you did not, please vote anyways, and let me know what I could have done better..
http://www.codeproject.com/Articles/31257/Custom-Tools-Explained?fid=1531696&df=90&mpp=50&noise=5&prof=True&sort=Position&view=None&spc=Relaxed
CC-MAIN-2016-18
refinedweb
2,121
55.24
advance java advance java i want to refer advanced java book can u plz suggest me auther name You can learn from the following books: 1)Advanced Java, Gajendra Gupta 2)Core servlets and JavaServer Pages,Marty Hall Advance and Core JAVA Topics Advance and Core JAVA Topics topics come under core java and topics come under advanced java? Under Core Java, following topics comes... Sockets Design Pattern JSP Servlets Java Beans Session Management For more CoreJava Project CoreJava Project Hi Sir, I need a simple project(using core Java, Swings, JDBC) on core Java... If you have please send to my account  ... application logic then the java compiler does not have knowledge in advance where...: An immutable class is a class to which values assigned to the variables advance java of that application. The ServletContext can also be used by the servlets to share...advance java what is servlet context?why its used?in what situation...: corejava - Java Beginners corejava how to retriving data from html to servlet?how send the data servlet to text.file hai friend, By using FormBeans we can... information. Thanks Amardeep core java - Java Beginners core java how to create a login page using only corejava(not servlets,jsp,hibernate,springs,structs)and that created loginpage contains database(ms-access) the database contains data what ever u r enter and automatically date corejava - Java Interview Questions Core Java vs Advance Java Hi, I am new to Java programming and confuse around core and advance java Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava ; Core java Interview Question page1 An immutable... in the constructor. Core java Interview Question Page2 A Java... of an interface. Core Java Interview Question Page3 Generally Java code and specification u asked - Java Beginners code and specification u asked you asked me to send the requirements... can build in java are extensive and have plenty of capability built in. We... in any application that can impress the heck out of people. Just as you?d Java Servlets ). It is safe and doest not display the data. It can send large amount of data. Servlet...Java Servlets If the binary data is posted by both doGet and doPost then which one is efficient?Please give me some example by using both doGet Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava Core Java Interview Questions Page3 Q 1. How can I get the full path of Explorer.exe ? Ans : Generally Java sandbox does not allow to obtain this reference so servlets servlets how can I run java servlet thread safety program using tomcat server? please give me a step by step procedure to run the following program...?); DataInputStream d=new DataInputStream(new FileInputStream(r)); acct.bal=d.readInt can u plz try this program - Java Beginners can u plz try this program Write a small record management... operation. Thanks in advance Hi friend, form code.... --------------------- <%@ page language="java CoreJava CoreJava Sir, What is the difference between pass by value and pass by reference. can u give an example corejava - Java Beginners Deadlock Core Java What is Deadlock in Core Java?  ... at the same time . To avoid this problem java has a concept called synchronization... block, function or class can be declared as a synchronized one. Hello Advance Java Training Program Advance Java Training Program Hi, Can anyone guide me how to study advance Java in few day's Please visit the following link: Java Tutorials Through the above link, you will get the links of advanced java tutorials Core Java Core Java Hi, can any one please send me a code to count the dupicates charaters from a string. Thanks a lot in advance!! The given code accepts the string from the user and display the occurrence of each character corejava - Java Beginners Tutorials for Core Java beginners Can anyone share their example of Encapsulation in java? I'm a core Java beginner. Hi,Here is the description of Encapsulation in java:Encapsulation is a process of binding Core Java Core Java Hi, Can any one please tell me the program to print the below matrix in a spiral order. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Thanks a lat in advance core java core java Dear Friends, I am new to java and i faced one program.Can... a b c d a b c d e a b c d e f a b c d e a b c d a b c a b a how can we get... new to java and i faced one program.Can any one just help me getting output Advance Java training topics Advance Java training topics 1. Programming with Servlets & JSP...; JSP and Servlets Explain the use of directives on JSPs. Implement core java - JSP-Servlet one thing that using servlets we can do that. But Our Project Manager told me... not aware of this concept... Could you pls... Help me Thanks in advance...core java Thank you very much for your fast reply.. Now Thank U - Java Beginners Thank U Thank U very Much Sir,Its Very Very Useful for Me. From SUSHANT CORE JAVA CORE JAVA CORE JAVA PPT NEED WITH SOURCE CODE EXPLANATION CAN U ?? Core Java Tutorials corejava - Java Beginners What is Dynamic Binding What is Dynamic Binding in Core Java? Hi,Dynamic Binding:It is the way that provide the maximum functionality...:// then post your Core java - Java Beginners Core java Hello sir/madam, Can you please tell me why multiple inheritance from java is removed.. with any example.. Thank you in advance for your valuable answer Hi, I am sending core java core java Hi, can any one expain me serialization,Deseralization and exterenalization in core java core java core java Hi, can any one exain me the concept of static and dynamic loading in core java corejava - Java Interview Questions corejava how can we make a narmal java class in to singleton class corejava - Java Beginners corejava hai this is jagadhish. I have a doubt on corejava.How many design patterns are there in core java? which are useful in threads?what r... for more information: advance java question advance java question how to develop servlets without using any IDE... information, visit the following links: corejava - Java Interview Questions singleton java implementation What is Singleton? And how can it get implemented in Java program? Singleton is used to create only one...://- - - - - - - -;) Meeya core java - Java Beginners core java pl. tell me about call by value and call by reference... System.out.println("Massage 2: i= " + i + ", d= " + d); Double(i, i); //Java... the caller?s reference variables, i.e. make them point to different objects, but it can Core Java Core Java Hi, Can any one please share a code to print the below: 1 23 456 78910 thanks a lot in advance Reply Me - Java Beginners class (which contain your database information). Check previous I send U two...Reply Me Hi Rajnikant, I know MVC Architecture but how can use this i don't know... please tell me what is the use core java - Java Beginners ????????????? plzzzzzzzzzzz help me its very urgent in advance thanks If you want to sort...core java Hi Guys, what is the difference between comparable... this will help u advance java advance java application to demonstrate email validator core java - Java Beginners core java "Helo man&sir can you share or gave me a java code hope.... core java jsp servlet Friend use Core JAVA .thank you so much.hope you... enter the number of salespeople to be processed: 5(sample u input) Enter servlets , visit the following links: servlets what are filters in java servlets what are filters in java  ... functionality to the servlets apart from processing request and response paradigm... application, and they can be applied to any resources like HTML, graphics, a JSP page advance java advance java give the sourse code for sky high institute of management using j2ee Core Java Core Java Hi, Can any one please tell me is if possible to access the private member from out side of the class or How to make possible for a base class to access the private member of it's parent class Core Java Core Java can any one please tell me the difference between Static and dynamic loading in java??? The static class loading is done... type identification. Also called as reflection. can any one explain send me example of jmsmq - JMS send me example of jmsmq please send me example about jmsmq (java microsoft message queuing ) library Can someone help me with this? Can someone help me with this? I have this project and i dont know how to do it. Can someone help me? please? Write a java class named "PAMA..., Multiply, Divide) Help me please! Thanks in advance! import core java core java 1.Given: voidwaitForSignal() { Object obj = new Object... statement is true? A. This code can throw an InterruptedException. B. This code can throw an IllegalMonitorStateException. C. This code can throw Advance Java Advance Java Context ctx = new InitialContext(); if(ctx==null) { throw new RuntimeException("JNDI Context not Found"); } ds=(DataSource)ctx.lookup Core Java Core Java Hi, can any one please expain me below topics: 1.How to handle memory leakage from your developed code. 2.How to handle stack overflow error from your developed code. Thnaks Advance Java Training Advance Java Training 1. Programming with Servlets & JSP... and Servlets. To explain the use of directives on JSPs. Implementing servlets servlets what is ResultSet? ResultSet is a java object... set would contain this table of data and each row can be accessed one by one. we can use the resultset.get() methods to get the data from Core Java Core Java Hi, can any plese help me solve below problem. I have 2 hash map where key is String.I want to store the value of both the hashmap...) } please help me to solve Core Java Core Java Hi, I am trying to remove duplicated charater from a given string without using built in function, but getting some issue in that. can anyone please share the code for that??? Thanks a lot in advance!!   core java Kapoor...... how can i do it....plz help me.......its very urgent...core java hello sir....i have one table in which i have 3 columns First Name,Middle Name,Last Name...............but i have to show single name Jobs at Rose India with your job. You can work in Core Java and we will provide you training in advance java to make you advance Java programmer. So, if you know Core Java... Core Java Jobs at Rose India   Core Java Core Java Hi, can one please share the code to count the occurance...++; } System.out.println(c+"="+count); } } Thnx Deepak:) can you please explain me the code shared above interview Help Core Java interview questions with answers Hi friend minor project on service center management in core java minor project on service center management in core java I need a minor project on service center management in core java...If u have then plz send me Core Java - Java Beginners Core Java Can u give real life and real time examples of abstraction, Encapsulation,Polymarphism....? I guess you are new to java and new... ,, and u will find everything about java here for sure Hi .Again me.. - Java Beginners Hi .Again me.. Hi Friend...... can u pls send me some code...... REsponse me.. Hi friend, import java.io.*; import java.awt....:// Thanks. I am sending running code core Java Collections - Java Beginners core Java Collections what is Vector? send me any example what.... It contains components that can be accessed using an integer index like an array. The size of a Vector can grow or shrink as needed to accommodate adding and removing CAN U HELP ME TO CODE IN JSP FOR ONLINE VOTING SYSTEM CAN U HELP ME TO CODE IN JSP FOR ONLINE VOTING SYSTEM can u help me to code in jsp for online voting system java servlets - Java Beginners page send code to me...java servlets i want to close window when ever he logout from one page... this closing is done with yes/no option. when ever press "yes servlets - Java Beginners that image in page from the db.pls any one send me code for that in servlets asap.pls vary urgent. i will be thankful to you pls send me core java - Java Beginners core java Hi guys, String class implements which interface plzzzzzzzzzzzzz can any body tell me its very very urgentttttttttttttttt Thanks String implements the Serializable, CharSequence servlets - JSP-Servlet files required) /web.xml /addstudent.html After executing the java files place the entire Student Project in Tomcat/Webapps. can u plzz explain me what...servlets hi deepak, u had replied to me as First in Student core java - Java Beginners core java Hi, if two interfaces having same method can that method will override i mean can we override methods in interface.plzzzzzzzzzzz can any body help me plzzzzzzzzzzzzzzzzzz... Hi java auto mail send - Struts java auto mail send Hello, im the beginner for Java Struts. i use java struts , eclipse & tomcat. i want to send mail automatically when... scheduler.It referesh the server can send mail after specify interval. For more Send me Binary Search - Java Beginners Send me Binary Search how to use Binary think in java give me the Binary Search programm thx.. Hi friend, import java.io....)); } } ----------------------------------------- Read for more information. can I take input? hai.... u can take input through command line or by using buffered reader. An wexample for by using... information : Thanks core java core java how can we justify java technology is robust Core Java - Java Beginners Core Java How can we explain about an object to an interviewer .... An object is a combination of messages and data. Objects can receive and send... to : java 2 d array program java 2 d array program write a program 2-d matrix addition through user's input? Hi Friend, Try the following code: import java.util.*; class MatrixAddition{ public static void main(String[] args After changing url params d req checkboxes are not showing as clicked in jsf programming - Java Server Faces Questions u have given recently. I want to modify d url parameters(like clientId...; } } /*********************Library.java*******over****************/ Here u can see the method of Library.java... After changing url params d req checkboxes are not showing as clicked in jsf send me javascript code - Java Beginners send me javascript code please send me code javascript validation code for this html page.pleaseeeeeeeee. a.first:link { color: green;text-decoration:none; } a.first:visited{color:green;text Core Java - Java Beginners Core Java Hi Sir/Madam, Can u please explain about the Double in java. I have problem with Double datatype. public class DoubleTesting { public static void main(String[] args) { Double amt=137.17*100 java questions - Java Interview Questions java questions HI ALL , how are all of u?? Plz send me the paths of java core questions and answers pdfs or interview questions pdfs... the interview for any company for <1 year experience thanks for all of u in advance servlets - Java Beginners servlets Hello! am doin my servlet course at niit, i want know in detail methods of servlets,and where can we implement it i.e. i want to know... explain me the separate examples with one method implemented please give me following output Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://www.roseindia.net/tutorialhelp/comment/65522
CC-MAIN-2015-40
refinedweb
2,610
65.62
Data Mining: Finding Similar Items and Users How to find related items? Here are recipes based on really simple formulas. If you pay attention, this technique is used all over the web (like on Amazon) to personalize the user experience and increase conversion rates. Because we want to give kick-ass product recommendations. To get one question out of the way: there are already many available libraries that do this, but as you’ll see there are multiple ways of skinning the cat and you won’t be able to pick the right one without understanding the process, at least intuitively. Defining the Problem # To find similar items to a certain item, you’ve got to first define what it means for 2 items to be similar and this depends on the problem you’re trying to solve: - on a blog, you may want to suggest similar articles that share the same tags, or that have been viewed by the same people viewing the item you want to compare with - Amazon has this section called “customers that bought this item also bought”, which is self-explanatory - a service like IMDB, based on your ratings, could find users similar to you, users that liked or hated approximately the same movies you did, thus giving you suggestions on movies you’d like to watch in the future In each case you need a way to classify these items you’re comparing, whether it is tags, or items purchased, or movies reviewed. We’ll be using tags, as it is simpler, but the formula holds for more complicated instances. Redefining the Problem in Terms of Geometry # We’ll be using my blog as sample. Let’s take some tags: ["API", "Algorithms", "Amazon", "Android", "Books", "Browser"] That’s 6 tags. Well, what if we considered these tags as dimensions in a 6-dimensional Euclidean space? Then each item you want to sort or compare becomes a point in this space, in which a coordinate (representing a tag) is either one (tagged) or zero (not tagged). So let’s say we’ve got one article tagged with API and Browser. Then its associated point will be: [ 1, 0, 0, 0, 0, 1 ] Now these coordinates could represent something else. For instance they could represent users. If say you’ve got a total of 6 users in your system, 2 of them rating an item with 3 and 5 stars respectively, you could have for the article in question this associated point (do note the order is very important): [ 0, 3, 0, 0, 5, 0 ] So now you can go ahead and calculate distances between these points. For instance you could calculate the angle between the associated vectors, or the actual euclidean distance between the 2 points. For a 2-dimensional Euclidean space, here’s how it would look like: Euclidean Distance # The mathematical formula for the Euclidean distance is really simple.. Here’s some Ruby code: # Returns the Euclidean distance between 2 points # # Params: # - a, b: list of coordinates (float or integer) # def euclidean_distance(a, b) sq = a.zip(b).map{|a,b| (a - b) ** 2} Math.sqrt(sq.inject(0) {|s,c| s + c}) end # Returns the associated point of our tags_set, relative to our # tags_space. # # Params: # - tags_set: list of tags # - tags_space: _ordered_ list of tags def tags_to_point(tags_set, tags_space) tags_space.map{|c| tags_set.member?(c) ? 1 : 0} end # Returns other_items sorted by similarity to this_item # (most relevant are first in the returned list) # # Params: # - items: list of hashes that have [:tags] # - by_these_tags: list of tags to compare with def sort_by_similarity(items, by_these_tags) tags_space = by_these_tags + items.map{|x| x[:tags]} tags_space.flatten!.sort!.uniq! this_point = tags_to_point(by_these_tags, tags_space) other_points = items.map{|i| [i, tags_to_point(i[:tags], tags_space)] } similarities = other_points.map{|item, that_point| [item, euclidean_distance(this_point, that_point)] } sorted = similarities.sort {|a,b| a[1] <=> b[1]} return sorted.map{|point,s| point} end And here is the test you could do, and btw you can copy the above and the bellow script and run it directly: # SAMPLE DATA all_articles = [ { :article => "Data Mining: Finding Similar Items", :tags => ["Algorithms", "Programming", "Mining", "Python", "Ruby"] }, { :article => "Blogging Platform for Hackers", :tags => ["Publishing", "Server", "Cloud", "Heroku", "Jekyll", "GAE"] }, { :article => "UX Tip: Don't Hurt Me On Sign-Up", :tags => ["Web", "Design", "UX"] }, { :article => "Crawling the Android Marketplace", :tags => ["Python", "Android", "Mining", "Web", "API"] } ] # SORTING these articles by similarity with an article # tagged with Publishing + Web + API # # # The list is returned in this order: # # 1. article: Crawling the Android Marketplace # similarity: 2.0 # # 2. article: "UX Tip: Don't Hurt Me On Sign-Up" # similarity: 2.0 # # 3. article: Blogging Platform for Hackers # similarity: 2.645751 # # 4. article: "Data Mining: Finding Similar Items" # similarity: 2.828427 # sorted = sort_by_similarity( all_articles, ['Publishing', 'Web', 'API']) require 'yaml' puts YAML.dump(sorted) The Problem (or Strength) of Euclidean Distance # Can you see one flaw with it for our chosen data-set and intention? I think you can - the first 2 articles have the same Euclidean distance to [“Publishing”, “Web”, “API”], even though the first article shares 2 tags with our chosen item, instead of just 1 tag as the rest. To visualize why, look at the points used in calculating the distance for the first article: [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1] [1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1] So 4 coordinates are different. Now look at the points used for the second article: [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1] [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1] Again, 4 coordinates are different. So here’s the deal with Euclidean distance: it measures dissimilarity. The coordinates that are the same are less important than the coordinates that are different. For my purpose here, this is not good - because articles with more tags (or less) tags than the average are going to be disadvantaged. Cosine Similarity # This method is very similar to the one above, but does tend to give slightly different results, because this one actually measures similarity instead of dissimilarity. Here’s the formula:\[similarity= cos(\theta)= \frac{A \cdot B}{\|A\| \|B\|}= \frac{ \displaystyle\sum_{i=1}^{n} A_i B_i }{ \sqrt{ \displaystyle\sum_{i=1}^{n} A_i^2 } \sqrt{ \displaystyle\sum_{i=1}^{n} B_i^2 } }\] If you look at the visual with the 2 axis and 2 points, we need the cosine of the angle theta that’s between the vectors associated with our 2 points. And for our sample it does give better results. The values will range between -1 and 1. -1 means that 2 items are total opposites, 0 means that the 2 items are independent of each other and 1 means that the 2 items are very similar (btw, because we are only doing zeros and ones for coordinates here, this score will never get negative for our sample). Here’s the Ruby code (leaving out the wiring to our sample data, do that as an exercise): def dot_product(a, b) products = a.zip(b).map{|a, b| a * b} products.inject(0) {|s,p| s + p} end def magnitude(point) squares = point.map{|x| x ** 2} Math.sqrt(squares.inject(0) {|s, c| s + c}) end # Returns the cosine of the angle between the vectors #associated with 2 points # # Params: # - a, b: list of coordinates (float or integer) # def cosine_similarity(a, b) dot_product(a, b) / (magnitude(a) * magnitude(b)) end Also, sorting the articles in the above sample gives me the following: - article: Crawling the Android Marketplace similarity: 0.5163977794943222 - article: "UX Tip: Don't Hurt Me On Sign-Up" similarity: 0.33333333333333337 - article: Blogging Platform for Hackers similarity: 0.23570226039551587 - article: "Data Mining: Finding Similar Items" similarity: 0.0 Right, so much better for this chosen sample and usage. Ain’t this fun? BUT, you guessed it, there’s a problem with this too … The Problem with Our Sample; The Tf-Idf Weight # Our data sample is so simple that we could have simply counted the number of common tags and use that as a metric. The result would be the same without getting fancy with Cosine Similarity :-) Clearly a tag such as “Heroku” is more specific than a general purpose tag such as “Web”. Also, just because Jekyll was mentioned in an article, that doesn’t make the article about Jekyll. Also an article tagged with “Android” may be twice as Android-related as another article also tagged with “Android”. So here’s a solution to this: the Tf-Idf weight, a statistical measure used to evaluate how important a word is to a document in a collection or corpus. With it you can give values to your coordinates that are much more specific than simple ones and zeros. But I’ll leave that for another day. Also, related to our simple data-set here, perhaps an even simpler metric, like the Jaccard index would be better. Pearson Correlation Coefficient # The Pearson Correlation Coefficient for finding the similarity of 2 items is slightly more sophisticated and doesn’t really apply to my chosen data-set. This coefficient measures how well two samples are linearly related. For example, on IMDB we may have 2 users. One of them, lets call him John, has given the following ratings to 5 movies: [1, 2, 3, 4, 5]. The other one, Mary, has given the following ratings to the same 5 movies: [4, 5, 6, 7, 8]. The 2 users are very similar, as there is a perfect linear correlation between them, since Mary just gives the same rankings as John plus 3. The formula itself or the theory is not very intuitive though. But it is simple to calculate:\[r = \frac{ \displaystyle\sum_{i=1}^{n} a_i b_i - \frac{ \displaystyle\sum_{i=1}^{n} a_i \displaystyle\sum_{i=1}^{n} b_i }{n} }{ \sqrt{ (\displaystyle\sum_{i=1}^{n} a_i^2 - \frac{(\displaystyle\sum_{i=1}^{n} a_i)^2}{n}) \cdot (\displaystyle\sum_{i=1}^{n} b_i^2 - \frac{(\displaystyle\sum_{i=1}^{n} b_i)^2}{n}) } }\] Here’s the code: def pearson_score(a, b) n = a.length return 0 unless n > 0 # summing the preferences sum1 = a.inject(0) {|sum, c| sum + c} sum2 = b.inject(0) {|sum, c| sum + c} # summing up the squares sum1_sq = a.inject(0) {|sum, c| sum + c ** 2} sum2_sq = b.inject(0) {|sum, c| sum + c ** 2} # summing up the product prod_sum = a.zip(b).inject(0) {|sum, ab| sum + ab[0] * ab[1]} # calculating the Pearson score num = prod_sum - (sum1 *sum2 / n) den = Math.sqrt((sum1_sq - (sum1 ** 2) / n) * (sum2_sq - (sum2 ** 2) / n)) return 0 if den == 0 return num / den end puts pearson_score([1,2,3,4,5], [4,5,6,7,8]) # => 1.0 puts pearson_score([1,2,3,4,5], [4,5,0,7,8]) # => 0.5063696835418333 puts pearson_score([1,2,3,4,5], [4,5,0,7,7]) # => 0.4338609156373132 puts pearson_score([1,2,3,4,5], [8,7,6,5,4]) # => -1 Manhattan Distance # There is no one size fits all and the formula you’re going to use depends on your data and what you want out of it. For instance the Manhattan Distance computes the distance that would be traveled to get from one data point to the other if a grid-like path is followed. I like this graphic from Wikipedia that perfectly illustrates the difference with Euclidean distance: Red, yellow and blue lines all have the same length and the distance is bigger than the corresponding green diagonal, which is the normal Euclidean distance. Personally I haven’t found a usage for it, as it is more related to path-finding algorithms, but it’s a good thing to keep in mind that it exists and may prove useful. Since it measures how many changes you have to do to your origin location to get to your destination while being limited to taking small steps in a grid-like system, it is very similar in spirit to the Levenshtein distance, which measures the minimum number of changes required to transform some text into another.
https://alexn.org/blog/2012/01/16/cosine-similarity-euclidean-distance/
CC-MAIN-2022-33
refinedweb
2,039
58.32
Hey, I am having a very noob(ish) problem. I would like to use the MathNet.Numerics Library. When I add the following to my code (only that line to code that works otherwise) and add MathNet.Numerics.dll to my references in MonoDeveloper I can build the file. However when I go to run (push the play button) my unity project I get the following error. using MathNet.Numerics.IntegralTransforms; error CS0246: The type or namespace name `MathNet' could not be found. Are you missing a using directive or an assembly reference? I have the MathNet.Numerics.dll file in the same folder as UnityEngine.dll. I've tried putting the MathNet.Numerics.dll file into my project folder (Assets) but that give the following error. TypeLoadException: Could not load type 'MathNet.Numerics.Algorithms.LinearAlgebra.ILinearAlgebraProvider' from assembly 'MathNet.Numerics, Version=2.5.0.27, Culture=neutral, PublicKeyToken=null'. I don't understand what the problem is. Please Help. Thanks Answer by thebebinator · Apr 11, 2016 at 07:33 AM I got this working. Steps: 1) Create a folder called Assets/Plugins in your project. 2) go to and download the latest version of MatNet.Numerics.dll in zip format. 3) open the folder called Net35. Unity apparently only runs on this version of .net. 4) copy BOTH MathNet.Numerics.dll AND System.Threading.dll into Assets/Plugins. Note: Don't touch anything in MonoDevelop. it should reference it automatically. For me i was installing the newer .net version and I didn't realize it also needed System.Threading.dll. Once I got it in the above format it worked fine. FYI: I'm on El Capitan OSX11.4 and Unity 5.3.4f1 personal. Hope that helps! Adam Hi @thebebinator, your solution also worked for me! Many thanks. - Max I got this working using your instructions @thebebinator . Thanks! By the way: Make sure to use a MathNet.Numerics version compatible with .NET 3.5. In this OneDrive folder you can find 3.20.2 for instance. Answer by HackAndSlashDev · Dec 01, 2015 at 10:40 AM Well, this answer helped find the answer. I was able to get the Math.NET Numerics 3.9 running on unity on osx El Capitan. Here are the steps I took: Install NuGet in Monodevelop using this tutorial Search for and install Math.NET Numerics using the NuGet plugin. That will also install System.Threading for which you'll have to accept a license agreement. Close monodevelop. I copied "packages/TaskParallelLibrary.1.0.2856.0/lib/Net35" to a folder I named Assets/Libs. Then I also copied the contents of "packages/MathNet.Numerics.3.9.0/lib/net35" to "Assets/Libs/Net35". For correctness sake I placed the readmes and licenses under "Assets/Libs". Finally I deleted the packages directory, and the file Assets/packages.config That's it. You can test it by dragging the following script onto an object: using UnityEngine; using System.Collections; using Matrix = MathNet.Numerics.LinearAlgebra.Matrix<double>; using Vector = MathNet.Numerics.LinearAlgebra.Vector<double>; public class MathTest : MonoBehaviour { // ************************************************************************* void Start() { Matrix A = Matrix.Build.DenseOfArray(new double[,] { {1,0,0}, {0,2,0}, {0,0,3}}); Vector b = Vector.Build.Dense(new double[] {1, 1, 1}); Vector x = A.Solve(b); Debug.Log("x = A^-1b: " + x); } } Answer by randomuser · May 23, 2013 at 11:03 PM You don't add the reference in monodevelop, you add the .dll to your unity project in the editor, which should then add it for use via the "using" statement. Here is the documentation for it: Answer by TriangleKing · May 24, 2013 at 10:10 AM MathNet requires .NET 4, which unity's version of monodevelop doesn't support. If you manage to get it working, please tell me! I've been trying to use Iridium's stable distribution, but it's giving me bogus numbers. Iridium was discontinued in 2008 so I'm thinking it has lots of bugs. And I really don't want to implement my own probability distributions. I've managed to get Math.Net working in a Unity3d project and because the steps weren't super clear, I've also created a sample GitHub repo with a working test. Answer by TheD0ctor · Dec 14, 2016 at 06:52 AM You could use Math.NET reference an enum from outside a built DLL 1 Answer Why can't I build Web Player in Unity while I have no problems with building standalone versions? 2 Answers InControl does not work as a managed DLL. 0 Answers Importing a DLL that contains C#-wrapped C++ Code 0 Answers DLL not working when running .exe 1 Answer
https://answers.unity.com/questions/462042/unity-and-mathnet.html
CC-MAIN-2018-43
refinedweb
780
53.88
One of the key problems in human-computer interactions is the ability of the computer to understand what a person wants.. What is a LUIS app?. Key concepts - What is an utterance? An utterance is the textual input from the user, that your app needs to interpret. It may be a sentence, like "Book me a ticket to Paris", or a fragment of a sentence, like "Booking" or "Paris flight." Utterances aren't always well-formed, and there can be many utterance variations for a particular intent. See Add example utterances for information on training a LUIS app to understand user utterances. - What are intents? Intents are like verbs in a sentence. An intent represents actions the user wants to perform. It is a purpose or goal expressed in a user's input, such as booking a flight, paying a bill, or finding a news article. You define a set of named intents that correspond to actions users want to take in your application. A travel app may define an intent named "BookFlight", that LUIS extracts from the utterance "Book me a ticket to Paris". - What are entities? If intents are verbs, then entities are nouns. An entity represents an instance of a class of object that is relevant to a user’s intent. In the utterance "Book me a ticket to Paris", "Paris" is an entity of type location. By recognizing the entities that are mentioned in the user’s input, LUIS helps you choose the specific actions to take to fulfill an intent. See Entities in LUIS for more detail on the types of entities that LUIS provides. Plan your LUIS app, a "BookFlight" intent could trigger an API call to an external service for booking a plane ticket, which requires entities like the travel destination, date, and airline. See Plan your app for examples and guidance on how to choose intents and entities to reflect the functions and relationships in an app. Build and train a LUIS app Once you have determined which intents and entities you want your app to recognize, you can start adding them to your LUIS app. See create a new LUIS app, for a quick walkthrough of creating a LUIS app. For more detail about the steps in configuring your LUIS app, see the following articles: - Add intents - Add utterances - Add entities - Improve performance using features - Train and test - Use active learning - Publish You can also watch a basic video tutorial on these steps. Improve performance using active learning Once your application is deployed and traffic starts to flow into the system, LUIS uses active learning to improve itself. In the active learning process, LUIS identifies the utterances that it is relatively unsure of, and asks you to label them according to intent and entities. This process has tremendous advantages. LUIS knows what it is unsure of, and asks for your help in the cases that lead to the maximum improvement in system performance. LUIS learns quicker, and takes the minimum amount of your time and effort. This is active machine learning at its best. See Label suggested utterances for an explanation of how to implement active learning using the LUIS web interface. Configure LUIS programmatically LUIS offers a set of programmatic REST APIs that can be used by developers to automate the application creation process. These APIs allow you to author, train, and publish your application. Integrate LUIS with a bot It's easy to use a LUIS app from a bot built using the Bot Framework, which provides the Bot Builder SDK for Node.js or .NET. You simply reference the LUIS app as shown in the following examples: Node.js // Add a global LUIS recognizer to your bot using the endpoint URL of your LUIS app var model = ''; bot.recognizer(new builder.LuisRecognizer(model)); C# // The LuisModel attribute specifies your LUIS app ID and your LUIS subscription key [LuisModel("2c2afc3e-5f39-4b6f-b8ad-c47ce1b98d8a", "9823b65a8c9045f8bce7fee87a5e1fbc")] [Serializable] public class TravelGuidDialog : LuisDialog<object> { // ... The Bot Builder SDK provides classes that automatically handle the intents and entities returned from the LUIS app. For code that demonstrate how to use these classes, see the following samples: Integrate LUIS with Speech Your LUIS endpoints work seamlessly with Microsoft Cognitive Service's speech recognition service. In the C# SDK for Microsoft Cognitive Services Speech API, you can add the LUIS application ID and LUIS subscription key, and the speech recognition result is sent for interpretation. See Microsoft Cognitive Services Speech API Overview.
https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/Home
CC-MAIN-2017-43
refinedweb
747
61.97
Today was my second workshop with Kent - there are 8 over the next number of weeks and I'm going to blog briefly about each of them. I aim to return to these posts and flesh them out more over time - potentially making individual posts with the sections that get longer. For now, these are my notes during the workshop and immediately following it. These workshops are starting in the late afternoon for me and ending at 8.30pm. By the end, I was tired so revisiting those sections is going to be important. This workshop was about the most common React hooks that are used in app development - useState, useEffect and useRef. After Kent's initial sharing of the expectations, we jumped into our breakout rooms and got started. The actual exercise was pretty straight-forward. I write React most days at the moment and so these are the tools you'd expect to reach for. Good to remember that if we use a prop to initialise the component, which is then modified in state, that component won't be re-rendered if the initial variable is changed. function Greeting({ initialName = '' }) { const [name, setName] = React.useState(initialName) function handleChange(event) { setName(event.target.value) } return ( <div> <form> <label htmlFor="name">Name: </label> <input onChange={handleChange} </form> {name ? <strong>Hello {name}</strong> : 'Please type your name'} </div> ) } function App() { return <Greeting key="" initialName="" /> } You can use the key prop on a component to trigger a re-initialisation. If you change the key, React will re-render. There was an interesting discussion about whether useState should be for a single value or an object of values. There is an argument that you should probably more to useReducer at that point. Kent referenced a blogpost (here)[] about that and assures us we'll explore more of it later. Interesting explanation for why this is important. React components should be idempotent - no matter how many times the operation is run the final effect should be the same. If we want to have any side effects, we need this hook. I enjoyed the exercise and the extra credit - I haven't done a lot with localStorage so this was good. Use-effect is to get the state of the world to be in-sync with the state of our application. React is a way to render UI, manage state and update UI based on that state. setCount(c=>c+1) A custom hook is just a regular function that calls and uses other hooks. An interesting hook-flow example and diagram. A really common issue is needing to share state between sibling components. We can move the state up to the parent in order to share it between them. If things change though and we find that we have a single component that needs the state, we should colocate that state within the relevant component. Kent has a blog post for it here We can categorize our state into different types. Blogpost on managed and derived state here If you have an async callback on set state, it is typically better to do something like this: setSquares(currentSquares => { const squaresCopy = [...currentSquares] squaresCopy[square] = calculateNextValue(currentSquares) return squareCopy }) If you are using the previous state to update the state (particularly async) always use this function form. useRefand useEffect: DOM interaction useRef - you can change a value without triggering a re-render. A ref is an object with a current property which can mutated freely. We can use the ref to target a dom node which is useful for implementing JS libraries that act directly on the DOM. You have something that changes over time but you don't want to trigger a re-render. Another use is here on Dan Abarmov's blog. HTTP requests are side effects - we are bringing our app and the outside world into sync.
https://www.kevincunningham.co.uk/posts/react-hooks/
CC-MAIN-2020-29
refinedweb
644
63.7
User Details - User Since - Apr 15 2014, 12:19 PM (373 w, 4 d) Sat, May 29 Yes, this looks good to me. Thanks for this fix! Fri, May 28 Hm so this is definitely something that we'd want in 12.0.1? @tstellar it doesn't change any ABI, so should be perfectly safe. I'll create a PR. May 4 2021 May 3 2021 Reverted due to the influence this had on some other test cases. I'll do a full check-all and go over them one by one, but it will take some time... Update comment and commit message, and add mips64 test. Add test case, and update comment to explicitly mention ld.lld. Apr 21 2021 Welcome, Kate. Apr 7 2021 Apr 6 2021 Apr 4 2021 Apr 3 2021 Apr 2 2021 Squash commits to convince Phabricator. Address review comments. Apr 1 2021 Mar 25 2021 My previous update accidentally dropped the changes to SCCP.cpp. Squashed both local revisions to get both the functionality change and the test case. Mar 24 2021 LGTM, but this would probably need some sort of test, at least that it correctly accepts the flag(s) now? LGTM Looks like a good idea to me. Mar 20 2021 Added a bugpoint-reduced test case. (I could only get clang and opt to trigger errors on a Linux machine by building with LLVM_USE_SANITIZER=Address.) Ok, I have not been able to make either llc or opt show any addressanitizer errors with the .ll output from the .cpp test case. Is it OK to check in a .cpp test case instead? That would require clang at regression test time though, and I'm unsure if that is available? Mar 15 2021 Hm, I've reduced the test case to the following C++ source: Mar 14 2021 Mar 8 2021 Oh please s/FreeVBD/FreeBSD in the description before committing :) Yes please, this warning has annoyed me for ages. I think we can't get rid of this include entirely, since we need to get at the declarations even if they are deprecated. Feb 15 2021 Note, this is mainly aimed at getting the 12.0.0 release built, as I had to apply a custom hack to turn the operators on. But I would like to get rid of the hack and have this applied in a way acceptable to the libc++ maintainers. Jan 28 2021 Jan 22 2021 LGTM. Jan 19 2021 @sylvestre.ledru removed the minor version from the binary (on purpose, I think?) in rGa8b717fda42294d1c8e1f05d71280503e5839f14: Jan 8 2021 LGTM (this mirrors what is in libcxx/include/__config btw) Nov 28 2020 Address comments from review: - Move FreeBSD specific include under common block - Use #if #elif #endif for OS specific implementations Oct 30 2020 Yes, this looks pretty fine to me, but indeed needs a test. Oct 16 2020 Oct 9 2020 Sep 10 2020 Sep 3 2020 Sep 2 2020 Aug 29 2020 Aug 26 2020 Aug 24 2020 Please note, I did a follow-up commit in rG47b0262d3f82, to address compile errors I received from the x86_64-linux-debian build bot: This has been obsoleted by D86397 (committed as rGcde8f4c164a2 with follow-up rG47b0262d3f82). Aug 22 2020 After this change, it turns out that some of the errors that this results in can be very confusing. For example, when building ocaml, it uses a .S file () containing: Aug 21 2020 As @JDevlieghere suggests, only instantiate Error objects when necessary. Aug 15 2020 Hm, the static FrameHeaderCache ProcessFrameHeaderCache; is biting us here, at least in a multithreaded process like lld, which is the process that is crashing for me all the time now. I added some instrumentation, which shows the problem by adding a thread ID to each log message, and by adding an assert in FrameHeaderCache::add() that MostRecentlyUsed != nullptr: I turned on _LIBUNWIND_DEBUG_FRAMEHEADER_CACHE, and indeed such segfaults appear immediately after the message libunwind: FrameHeaderCache reset. That means FrameHeaderCache::MostRecentlyUsed has just been set to nullptr ... It seems that after this change, I'm getting sporadic lld segfaults in libunwind, which look like: Aug 10 2020 For me, this fixes the assertion for both the original test case (a fully preprocessed ASTReader.cpp) and the minimized test case from Aug 9 2020 Aug 3 2020 Aug 2 2020 Hm, this review's still open after two years, and even as of 2020-08-02 clang still crashes on the sample. :) Aug 1 2020 Note that as of rG1db4318766256f25a03ef80af8dbb3f99743ebe9, applying this change results in the Passed testcases increasing from 49259 to 49666, so more than 400 Failed test cases are fixed by it. Jul 31 2020 Jul 19 2020 Jul 16 2020 LGTM.
https://reviews.llvm.org/p/dim/
CC-MAIN-2021-25
refinedweb
787
71.44
It is always a fear that one day you might end up leaking your API key in a public git repository. In flutter is there are many ways of hiding the API key some are not working and some don't work properly in this article I will be showing you one way to work with API keys in flutter. Let's see how we can do it We will be using this package If you are using it in a project with null safety dependencies: flutter_dotenv: ^4.0.0-nullsafety.0 If you are using it in a project without null safety dependencies: flutter_dotenv: ^3.1.0 then create a file in the root directory called .env For those of you who don't know what a .envfile is it is basically a file in which we store secret variables. In the .env file you can add your secret API keys in this format SUPER_SECRET_API_KEY=This is a super secret API key THIS_CAN_BE_CALLED_ANYTHING=This here can be anything like ut4ihyeFn49 Important: Never commit these .env files in your version control. If you are using git version control system add the .envfile to .gitignore After making this .env file add it as an asset in the pubspec.yaml assets: - .env Then run flutter pub get In your main.dart file load the .env file import 'package:flutter_dotenv/flutter_dotenv.dart' as DotEnv; Future main() async { await DotEnv.load(fileName: ".env"); //...runapp } Now in your code you can load the variables from the .env file anywhere like this. import 'package:flutter_dotenv/flutter_dotenv.dart'; env['SUPER_SECRET_API_KEY']; That's it, thanks for reading hope this short article helps! Top comments (8) Thanks for writing down how developers can avoid this common pitfall :) Now I would like to recommend you to read my answer in StackOverflow to the question How to protect Flutter app from reverse engineeringto understand the other threats involved with using an API key in a mobile app. My answer is split in sections: Found one more answer I gave in StackOverflow to a question with the title Securely Saving API Keys In Android (flutter) Apps, that is also split in sections: Feel free to ask here questions about any doubt you may have after reading it. This won’t make your api key safe, still very easy to get it, for example, hacker can just unzip you android package, then your asset folder will show up, next thing is just read your .env file content You'd assume the "build" version of the ENV will only contain the variables that are needed to run the app, rather than everything you might have like signing entitlements etc Yes but if you are putting it on Github public repo it is at least safer. Using envars is common practice of how to inject security things inside some app code, its not flutter specific. But still this is not solving the core problem. How will I deliver the service account/api keys to mobile app to use it in secure way when someone just download it from google play store? Nice recommendation, thanks! Your welcome! No use. Still showing up when do decompile
https://dev.to/aadityasiva/protecting-api-keys-in-flutter-619
CC-MAIN-2022-40
refinedweb
529
70.13
No matter what sort of selling you do (real estate, advertising, car sales, cold calling) the trick is to PREPARE YOURSELF with good rebuttals and killer comebacks! ... and you'll find some new, fresh and EFFECTIVE comeback responses using this sales Guide than can be obtainable from any individual sales book you might discover on Amazon. Download your copy now to be equipped to close more sales within hours of quickly going through the sales Guide -- and we have a 90 day 100% money-back guarantee to back that up. These modern comebacks and rebuttals are DIFFERENT and BETTER than the common scripts. Picture yourself assertively running successfully through a sales call for the reason that you are entirely 100% sure that you will outwit any sales objection a customer could present to you ... If you can feel this great on all of your calls, it would be a good decision to invest in this sales Guide if it is really as good as everyone says it is, wouldn't it? ... prove it for yourself. The sales guide has 49 pages do not help yourself, and decide not to download this sales Guide today, you will then be another sad sales guy/girl who didn't prepare with 152 clever comeback replies that skillfully overcome sales objections such as: These will have your buyer saying "YES." Can you see the power? You will receive these as well as 145 more intelligent comebacks and rebuttals. If you learn just two new things from this sales. auto sales technique | car selling tip | closing argument sample | cold calling cowards | cold sales call pitch sample | direct sales tool | import sales lead | negotiation skill article | negotiation technique | negotiation tip | online negotiation course | phone sales script | retail sales training book | salary negotiation strategy | salary negotiation technique | sales lead solutions.com | sales performance training | sales script done deal | sales training course | sales training toronto | sales training workshop | selling skill | selling technique | telemarketing sales lead | telesales technique
http://www.acceleratedsoftware.net/sales-techniques/sales-team-training.html
crawl-003
refinedweb
329
54.76
15 January 2013 18:06 [Source: ICIS news] Correction: In the ICIS news story headlined "US propylene contracts for January settle 15 cents/lb higher" dated 15 January 2013, please read in the sixth paragraph …up 36% from 53.25 cents/lb in mid-November… instead of … up 36% from 53.25 cents/lb in mid-December…. A corrected story follows. HOUSTON (ICIS)--US propylene contracts rose by 15 cents/lb ($331/tonne, €248/tonne) for January, lifted by a surge in spot prices in the past few weeks and strong demand, market sources said on Tuesday. The 26% increase puts polymer-grade propylene (PGP) contracts at 73.00 cents/lb and chemical-grade propylene (CGP) contracts at 71.50 cents/lb. A large increase for January had been expected after spot prices surged in the past few weeks on the back of plant outages, including an unexpected shutdown at PetroLogistics’ propane dehydrogenation (PDH) plant in ?xml:namespace> Several US producers originally had nominated increases of 11.50 and 13.00 cents/lb for January, but one of the those suppliers later bumped the initiative to 15.00 cents/lb as spot prices continued to rise while negotiations got under way. PGP for January traded on Monday at 72.25 cents/lb, up 36% from 53.25 cents/lb in mid-November, while refinery-grade propylene (RGP) traded at 69.00 cents/lb on Tuesday, rising by nearly 40% from deals done at 49.50-50.00 cents/lb four weeks earlier. The surge in the price of RGP, which accounts for about 60% of the Market sources also cited stronger demand, saying PGP supply restrictions resulting from unplanned outages created additional demand for RGP as a feedstock for the higher-grade monomer. PGP demand also strengthened on its own, sources said, adding that buyers flocked to the market in the second half of December once it became clear that a massive increase for propylene was in the pipeline for January. With the January settlement completed, market attention quickly will shift to February, but sources said the outlook for next month is unclear, noting that it is too soon to tell how the January increase will affect demand. Another source said trends in the Major The main buyers include Dow Chemical, INEOS, Ascend Performance Materials and Total. (
http://www.icis.com/Articles/2013/01/15/9631988/corrected-us-propylene-contracts-for-january-settle-15-centslb-higher.html
CC-MAIN-2015-14
refinedweb
387
62.98
Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava with a single SQL statement does not make sense from a database management point of view... Core Java Interview Questions Page3  ... makes a class available to the class that imports it so that its public variables  ... a class having all its methods abstract i.e. without any implementation. That means... specialized, it means that its internal behavior may be quite complex and pose un Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava Core Java Interview Questions Page1  ... to return its string content. If an object refers to a String or other type by using...; A Java interface is an abstract data type like a class having all its methods corejava - Java Beginners arguments to the method call is passed to the method as its parameters. Note very...corejava pass by value semantics Example of pass by value semantics in Core Java. Hi friend,Java passes parameters to methods using pass corejava corejava Creating the object using "new" and usins xml configurations whih one is more useful?why corejava - Java Interview Questions Core Java vs Advance Java Hi, I am new to Java programming and confuse around core and advance java corejava - Java Interview Questions corejava how to validate the date field in Java Script? Hi friend, date validation in javascript var dtCh= "/"; var minYear=1900; var maxYear=2100; function isInteger(s){ var i; for (i = 0 Hi.. - Struts .....its very urgent Hi Soniya, I am sending you a link. This link...Hi.. Hi, I am new in struts please help me what data write...-bean.tld: This tag library contains tags useful in accessing beans Ple help me its very urgent Ple help me its very urgent Hi.. I have one string 1)'2,3,4' i want do like this '2','3','4' ple help me very urgent Very Very Urgent -Image - JSP-Servlet with some coding, its better.. PLEASE SEND ME THE CODING ASAP BECAUSE ITS VERY VERY...Very Very Urgent -Image Respected Sir/Madam, I am... called "View Database". When I click this button, a small pop up window gets CoreJava Project CoreJava Project Hi Sir, I need a simple project(using core Java, Swings, JDBC) on core Java... If you have please send to my account Interview question - Java Interview Questions Interview question Hi Friends, Give me details abt synchronization in interview point of view. I mean ow to explain short and neat. Thanks corejava - Java Interview Questions What is View? What is View? What is View? Hi, A SQL View is a virtual table, which is based on SQL SELECT query. Essentially a view is very close...), except for the fact that the real tables store data, while the views don?t. The view view view hi iam writing some struts application by using dyanafalidator form in place of actionform bean classes i can enter data by using some jsp file and it will insert to database by using business logic. now my requirement hi... - Struts also its very urgent Hi Soniya, I am sending you a link. I hope...hi... Hi Friends, I am installed tomcat5.5 and open the browser and type the command but this is not run please let me Model View Architecture - JSP-Interview Questions Model View Architecture Describe the architectural overview of Model view architecture? Hi friend, Model-view-controller (MVC... rules used to manipulate the data, the view corresponds to elements of the user Hi - Struts please help me. its very urgent Hi friend, Some points to be remember...Hi Hi Friends, I want to installed tomcat5.0 version please help me i already visit ur site then i can't understood that why i installed Open Source Point of sale ; HP Bundles, Brands its Turnkey Point-of-Sale System Small-scale... this powerful and elegant POS program at a very aggressive price point...Open Source Point of sale point of sale Point of sale, or POS Interview Questions and Answers obviously expects you to look at things from their point of view. No employer... for the interview as well as the interview board itself. It is very common... Interview Questions and Answers   Help Very Very Urgent - JSP-Servlet requirements.. Please please Its Very very very very very urgent... Thanks/Regards, R.Ragavendran.. Hi friend...Help Very Very Urgent Respected Sir/Madam, I am sorry..Actually Hi - Hibernate Interview Questions Hi please send me hibernate interview questions: corejava - Java Beginners What is Dynamic Binding What is Dynamic Binding in Core Java? Hi,Dynamic Binding:It is the way that provide the maximum functionality to a program for a specific type at runtime. There are two type of binding first interview questions - Java Interview Questions interview questions for Java Hi Any one can u please post the interview point of questions.Warm Regards, Satish Floating-point the decimal point). Generally scientific notation is used only for very...-state.edu/~dws/grouplinks/floating_point_math.pdf) has a very detailed... Java NotesFloating-point Floating-point numbers are like real numbers Hi... - Java Beginners ,its execute the aa.jsp and bb.jsp. i think its useful for u...Hi... Hi friends, I hv two jsp page one is aa.jsp & bb.jsp... but this not working Upload Record please tell me Hi Please help me... its very urgent Please help me... its very urgent Please send me a java code to check whether INNODB is installed in mysql... If it is there, then we need to calculate the number of disks used by mysql java - Java Interview Questions /beginners/ Thanks hi!! in a very simplest way, class is a block...java Difference between class and object? Hi Friend... be used simply. It acts like a blue print. It is the central point of OOP Populate Menus In Tree View - Struts Populate Menus In Tree View Hi all, i am writing one simple... explorer when we click in a item its subitem get expanded including "plus sign" as prefix.In the same way i am thinking to bring the menus in a tree view in java - JSP-Interview Questions java hi.. snd some JSP interview Q&A and i wnt the JNI(Java Native Interface) concepts matrial thanks krishna Hi friend, Read more information. hint - Java Interview Questions hint Dear roseindia, i want the java interview question and the corresponding question answers. Hi Friend, Please visit the following link: Here you will get lot interview questions - EJB interview questions in Java Need interview questions in Java Hi,Please check the following linksJ2EE tutorial and documentations: Questions: Interview Questions - Struts Interview Questions Interview Questions - Struts Interview Questions.... The controller is responsible for selecting the next view based on user input... the browser, invoke a business operation and coordinating the view to return Tree View with database - Java Beginners Tree View with database Hi, I'm working with Swing.I have to construct a SplitPane with JTree on the leftside and its details which is stored... for this plz... Thanks Hi Friend, Try the following code: import Java Interview - Java Interview Questions Java Interview Please provide some probable Java interviewe Question. Thanking you. Pabitra kr debanth. Hi friend, I am sending.... Thanks iOS View iOS View What is the Difference between View's bounds and frame in iOS? Difference between View's bounds and frame in iOS The frame of a view is the rectangle, expressed as a location (x,y) and size (width,height java - Java Interview Questions java i want to java&j2ee interview questions. Regards Akhilesh Kumar Hi friend, I am sending you a link. This link will help you. Read for more information. Difficult Interview Questions Page -10 Difficult Interview Questions Page -10  ... using your current job as a reference point. Question 92: How to do measure... person to person. But the ideal answer may like: " According to my point Java - Java Interview Questions Interview interview, Tech are c++, java, Hi friend, Now get more information. you can lean all type interview question by following link link. Point Style code. - Java Beginners Point Style code. hi , I am Nagarjuna ,this is my first qestion in this forum, pls to this question through the mail. my E- mail id:bodduna.nagarjuna@gmail.com Q: I am working on GIS project.I want to read java - Java Interview Questions modifiers Hi Just to let you know, you can find these difference anywhere while searching on google but if you want you can learn it in a interview preparation manner as well.. I have a java interview question URL from where you hi roseindia - Java Interview Questions hi roseindia advantages of object oriented progamming language? Hi friend, Some advantages of object oriented progamming language : 1) A provides modular structure . 2) The programming can be done with real HR Interviews,HR Interview Guide,HR Interview ; HR Interview is very important in getting a job, this page will guide you in getting sure success. HR Interview Advice - Interview the Interviewer...; Interview Questions and Answers Here are some very popular interview questions java - Java Interview Questions ??? Its very urgent plzzzzz any boby tell me...java Hi guys, why we can not use super and this in static block.If i will use what will happen??? For e.g.suppose a super class Move Curve Control Point Move Curve Control Point  ... control point and manipulates the shape of curve. The Control points.... On stretching and rotating the control point, the shape of the curve and the angle java questions - Java Interview Questions java questions HI ALL , how are all of u?? Plz send me the paths of java core questions and answers pdfs or interview questions pdfs... the interview for any company for <1 year experience thanks for all of u in advance Simple Program Very Urgent.. - JSP-Servlet Simple Program Very Urgent.. Respected Sir/Madam, I am... coding asap because its most urgent.. Thanks/Regards, R.Ragavendran.. Hi friend, Step To solve the problem: "winopenradio.jsp" 1 Hi... - Struts Hi... Hi, If i am using hibernet with struts then require... of this installation Hi friend, Hibernate is Object-Oriented mapping tool that maps the object view of data into relational database and provides efficient C - Java Interview Questions ; Hi Friend, Output is 63. As you have set the value 7 to the variable i.The i++ increments the number after its use therefore it uses 7... to 8.. ++i = 9. Since at this point i was incremented to 8 and since JSP Tutorials Resource - Useful Jsp Tutorials Links and Resources by progressing from very simple examples to complex examples. For best progress... working. The early examples might seem very simple; please have patience... is useful (and commonly done) for a number of reasons: The Web Global Positioning Systems and its uses Global Positioning Systems and its uses Global Positioning Systems... more useful These objects include vehicles, mobile phones, some other electronic.... With its help, they can find out that who are the most productive employees j2ee - Java Interview Questions Bejjanki. Refer SCWCD Book , you get idea about mvc and its use Hi friend, Swing architecture is rooted in the model-view-controller (MVC... and delegated its look-and-feel implementation to separate view and controller Common Interview Questions and Their Answers. Common Interview Questions and Their Answers... for successful interviews is to remember how you feel. Yes, it is your nth interview.... It is a good idea to rehearse the interview session beforehand. The most jsp - JSP-Interview Questions is very useful for my viva presentation. Hi JavaServer Pages (JSP JSF Interview Questions JSF Interview Questions What does component mean and what are its...: JSF contains its basic set of UI components like text Java Architecture Over View patterns Java originally made its debut on client machine and browser... are very important for organization so it may kept to another place.   Core Java Interview Question, Interview Question Core Java Interview Question Page 9  ... is a very simple example just to illustrate how to use the second approach to creating... of Thread.States. A thread can only be in one of the following states at a given point hi - XML Thanks Rajanikant Hi friend, It is a format to share data... who read the feed on its computer. Then it uses an RSS reader or its browser java - Java Interview Questions /interviewquestions/ Here you will get lot of interview questions...java hello sir this is suraj .i wanna ask u regarding interview... ...or any of the sites....which handles these kind of questions... Hi OOPS - Java Interview Questions ? explain with an example? Hi friend, Association... another to perform an action on its behalf. This relationship is structural... Aggregation. Thanks Hi friend, Aggregation is a special kind Useful SEO Articles - 5 Killer Tips for Writing SEO Articles for Home-Based Online Business Ideas strategy which can be adopted at this point is that of using website keywords... as this will guarantee your home-based online business ideas and its corresponding SEO... into headlines with HI tag and keywords second in importance to prime keywords
http://www.roseindia.net/tutorialhelp/comment/2142
CC-MAIN-2014-10
refinedweb
2,172
57.57
4.52 2021-05-04 [ FIX ] - sort hash keys for deterministic behaviour (GH #245, GH #246) 4.51 2020-10-01 [ DOCUMENTATION ] - Document support for SameSite=None cookies in CGI::Cookie (GH #244) 4.50 2020-06-22 [ ENHANCEMENT ] - Add APPEND_QUERY_STRING option (GH #243, thanks to stevenh) 4.49 2020-06-08 [ FIX ] - remove deprecation warning as no longer in core (GH #221) 4.48 2020-06-02 [ FIX ] - fix CGI::Cookie->bake() doesn't work with mod_perl redirects (GH #240) - thanks to sherrardb for the PR (GH #241) 4.47 2020-05-01 [ FIX / TESTING ] - fix typo in variable name (GH #239) 4.46 2020-02-03 [ DOCUMENTATION ] - Document support for SameSite=None cookies (GH #238) 4.45 2019-06-03 [ ENHANCEMENT ] - Add support for SameSite=None cookies (GH #237, thanks to Dur09) 4.44 2019-06-03 [ ENHANCEMENT ] - Replace only use of "base" with "parent" (GH #235) 4.43 2019-05-01 [ FIX / TESTING ] - support unquoted multipart/form-data name values (GH #234) 4.42 2019-03-26 [ DOCUMENTATION ] - clarify licence also in Makefile.PL (GH #232) 4.41 2019-03-26 [ DOCUMENTATION ] - clarify licence (GH #232) 4.40 2018-08-15 [ FIX / TESTING ] - support perls < 5.10.1 in Makefile.PL by being more dynamic (GH #229, GH #230, thanks to Aristotle) 4.39 2018-08-13 [ FIX / TESTING ] - specify CONFIGURE_REQUIRES in Makefile.PL so can use TEST_REQUIRES to build with older perls (GH #228) 4.38 2017-12-01 [ TESTING ] - command_line.t: Avoid -I for libs (GH #224, thanks to cpansprout) 4.37 2017-11-01 [ FIX ] - Fix incorrect quoting of ? in ->url (GH #112, GH #222, with thanks to Reuben Thomas) 4.36 2017-03-29 [ ENHANCEMENT ] - Support PATCH HTTP method (thanks to GovtGeek for the... patch) - pass through max_age and samesite to CGI::Cookie->new in the call in CGI->cookie (GH #220) [ FIX ] - skip t/command_line.t on windows as it doesn't work 4.35 2016-10-13 [ FIX ] - revert changes from 4.34 as they broke stuff 4.34 2016-10-13 [ ENHANCEMENT ] - If running from the command line, url_param now picks up parameters given on then command line or on stdin (GH #210) [ DOCUMENTATION ] - documentation for above addition 4.33 2016-09-16 [ DOCUMENTATION ] - clarify that ->param will return the first value if there are multiple values (when not called in list context) 4.32 2016-07-19 [ DOCUMENTATION ] - make perldoc CGI object consistent (GH #205) - clarify reason for absolute URLs (GH #206) [ INTERNALS ] - tweak dependency defs in Makefile.PL (GH #207, GH #208) - (thanks to karenetheridge and kentfredric) 4.31 2016-06-14 [ FEATURES ] - Add SameSite support to Cookie handling (thanks to pangyre) [ INTERNALS ] - The MultipartBuffer package has been renamed to CGI::MultipartBuffer. This has been done in a way to ensure any $MultipartBuffer package variables are still set correctly in CGI::MultipartBuffer. if you are explicitly using MultipartBuffer in a form such as: MultipartBuffer->new your code will break. you should be calling: CGI->new->new_MultipartBuffer( $boundary,$length ); to ensure the correctly package is called. if you are extending the MultipartBuffer package though use of ISA or base (or parent) then you will need to update your code to use CGI::MultipartBuffer - fake using strict and warnings to appease CPANTS Kwalitee - require File::Temp v0.17+ to get seekable file handles (GH #204) 4.28 2016-03-14 [ RELEASE NOTES ] - please see v4.21 Changes for any potentially impacting changes [ SPEC / BUG FIXES ] - undef %QUERY_PARAM in initialize_globals to clean mod_perl env [ TESTING ] - improve test coverage on request types (GH #199, GH #200) - improve test coverage on CGI::Carp 4.27 2016-03-02 [ RELEASE NOTES ] - please see v4.21 Changes for any potentially impacting changes [ INTERNALS ] - fix a couple of warnings in test harness - add taint flag to example file_upload - fix a warnings in STORE subroutine 4.26 2016-02-04 [ RELEASE NOTES ] - please see v4.21 Changes for any potentially impacting changes [ SPEC / BUG FIXES ] - sort HTML attributes by default (GH #106, GH #196) [ DOCUMENTATION ] - clarifications about HTML function non removal 4.25 2015-12-17 [ RELEASE NOTES ] - please see v4.21 Changes for any potentially impacting changes [ DOCUMENTATION ] - fix link to CONTRIBUTING file (thanks to Manwar for the fix) - clarify that "soft" deprecation means that the HTML functions are deprecated but will not raise any deprecation warnings [ SPEC / BUG FIXES ] - make the list context warning only happen once per process (or thread) to prevent excessive log noise in long running or in persistent processes (thanks to @dadamail for the suggestion) 4.23 2015-12-17 [ RELEASE NOTES ] - Documentation fixes only - please see v4.21 Changes for any potentially impacting changes [ DOCUMENTATION ] - add LICENSE file and LICENSE info to Makefile.PL 4.22 2015-10-16 [ RELEASE NOTES ] - Documentation fixes only - please see v4.21 Changes for any potentially impacting changes [ DOCUMENTATION ] - fix typos in CONTRIBUTING file - links to docs, stackoverflow and perlmonks - clarify deprecation policy on HTML functions (GH #188) - mention HTML::Tiny in CGI::HTML::Functions (thanks to osfameron for the suggestion) 4.21 2015-06-16 [ RELEASE NOTES ] - CGI.pm is now considered "done". See also "mature" and "legacy" Features requests and non-critical issues will be outright rejected. The module is now in maintenance mode for critical issues only. - This release removes the AUTOLOAD and compile optimisations from CGI.pm that were introduced into CGI.pm twenty (20) years ago as a response to its large size, which meant there was a significant compile time penalty. - This optimisation is no longer relevant and makes the code difficult to deal with as well as making test coverage metrics incorrect. Benchmarks show that advantages of AUTOLOAD / lazy loading / deferred compile are less than 0.05s, which will be dwarfed by just about any meaningful code in a cgi script. If this is an issue for you then you should look at running CGI.pm in a persistent environment (FCGI, etc) - To offset some of the time added by removing the AUTOLOAD functionality the dependencies have been made runtime rather than compile time. The POD has also been split into its own file. CGI.pm now contains around 4000 lines of code, which compared to some modules on CPAN isn't really that much - This essentially deprecates the -compile pragma and ->compile method. The -compile pragma will no longer do anything, whereas the ->compile method will raise a deprecation warning. More importantly this also REMOVES the -any pragma because as per the documentation this pragma needed to be "used with care or not at all" and allowing arbitrary HTML tags is almost certainly a bad idea. If you are using the -any pragma and using arbitrary tags (or have typo's in your code) your code will *BREAK* - Although this release should be back compatible (with the exception of any code using the -any pragma) you are encouraged to test it throughly as if you are doing anything out of the ordinary with CGI.pm (i.e. have bugs that may have been masked by the AUTOLOAD feature) you may see some issues. - References: GH #162, GH #137, GH #164 [ SPEC / BUG FIXES ] - make the list context warning in param show the filename rather than the package so we have more information on exactly where the warning has been raised from (GH #171) - correct self_url when PATH_INFO and SCRIPT_NAME are the same but we are not running under IIS (GH #176) - Add the multi_param method to :cgi export (thanks to xblitz for the patch and tests. GH #167) - Fix warning for lack of HTTP_USER_AGENT in CGI::Carp (GH #168) - Fix imports when called from CGI::Fast, restores the import of CGI functions into the callers namespace for users of CGI::Fast (GH leejo/cgi-fast#11 and GH leejo/cgi-fast#12) - Fix regression of tmpFileName when calling with a plain string (GH #178, thanks to Simon McVittie for the report and fix) [ FEATURES ] - CGI::Carp now has $CGI::Carp::FULL_PATH for displaying the full path to the offending script in error messages - CGI now has env_query_string() for getting the value of QUERY_STRING from the environment and not that fiddled with by CGI.pm (which is what query_string() does) (GH #161) - CGI::ENCODE_ENTITIES var added to control which chracters are encoded by the call to the HTML::Entities module - defaults to &<>"' (GH #157 - the \x8b and \x9b chars have been removed from this list as we are concerned more about unicode compat these days than old browser support.) [ DOCUMENTATION ] - Fix some typos (GH #173, GH #174) - All *documentation* for HTML functionality in CGI has been moved into its own namespace: CGI::HTML::Functions - although the functionality continues to exist within CGI.pm so there are no code changes required (GH #142) - Add missing documentation for env variable fetching routines (GH #163) [ TESTING ] - Increase test coverage (GH #3) [ INTERNALS ] - Cwd made a TEST_REQUIRES rather than a BUILD_REQUIRES in Makefile.PL (GH #170) - AutoloadClass variables have been removed as AUTOLOAD was removed in v4.14 so these are no longer necessary (GH #172 thanks to alexmv) - Remove dependency on constant - internal DEBUG, XHTML_DTD and EBCDIC constants changes to $_DEBUG, $_XHTML_DTD, and $_EBCDIC 4.13 2014-12-18 [ RELEASE NOTES ] - CGI::Pretty is now DEPRECATED and will be removed in a future release. Please see GH #162 () for more information and discussion (also GH #140 for HTML function deprecation discussion:) [ TESTING ] - fix t\rt-84767.t for failures on Win32 platforms related to file paths 4.11 2014-12-02 [ SPEC / BUG FIXES ] - more hash key ordering bugs fixed in HTML attribute output (GH #158, thanks to Marcus Meissner for the patch and test case) [ REFACTORING ] - escapeHTML (and unescapeHTML) have been refactored to use the functions exported by the HTML::Entities module (GH #157) - change BUILD_REQUIRES to TEST_REQUIRES in Makefile.PL as these are test dependencies not build dependencies (GH #159) [ DOCUMENTATION ] - replace any remaining uses of indirect object notation (new Object) with the safer Object->new syntax (GH #156) 4.10 2014-11-27 [ SPEC / BUG FIXES ] - favour -content-type arg in header if -type and -charset options are also passed in (GH #155, thanks to kaoru for the test case). this change also sorts the hash keys in the rearrange method in CGI::Util meaning the order of the arrangement will always be the same for params that have multiple aliases. really you shouldn't be passing in multiple aliases, but this will make it consistent should you do that [ DOCUMENTATION ] - fix some typos 4.09 2014-10-21 [ RELEASE NOTES ] - with this release the large backlog of issues against CGI.pm has been cleared. All fixes have been made in the versions 4.00 and above so if you are upgrading from 3.* you should thoroughly test your code against recent versions of CGI.pm - an effort has been made to retain back compatibility against previous versions of CGI.pm for any fixes made, however some changes related to the handling of temporary files may have consequences for your code - please refer to the RELEASE NOTES for version 4.00 and above for all recent changes and file an issue on github if there has been a regression. - please do *NOT* file issues regarding HTML generating functions, these are no longer being maintained (see perldoc for rationale) [ SPEC / BUG FIXES ] - tweak url to DTRT when the web server is IIS (RT #89827 / GH #152) - fix temporary file handling when dealing with multiple files in MIME uploads (GH #154, thanks to GeJ for the test case) 4.08 2014-10-18 [ DOCUMENTATION ] - note that calling headers without a -charset may lead to a nonsensical charset being added to certain content types due to the default and the workaround - remove documentation stating that calls to escapeHTML with a changed charset force numeric encoding of all characters, because that does not happen - documentation tweaks for calling param() in list context and the addition of multi_param() [ SPEC / BUG FIXES ] - don't sub out PATH_INFO in url if PATH_INFO is the same as SCRIPT_NAME (RT #89827) - add multi_param() method to allow calling of param() in list context without having to disable the $LIST_CONTEXT_WARN flag (see RELEASE NOTES for version 4.05 on why calling param() in list context could be a bad thing) 4.07 2014-10-12 [ RELEASE NOTES ] - please see changes for v4.05 [ TESTING ] - typo and POD fixes, add test to check POD and compiles 4.06 2014-10-10 [ RELEASE NOTES ] - please see changes for v4.05 [ DOCUMENTATION ] - make warning on list context call of ->param more lenient and don't warn if called with no arguments 4.05 2014-10-08 [ RELEASE NOTES ] - this release includes *significant* refactoring of temporary file handling in CGI.pm. See "Changes in temporary file handling" in perldoc - this release adds a warning for when the param method is called in list context, see the Warning in the perldoc for the section "Fetching the value or values of a single named parameter" for why this has been added and how to disable this warning [ DOCUMENTATION ] - change AUTHOR INFORMATION to LICENSE to please Kwalitee [ TESTING ] - t/arbitrary_handles.t to check need for patch in RT #54055, it turns out there is no need - the first argument to CGI->new can be an arbitrary handle - add test case for incorrect unescaping of redirect headers (RT #61120) - add tests for the handle method (RT #85074, thanks to TONYC@cpan.org) [ SPEC / BUG FIXES ] - don't set binmode on STDOUT/STDERR/STDIN if a none standard layer is already set on them on none UNIX platforms (RT #57524) - make XForms:Model data accesible through POSTDATA/PUTDATA param (RT #75628) - prevent corruption of POSTDATA/PUTDATA when -utf8 flag is used and use tempfiles to handle this data (RT #79102, thanks anonymous) - unescape request URI *after* having removed the query string to prevent removal of ? chars that are part of the original URI (and were encoded) (RT #83265) - fix q( to qq( in CGI::Carp so $@ is correct interpolated (RT #83360) - don't call ->query_string in url unless -query is passed (RT #87790) (optimisation and fits the current documented behaviour) 4.04 2014-09-04 [ RELEASE NOTES ] - this release removes some long deprecated modules/functions and includes refactoring to the temporary file handling in CGI.pm. if you are doing anything out of the ordinary with regards to temp files you should test your code before deploying this update as temp files may no longer be stored in previously used locations [ REMOVED / DEPRECATIONS ] - startform and endform methods removed (previously deprecated, you should be using the start_form and end_form methods) - both CGI::Apache and CGI::Switch have been removed as these modules 1) have been deprecated for *years*, and 2) do nothing whatsoever [ SPEC / BUG FIXES ] - handle multiple values in X-Forwarded-Host header, we follow the logic in most other frameworks and take the last value from the list (RT #54487) - reverse the order of TEMP dir placement for WINDOWS: TEMP > TMP > WINDIR (RT #71799, thanks to jeff@math.tntech.edu), this returns the behaviour to pre e24d04e9bc5fda7722444b02fec135d8cc2ff488 but with the undefined fix still in place - refactor CGITempFile::find_tempdir to use File::Spec->tmpdir (related: RT #71799) - fix warnings when QUERY_STRING has empty key=value pairs (RT #54511) - pad custom 500 status response messages to > 512 for MSIE (RT #81946) - make Vars tied hash delete method return the value deleted from the hash making it act like perl's delete (RT #51020) [ TESTING ] - add .travis.yml () - test case for RT #53966 - disallow filenames with ~ char - test case for RT #55166 - calling Vars to get the filename does not return a filehandle, so this cannot be used in the call to uploadinfo, also update documentation for the uploadInfo to show that ->Vars should not be used to get the filename for this method - fix t/url.t to pass on Win32 platforms that have the SCRIPT_NAME env variable set (RT #89992) - add procedural call tests for upload and uploadInfo to confirm these work as should (RT #91136) [ DOCUMENTATION ] - tweak perldoc for -utf8 option (RT #54341, thanks to Helmut Richter) - explain the HTML generation functions should no longer be used and that they may be deprecated in a future release 4.03 2014-07-02 [ REMOVED / DEPRECATIONS ] - the -multiple option to popup_menu is now IGNORED as this did not function correctly. If you require a menu with multiple selections use the scrolling_list method. (RT #30057) [ SPEC / BUG FIXES ] - support redirects in mod_perl2, or fall back to using env variable for up to 5 redirects, when getting the query string (RT #36312) - CGI::Cookie now correctly supports the -max-age argument, previously if this was passed the value of the -expires argument would be used meaning there was no way to supply *only* this argument (RT #50576) - make :all actually import all methods, except for :cgi-lib, and add :ssl to the :standard import (RT #70337) [ DOCUMENTATION ] - clarify documentation regarding query_string method (RT #48370) - links fixed in some perldoc (Thanks to Michiel Beijen) [ TESTING ] - add t/changes.t for testing this Changes file - test case for RT #31107 confirming multipart parsing is to spec - improve t/rt-52469.t by adding a timeout check 4.02 2014-06-09 [ NEW FEATURES ] - CGI::Carp learns noTimestamp / $CGI::Carp::NO_TIMESTAMP to prevent timestamp in messages (RT #82364, EDAVIS@cpan.org) - multipart_init and multipart_start learn -charset option (RT #22737) [ SPEC / BUG FIXES ] - Support multiple cookies when passing an ARRAY ref with -set-cookie (RT #15065, JWILLIAMS@cpan.org) [ DOCUMENTATION ] - Made licencing information consistent and remove duplicate comments about licence details, corrected location to report bugs (RT #38285) 4.01 2014-05-27 [ DOCUMENTATION ] - CGI.pm hasn't been removed from core *just* yet, but will be soon: 4.00 2014-05-22 [ INTERNALS ] - CGI::Fast split out into its own distribution, related files and tests removed - developer test added for building with perlbrew [ DOCUMENTATION ] - Update perldoc to explain that CGI.pm has been removed from perl core - Make =head2 perldoc less shouty (RT #91140) - Tickets migrated from RT to github issues (both CGI and CGI.pm distributions) - Repointing bugtracker at newly forked github repo and note that Lee Johnson is the current maintainer. - Bump version to 4.00 for clear boundary of above changes Version 3.65 Feb 11, 2014 [INTERNALS] - Update Makefile to refine where CGI.pm gets installed (Thanks to bingo, rjbs:) Version compatibility <form> tag inserted by startform and start_form. It can cause rendering problems in some cases. Thanks to SJOHNSTON@cpan.org (RT#67719) - Workaround "Insecure Dependency" warning generated by some versions of Perl (RT#53733). Thanks to degatcpan@ntlworld.com, klchu@lbl.gov and Anonymous Monk [DOCUMENTATION] - Clarify that when -status is used, the human-readable phase should be included, per RFC 2616. Thanks to SREZIC@cpan.org (RT#76691). [INTERNALS] - More tests for header(), thanks to Ryo Anazawa. - t/url.t has been fixed on VMS. Thanks to cberry@cpan.org (RT#72380) - MANIFEST patched so that t/multipart_init.t is included again. Thanks to shay@cpan.org (RT#76189) Version 3.59 Dec 29th, 2011 [BUG FIXES] -. Thanks to Philip Potter and Yanick Champoux. See RT#52469 for details: [INTERNALS] - remove tmpdirs more aggressively. Thanks to rjbs (RT#73288) - use Text::ParseWords instead of ancient shellwords.pl. Thanks to AlexBio. - remove use of define(@arr). Thanks to rjbs. - spelling fixes. Thanks to Gregor Herrmann and Alessandro Ghedini. - fix test count and warning in t/fast.t. Thanks to Yanick. Version 3.58 Nov 11th, 2011 [DOCUMENTATION] - Clarify that using query_string() only has defined behavior when using the GET method. (RT#60813) Version 3.57 Nov 9th, 2011 [INTERNALS] - test failure in t/fast.t introduced in 3.56 is fixed. (Thanks to zefram and chansen). - Test::More requirement has been bumped to 0.98 Version 3.56 Nov 8th, 2011 [SECURITY]. <> <> (Thanks to chansen) [INTERNALS] - tmp files are now cleaned up on VMS ( RT#69210, thanks to cberry@cpan.org ) - Fixed test failure: done_testing() added to url.t (Thanks to Ryan Jendoubi) - Clarify preferred bug submission location in docs, and note that Mark Stosberg is the current maintainer. Version 3.55 June 3rd, 2011 (); Version 3.54, Apr 28, 2011 No code changes [INTERNALS] - Address test failures in t/tmpdir.t, thanks to Niko Tyni. Some tests here are failing on some platforms and have been marked as TODO. Version 3.53, Apr 25, 2011 [NEW FEATURES] - The DELETE HTTP verb is now supported. (RT#52614, James Robson, Eduardo Ari�o de la Rubia) [INTERNALS] - Correct t/tmpdir.t MANIFEST entry. (RT#64949) - Update minimum required Perl version to be Perl 5.8.1, which has been out since 2003. This allows us to drop some hacks and exceptions (Mark Stosberg) Version 3.52, Jan 24, 2011 [DOCUMENTATION] - The documentation for multi-line header handling was been updated to reflect the changes in 3.51. (Mark Stosberg, ntyni@iki.fi) [INTERNALS] - Add missing t/tmpfile.t file. (RT#64949) - Fix warning in t/cookie.t (RT#64570, Chris Williams, Rainer Tammer, Mark Stosberg) - Fixed logic bug in t/multipart_init.t (RT#64261, Niko Tyni) Version 3.51, Jan 5, 2011 [NEW FEATURES] - A new option to set $CGI::Carp::TO_BROWSER = 0, allows you to explicitly exclude a particular scope from triggering printing to the browser when fatatlsToBrowser is set. (RT#62783, Thanks to papowell) - The <script> tag now supports the "charset" attribute. (RT#62907, Thanks to Fabrice Metge) - In CGI::Cookie, "Max-Age" is now supported for better spec compliance. (Mark Stosberg) [BUG FIXES] - Setting charset() now works for all content types, not just "text/*". (RT#57945, Thanks to Yanick and Gerv.) - support for user temporary directories ($HOME/tmp) was commented out in 2.61 but the documentation wasn't updated (Peter Gervai, Niko Tyni) - setting $CGITempFile::TMPDIRECTORY before loading CGI.pm has been working but undocumented since 3.12 (which listed it in Changes as $CGI::TMPDIRECTORY) (Peter Gervai, Niko Tyni) - unfortunately the previous change broke the runtime check for looking for a new temporary directory if the current one suddenly became unwritable (Peter Gervai, Niko Tyni) - A bug was fixed in CGI::Carp triggered by certain death cases in the BEGIN phase of parent classes. (RT#57224, Thanks to UNERA, Yanick Champoux, Mark Stosberg) - CGI::Cookie->new() now follows the documentation and returns undef if the -name and -value args aren't provided. This new behavior is also consistent with the docs and code of CGI::Simple::Cookie. (Mark Stosberg) - CGI::Cookie->parse() now trims leading and trailing whitespace from cookie elements as intended. The change also makes this part of the parsing identical to CGI::Simple::Cookie (Mark Stosberg) - Temp file handling was improved (RT#62762) [SECURITY] - Further improvements have been made to guard against newline injections in headers. (Thanks to Max Kanat-Alexander, Yanick Champoux, Mark Stosberg) [PERFORMANCE] - Make EBCDIC a compile-time constant so there's zero overhead (and less compiled code) in subroutines that test for it. (Tim Bunce) - If you just want to use CGI::Cookie, CGI.pm will no longer be loaded unless you call the bake() method, which requires it. (Mark Stosberg) [DOCUMENTATION] - quit referring to the <link> tag as being "rarely used". (Victor Sanders) - typo and whitespace fixes (RT#62785, thanks to scop@cpan.org) - The -dtd argument to start_html() is now documented (RT#60473, Thanks to giecrilj and steve@fisharerojo.org) - CGI::Carp doc are updated to reflect that it can work with mod_perl 2.0. - when creating a temporary file in the directory fails, the error message could indicate the root of the problem better (Peter Gervai, Niko Tyni) [INTERNALS] - Re-fixing https test in http.t. (RT#54768, thanks to SPROUT) - param_fetch no longer triggers a warning when called with no arguments (ysth, Mark Stosberg) Version 3.50, Nov 8, 2010 [SECURITY] 1. The MIME boundary in multipart_init is now random. Thanks to Byron Jones, Masahiro Yamada, Reed Loden, and Mark Stosberg 2. Further improvements to handling of newlines embedded in header values. An exception is thrown if header values contain invalid newlines. Thanks to Michal Zalewski, Max Kanat-Alexander, Yanick Champoux, Lincoln Stein, Fr�d�ric Buclin and Mark Stosberg [DOCUMENTATION] 1. Correcting/clarifying documentation for param_fetch(). Thanks to Ren�e B�cker. (RT#59132) [INTERNALS] 1. Fixing https test in http.t. (RT#54768) 2. Tests were added for multipart_init(). Thanks to Mark Stosberg and CGI::Simple. Version 3.49, Feb 5th, 2010 ) 4. CGI::Carp now properly handles stringifiable objects, like Exception::Class throws (RT#39904) ) Version 3.48, Sep 25, 2009 [BUG FIXES] 1. <optgroup> default values are now properly escaped. Thanks to #raleigh.pm and Mark Stosberg. (RT#49606) 2. The change to exception handling in CGI::Carp introduced in 3.47 has been reverted for now. It caused regressions reported in RT#49630. Thanks to mkanat for the report. [DOCUMENTATION] 1. Documentation for upload() has been overhauled, thanks to Mark Stosberg. 2. Documentation for tmpFileName has been added. Thanks to Mark Stosberg and Nathaniel K. Smith. 3. URLS were updated, thanks to Leon Brocard and Yanick Champoux. (RT#49770) [INTERNALS] 1. More tests were added for autoescape, thanks to Bob Kuo. (RT#25485) Version 3.47, Sep 9, 2009 No code changes. [INTERNALS] Re-release of 3.46, which did not contain a proper MANIFEST Version 3.46 [BUG FIXES] 1. In CGI::Pretty, we no longer add line breaks after tags we claim not to format. Thanks to rrt, Bob Kuo and and Mark Stosberg. (RT#42114). 2. unescapeHTML() no longer falsely recognizes certain text as entities. Thanks to Pete Gamanche, Mark Stosberg and Bob Kuo. (RT#39122) 3. checkbox_group() now correctly includes a space before the "checked" attribute. Thanks to Andrew Speer and Bob Kuo. (RT#36583) 4. Fix case-sensitivity in http() and https() according to docs. Make https() return list of keys in list context. Thanks to riQyRoe and Rhesa Rozendaal. (RT#12909) 5. XHTML is now automatically disabled for HTML 4, as well as HTML 2 and HTML 3. Thanks to Dan Harkless and Yanick Champoux. (RT#27907) 6. Pre-compiling 'end_form' with ':form' switch now works. Thanks to ryochin and Yanick Champoux. (RT#41530) 7. Empty name/values pairs are now properly saved and restored from filehandles. Thanks to rlucas and Rhesa Rozendaal (RT#13158) 8. Some differences between startform() and start_form() have been fixed. Thanks to Slaven Rezic and Shawn Corey. (RT#22046) 9. url_param() has been updated to be more consistent with the documentation and param(). Thanks to Britton Kerin and Yanick Campoux. (RT#43587) 10.hidden() now correctly supports multiple default values. Thanks to david@dierauer.net and Russell Jenkins. (RT#20436) 11.Calling CGI->new() no longer clobbers the value of $_ in the current scope. Thanks to Alexey Tourbin, Bob Kuo and Mark Stosberg. (RT#25131) 12.UTF-8 params should not get double-decoded now. Thanks to Yves, Bodo, Burak G�rsoy, and Michael Schout. (RT#19913) 13.We now give objects passed to CGI::Carp::die a chance to be stringified. Thanks to teek and Yanick Champoux (RT#41530) 14.Turning off autoEscape() now only affects the behavior of built-in HTML generation fuctions. Explicit calls to escapeHTML() always escape HTML regardless of the setting. Thanks to vindex, Bob Kuo and Mark Stosberg (RT#40748) 15.In CGI::Fast, preferences set via pragmas are now preserved. Thanks to heinst and Mark Stosberg (RT#32119) [DOCUMENTATION] 1. remote_addr() is now documented. Thanks to Yanick Champoux. (RT#38884) 2. In CGI::Pretty in the list of tags left unformatted was updated to match the code. Thanks to Mark Stosberg. (RT#42114) 3. In CGI::Pretty, performance concerns are now documented. Thanks to Jochen, Rhesa Rozendaal and Mark Stosberg (RT#13223) 4. A number of outdated Netscape references have been removed. Thanks to Mark Stosberg. 5. The documentation has been purged of examples of using indirect object notation. Thanks to Mark Stosberg. 6. Some POD formatting was fixed. Thanks to Dave Mitchell (RT#48935). 7. Docs and examples were updated to highlight start_form instead of startform. Thanks to Slaven Rezic. 8. Note that CGI::Carp::carpout() doesn't work with in-memory filehandles. Thanks to rhubbell and Mark Stosberg. 9. The documentation for the -newstyle_urls is now less confusing. Thanks to Ryan Tate and Mark Stosberg (RT#49454) [INTERNALS] 1. Quit bundling an ancient copy of Test::More and and using a custom 'lib' path for the tests. Instead, Test::More is now a dependency. Thanks to Ansgar and Mark Stosberg (RT#48811) 2. Automated tests for hidden() have been added, thanks to Russel Jenkins and Mark Stosberg (RT#20436) 3. t/util.t has been updated to use Test::More instead of a home-grown test function. Thanks to Bob Kuo. Version 3.45, Aug 14, 2009 [BUG FIXES] 1. Prevent warnings about "uninitialized values" for REQUEST_URI, HTTP_USER_AGENT and other environment variables. Patches by Callum Gibson, heiko and Mark Stosberg. (RT#24684, RT#29065) 2. Avoid death in some cases when running under Taint mode on Windows. Patch by Peter Hancock (RT#43796) 3. Allow 0 to be used as a default value in popup_menu(). This was broken starting in 3.37. Thanks to Haze, who was the first to report this and supply a patch, and pfschill, who pinpointed when the bug was introduced. A regression test for this was also added. (RT#37908) 4. Allow "+" as a valid character in file names, which fixes temp file creation on OS X Leopard. Thanks to Andy Armstrong, and alech for patches. (RT#30504) 5. Set binmode() on the Netware platform, thanks to Guenter Knauf (RT#27455) 6. Don't allow a CGI::Carp error handler to die recursively. Print a warning and exit instead. Thanks to Marc Chantreux. (RT#45956) 7. The Dump() method now is fixed to escape HTML properly. Thanks to Mark Stosberg (RT#21341) 8. Support for <optgroup> with scrolling_list() now works the same way as it does for popup_menu(). Thanks to Stuart Johnston (RT#30097) 9. CGI::Pretty now works properly when $" is set to ''. Thanks to Jim Keenan (RT#12401) 10. Fix crash when used in combination with PerlEx::DBI. Thanks to Burak G�rsoy (RT#19902) [DOCUMENTATION] 1. Several typos were fixed, Thanks to ambs. (RT#41105) 2. A typo related to the nosticky pragma was fixed, thanks to Britton Kerin. (RT#43220) 3. examples/nph-clock.cgi is now more portable, by calling localtime() rather than `/bin/date`, thanks to Guenter Knauf. (RT#27456). 4. In CGI::Carp, the SEE ALSO section was cleaned up, thanks to Slaven Rezic. (RT#32769) 5. The docs for redirect() were updated to reflect that most headers are ignored during redirection. Thanks to Mark Stosberg (RT#44911) [INTERNALS] 1. New t/unescapeHTML.t test script has been added. It includes a TODO test for a pre-existing bug which could use a patch. Thanks to Pete Gamache and Mark Stosberg (RT#39122) 2. New test scripts have been added for user_agent(), popup_menu() and query_string(), scrolling_list() and Dump() Thanks to Mark Stosberg and Stuart Johnston. (RT#37908, RT#43006, RT#21341, RT#30097) 3. CGI::Carp and CGI::Util have been updated to have non-developer version numbers. Thanks to Slaven Rezic. (RT#48425) 4. CGI::Switch and CGI::Apache now properly set their VERSION in their own name space. Thanks to Alexey Tourbin (RT#11941,RT#11942) Version 3.44, Jul 30, 2009 1. Patch from Kurt Jaeger to allow HTTP PUT even if the content length is unknown. 2. Patch from Pavel merdin to fix a problem for one of the FireFox addons. 3. Fixed issue in mod_perl & fastCGI environment of cookies returned from CGI->cookie() leaking from one session to another. Version 3.43, Apr 06, 2009 1. Documentation patch from MARKSTOS@cpan.org to replace all occurrences of "new CGI" with CGI->new()" to reflect best perl practices. 2. Patch from Stepan Kasal to fix utf-8 related problems in perl 5.10 Version 3.42, Sep 08, 2008, Aug 25, 2008. Version 3.40, Aug 06, 2008 1. Fixed CGI::Fast docs to eliminate references to a "special" version of Perl. 2. Makefile.PL now depends on FCGI so that CGI::Fast installs properly. 3. Fix script_name() call from Stephane Chazelas. Version 3.39, Jun 29, 2008 1. Fixed regression in "exists" function when using tied interface to CGI via $q->Vars. Version 3.38, Jun 25, 2008 1. Fix annoying warning in 2. Added nobr() function 3. popup_menu() allows multiple items to be selected by default, satisfying 4. Patch from Renee Backer to avoid doubled <http-equiv> headers. 5. Fixed documentation bug that describes what happens when a parameter is empty (e.g. "? tags -- don't be surpised. 2. Fixed bug involving the detection of the SSL protocol. 3. Fixed documentation error in position of the -meta argument in start_html(). 4. HTML shortcuts now generate tags in ALL UPPERCASE. 5. start_html() now generates correct SGML header: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN"> 6. CGI::Carp no longer fails "use strict refs" pragma. Version 2.25 1. Fixed bug that caused bad redirection on destination URLs with arguments. 2. Fixed bug involving use_named_parameters() followed by start_multipart_form() 3. Fixed bug that caused incorrect determination of binmode for Macintosh. 4. Spelling fixes on documentation. Version 2.24 1. Fixed bug that caused generation of lousy HTML for some form elements 2. Fixed uploading bug in Windows NT 3. Some code cleanup (not enough) Version 2.23 1. Fixed an obscure bug that caused scripts to fail mysteriously. 2. Fixed auto-caching bug. 3. Fixed bug that prevented HTML shortcuts from passing taint checks. 4. Fixed some -w warning problems. Version 2.22 1. New CGI::Fast module for use with FastCGI protocol. See pod documentation for details. 2. Fixed problems with inheritance and autoloading. 3. Added TR() (<tr>) and PARAM() (<param>) methods to list of exported HTML tag-generating functions. 4. Moved all CGI-related I/O to a bottleneck method so that this can be overridden more easily in mod_perl (thanks to Doug MacEachern). 5. put() method as substitute for print() for use in mod_perl. 6. Fixed crash in tmpFileName() method. 7. Added tmpFileName(), startform() and endform() to export list. 8. Fixed problems with attributes in HTML shortcuts. 9. Functions that don't actually need access to the CGI object now no longer generate a default one. May speed things up slightly. 10. Aesthetic improvements in generated HTML. 11. New examples. Version 2.21 1. Added the -meta argument to start_html(). 2. Fixed hidden fields (again). 3. Radio_group() and checkbox_group() now return an appropriate scalar value when called in a scalar context, rather than returning a numeric value! 4. Cleaned up the formatting of form elements to avoid unesthetic extra spaces within the attributes. 5. HTML elements now correctly include the closing tag when parameters are present but null: em('') 6. Added password_field() to the export list. Version 2.20 1. Dumped the SelfLoader because of problems with running with taint checks and rolled my own. Performance is now significantly improved. 2. Added HTML shortcuts. 3. import() now adheres to the Perl module conventions, allowing CGI.pm to import any or all method names into the user's name space. 4. Added the ability to initialize CGI objects from strings and associative arrays. 5. Made it possible to initialize CGI objects with filehandle references rather than filehandle strings. 6. Added the delete_all() and append() methods. 7. CGI objects correctly initialize from filehandles on NT/95 systems now. 8. Fixed the problem with binary file uploads on NT/95 systems. 9. Fixed bug in redirect(). 10. Added '-Window-target' parameter to redirect(). 11. Fixed import_names() so that parameter names containing funny characters work. 12. Broke the unfortunate connection between cookie and CGI parameter name space. 13. Fixed problems with hidden fields whose values are 0. 14. Cleaned up the documentation somewhat. Version 2.19 1. Added cookie() support routines. 2. Added -expires parameter to header(). 3. Added cgi-lib.pl compatibility mode. 4. Made the module more configurable for different operating systems. 5. Fixed a dumb bug in JavaScript button() method. Version 2.18 1. Fixed a bug that corrects a hang that occurs on some platforms when processing file uploads. Unfortunately this disables the check for bad Netscape uploads. 2. Fixed bizarre problem involving the inability to process uploaded files that begin with a non alphabetic character in the file name. 3. Fixed a bug in the hidden fields involving the -override directive being ignored when scalar defaults were passed. 4. Added documentation on how to disable the SelfLoader features. Version 2.17 1. Added support for the SelfLoader module. 2. Added oodles of JavaScript support routines. 3. Fixed bad bug in query_string() method that caused some parameters to be silently dropped. 4. Robustified file upload code to handle premature termination by the client. 5. Exported temporary file names on file upload. 6. Removed spurious "uninitialized variable" warnings that appeared when running under 5.002. 7. Added the Carp.pm library to the standard distribution. 8. Fixed a number of errors in this documentation, and probably added a few more. 9. Checkbox_group() and radio_group() now return the buttons as arrays, so that you can incorporate the individual buttons into specialized tables. 10. Added the '-nolabels' option to checkbox_group() and radio_group(). Probably should be added to all the other HTML-generating routines. 11. Added the url() method to recover the URL without the entire query string appended. 12. Added request_method() to list of environment variables available. 13. Would you believe it? Fixed hidden fields again! Version 2.16 1. Fixed hidden fields yet again. 2. Fixed subtle problems in the file upload method that caused intermittent failures (thanks to Keven Hendrick for this one). 3. Made file upload more robust in the face of bizarre behavior by the Macintosh and Windows Netscape clients. 4. Moved the POD documentation to the bottom of the module at the request of Stephen Dahmen. 5. Added the -xbase parameter to the start_html() method, also at the request of Stephen Dahmen. 6. Added JavaScript form buttons at Stephen's request. I'm not sure how to use this Netscape extension correctly, however, so for now the form() method is in the module as an undocumented feature. Use at your own risk! Version 2.15 1. Added the -override parameter to all field-generating methods. 2. Documented the user_name() and remote_user() methods. 3. Fixed bugs that prevented empty strings from being recognized as valid textfield contents. 4. Documented the use of framesets and added a frameset example. Version 2.14 This was an internal experimental version that was never released. Version 2.13 1. Fixed a bug that interfered with the value "0" being entered into text fields. Version 2.01 1. Added -rows and -columns to the radio and checkbox groups. No doubt this will cause much grief because it seems to promise a level of meta-organization that it doesn't actually provide. 2. Fixed a bug in the redirect() method -- it was not truly HTTP/1.0 compliant. Version 2.0 The changes seemed to touch every line of code, so I decided to bump up the major version number. 1. Support for named parameter style method calls. This turns out to be a big win for extending CGI.pm when Netscape adds new HTML "features". 2. Changed behavior of hidden fields back to the correct "sticky" behavior. This is going to break some programs, but it is for the best in the long run. 3. Netscape 2.0b2 broke the file upload feature. CGI.pm now handles both 2.0b1 and 2.0b2-style uploading. It will probably break again in 2.0b3. 4. There were still problems with library being unable to distinguish between a form being loaded for the first time, and a subsequent loading with all fields blank. We now forcibly create a default name for the Submit button (if not provided) so that there's always at least one parameter. 5. More workarounds to prevent annoying spurious warning messages when run under the -w switch. -w is seriously broken in perl 5.001! Version 1.57 1. Support for the Netscape 2.0 "File upload" field. 2. The handling of defaults for selected items in scrolling lists and multiple checkboxes is now consistent. Version 1.56 1. Created true "pod" documentation for the module. 2. Cleaned up the code to avoid many of the spurious "use of uninitialized variable" warnings when running with the -w switch. 3. Added the autoEscape() method. v 4. Added string interpolation of the CGI object. 5. Added the ability to pass additional parameters to the <BODY> tag. 6. Added the ability to specify the status code in the HTTP header. Bug fixes in version 1.55 1. Every time self_url() was called, the parameter list would grow. This was a bad "feature". 2. Documented the fact that you can pass "-" to radio_group() in order to prevent any button from being highlighted by default. Bug fixes in version 1.54 1. The user_agent() method is now documented; 2. A potential security hole in import() is now plugged. 3. Changed name of import() to import_names() for compatibility with CGI:: modules. Bug fixes in version 1.53 1. Fixed several typos in the code that were causing the following subroutines to fail in some circumstances 1. checkbox() 2. hidden() 2. No features added New features added in version 1.52 1. Added backslashing, quotation marks, and other shell-style escape sequences to the parameters passed in during debugging off-line. 2. Changed the way that the hidden() method works so that the default value always overrides the current one. 3. Improved the handling of sticky values in forms. It's now less likely that sticky values will get stuck. 4. If you call server_name(), script_name() and several other methods when running offline, the methods now create "dummy" values to work with. Bugs fixed in version 1.51 1. param() when called without arguments was returning an array of length 1 even when there were no parameters to be had. Bad bug! Bad! 2. The HTML code generated would break if input fields contained the forbidden characters ">< or &. You can now use these characters freely. New features added in version 1.50 1. import() method allows all the parameters to be imported into a namespace in one fell swoop. 2. Parameters are now returned in the same order in which they were defined. Bugs fixed in version 1.45 1. delete() method didn't work correctly. This is now fixed. 2. reset() method didn't allow you to set the name of the button. Fixed. Bugs fixed in version 1.44 1. self_url() didn't include the path information. This is now fixed. New features added in version 1.43 1. Added the delete() method. New features added in version 1.42 1. The image_button() method to create clickable images. 2. A few bug fixes involving forms embedded in <PRE> blocks. New features added in version 1.4 1. New header shortcut methods + redirect() to create HTTP redirection messages. + start_html() to create the HTML title, complete with the recommended <LINK> tag that no one ever remembers to include. + end_html() for completeness' sake. 2. A new save() method that allows you to write out the state of an script to a file or pipe. 3. An improved version of the new() method that allows you to restore the state of a script from a file or pipe. With (2) this gives you dump and restore capabilities! (Wow, you can put a "121,931 customers served" banner at the bottom of your pages!) 4. A self_url() method that allows you to create state-maintaining hypertext links. In addition to allowing you to maintain the state of your scripts between invocations, this lets you work around a problem that some browsers have when jumping to internal links in a document that contains a form -- the form information gets lost. 5. The user-visible labels in checkboxes, radio buttons, popup menus and scrolling lists have now been decoupled from the values sent to your CGI script. Your script can know a checkbox by the name of "cb1" while the user knows it by a more descriptive name. I've also added some parameters that were missing from the text fields, such as MAXLENGTH. 6. A whole bunch of methods have been added to get at environment variables involved in user verification and other obscure features. Bug fixes 1. The problems with the hidden fields have (I hope at last) been fixed. 2. You can create multiple query objects and they will all be initialized correctly. This simplifies the creation of multiple forms on one page. 3. The URL unescaping code works correctly now.
https://metacpan.org/changes/release/LEEJO/CGI-4.52
CC-MAIN-2021-21
refinedweb
7,482
66.54
NAME psusan - pseudo-SSH for untappable, separately authenticated networks SYNOPSIS psusan [ options ] DESCRIPTION psusan is a server program that behaves like the innermost `connection' layer of an SSH session, without the two outer security layers of encryption and authentication. It provides all the post-authentication features of an SSH connection: - • - choosing whether to run an interactive terminal session or a single specified command - • - multiple terminal sessions at once (or a mixture of those and specified commands) - • - SFTP file transfer - • - all the standard SSH port-forwarding options - • - X11 forwarding - • - SSH agent forwarding The catch is that, because it lacks the outer layers of SSH, you have to run it over some kind of data channel that is already authenticated as the right user, and that is already protected to your satisfaction against eavesdropping and session hijacking. A good rule of thumb is that any channel that you were prepared to run a bare shell session over, you can run psusan over instead, which adds all the above conveniences without changing the security properties. The protocol that psusan speaks is also spoken by PuTTY, Plink, PSCP, and PSFTP, if you select the protocol type `Bare ssh-connection' or the command-line option -ssh-connection and specify the absolute path to the appropriate Unix-domain socket in place of a hostname. EXAMPLES The idea of a secure, pre-authenticated data channel seems strange to people thinking about network connections. But there are lots of examples within the context of a single Unix system, and that's where psusan is typically useful. Docker A good example is the console or standard I/O channel leading into a container or virtualisation system. Docker is a familiar example. If you want to start a Docker container and run a shell directly within it, you might say something like docker run -i -t some:image which will allow you to run a single shell session inside the container, in the same terminal you started Docker from. Suppose that you'd prefer to run multiple shell sessions in the same container at once (perhaps so that one of them can use debugging tools to poke at what another is doing). And perhaps inside that container you're going to run a program that you don't trust with full access to your network, but are prepared to let it make one or two specific network connections of the kind you could set up with an SSH port forwarding. In that case, you could remove the -t option from that Docker command line (which means `allocate a terminal device'), and tell it to run psusan inside the container: docker run -i some:image /some/path/to/psusan (Of course, you'll need to ensure that psusan is installed somewhere inside the container image.) If you do that from a shell command line, you'll see a banner line looking something like this: SSHCONNECTION@putty.projects.tartarus.org-2.0-PSUSAN_Release_0.75 which isn't particularly helpful except that it tells you that psusan has started up successfully. To talk to this server usefully, you can set up a PuTTY saved session as follows: - • - Set the protocol to `Bare ssh-connection' (the psusan protocol). - • - Write something in the hostname box. It will appear in PuTTY's window title (if you run GUI PuTTY), so you might want to write something that will remind you what kind of window it is. If you have no opinion, something generic like `dummy' will do. - • - In the `Proxy' configuration panel, set the proxy type to `Local', and enter the above `docker run' command in the `Telnet command, or local proxy command' edit box. - • - In the `SSH' configuration panel, you will very likely want to turn on connection sharing. (See below.) This arranges that when PuTTY starts up, it will run the Docker command as shown above in place of making a network connection, and talk to that command using the psusan SSH-like protocol. The effect is that you will still get a shell session in the context of a Docker container. But this time, it's got all the SSH amenities. If you also turn on connection sharing in the `SSH' configuration panel, then the `Duplicate Session' option will get you a second shell in the same Docker container (instead of a primary shell in a separate instance). You can transfer files in and out of the container while it's running using PSCP or PSFTP; you can forward network ports, X11 programs, and/or an SSH agent to the container. Of course, another way to do all of this would be to run the full SSH protocol over the same channel. This involves more setup: you have to invent an SSH host key for the container, accept it in the client, and deal with it being left behind in your client's host key cache when the container is discarded. And you have to set up some login details in the container: either configure a password, and type it in the client, or copy in the public half of some SSH key you already had. And all this inconvenience is unnecessary, because these are all precautions you need to take when the connection between two systems is going over a hostile network. In this case, it's only going over a kernel IPC channel that's guaranteed to go to the right place, so those safety precautions are redundant, and they only add awkwardness. User-mode Linux User-mode Linux is another container type you can talk to in the same way. Here's a small worked example. The easiest way to run UML is to use its `hostfs' file system type to give the guest kernel access to the same virtual filesystem as you have on the host. For example, a command line like this gets you a shell prompt inside a UML instance sharing your existing filesystem: linux mem=512M rootfstype=hostfs rootflags=/ rw init=/bin/bash If you run this at a command line (assuming you have a UML kernel available on your path under the name `linux'), then you should see a lot of kernel startup messages, followed by a shell prompt along the lines of root@(none):/# To convert this into a psusan-based UML session, we need to adjust the command line so that instead of running bash it runs psusan. But running psusan directly isn't quite enough, because psusan will depend on a small amount of setup, such as having /proc mounted. So instead, we set the init process to a shell script which will do the necessary setup and then invoke psusan. Also, running psusan directly over the UML console device is a bad idea, because then the psusan binary protocol will be mixed with textual console messages. So a better plan is to redirect UML's console to the standard error of the linux process, and map its standard input and output to a serial port. So the replacement UML command line might look something like this: linux mem=512M rootfstype=hostfs rootflags=/ rw \ con=fd:2,fd:2 ssl0=fd:0,fd:1 init=/some/path/to/uml-psusan.sh And the setup script uml-psusan.sh might look like this: #!/bin/bash # Set up vital pseudo-filesystems mount -t proc none /proc mount -t devpts none /dev/pts # Redirect I/O to the serial port, but stderr to the console exec 0<>/dev/ttyS0 1>&0 2>/dev/console # Set the serial port into raw mode, to run a binary protocol stty raw -echo # Choose what shell you want to run inside psusan export SHELL=/bin/bash # Set up a default path export PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin # And now run psusan over the serial port exec /home/simon/src/putty/misc/psusan Now set up a PuTTY saved session as in the Docker example above. Basically you'll want to use the above linux command as the local proxy command. However, it's worth wrapping it in setsid(1), because when UML terminates, it kills its entire process group. So it's better that PuTTY should not be part of that group, and should have the opportunity to shut down cleanly by itself. So probably you end up setting the proxy command to be something more like: setsid linux mem=512M rootfstype=hostfs rootflags=/ rw \ con=fd:2,fd:2 ssl0=fd:0,fd:1 init=/some/path/to/uml-psusan.sh You may also find that you have to enable the bug workaround that indicates that the server `Discards data sent before its greeting', because otherwise PuTTY's outgoing protocol greeting can be accidentally lost during UML startup. (See Debian bug #991958.) Once you've done that, you'll have a PuTTY session that starts up a clean UML instance when you run it, and (if you enabled connection sharing) further instances of the same session will connect to the same instance again. Windows Subsystem for Linux On Windows, the default way to use WSL is to run the wsl program, or one of its aliases, in a Windows console, either by launching it from an existing command prompt, or by using a shortcut that opens it in a fresh console. This gives you a Linux terminal environment, but in a Windows console window. If you'd prefer to interact with the same environment using PuTTY as the terminal (for example, if you prefer PuTTY's mouse shortcuts for copy and paste), you can set it up by installing psusan in the Linux environment, and then setting up a PuTTY saved session that talks to it. A nice way to do this is to use the name of the WSL distribution as the `host name': - • - set the local proxy command to `wsl -d %host /usr/local/bin/psusan' (or wherever you installed psusan in the Linux system) - • - enter the name of a particular WSL distribution in the host name box. (For example, if you installed WSL Debian in the standard way from the Windows store, this will just be `Debian'.) - • - set the protocol to `Bare ssh-connection', as usual. Like all the other examples here, this also permits you to forward ports in and out of the WSL environment (e.g. expose a WSL2 network service through the hypervisor's internal NAT), forward Pageant into it, and so on. Cygwin Another Unix-like environment on Windows is Cygwin. That comes with its own GUI terminal application, mintty (as it happens, a derivative of PuTTY); but if you'd prefer to use PuTTY itself to talk to your Cygwin terminal sessions, psusan can help. To do this, you'll first need to build the Unix PuTTY tools inside Cygwin (via the usual cmake method). Then, copy the resulting psusan.exe into Cygwin's /bin directory. (It has to be in that directory for non-Cygwin programs to run it; otherwise it won't be able to find the Cygwin DLL at startup.) Then set up your PuTTY saved session like this: - • - set the local proxy command to run psusan.exe via its real Windows path. You might also want to add the --sessiondir option so that shell sessions start up in your Cygwin home directory. For example, you might use the command `c:\cygwin64\bin\psusan.exe --sessiondir /home/simon' (changing the pathname and username to match your setup). - • - enter anything you like in the host name box; `Cygwin' is probably a good choice - • - set the protocol to `Bare ssh-connection', as usual. Port forwarding is probably not particularly useful in this case, since Cygwin shares the same network port space as the host machine. But turning on agent forwarding is useful, because then the Cygwin command-line SSH client can talk to Pageant without any further configuration. schroot Another example of a container-like environment is the alternative filesystem layout set up by schroot(1). schroot is another program that defaults to running an interactive shell session in the terminal you launched it from. But again, you can get a psusan connection into the schroot environment by setting up a PuTTY saved session whose local proxy command is along the lines of schroot -c chroot-name /some/path/to/psusan Depending on how much of the chroot environment is copied from your main one, you might find this makes it easier to (for example) run X11 programs inside the chroot that open windows on your main X display, or transfer files in and out of the chroot. Between network namespaces If you've set up multiple network namespaces on a Linux system, with different TCP/IP configurations, then psusan can be a convenient unprivileged-user gateway between them, if you run it as a non-root user in the non-default one of your namespaces, listening for connections on a Unix-domain socket. If you do that, then it gives you convenient control over which of your outgoing network connections use which TCP/IP configuration: you can use PuTTY to run a shell session in the context of the other namespace if you want to run commands like ping, or you can set up individual port forwardings or even a SOCKS server so that processes running in one namespace can send their network connections via the other one. For this application, it's probably most convenient to use the --listen option in psusan, which makes it run as a server and listen for connections on a Unix-domain socket. Then you can enter that socket name in PuTTY's host name configuration field (and also still select the `Bare ssh-connection' protocol option), to connect to that socket as if it were an SSH client. Provided the Unix-domain socket is inside a directory that only the right user has access to, this will ensure that authentication is done implicitly by the Linux kernel. Between user ids, via GNU userv If you use multiple user ids on the same machine, say for purposes of privilege separation (running some less-trusted program with limited abilities to access all your stuff), then you probably have a `default' or most privileged account where you run your main login session, and sometimes need to run a shell in another account. psusan can be used as an access channel between the accounts, using GNU userv(1) as the transport. In the account you want to access, write a userv configuration stanza along the lines of if (glob service psusan & glob calling-user my-main-account-name) reset execute /some/path/to/psusan fi This gives your main account the right to run the command userv my-sub-account-name psusan and you can configure that command name as a PuTTY local proxy command, in the same way as most of the previous examples. Of course, there are plenty of ways already to access one local account from another, such as sudo. One advantage of doing it this way is that you don't need the system administrator to intervene when you want to change the access controls (e.g. change which of your accounts have access to another): as long as you have some means of getting into each account in the first place, and userv is installed, you can make further configuration changes without having to bother root about it. Another advantage is that it might make file transfer between the accounts easier. If you're the kind of person who keeps your home directories private, then it's awkward to copy a file from one of your accounts to another just by using the cp command, because there's nowhere convenient that you can leave it in one account where the other one can read it. But with psusan over userv, you don't need any shared piece of filesystem: you can scp files back and forth without any difficulty. OPTIONS The command-line options supported by psusan are: - --listen unix-socket-name - Run psusan in listening mode. unix-socket-name is the pathname of a Unix-domain socket to listen on. You should ensure that this pathname is inside a directory whose read and exec permissions are restricted to only the user(s) you want to be able to access the environment that psusan is running in. The listening socket has to be a Unix-domain socket. psusan does not provide an option to run over TCP/IP, because the unauthenticated nature of the protocol would make it inherently insecure. - --listen-once - In listening mode, this option causes psusan to listen for only one connection, and exit immediately after that connection terminates. - --sessiondir pathname - This option sets the directory that shell sessions and subprocesses will start in. By default it is psusan's own working directory, but in some situations it's easier to change it with a command-line option than by wrapping psusan in a script that changes directory before starting it. - -v, --verbose - This option causes psusan to print verbose log messages on its standard error. This is probably most useful in listening mode. - -sshlog logfile - - -sshrawlog logfile - These options cause psusan to log protocol details to a file, similarly to the logging options in PuTTY and Plink. -sshlog logs decoded SSH packets and other events (those that -v would print). -sshrawlog additionally logs the raw wire data, including the outer packet format and the initial greetings.
https://man.archlinux.org/man/extra/putty/psusan.1.en
CC-MAIN-2022-27
refinedweb
2,908
52.63
Closed Bug 516396 (CVE-2009-0689) Opened 10 years ago Closed 10 years ago Array indexing error in NSPR's Balloc() leads to floating point memory vulnerability (SA36711) Categories (NSPR :: NSPR, defect, critical) Tracking (status1.9.2 beta1-fixed, blocking1.9.1 .4+, status1.9.1 .4-fixed) 4.8.2 People (Reporter: reed, Assigned: wtc) References Details (6 keywords, Whiteboard: [sg:critical]) Attachments (11 files, 3 obsolete files) From Secunia to security@: --------------------------- Secunia Research has discovered a vulnerability in Mozilla Firefox, which can be exploited by malicious people to compromise a user's system. The vulnerability is caused due to an array indexing error while allocating space for floating point numbers. This can be exploited to trigger a memory corruption via a specially crafted floating point number. Successful exploitation may allow execution of arbitrary code. The vulnerability is confirmed in version 3.0.14 and 3.5.3. Other versions may also be affected. Vulnerability Details: ---------------------- The vulnerability is caused due to an error when converting strings to floating point numbers in nsprpub/pr/src/misc/prdtoa.c. The s2b() function takes the total number of digits and determines the first number K for which : 1 << K >= (numdigits + 8)/9. K is then passed to Balloc() to allocate memory. Balloc() dereferences the static "freelist" buffer of 16 elements using K as an index. If K is above 15, malformed pointers following the freelist array will be returned from Balloc(). *** #define Kmax 15 ... static Bigint *freelist[Kmax+1]; ... Balloc ..(k).. ... if (rv = freelist[k]) { <-- out of bounds freelist[k] = rv->next; } ... return rv; *** For e.g. K = 17, a pointer to a limited heap buffer is returned from Balloc(), and used to hold the converted big number. This results in a heap-based buffer overflow, followed by a call to a function grabbed from a corrupted pointer to a virtual function table. This results in the execution of an arbitrary address when paired with heap spraying. Closing comments: ----------------- We have assigned this vulnerability Secunia advisory SA36711 and CVE identifier CVE-2009-1563. Credits should go to: Alin Rad Pop, Secunia Research. ------------------------ Flags: blocking1.9.2? Flags: blocking1.9.0.15? PoC has been requested from Secunia. The vulnerability can also be reproduced by creating an HTML file containing JavaScript code initializing a variable with 0.<1179649 '1' digits> . blocking1.9.1: --- → ? Whiteboard: [sg:critical] Thank you for this bug report. dtoa.c is also used in Mozilla's, WebKit's, and Chromium's JavaScript implementations, and in Ruby. Has Secunia or CVE notified the other users of dtoa.c? We should also notify dtoa.c's author, David M. Gay. The obvious fix of comparing the index 'k' with Kmax in Balloc is incomplete because we need a corresponding change to Bfree: Index: prdtoa.c =================================================================== RCS file: /cvsroot/mozilla/nsprpub/pr/src/misc/prdtoa.c,v retrieving revision 4.7 diff -u -u -r4.7 prdtoa.c --- prdtoa.c 20 Mar 2009 03:41:21 -0000 4.7 +++ prdtoa.c 14 Sep 2009 17:46:14 -0000 @@ -581,7 +581,7 @@ #endif ACQUIRE_DTOA_LOCK(0); - if (rv = freelist[k]) { + if (k <= Kmax && rv = freelist[k]) { freelist[k] = rv->next; } else { It seems that we'll have to either require that k <= Kmax, or to reallocate freelist when it's not big enough. After chatting with Chris Evans of the Google Security Team, I found that this bug is already fixed in the dtoa.c upstream: Since dtoa.c's author only publishes the latest revision of dtoa.c, it's not easy to extract the fix for this bug, but I believe the fix is entirely within the Balloc and Bfree functions. See also this OpenBSD patch for gdtoa, which is the next generation of dtoa.c:;r2=1.2;f=h The code looks very similar to dtoa.c. Flags: blocking1.9.2? → blocking1.9.2+ blocking1.9.1: ? → .4+ status1.9.1: --- → wanted Flags: wanted1.9.0.x+ Flags: blocking1.9.0.15? Flags: blocking1.9.0.15+ (adding SpiderMonkey people since JS is affected) Webkit has a clean dtoa implementation. Its missing some piece we need, but otherwise we should be able to take it. Note that the value of Kmax is also reduced from 15 to 7. I don't know why. I believe this entry in the "changes" file () describes this change: Sun Mar 1 20:57:22 MST 2009 dtoa.c and gdtoa/gdtoaimp.h and gdtoa/misc.c: reduce Kmax, and use MALLOC and FREE or free for huge blocks, which are possible only in pathological cases, such as dtoa calls in mode 3 with thousands of digits requested, or strtod() calls with thousand of digits. For the latter case, I have an alternate approach that runs much faster and uses less memory, but finding time to get it ready for distribution may take a while. Attachment #400623 - Flags: review?(reed) Comment on attachment 400623 [details] [diff] [review] Patch from dtoa.c upstream er, I clicked submit too early... I'm not an NSPR peer, so tossing to ted for review. My initial look through the checks seems ok, though, and it's pretty similar to what gdtoa did to fix this problem. However, I did have a random question... this new code adds the use of free(), which was not used directly before in prdtoa. Do we need to do anything special because of it (such as use PR_Free() instead or something like that)? Comment on attachment 400623 [details] [diff] [review] Patch from dtoa.c upstream The best person to review this patch is a JS developer familiar with jsdtoa.cpp. Any volunteer? Is Brian Crowder still working on Mozilla? Reed, thanks for the review. We don't need to do anything special about the new FREE macro. If FREE is not defined, the code uses free(), which matches malloc(). (The code defines MALLOC as malloc by default. It should also define FREE as free by default.) Comment on attachment 400623 [details] [diff] [review] Patch from dtoa.c upstream Does js/src/dtoa.c have the same bug? Attachment #400623 - Flags: review?(crowder) → review+ Yes, js/src/dtoa.c has the same bug. We should open a JS bug for that. Why was Kmax reduced to 7? It's explained in the "changes" file entry (see comment 8). We don't want to add huge blocks to 'freelist'. It's possible that dtoa.c's author fixed this bug unknowingly when he reduced Kmax to 7. Comment on attachment 400623 [details] [diff] [review] Patch from dtoa.c upstream I checked this in on the NSPR trunk (NSPR 4.8.1). Checking in pr/src/misc/prdtoa.c; /cvsroot/mozilla/nsprpub/pr/src/misc/prdtoa.c,v <-- prdtoa.c new revision: 4.8; previous revision: 4.7 done (In reply to comment #15) > (From update of attachment 400623 [details] [diff] [review]) > I checked this in on the NSPR trunk (NSPR 4.8.1). We'll need to get NSPR tag(s) with the fix created for the various branches (trunk/1.9.2, 1.9.1, 1.9.0). Can you assist with that? Attachment #400623 - Flags: approval1.9.1.4? Comment on attachment 400623 [details] [diff] [review] Patch from dtoa.c upstream Approved for 1.9.1.4, a=dveditz I guess for the branches using hg (trunk,1.9.2,1.9.1) you just check those in there, but for 1.9.0.x we'll need an updated NSPR tag and corresponding client.mk change. Attachment #400623 - Flags: approval1.9.1.4? → approval1.9.1.4+ The hg branches should also use a tag: unless things have gone very strange we update hg to NSPR/NSS tags using client.py. Comment on attachment 400623 [details] [diff] [review] Patch from dtoa.c upstream This is blocking1.9.2+, so approval1.9.2 not needed. I will work on a new NSPR release this weekend and next Monday. Does any of the 1.9.0.15-based Mozilla products need to support Windows 9x or Mac OS X 10.2 and 10.3? The 1.9.0 branch already exclude those OS versions. . This does crash 1.9.1.3 cleanly on Linux. When I run this testcase on last night's 1.9.1.4pre build (Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.4pre) Gecko/20090921 Shiretoko/3.5.4pre), Firefox just hangs completely instead of crashing. Status1.9.1 says that this is fixed so this is either not fixed after all or the status was changed before the NSPR made it into the build so it isn't fixed yet. Can someone comment? The status of this bug has not been fixed on any Firefox build. The fix has been checked into the NSPR trunk (comment 15). oops, I was off by one field. Thanks. This is showing up in my "unverified for 1.9.1.4" query from the Tinderbox though for some reason. It's strange that if it isn't fixed, it isn't crashing in the same way as 1.9.1.3 though. I pushed the fix to mozilla-central in changeset 7ec23b6b3611: I pushed the fix to mozilla-1.9.2 in changeset fb2192ebeff0: Status: NEW → RESOLVED Closed: 10 years ago Resolution: --- → FIXED Target Milestone: --- → 4.8.1 I pushed the fix as part of the NSPR_4_8_1_BETA1 CVS tag to mozilla-1.9.1 in changeset ae5cf3fd2e69: I will push the NSPR_4_8_1_RTM tag to mozilla-1.9.1 after two days of testing. Please let me know before you do your first 1.9.1.4 candidate build. Firefox 3 is using NSPR_4_7_5_RTM, so the upgrade to NSPR 4.8.1 will be a minor version upgrade. But I'm confident it'll be uneventful. dveditz: Thunderbird 2 doesn't need this fix, right? Thunderbird 2 probably can get away without it (and certainly isn't going to be happy with an upgrade to nspr 4.8.1) but SeaMonkey 1.1.x definitely needs it. I think the testcases in this bug are testing bug 516862 -- the behavior on 1.9.0.15pre definitely changed even though this fix hasn't landed on the 1.9.0 branch yet (but that one has). I think we'd need an SVG testcase to trigger the NSPR version of this function. I'm not sure the fix is correct, though, since it still crashes (differently): see bug 516862 comment 16 Flags: wanted1.8.1.x+ Flags: blocking1.8.1.next+ Does SeaMonkey 1.1.x need to support Windows 9x or Mac OS X 10.2 and 10.3? I am considering adding an input length check to PR_strtod to prevent it from running for too long. The NSPR test case in this patch takes a long time to finish, so I can't check it in. It's very difficult to check the string length as we parse it, so I just added a string length check at the beginning of the PR_strtod function. If the string is too long, PR_strtod returns 0.0 and sets the PR_INVALID_ARGUMENT_ERROR. This is modeled after the strtod function, which returns 0.0 and sets errno to EINVAL when it cannot convert the string: A debug build of this test took 12 minutes to finish on Linux: wtc@aes$ time ./dtoa PASSED real 12m33.549s user 12m32.939s sys 0m0.044s This verifies the crash fix works, but we really need the input length check. Attachment #402497 - Attachment is obsolete: true (In reply to comment #28) > Created an attachment (id=402497) [details] > NSPR test case > > Does SeaMonkey 1.1.x need to support Windows 9x or Mac OS X 10.2 and 10.3? We currently have already broken Win9x support due to the NSS upgrade, but that might even be fixed. SeaMonkey 1.1.x has supported all those versions so far and it would be quite bad to break it within the series. We want to avoid that if in any way possible. (In reply to comment #31) > We currently have already broken Win9x support due to the NSS upgrade (Just to make it clear - not all of Win9x, only those installations that don't have IE4, i.e. esp. Win95 and NT 3.51) Uh, wait, as I understand it, SM 1.1.x has the same platform support as FF2, which is the set of platform supported for the 1.8 branch. As I understand it, that set is shown on It does not include Win95 or NT 3.5, but it does include Win98 and NT 4.0. No, 1.8.1 branch itself and therefore also SeaMonkey 1.1.x actually do support Win95 (with DCOM for Win95) and NT 3.51, Firefox/Toolkit did desupport those there without the actual platform losing support for them, so we continued to include those system, see - and a number of users were happy about it. There was no large outcry now that this NSS upgrade broke those systems where IE4 isn't installed, but we did get a number of people reporting it. Of course, the SeaMonkey 2.0 coming next month on a 1.9.1 base desupports everything below Win2k, but it's a huge jump in every respect after all. It would be nice if we could support the old systems for a few months still (even though I don't dare to talk about the number of security patches we are already missing on 1.8.1 branch compared to 1.9.0 or 1.9.1 so far. Reopening to make sure we get the new patch in. Status: RESOLVED → REOPENED status1.9.1: .4-fixed → ? Resolution: FIXED → --- (In reply to comment #29) > Created an attachment (id=402499) [details] > Add an input length check to PR_strtod Who can review this? Ted? status1.9.1: ? → .4-fixed Comment on attachment 402502 [details] [diff] [review] NSPR test case (no unrelated change) How about another NSPR module owner? Attachment #402502 - Flags: review+ Comment on attachment 402499 [details] [diff] [review] Add an input length check to PR_strtod Nelson, it's this patch that needs reviewing. The 20480 upper bound is just an arbitrary number I picked. We should pick a number that allows PR_strtod to finish in, say, 10 seconds. Thanks! The "reopen" should have applied to the branches, too. Clearing those fixed statuses until the additional patch lands (if I'm understanding this correctly). Comment on attachment 400623 [details] [diff] [review] Patch from dtoa.c upstream I verified with valgrind on Linux that this patch fixes the "Invalid write of size 1" error in prdtoa.c, rev. 4.7. wtc@aes:/home/wtc/nspr-tip/linux.dbg/pr/tests$ valgrind ./dtoa ==15994== Memcheck, a memory error detector ==15994== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward et al. ==15994== Using Valgrind-3.6.0.SVN and LibVEX; rerun with -h for copyright info ==15994== Command: ./dtoa ==15994== ==15994== Invalid write of size 1 ==15994== at 0x69019BB: memcpy /tmp/vg/memcheck/mc_replace_strmem.c:482 ==15994== by 0x691C8A7: multadd /home/wtc/nspr-tip/linux.dbg/pr/src/misc/../../../../mozilla/nsprpub/pr/src/misc/prdtoa.c:675 ==15994== by 0x691C97B: s2b /home/wtc/nspr-tip/linux.dbg/pr/src/misc/../../../../mozilla/nsprpub/pr/src/misc/prdtoa.c:712 ==15994== by 0x691E088: PR_strtod /home/wtc/nspr-tip/linux.dbg/pr/src/misc/../../../../mozilla/nsprpub/pr/src/misc/prdtoa.c:2027 ==15994== by 0x8048B80: main /home/wtc/nspr-tip/linux.dbg/pr/tests/../../../mozilla/nsprpub/pr/tests/dtoa.c:228 ==15994== Address 0x6950000 is not stack'd, malloc'd or (recently) free'd The same "Invalid write of size 1" error is repeated for three other addresses: ... ==15994== Address 0x6950001 is not stack'd, malloc'd or (recently) free'd ... ==15994== Address 0x6950002 is not stack'd, malloc'd or (recently) free'd ... ==15994== Address 0x6950003 is not stack'd, malloc'd or (recently) free'd I increased the maximum length to 64K characters. 32K: 0.72 seconds 64K: 2.86 seconds 128K: 11.45 seconds 64K seems long enough. Attachment #402499 - Attachment is obsolete: true Attachment #402883 - Flags: review?(nelson) Comment on attachment 402883 [details] [diff] [review] Add an input length check to PR_strtod v2 >+ for(s = s00, i = 0; *s && i < 64 * 1024; s++, i++) >+ ; >+ if (*s) { I believe the above lines of code are equivalent to the following two lines: +#define MAX_STR_LEN 64 * 1024 + if (MAX_STR_LEN == strnlen(s00, MAX_STR_LEN)) { and the strnlen library code might be more well optimized on some platforms than the compiler's code for that for loop. Since this entire exercise is about bounding time spent, seems like we should optimize where we can. >+ PR_SetError(PR_INVALID_ARGUMENT_ERROR, 0); >+ return 0.0; Which is better to return in this case? 0.0? or NAN? r+=nelson, because I believe the patch is correct as is, but if you think any of these suggestions is an improvement, and want to make it, please feel free to make another. Comment on attachment 402883 [details] [diff] [review] Add an input length check to PR_strtod v2 Nelson, thanks for the review comments. I considered strnlen but am not sure if it's available on all the platforms. Maybe we can add a configure test for it. The 0.0 return value and PR_INVALID_ARGUMENT_ERROR are modeled after strtod(). I checked in this patch on the NSPR trunk (NSPR 4.8.1). Checking in prdtoa.c; /cvsroot/mozilla/nsprpub/pr/src/misc/prdtoa.c,v <-- prdtoa.c new revision: 4.10; previous revision: 4.9 done Comment on attachment 402883 [details] [diff] [review] Add an input length check to PR_strtod v2 See Nelson's r+=nelson in comment 42. Comment on attachment 402883 [details] [diff] [review] Add an input length check to PR_strtod v2 I pushed this patch to mozilla-central in changeset ed8abff562ad: I pushed this patch to mozilla-1.9.2 in changeset 7a6b4aea8917: Status: REOPENED → RESOLVED Closed: 10 years ago → 10 years ago Resolution: --- → FIXED Comment on attachment 402883 [details] [diff] [review] Add an input length check to PR_strtod v2 Approved for 1.9.1.4, a=dveditz for release-drivers Attachment #402883 - Flags: review?(nelson) Attachment #402883 - Flags: review+ Attachment #402883 - Flags: approval1.9.1.4+ Comment on attachment 402883 [details] [diff] [review] Add an input length check to PR_strtod v2 I pushed this patch to mozilla-1.9.1 in changeset 491f6fe5c98d: Running Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.1.4pre) Gecko/20090929 Shiretoko/3.5.4pre, the Secunia PoC attached here and the testcase in comment 2 still hang Firefox (but no outright crashes). Comment on attachment 400490 [details] Testcase for JavaScript based on comment #1 Al: this test case is actually for JavaScript (bug 516862). This test case doesn't test NSPR's PR_strtod function. We should be able to construct an NSPR test case using SVG. Attachment #400490 - Attachment description: Testcase based on comment #1 → Testcase for JavaScript based on comment #1 This change seems to have caused a 0.82%-1.46% regression in our tSVG results. Is that expected and is there anything we can do to avoid that? Did we ever file a bug on jsdtoa here? As for comment 50, prdtoa is a huge performance bottleneck. We simply stopped using it in the CSS parser (since we don't in fact need most of the things it does for us)... I don't know whether that's reasonable in the SVG case; if not the only way I see to fix the performance issue is to do the length check during conversion. To be more precise, in the css parser we used to both call dtoa and ToInteger; we switched to producing both results at once; with some loss of precision on the dtoa that's not noticeable in practice due to how css stores its floats. (In reply to comment #51) > Did we ever file a bug on jsdtoa here? Yes, bug 516862.. (In reply to comment #55) >. This makes sense to me and I agree that it's probably not worth worrying about. I've opened a bug 519794 about adding a comment to the svg code about the performance of PR_strtod. If the problem becomes important we can deal with it then. It's not as hard as I thought, if I dive into the code. You can consider using this patch for JavaScript. This patch does the length check after PR_strtod has counted the number of digits. Since the various counters nd, nd0, nf, nz, nz0 have the type 'int', in 64-bit environment these counters can overflow if the input string is longer than 2G characters. I didn't bother to fix this integer overflow issue in this patch. The fix is probably to declare these counters as size_t and eliminate any compiler warnings by typecasts. Comment on attachment 400623 [details] [diff] [review] Patch from dtoa.c upstream I checked in this patch on the NSPR_4_7_BRANCH for NSPR 4.7.6. Checking in prdtoa.c; /cvsroot/mozilla/nsprpub/pr/src/misc/prdtoa.c,v <-- prdtoa.c new revision: 4.5.36.2; previous revision: 4.5.36.1 done This regressed SVG on 1.9.2 and trunk: bug 519839. (In reply to comment #57) > Created an attachment (id=403970) [details] > Do the length check during conversion If you land on 1.9.0, can you be sure to take this too to fix the SVG bustage? Reopening to make sure we get the new patch on trunk and all the branches. Status: RESOLVED → REOPENED status1.9.1: .4-fixed → ? Resolution: FIXED → --- Comment on attachment 403970 [details] [diff] [review] Do the length check during conversion Who can review this patch? You need to dive into the first part of the PR_strtod function. The 0 return value and the PR_INVALID_ARGUMENT_ERROR error code (equivalent to EINVAL) when no conversion could be performed come from the strtod man page in the Single Unix Specification: I checked in this patch on the NSPR trunk (NSPR 4.8.2). Checking in prdtoa.c; /cvsroot/mozilla/nsprpub/pr/src/misc/prdtoa.c,v <-- prdtoa.c new revision: 4.11; previous revision: 4.10 done Comment on attachment 403970 [details] [diff] [review] Do the length check during conversion Try Nelson again? (In reply to comment #61) > Reopening to make sure we get the new patch on trunk and all the branches. Why? This bug/symptom is fixed, reopening implies they aren't. We're taking additional patches to fix regressions but that's what the regression bug is for. I haven't followed this entire bug in great detail before now. Was the input length check that was added in attachment 402883 [details] [diff] [review] necessary to eliminate the vulnerability? Or was it merely a performance enhancement, bounding the time spent in this function on absurdly long strings? I'm not sure exactly what the objectives of the patch in attachment 403970 [details] [diff] [review] are, so it is difficult for me to judge if it accomplishes those or not. So, instead of giving r+ or r-, I will make some review observations and let Wan-Teh or others determine if these should be considered r- or not. The patch in attachment 402883 [details] [diff] [review] absolutely limited the input string length to an upper bound of a certain size in all cases. The patch in attachment 403970 [review] does not. Leading zero characters, before the decimal point (if any) are not counted against that limit. That's probably OK. Likewise, the number of zero characters immediately following the decimal point are not counted against that total, unless they are followed by a non-zero digit. That's probably OK too. I think I could construct some strings that would be detected by the patch in attachment 402883 [details] [diff] [review], but not by the patch in attachment 403970 [details] [diff] [review], and would waste oceans of CPU time, and maybe produce incorrect results. Would the existence of such a string mean that this patch is not accomplishing its objective? Consider an input string consisting of a '1' following by hundreds of thousands, or millions of digits, followed by a decimal point. Or alternatively, a decimal point, followed by hundreds of thousands, or millions of zero digits, followed by a non-zero decimal digit. Eventually, the huge number of digits would be detected, but not before hundreds of thousands, or millions, of integer multiplies (by 10) had been done, wasting LOTS of CPU time. Seems like, at the very least, the code should detect overflow of the accumulators in those multiply-by-10-and-add loops, but it doesn't. But that's probably a separate pre-existing bug. This patch is equivalent to the previous patch but is more readable. 'nd' won't change after the dig_done label, so it's better to check 'nd' as soon as we reach that stage. dveditz: Can we just fix the vulnerability for now? The input length check requires a lot of time from two developers (one to write the patch, the other to review it). I can't spend too much time on this bug. Nelson, the input length check is not necessary to eliminate the vulnerability. It merely bounds the time spent in PR_strtod on absurdly long strings. However, without the input length check, it's hard for QA to verify the vulnerability fix because the application consumes 100% CPU for a very long time (at least two minutes) on Reed's test case for JavaScript. The original input length check is incorrect because it applies to the entire input string, but we should only check the initial part of the input string that is a number. For example, an absurdly long string of the form "0.0not a number not a number not a number ..." should be accepted by PR_strtod because only the "0.0" part of the string needs to be converted. The original input length check causes this valid input to fail. During the initial parsing phase, up to the dig_done label, PR_strtod never multiplies by 10 for more than roughly DBL_DIG + 1 times. Attachment #403970 - Attachment is obsolete: true BTW, FWIW, after reviewing the portion of the code that Wan-Teh patched, I'm not sure that the function PR_strtod will produce the correct result for a number with more than about 20 digits, e.g. a number like 10^22 when represented without an E+ notation, e.g. 10000000000000000000000. which is too big to be represented in a 64-bit integer. I believe that PR_strtod _should_ be able to produce the same answer for that value as for 1E+22, an answer that is correct to the limits of the precision of a double, but I suspect it doesn't. I should write a little test program. But this is a separate issue. (In reply to comment #66) > dveditz: Can we just fix the vulnerability for now? The input > length check requires a lot of time from two developers (one > to write the patch, the other to review it). I can't spend > too much time on this bug. We agree. How do we get to a state where the crash/security-bug is fixed, but we have to live with a hang on stupidly-long strings? I think that means we need to back-out the checkins in comment 45 and comment 47, and create a new 4.7.6 BETA tag that Firefox 3.0 can use. reftest for what's needed to keep SVG working. I backed out the input length check on the NSPR trunk (NSPR 4.8.2) and NSPR_4_7_BRANCH (NSPR 4.7.6). Checking in prdtoa.c; /cvsroot/mozilla/nsprpub/pr/src/misc/prdtoa.c,v <-- prdtoa.c new revision: 4.12; previous revision: 4.11 done Checking in prdtoa.c; /cvsroot/mozilla/nsprpub/pr/src/misc/prdtoa.c,v <-- prdtoa.c new revision: 4.5.36.4; previous revision: 4.5.36.3 done The new NSPR_4_8_2_RTM and NSPR_4_7_6_BETA2 tags have the vulnerability fix but not the input length check. I pushed NSPR_4_8_2_RTM to mozilla-central in changeset f6b3d9aaaa7f: I pushed NSPR_4_8_2_RTM to mozilla-1.9.2 in changeset 449846a09609: I pushed NSPR_4_8_2_RTM to mozilla-1.9.1 in changeset d9554296d4e2: Status: REOPENED → RESOLVED Closed: 10 years ago → 10 years ago status1.9.1: ? → .4-fixed Resolution: --- → FIXED Comment on attachment 400623 [details] [diff] [review] Patch from dtoa.c upstream Approved for 1.9.0.15, a=dveditz Attachment #400623 - Flags: approval1.9.0.15? → approval1.9.0.15+ This tag update, then, should be all that remains to fix this on the 1.9.0 branch for Firefox 3.0 Attachment #404742 - Flags: review?(wtc) Comment on attachment 400623 [details] [diff] [review] Patch from dtoa.c upstream I checked this in (via the NSPR_4_7_6_BETA2 tag) on the Mozilla 1.9.0 branch. Checking in client.mk; /cvsroot/mozilla/client.mk,v <-- client.mk new revision: 1.395; previous revision: 1.394 done Comment on attachment 404742 [details] [diff] [review] update client.mk for 1.9.0 r=wtc. I already committed this (see comment 74). (In reply to comment #67) Nelson, PR_strtod always protects multiplications of 'y' and 'z' by 10 with nd < xxx checks: I verified that PR_strtod converts "10000000000000000000000" to 1e22. Comment on attachment 404742 [details] [diff] [review] update client.mk for 1.9.0 dveditz: Do the 1.9.0 builds look good? If you don't know of any problem introduced by NSPR_4_6_7_BETA2, I will create the NSPR_4_7_6_RTM tag today. Comment on attachment 404742 [details] [diff] [review] update client.mk for 1.9.0 I updated the NSPR tag to NSPR_4_7_6_RTM for Mozilla 1.9.0. Checking in client.mk; /cvsroot/mozilla/client.mk,v <-- client.mk new revision: 1.396; previous revision: 1.395 done As spoken on IRC with dveditz the hang is expected for now due to the regression (bug 519839) started with the fix for the hang. So with the following builds no crash happens anymore. Marking verified fixed on 1.9.1, 1.9.2, and trunk. Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.3a1pre) Gecko/20091007 Minefield/3.7a1pre ID:20091007093342 Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2b1pre) Gecko/20091007 Namoroka/3.6b1pre ID:20091007034618 Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.4) Gecko/20091006 Firefox/3.5.4 ID:20091006224018 Status: RESOLVED → VERIFIED Keywords: verified1.9.1, verified1.9.2 Al, Henrik: We don't have an SVG test case that makes Firefox crash in prdtoa.c. Reed's test case exercises JavaScript's copy of dtoa.c. Reed's test case has an input string of 1024 * 1024 characters. A shorter test case of 384 * 1024 characters should still crash JavaScript, and will take only 2 minutes to complete with the crash fix. It'll allow QA to verify the crash fix. Thanks Wan-Teh but I was already able to verify the fix even when I had to wait around 5 minutes for each test. Oh, and I missed to say that I have also run tests on Windows and Linux. I did the same thing as Henrik with 1.9.0 (Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.15) Gecko/2009101601 Firefox/3.0.15 (.NET CLR 3.5.30729)) and verified the lack of a crash, even though it takes forever. Keywords: fixed1.9.0.15 → verified1.9.0.15 Al, Henrik: there were no browser test cases for this NSPR bug. If you used the first attachment in this bug, you verified the JavaScript bug. Please use this SVG test case to verify this NSPR bug. That was what I was afraid of. So I verified the *other* bug. Great. Verified for 1.9.0.15 using the new testcase from comment 83. Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.15) Gecko/2009101601 Firefox/3.0.15 (.NET CLR 3.5.30729) No crash with the newly attached SVG testcase with builds on OS X and Windows: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.4) Gecko/20091016 Firefox/3.5.4 ID:20091016081620 Can we make sure to get those tests checked-in once this bug gets opened to the public? Sadly we don't have a in-testsuite flag for NSPR. Maksymilian Arciemowicz just informed security@m.o that this bug seems to be similar to -- if not the same as -- CVE-2009-0689 (). He reported the issue in June to NetBSD, FreeBSD, and to Google Chrome (as per). A slightly different fix was used in at least NetBSD's gdtoa implementation: Are we sure this bug is completely (and correctly) fixed? This is the current version of dtoa.c in the V8 JavaScript engine used by Google Chrome: The fix is the same as ours except for the reduction of Kmax from 15 to 7: Group: core-security Changed the target milestone from 4.8.1 to 4.8.2. The fix for this bug in NSPR 4.8.1 was incorrect. Target Milestone: 4.8.1 → 4.8.2 Updating CVE number per Mitre, coalescing -1563 into the older -0689 Alias: CVE-2009-1563 → CVE-2009-0869 Alias: CVE-2009-0869 → CVE-2009-0689 wtc said the 1.8.1 branch can be updated to NSPR 4.7.6, that it both contains the fix and corrects the bustage on older Mac platforms that kept us from using 4.7.5 on that branch. It'll be important to test on the oldest supported versions of Win and Mac to verify we didn't break things. The fix itself doesn't need much more than simple verification as 4.7.6 is the version we're currently using on the 1.9.0 branch Keywords: fixed1.8.1.24 (In reply to comment #91) See bug 509262 comment 1 for the changes I made in NSPR 4.7.6 to restore support for Mac OS X 10.2 and 10.3. Please focus your testing on two things on Mac OS X: 1. No build errors. 2. Test a feature that uses the PR_GetPhysicalMemorySize function. Alternatively, attach a debugger and verify PR_GetPhysicalMemorySize returns the right value. You don't need to do extra testing on Windows. MoCo QA has a single OS X 10.3 machine available to test on. We don't have 10.2. As far as testing a feature that uses the PR_GetPhysicalMemorySize function, can someone suggest one for Thunderbird or Seamonkey 1.1? After doing a few rounds with IT, QA has no way to test on OS X 10.2. We have a 10.3 machine to test on but no access to 10.2 media to even install it. Al: testing on just Mac OS X 10.3 is fine. Don't go out of your way to install Mac OS X 10.2. This broke, at the very least, OS X 10.5 and 10.6. Thunderbird 2 will just hang, before even getting to profile manager, when invoked. Here is a sample from the latest nightly with the problem: Last good build is Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.24pre) Gecko/2010022203 Thunderbird/2.0.0.24pre The only checkin is this bug. I'm going to find out if 10.4 is broken. We've been waiting on an IT fix happening right now to check 10.2 and 10.3 but that should be resolved anytime. Nick Thomas has 10.4 and commented in IRC that it is broken there as well. <humor> Fortunately for us all, 10.2 and 10.3 work great! I just downloaded my mail on 10.2 from a test account! </humor> The 10.2 and 10.3 tests were on a PPC box. I checked 10.4 on PPC (the highest we have there) and it works there as well. It looks like Intel is what is busted. I am told that for PPC, we build with the 10.2.8 SDK and for Intel I think we use the 10.4u SDK. (In reply to comment #99) > I am told that for PPC, we build with the 10.2.8 SDK and for Intel I think we > use the 10.4u SDK. That's right That file is sourced at the top of the mozconfig for the build. Dunno if this is related, but when we landed NSPR 4.7.3 on the 1.9.0 branch (cvs head) we had to back it out because it caused bug 466531 (crashes on PPC). Relanding 4.7.3 involved patching mozilla/js/src/jscpucfg.cpp, and the 1.8.1 branch does not have those fixes. Since PPC is working fine now maybe that's completely unrelated. Dan: good memory! Al: What's the "uname -a" output of the Mac build machine? If you cross-compile or do a universal build, you should backport my JS patch from bug 466531. Use attachment 353918 [details] [diff] [review]. Wan-Teh, I think you want to ask Nick this. I'm QA so I don't work with the Mac build machine and I don't have 1.8.1 setup to build on any box that I own. The box is Darwin bm-xserve05 8.7.0 Darwin Kernel Version 8.7.0: Fri May 26 15:20:53 PDT 2006; root:xnu-792.6.76.obj~1/RELEASE_PPC Power Macintosh powerpc so we're building natively on PPC, then cross-compile for Intel, then create a Universal build from those. Dan, could you look at the suggested attachment ? Nick: Thanks. Then the Intel bits cross-compiled on that PPC box are incorrect in JavaScript. Could you or Dan backport the JS changes in attachment 353918 [details] [diff] [review] to the branch? Thanks. The patch applied cleanly, shifted a bit (minus the client.mk part, of course). I've attached a merged version here because I can't check into mozilla/js/src -- it's locked down. Comment on attachment 429065 [details] [diff] [review] jscpucfg fix from bug 466531 for the 1.8 branch Approved for 1.8.1.24, a=dveditz Attachment #429065 - Attachment description: jscpucfg fix for the 1.8 branch → jscpucfg fix from bug 466531 for the 1.8 branch Attachment #429065 - Flags: approval1.8.1.next+ Comment on attachment 429065 [details] [diff] [review] jscpucfg fix from bug 466531 for the 1.8 branch This additional fix checked in to the 1.8.1 branch I note that this testcase pegs my CPU on Seamonkey 1.1's nightly and on my current Firefox 3.7 nightly both. The bustage for Thunderbird from previous checkins is now fixed and TB is verified to run on OS X 10.5 and 10.6 intel and on 10.4 and 10.3 PPC. Re-reading all of the old comments, including my own, I realize that the case in comment 1 is for the JS version of this, not the SVG version, per Wan-Teh. I've already verified that SVG is fixed in bug 519839. The JS version was bug 521306. Do we need to fix this for 1.8.1.24? The js fix is bug 516882, and we've taken that in 1.8.1.24; bug 521306 is about the fact that now that we've prevented the crash we realize the routine can get really really slow on huge numbers. We're not going to hold 1.8.1.24 for that DoS fix. > The js fix is bug 516882 I meant bug 516862, of course (see comment 27)
https://bugzilla.mozilla.org/show_bug.cgi?id=516396
CC-MAIN-2019-43
refinedweb
6,687
77.13
public class Function { public static void main(String args[]) { System.out.println(power(3,2)); System.out.println(power(3,2)); System.out.println(power(2)); } public long power(int m) { return m*m; } public long power(int m,int n) { long product=1; for(int i=1;i<=n;i++) { product=product*m; } return product; } } Compiler displays this error :- Function.java:5: non-static method power(int,int) cannot be referenced from a static context [edit] Sorry about the indentation thingy :/ I'll keep that in mind from now on. Ok so I just added static keyword and it's working fine now. What difference does this static keyword make ? I am a beginner to java and have not yet studied about what static does. I sure will read it in further chapters of the book but someone please give me an idea what it does. Thanks.
http://ansaurus.com/question/3047079-java-beginner-question-what-is-wrong-with-the-code-below
CC-MAIN-2019-30
refinedweb
148
54.93
Gradient Descent with Backpropagation Assumptions/Recommendations: I assume you know matrix/vector math, introductory calculus (differentiation, basic understanding of partial derivatives), how basic feedforward neural nets work and know how to compute the output of a 2-layer neural net, and basic python/numpy usage. I recommend Andrew Ng's machine learning coursera course as a pre-requisite for this (at least through the neural net portion of the course). I also recommend you first try to implement the code using Matlab/Octave because the syntax for linear algebra operations is much cleaner and can prevent bugs that occur when using numpy. This is a tutorial for a specific group of people given the aforementioned assumptions. It's for people like me. I have a programming background, but a very weak math background (I only took basic college calculus, including some multivariate). I went through Andrew Ng's machine learning course on Coursera. I understood and could make a neural network run forward, that's pretty easy. It's just some straightforward sequence of matrix multiplication. But I did not get backpropagation. I understood the general principle, but I didn't really get it. I voraciously read through all the beginner tutorials about neural networks and how to train them. Most machine learing literature is at an advanced level, often described in largely mathematical terms, so even a lot of these so called beginner-level guides went over my head. I almost gave up, but decided to do just a little more researching, a little more thinking, and finally, it clicked and I built my own simple neural network trained by backpropagation and gradient descent. This is written for people like me, hopefully so I can save you the pain of searching all over the internet for various explanations, to elucidate how backpropagation and gradient descent work in theory and in practice. If you've already taken Andrew Ng's course or have done some research on backprop already, then some of this will be redundant, but I don't want to just start in the middle of the story, so bear with me. I will not shy away from math, but will try to explain it in an incremental, intuitive way. Our objective in training a neural network is to find a set of weights that gives us the lowest error when we run it against our training data. In a previous post, I described how I implemented a genetic algorithm (GA) to iteratively search the "weight-space," if you will, to find an optimal set of weights for the toy neural network. I also mentioned there that GAs are generally much slower than gradient descent/backpropagation, the reason is, unlike GAs where we iteratively select better weights from a random pool, gradient descent gives us directions on how to get weights to an optimum. It tells us whether we should increase or decrease the value of a specific weight in order to lower the error function. It does this using derivatives. Let's imagine we have a function $f(x) = x^4 - 3x^3 + 2$ and we want to find the minimum of this function using gradient descent. Here's a graph of that function: from sympy import symbols, init_printing from sympy.plotting import plot %matplotlib inline init_printing() x = symbols('x') fx = x**4 - 3*x**3 + 2 p1 = plot(fx, (x, -2, 4), ylim=(-10,50)) #Plotting f(x) = x^4 - 3x^3 + 2, showing -2 < x <4 As you can see, there appears to be a minimum around ~2.3 or so. Gradient descent answers this question: If I start with a random value of x, which direction should I go if I want to get to the lowest point on this function? Let's imagine I pick a random x value, say x = 4, which would be somewhere way up on the steep part of the right side of the graph. I obviously need to start going to the left if I want to get to the bottom. This is obvious when the function is an easily visualizable 2d plot, but when dealing with functions of multiple variables, we need to rely on the raw mathematics. Calculus tells us that the derivative of a function at a particular point is the rate of change/slope of the tangent to that part of the function. So let's use derivatives to help us get to the bottom of this function. The derivative of $f(x) = x^4 - 3x^3 + 2$ is $f'(x) = 4x^3 - 9x^2$ . So if we plug in our random point from above (x=4) into the first derivative of $f(x)$ we get $f'(4) = 4(4)^3 - 9(4)^2 = 112$. So how does 112 tell us where to go? Well, first of all, it's positive. If we were to compute $f'(-1)$ we get a negative number (-13). So it looks like we can say that whenever the $f'(x)$ for a particular $x$ is positive, we should move to the left (decrease x) and whenever it's negative, we should move to the right (increase x). Let's formalize this: When we start with a random x and compute it's deriative $f'(x)$, our new x should be proportional to $x - f'(x)$. I say proportional to because we want to control to what degree we move at each step, for example when we compute $f'(4)=112$, do we really want our new $x$ to be $x - 112 = -108$? No, if we jump all the way to -108, we're even farther from the minimum than we were before. We want to take relatively small steps toward the minimum. So instead, let's say that for any random x, we want to take a step (change x a little bit) such that our new $x$ $ = x - \alpha*f'(x)$. We'll call $\alpha$ our learning rate because it determines how big a step we take. $\alpha$ is something we will just have to play around with to find a good value. Some functions might require bigger steps, others smaller steps. In this case, we want small steps because the graph is very steep at each end. Let's set $\alpha$ to 0.001. This means, if we randomly started at $f'(4)=112$ then our new $x$ will be $ = 4 - (0.001 * 112) = 3.888$. So we moved to the left a little bit, toward the optimum. Let's do it again. $x_{new} = x - \alpha*f'(3.888) = 3.888 - (0.001 * 99.0436) = 3.79$ Nice, we're indeed moving to the left, closer to the minimum of $f(x)$, little by little. Let's use a little python script to make it go all the way (this is from wikipedia). x_old = 0 x_new = 4 # The algorithm starts at x=4 gamma = 0.01 # step size precision = 0.00001 def f_derivative(x): return 4 * x**3 - 9 * x**2 while abs(x_new - x_old) > precision: x_old = x_new x_new = x_old - gamma * f_derivative(x_old) print("Local minimum occurs at", x_new) Local minimum occurs at 2.2500325268933734 I think that should be relatively straightforward. The only thing new here is the precision constant. That is just telling the script when to stop. If x is not changing by more than 0.00001, then we're probably at the bottom of the "bowl" because our slope is approaching 0, and therefore we should stop and call it a day. Now, if you remember some calculus and algebra, you could have solved for this minimum analytically, and you should get $\frac 94 = 2.25$. Very close to what our gradient descent algorithm above found. That's it, that's gradient descent. Well, it's vanilla gradient descent. There are some bells and whistles we could add to this process to make it behave better in some situations, but I'll have to cover that in another post. One thing to note, however, is that gradient descent cannot gaurantee finding the global minimum of a function. If a function contains local and global minima, it's quite possible that gradient descent will converge on a local optimum. One way to deal with this is to just make sure we start at random positions, run it a few times, and see if we get different results. If our random starting point is closer to the global minimum than to the local minimum, we'll converge to that. As you might imagine, when we use gradient descent for a neural network, things get a lot more complicated. Not because gradient descent gets more complicated, it still ends up just being a matter of taking small steps downhill, it's that we need that pesky derivative in order to use gradient descent, and the derivative of a neural network cost function (with respect to its weights) is pretty intense. It's not a matter of just analytically solving $f(x)=x^2, f'(x)=2x$ , because the output of a neural net has many nested or "inner" functions, if you will. That's why someone discovered backpropagation. Backpropagation is simply a method of finding the derivative of the neural net's cost function (with respect to its weights) without having to do crazy math. Also unlike our toy math problem above, a neural network may have many weights. We need to find the optimal value for each individual weight to lower the cost for our entire neural net output. This requires taking the partial derivative of the cost/error function with respect to a single weight, and then running gradient descent for each individual weight. Thus, for any individual weight $W_j$, $weight_{new} = W_j - \alpha*\frac{\partial C}{\partial W_j}$ and as before, we do this iteratively for each weight, many times, until the whole network's cost function is minimized. I'm by nature a reductionist, that is, I like to reduce problems to the simplest possible version, make sure I understand that, and then I can build from there. So let's do backpropagation and gradient descent on the simplest possible neural network (in fact, it's hardly a network), a single input, a single output, no bias units. We want our NN to model this problem: X | Y ------- 0.1 | 0 0.25| 0 0.5 | 0 0.75| 0 1.0 | 0 That is, our network is simply going to return 0 for any input $x$ between 0 and 1 (not including 0 itself, the sigmoid function will always return 0.5 if $x=0$). That's as simple as it gets. Here's a diagram of our network: Where $f = sigmoid(W_1 * X_1)$ and $$sigmoid = \frac{1}{(1+e^{-z})}$$ You should already be familiar with the sigmoid function and it's shape, it squashes any input $x \in \Bbb R$ (any input, $x$ in the real numbers), between $0-1$ Clearly, we just need $W_1$ to be a big negative number, so that any $x$ will become a big negative number, and our $sigmoid$ function will return something close to 0. Just a quick review of partial derivatives, say we have a function of two variables, $f(x,y) = 2x^2 + y^3$, then the partial derivative of $f(x,y)$ with respect to $x$, $\frac{\partial f}{\partial x} = 4x$ and $\frac{\partial f}{\partial y} = 3y^2$. Thus, we just pretend like $y$ is a constant (and it disappears like in ordinary differentiation) when we partially differentiate with respect to $x$ (and vice versa for $y$) Now let's define our cost (or error) function. A commonly used cost function is the Mean Squared Error (MSE), which is $$C(h(x)) = \frac{1}{2m}\sum_1^m{(h(x) - y)^2}$$ $h(x)$ is whatever our output function is (in our case the neural net's sigmoid). This says, that for every $x$: ${x_1, x_2, x_3 ... x_m}$ and therefore every $h(x)$, we substract $y$ which is the expected or ideal value, square it, and then add them all up, at the end multiply the sum by $\frac 1{2m}$, where $m$ is the number of training examples, (e.g. in the above neural net, we define 5 training examples.) Let's assume we built this neural net, and without training it all, just with a random initial weight $W_1 = 0.3$, it outputs $h(0.1) = 0.51$ and $h(0.25) \approx 0.52$. Let's compute the error/cost with our function defined above for just the first two training examples to keep it simple. $$m = 2$$ $$h(x) = \frac{1}{(1+e^{-0.3*x_m})}$$ $$ C(W_j) = \frac{1}{2m}\sum_1^m{(h(x) - y)^2} $$ $$ C(0.3) = \frac{1}{2m}\sum_1^m{(h(x) - y)^2} = \frac{1}{4}(0.51 - 0)^2 + \frac{1}{4}(0.52 - 0)^2 = \mathbf{0.133}$$ So our cost is 0.133 with a random initial weight $W_j$ = 0.3. Now the hard part, let's differentiate this whole cost function with respect to $W_j$ to figure out our next step to do gradient descent.Let me first make an a priori simplification. If you tell Wolfram|Alpha to differentiate our sigmoid function, it will tell you this is the derivative: $$sigmoid' = e^x/(e^x+1)^2$$. It turns out, there is a different, more simple form of this (I won't prove it here, but you can plug in some numbers to verify yourself). Let's call our sigmoid function $g(x)$, the derivative of $g(x)$ with respect to x, $$\frac{d}{d{x}}g(x) = g(x) * (1 - g(x))$$ This says that the derivative of $g(x)$ is simply the output of $g(x)$ times 1 - the output of $g(x)$ Much simpler right? Okay, so now let's differentiate our cost function. We have to use the chain rule.$$u = h(x) - y$$ $$ C(W_j) = \frac{1}{2m}\sum_1^m {(u)^2} $$ $$ \frac{d}{dW_j}C(W_j) = \frac{1}{2m}\sum_1^m 2u = \frac{1}{m}\sum_1^m u*u'$$ Remember, $u = h(x) - y$, we treat $y$ like a constant (because it is here, it can't change), and thus $$u' = h'(x) = h(x)*(1-h(x))$$ Putting it all together...$$ \frac{d}{dW_j}C(W_j) = \frac{1}{m}\sum_1^m (h(x) - y) * h'(x) = \frac{1}{m}\sum_1^m (h(x) - y) * h(x) * (1 - h(x))$$ Great, now we have $\frac{d}{dW_j}C(W_j)$ which is what we need to do gradient descent. Each gradient descent step we take will be $$W_j = W_j - \alpha * \frac{d}{dW_j}C(W_j)$$ Notice how this is not technically a partial derivative since there's only 1 weight, so just keep in mind this is a big simplification over a "real" neural net. Let's write some code.. #Our training data X = np.matrix('0.1;0.25;0.5;0.75;1') y = np.matrix('0;0;0;0;0') #Let's "Randomly" initialize weight to 5, just so we can see gradient descent at work weight = np.matrix('5.0') #sigmoid function def sigmoid(x): return np.matrix(1.0 / (1.0 + np.exp(-x))) #run the neural net forward def run(X, weight): return sigmoid(X*weight) #2x1 * 1x1 = 2x1 matrix #Our cost function def cost(X, y, weight): nn_output = run(X,weight) m = X.shape[0] #num training examples, 2 return np.sum((1/m) * np.square(nn_output - y)) print('Cost Before Gradient Descent: %s \n' % cost(X, y, weight)) #Gradient Descent alpha = 0.5 #learning rate epochs = 2000 #num iterations for i in range(epochs): cost_derivative = np.sum(np.multiply((run(X, weight) - y), np.multiply(run(X, weight), (1 - run(X, weight))))) weight = weight - alpha * cost_derivative print('Final Weight: %s\n' % weight) print('Final Cost: %s \n' % cost(X, y, weight)) Cost Before Gradient Descent: 0.757384221746 Final Weight: [[-24.3428836]] Final Cost: 0.00130014553005 np.round(run(X, weight),2) array([[ 0.08], [ 0. ], [ 0. ], [ 0. ], [ 0. ]]) It worked! You can change the $y$ matrix to be all $1$'s if you want to prove that gradient descent will then guide our weight in the other (more positive) direction. But let's be honest, that reductionist problem was super lame and completely useless. But I hope it demonstrates gradient descent from the mathematical underpinnings to a Python implementation. Please keep in mind that we have not done any backpropagation here, this is just vanilla gradient descent using a micro-neural net as an example. We did not need to do backpropagation because the network is simple enough that we could calculate $\frac{d}{dW_j}C(W_j)$ by hand. Eventually we'll learn the backpropagation process for calculating every $\frac{\partial}{\partial W_j}C(W)$ in an arbitrarily large neural network. But before then, let's build up our skills incrementally. So the 2 unit 'network' we built above was cute, and the gradient descent worked. Let's make a slightly more difficult network by simply adding a bias unit. Now our network looks like this: We'll attempt to train this network on this problem: All it's doing is outputting the opposite bit that it receives as input. This problem could not be learned by our 2 unit network from above (think about it if you don't know why). Now that we have two weights, which we'll store in a $1x2$ weight vector, $\mathbf W$ and our inputs in a 2x1 vector, $\mathbf X$, we'll definitely be dealing with partial derivatives of our cost function with respect to our two weights. Let's see if we can solve for $\frac{\partial}{\partial W_1}C(W)$ and $\frac{\partial}{\partial W_2}C(W)$ analytically. Remember our derivative of the cost function is: $$ \frac{d}{dW_j}C(W_j) = \frac{1}{m}\sum_1^m (h(x) - y) * h'(x)$$ but our $h(x)$ is no longer a normal $sigmoid(W_1*x_m)$, which we could easily derive. It's now $h(x) = sigmoid(W * X)$ where W is a 1x2 vector and X is our 2x1 training examples (0,1). If you remember from vector multiplication, this turns out to be $W * X = (W_1*x_0) + (W_2*x_1)$. So we can rewrite $h(x) = sigmoid(W_1*x_0 + W_2*x_1)$. This definitely changes our derivative. But not too much; remember than $x_0$ (our bias) is always 1. Thus it simplifies to: $h(x) = sigmoid(W_1 + W_2*x_1)$. If we solve for the derivative of $h(x)$ with respect to $W_1$, we simply get the same thing we got last time (i'm using $g(x)$ as shorthand for $sigmoid$) $$\frac{\partial C_w(x)}{\partial W_1} = g(W_1 + W_2*x_1) * (1 - g(W_1 + W_2*x_1))$$ But when we solve for the partial with respect to $W_2$, we get something slightly different. $$\frac{\partial C_w(x)}{\partial W_2} = g(W_1 + W_2*x_1) * (1 - g(W_1 + W_2*x_1))*x_1$$ I'm not going to explain why we have to add that $x_1$ as a multiplier on the outside, it's just the result of chain rule differentiation. You can plug in into Wolfram Alpha or something and it can give you the step by step. Since we have our two partial derivatives of the cost function with respect to each of our two weights, we can go ahead and do gradient descent again. I modified our previous neural network to be a 3 unit network now. import numpy as np import matplotlib.pyplot as plt from matplotlib import cm #these are for later (we're gonna plot our cost function) from mpl_toolkits.mplot3d import Axes3D #Our training data X = np.matrix('0 1;1 1') y = np.matrix('1;0') #Let's randomly initialize weights weights = np.matrix(np.random.normal(0, 5, (2,1))))) print('Initial Weight: %s\n' % weights) print('Cost Before Gradient Descent: %s \n' % cost(X, y, weights)) #Gradient Descent alpha = 0.05 #learning rate epochs = 12000 #num iterations for i in range(epochs): #Here we calculate the partial derivatives of the cost function for each weight costD1 = np.sum(np.multiply((run(X, weights) - y), np.multiply(run(X, weights), (1 - run(X, weights))))) costD2 = np.sum(X[:,0] * np.multiply((run(X, weights) - y), np.multiply(run(X, weights), (1 - run(X, weights)))).T) weights[0] = weights[0] - alpha * costD1 weights[1] = weights[1] - alpha * costD2 print('Final Weight: %s\n' % weights) print('Final Cost: %s \n' % cost(X, y, weights)) print('Result:\n') print(np.round(run(X, weights))) print('Expected Result\n') print(y) Initial Weight: [[-0.14935895] [ 7.10444213]] Cost Before Gradient Descent: 0.499047925023 Final Weight: [[-4.77235179] [ 2.4814493 ]] Final Cost: 0.00719841740292 Result: [[ 1.] [ 0.]] Expected Result [[1] [0]] Sweet! It worked! But I actually had to run that a couple of times for it to converge to our desired goal. This is because, depending on what the initial weights were, gradient descent is liable to fall into local minima if those are closer. Let's explore this further... I modified the code to our 3 unit network from above so that it would plot a 3d surface of the cost function as a function of $W_1$ and $W_2$. Please note, I tend to use the term weights and theta interchangeably (e.g. theta1 = weight1).)) #MSE #random pairs of theta1 and theta2 to feed into our = 160 ax.set_xlabel('Theta1') ax.set_ylabel('Theta2') ax.set_zlabel('Cost') plt.show() Neat! A 3d surface of our cost function. Just in case it's not obvious, the more red parts of the surface are the highest (highest cost) and the darkest green/black is the lowest part of the plot (lowest cost). Notice how there's a big mountain peak and a deep valley, but then there are these very flat areas on each side. Those are basically local minima. If our weights initialize somewhere in there, it's not likely they'll escape down into the low cost valley because the gradients (slopes) in those parts are extremely small. Imagine we put a ball on top of the hill, that's our weight vector. Then we give it a little kick. There's a good chance it will roll down into the valley. But if we put the ball on the flat "grassy" area and give it a little kick, there's a good chance it'll stay in that flat area. As it turns out, there's actually something we can do to make our situation better. We can simply define a different cost function. While the Mean Squared Error function is nice and simple and easy to comprehend, it clearly can lead to a lot of local minima. There's a much better cost function called a cross-entropy cost function, but unfortunately, it looks a lot uglier than the MSE. Here it is (I'm using $\theta$ to denote the weight vector):$$C(\theta) = \frac 1m * \sum_1^m [-y * log((h_{\theta}(x))) - (1 - y)(log(1 - (h_{\theta}(x)))]$$ Yikes, it must be horrible to figure out the derivative for that beast right? Actually it turns out to be okay. I'm not going to go through the steps of differantiating it, but if you really want to know please see reference #2. $$ \frac{\partial C}{\partial W_j} = \frac 1m \sum_1^m x_m(h(x) - y) $$ Now this is just the partial derivative for the weights in the last layer, connecting to the output neurons. Things get more complicated when we add in hidden layers. That's why we need backpropagation. But first, let's see how our new cost function compares to the MSE. I'm literally copying and pasting the code from above, and only changing the return line in our cost function. Let's take a look.( -y.T*np.log(nn_output) - (1-y).T*np.log(1-nn_output)); #cross entropy = 200 ax.set_xlabel('Theta1') ax.set_ylabel('Theta2') ax.set_zlabel('Cost') plt.show() Again, more red = higher cost, more dark = lower cost. Hopefully you can appreciate that this cost function looks much smoother, with pretty much no major local minima. If you placed a ball anywhere on this surface and gave it a little kick, it would almost certainly roll to the low cost valley. It's been a long time coming, but we're finally ready to delve into backpropagation. But if you've followed me so far, then backpropagation should make a lot more sense. As I've already mentioned before, backpropagation is simply the method or process we can use to compute $\frac{\partial C}{\partial W_j}$ for any weight in the network, including hidden weights. I'm not going to prove backpropagation to you here because I just don't have the mathematical skills to do so, but I will show you how to use it. In order to demonstrate backpropagation, we're going to build a 3-layer neural network to solve the XOR problem. Below is the neural net architecture and the truth table for XOR. As you can see, our 3-layer network has 2 inputs, 3 hidden neurons, and 1 output (ignoring the bias nodes). There are two theta matrices between each layer, $\theta_1 = 3x3$ and $\theta_2 = 4x1$ (referring to matrix dimensions). Great, now how do we calculate those partial derivatives so we can do gradient descent? Here are the steps. 1. Starting at the output neurons, we calculate their node delta, $\delta$. In our case, we only have one output neuron, and therefore a $\delta_1$. Delta's represent the error for each node in the network. It's easy to calculate the output node's delta, it's simply $\delta_1 = (h(x) - y)$ which is what the output neuron output minus $y$, the expected value. Note, even if we had more than one output neuron, we could calculate all their deltas, $\delta_1 ... \delta_j$, in the same way, $\delta_j = (h(x) - y)$ and at the same time if $h(x)$ and $y$ are vectors. 2. Here's where we start the backpropagation. To calculate the previous layer's (in our case, the hidden layer) deltas, we backpropagate the output errors/deltas using this formula: $$\delta_j^{l-1} = (\theta_2 * \delta_j^l) \odot (a^{l-1} \odot (1 - a^{l-1}))$$ Where $*$ indicates the dot product, $\odot$ indicates element-wise multiplication (hamdard product), and $l$ = the number of layers. So $\delta_j^l$ refers to the output layer deltas, whereas $\delta_j^{l-1}$ refers to the previous, hidden layer deltas, and $\delta_j^{l-2}$ would be 2 layers before the output layer and so on. $a^{l-1}$ refers to the activations/outputs of the hidden layer (or layer before $l$). Note: We only calculate delta's up to the last hidden layer, we don't calculate deltas for the input layer. 3. Calculate the gradients using this formula: $$\frac{\partial C}{\partial \theta_j} = \delta^{1-1} * a^l$$ 4. Use the gradients to perform gradient descent like normal. Below I've implemented the XOR neural net we've described using backpropagation and gradient descent. If you have familiarity with forward propagation in simple neural nets, then most of it should be straightforward. Hopefully the backpropagation portion is starting to make sense now too. import numpy as np X = np.matrix([ [0,0],[0,1],[1,0],[1,1]]) #4x2 (4=num training examples) y = np.matrix([[0,1,1,0]]).T #4x1 numIn, numHid, numOut = 2, 3, 1; #setup layers #initialize weights theta1 = ( 0.5 * np.sqrt ( 6 / ( numIn + numHid) ) * np.random.randn( numIn + 1, numHid ) ) theta2 = ( 0.5 * np.sqrt ( 6 / ( numHid + numOut ) ) * np.random.randn( numHid + 1, numOut ) ) #initialize weight gradient matrices theta1_grad = np.matrix(np.zeros((numIn+1, numHid))) #3x2 theta2_grad = np.matrix(np.zeros((numHid + 1, numOut)))#3x1 alpha = 0.1 #learning rate epochs = 10000 #num iterations m = X.shape[0]; #num training examples def sigmoid(x): return np.matrix(1.0 / (1.0 + np.exp(-x))) #backpropagation/gradient descent for j in range(epochs): for x in range(m): #for each training example #forward propagation a1 = np.matrix(np.concatenate((X[x,:], np.ones((1,1))), axis=1)) z2 = np.matrix(a1.dot(theta1)) #1x3 * 3x3 = 1x3 a2 = np.matrix(np.concatenate((sigmoid(z2), np.ones((1,1))), axis=1)) z3 = np.matrix(a2.dot(theta2)) a3 = np.matrix(sigmoid(z3)) #final output #backpropagation delta3 = np.matrix(a3 - y[x]) #1x1 delta2 = np.matrix(np.multiply(theta2.dot(delta3), np.multiply(a2,(1-a2)).T)) #1x4 #Calculate the gradients for each training example and sum them together, getting an average #gradient over all the training pairs. Then at the end, we modify our weights. theta1_grad += np.matrix((delta2[0:numHid, :].dot(a1))).T theta2_grad += np.matrix((delta3.dot(a2))).T #1x1 * 1x4 = 1x4 #update the weights after going through all training examples theta1 += -1 * (1/m)*np.multiply(alpha, theta1_grad) theta2 += -1 * (1/m)*np.multiply(alpha, theta2_grad) #reset gradients theta1_grad = np.matrix(np.zeros((numIn+1, numHid))) theta2_grad = np.matrix(np.zeros((numHid + 1, numOut))) print("Results:\n")#run forward after training to see if it worked a1 = np.matrix(np.concatenate((X, np.ones((4,1))), axis=1)) z2 = np.matrix(a1.dot(theta1)) a2 = np.matrix(np.concatenate((sigmoid(z2), np.ones((4,1))), axis=1)) z3 = np.matrix(a2.dot(theta2)) a3 = np.matrix(sigmoid(z3)) print(a3) Results: [[ 0.01205117] [ 0.9825991 ] [ 0.98941642] [ 0.02203315]] Awesome, it worked! We expected [0 1 1 0] and we got results pretty darn close. Notice that the epochs is 10,000! While gradient descent is way better than random searches or genetic algorithms, it still can take many, many iterations to successfully train on even a simple XOR problem like we've implemented. Also note that I chose $\alpha = 0.1$ after experimenting with it to see which value gave me the best result. I generally change it by a factor of ten in both directions to check. Closing words... I really struggled understanding how backpropagation worked and some of the other details of training a neural network so this is my attempt to help others who were in my position get a better handle on these concepts. Despite the increasing popularity of machine learning, there still is a lack of good tutorials at a true beginner level, especially for those of us without strong math backgrounds. This was a long article, so please email me if you spot inevitable errors or have comments or questions. - - (*Highly recommend) - Machine Learning by Andrew Ng, Coursera -
http://outlace.com/Gradient-Descent.html
CC-MAIN-2018-17
refinedweb
5,102
64
The QMediaSeekWidget class allows the user to seek within a media content object. More... #include <QMediaSeekWidget> Inherits QWidget. The QMediaSeekWidget class allows the user to seek within a media content object. The QMediaSeekWidget class can be used in conjunction with the QMediaContent class to enable the user to seek within a media content object. QMediaSeekWidget *seek = new QMediaSeekWidget( this ); QMediaContent *content = new QMediaContent( url, this ); seek->setMediaContent( content ); The user can seek within a media content object by holding down the left and right arrow keys. A hold of Qt::Key_Left seeks backward while a hold of Qt::Key_Right seeks forward. The actual seek within the media content object is deferred until the user releases the arrow keys. During the seek the positionChanged() signal will be emitted continually with the current seek position. While the user is not seeking the positionChanged() and lengthChanged() signals will be emitted when position and length have changed in the media content object. QMediaSeekWidget is a player object. Constructs a seek widget. The parent argument is passed to the QWidget constructor. Destroys the widget. Returns the media content object currently being used. This signal is emitted when the total time of the playing media content has changed. The time is given by ms in milliseconds. This signal is emitted when the seek position has changed. While the user is not seeking this signal will be emitted when the position has changed in the media content object. The position is given by ms in milliseconds. Sets content as the media content object to use.
https://doc.qt.io/archives/qtopia4.3/qmediaseekwidget.html
CC-MAIN-2021-17
refinedweb
258
50.02
Minutes of the JEE 5 Working Group meeting Dec 07, 2006 Revision as of 10:26, 11 December 2006 by Shaun.smith.oracle.com (Talk | contribs) Contents Teleconference on JEE 5 Support in WTP Dec 07, 2006 Attendance - Rob Frost - Naci Dai - Paul Fullbright - Chuck Bridgham - Chris Brealey (IBM) - Brian Vosburgh - Neil Hauge - Shaun Smith - Karen Moore - Kaloyan Raev - Hristo Sabev and more.... Agenda - Common utilities and Annotation Support [Chuck, Neil] - JEE 5 namespace support for deployment descriptors - EMF 2 XML - How to be JEE5 friendly - tolerating JEE5 style projects and extensions [Kaloyan] - JAX-WS [Chris] - Review plan items Minutes - <Naci D> - Chuck and Neil do we have any progress on evaluating the potantial of extracting common utilities from Dali? - <Chuck B> - Not yet, not in M4 - Probably by M5. It is too late to look at it for M4 - <Naci D> - What is min for next week ? - <Chuck B> - SAP patch for JEE5 is possible, but I don't want to do it because it alters a lot of classes to add JEE 5 namespace and support. I would realy like to do it phases, and as extensions. First phase would be ejb3 facets (like a utility project maybe?) and target a server. We can start from that and add deployment descriptor support alter. We can create a new facet and make deployment descriptors optional and we can start investigation of what else is needed. - <Rob F> - In BEA we were outlining some milestones for JEE 5 support and this is what we planned also. JEE5 Facets and Project wizards. This will be the minimum and will help WTP stay out of the way of adopters who will extend it with JEE 5 support. - <Chuck B> - I have a bugzilla open 167101 [[1]] - I will target EJB3 facets to M4. This is achivable. And add things like validation later and link it to another bug (validation fix 167097) [[2]]. We can start with disabling current validators. - <Naci D> - I support this. This also answers JBoss IDE requirements that were voiced last week. - <Rob F> - That works for us - <Kaloyan Raev> - We also would like better extension points for the JEE5 model. With the existing model, It was too difficult to build extensions. - <Rob F> - What is your timing for Web facets (2.5) - <Chuck B> - Don't know. What do we expect from them? - <Rob F> - We want to create projects and do similar for web and other stuff - <Chuck B> - Lot of this work can be done in different plugins as extensions and done in time maybe by M5. Some model changes easier but I don't have the time and resources to do it by M4 - <Naci D> - This sounds like a good plan. We add EJB 3 facets and project by M4, add Add web facets by M5 and handle deployment descriptors with JEE 5 namespaces. - <Rob F> - So, the first cut will be without annotation injection into the deployment descriptors? - <Chuck B> - Yes, just XML stuff. - <Rob F> - So M5 is some model no annotation injection, and try to achieve things by M5/M6? - <Naci D> - We should be careful not to commit too much, we don't have the time. Only M5/M6 left to do any work and that is not too far away. - <Chuck B> - Yes, our goals should be to open things and not get in the way, allow people to add it. - <Rob F> - Model support would be our min bar for us. What you suggested would not be blocker for us.. As longs as we can do that - we could be fine - <Chuck B> - I have an EJB facet and playing with it. Facet is there, it picks lot of the existing support - It mostly works, but we need to be careful we don't want to fool actions that it is the wrong version. - <Naci D> - I think we have a plan for WTP 2.0, facets and basic model support for JEE5 without annotation injection. - <Chuck B> - There must be some server support - <Naci D> - Tomcat 6 & Glassfish supports JEE5. Tim says they have not looked into it yet. - <Chuck B> - There is Glassfish with JEE 5 reference implementation. - <Shaun S> -Oracle will also have support for EJB 3/JPA. We will have server support for it (Oracle AS 10.1.3.1) in WTP 2.0 M4 [[3]]. - <Naci D> - Back to the agenda, what is our story of EMF 2 XML? - <Chuck B> - Current Emf 2 Xml restriction and bugzilla items providing fixes - There were comments to contribute by next milestone. We do not have commitment for resources by M5 - <Naci D> - Neil, have done anything in DALI for EMF 2 XML? - <Neil H> - No - <Karen M> - The bugzilla patch is there. He is looking for feedback from WTP committers. - <Naci D> - We should review it by next week (Chuck action item) - <Naci D> - Can the adopters, for example SAP, tell us what they expect for extension points from JEE5 models - <Hristo S> - Extensions for JEE5 model - Good set of extensions for example providers for nodes in tree structure (node in a project node etc.) - <Naci D> - Do uou have a document you can share with us? - <Kaloyan R> - I can share the architectural document when we are ready in a bugzilla item. - <Rob F> -For extension point ideas - no concrete ideas yet... As long as there is general mechanism to augment the model we should be fine. We should have the capability to have the extensions by WTP2.0 - <Chuck B> - This is important to get in place by WTP2.0, not sure how much we can contribute. - <??> - When do we expect these ext points to be called - <Chuck B> - Don't know yet. We will wait and see SAP use cases. - <Rob F> -I imagine probably ours will be the same use-cases. Is there a milestone target - <Chuck B> - M5/M6. - <Naci D> - Chris can you comment on JAX-WS support for WTP. Chris detailed his response in a comment to last weeks minutes. You can read it there too. - <Chris B> - Basically we do not have much, and do not plan do much by WTP 2.0. We do not know what facets are needed. We do not have Sun jars yet. We do not have much to see what is needed. One way to measure it by tools in WTP. We have Tools to create / explore WS, and project explorer. Not much needed for runtimes. Maybe explorer extensible for new breeds of web service provider and clients. (JSR 109 like). Web Service explorer is massive and does not know soap 1.2 not in there yet. Mostly enablement. Any tooling mus have a runtime, there are 3 that I know that supports jax-ws: glassfish, axis2++, celtixfire. We are working to have axis2 in wtp but axis2 is not jax-ws. axis2 has its own architecture, there is a proposal for axis2 to add jax-ws. Testing is another issue our test tools won't work yet. - <Hristo S> - We have done some work for Jax-WS we have it working with WTP and have some workarounds. The only problems we have are with the WS explorer. Otherwise it was easy. We support SAP JAX-WS runtime. - <Naci D> - How about JAX-WS Annotations? - <Chris B> -We haven't done anything. That needs jar, and classes. We have not delved into this yet. You need a runtime. - <Hristo S> - At the moment we have a JAX-WS EMF model from schemas, and we build an annotation model, and adapt them. We have some progress for annotations. We can consider contributing them. It is not SAP specific models. - <Chris B> -Sounds very promising. I would like know more about them. - <Hristo S> - Proxy class depends on our runtime, but we generate adapters for JAX-RPC - <Chris B> -We did smt similar originally for axis. For JSPs. When we generate axis client we also generate a conv. bean that can be used by JSP. - <Naci D> - There is a separate WTP working group for Axis2 - <Chris B> -Yes,we will discuss Jax-ws at the Axis2 working group. - <Naci D> - Can we say No JAX-WS in Wtp 2.0? - <Chris B> -We will not prevent adopters from doing it, but we will only do axis2. - <Chuck B> - May I remind you to mark your bugzilla items with WTP 2.0 M5 target. Especially the design documents for extensions and JEE5 models. - <Naci D> - Thank you and we will meet again next week Please add your comment here:
http://wiki.eclipse.org/index.php?title=Minutes_of_the_JEE_5_Working_Group_meeting_Dec_07,_2006&oldid=19973
CC-MAIN-2015-06
refinedweb
1,436
83.66
POWDER Working Group Blog Archives for: July 2008 Friday, July 18th 2008 02:43:29 pm, Categories: Meeting summaries F2F Meeting Summary 14 - 15 July 2008 The group took the opportunity to go through several of its documents in detail over the course of the 2 days, beginning with the Grouping of Resources document. There was a fairly brief discussion about some of the features that have been removed from earlier versions: grouping by property and CIDR block, for example, and the section on redirection. The changes have come about through a series of discussions with their resolution largely driven by practicality and ensuring that POWDER processing remains manageable. One action item not yet completed is an addition to the POWDER processor section of the DR doc to say that whether a particular processor includes redirected IRIs in a group is application-specific. There was discussion about the canonicalisation section. It was agreed that the text should say that where a DR author knows of patterns that might erroneously lead to inclusion of IRIs in their group, perhaps through processing of IDNs, then he/she should add specific exclusions. As a test of the grouping mechanism and its canonicalisation steps, the group looked at a tool designed to implement this . At present, everything except Punycode translation is included in this tool. Even the proposed new top level domains with no subdomain - such as - still fit the model. It was agreed that in/exclude query contains should only be allowed 0 or 1 times in line with more or less all the other POWDER IRI constraints. The group then moved on to discuss the Description resources document. This prompted a wide ranging debate about various issues, only the most substantial of which are discussed here. In recent it was decided that POWDER would support both foaf:maker and dcterms:creator but that our examples would almost all show the dcterms:creator version. This discussion continued at the face to face and in the end it was resolved that we would define our own term of 'issuedby' as both a POWDER element name and POWDER-S property. In the latter case it will be a sub property of both the FOAF and DC properties so that authors are free to use either foaf:Agent or dcterms:Agent classes. The next substantive debate concerned how we handle some specific properties in the descriptor set. In general, properties in POWDER descriptor sets are transformed into property restrictions in the OWL class that takes the place of the descriptor set. This is inappropriate for some commonly used RDF and RDFS properties. As a result we will define POWDER element names of typeof, seealso, comment and label that will be transformed into rdf:type, rdfs:seeAlso, rdfs:comment and rdfs:label annotations respectively. Recognising that authors may, through habit, write these terms in a descriptor set anyway, the transform will also pick up on these properties and render them as if the element names had been used. In other words, you can do either and the POWDER transform will handle it. The next question was "should it be possible to process POWDER without having an RDF processor?" Most of our examples show that the DR author is described in a separate file (these days commonly a FOAF file). Allowing that information to be embedded in a POWDER doc as shown in Example 2-2 strongly suggests that you will need an RDF parser in your POWDER Processor. Requiring an external reference lifts that requirement in some situations. However... a descriptor set can include arbitrary RDF (subject to certain restrictions) so the RDF parser is already required. End result - given that POWDER transports RDF - the group recognises that being able to process POWDER documents implies at least some ability within a POWDER Processor to parse and process RDF. Day 1 ended and day 2 began with a discussion around the abouthosts element. The semantics of it are far from simple - POWDER would be a lot simpler without it, however, it plays too important a role to be dropped so we have to handle it. That said, it should only be used by DR authors where it is necessary and a warning to this effect will be added to the DR doc. The group discussed the POWDER Processor at some length, particularly how they should handle abouthosts and how errors should be reported. There are two basic types of error that a PP may detect: errors in the data and errors in the processing. We assign the error code 100 and 200 to these respectively and suggest that specific implementations can then create their own error codes in the 1xx and 2xx ranges. To support this there'll be 3 new wrds properties: data_error, proc_error and err_code. These will be used in the RDF returned by the POWDER processor (PP) and where there is a data error, the triples will be 'about' the data source and where there is a processing error the triples will be 'about' the processor itself. One specific error, number 101, is defined to report that a DR refers to a descriptor set in another document that sets an abouthosts value which is not in the scope of the DR. The full list of conformance criteria for a PP was agreed and will be in the next version of the DR doc. There was some discussion around the issue of linking resources to POWDER docs. The DR document is OK as is but we have some follow up work to do to make sure that the @rel type of powder is registered/recognised. This will involve liaison with various groups and individuals. The group was joined briefly by Dan Appelquist who was able to confirm the Mobile Web Best Practices Group's requirements for mobileOK. A full example of this will be included in the next version of the DR doc. The discussion of the DR doc came to a close at that point. Changes arising from the discussion will be made in the coming days. Attention turned briefly to the Formal Semantics document. Ivan Herman has made several comments some of which have in fact already been addressed in an as-yet unpublished update but there was significant discussion going on between Ivan, Jeremy Carroll and Stasinos Konstantopoulos (i.e. the people best-able to discuss the detail) and the F2F participants left the formal doc untouched for now. The strong hope is that the next group telecon will resolve to publish updated versions of the Grouping Doc, DR doc and Formal doc. At least the first two of these (and all being well, all 3) will be the Last Call versions. Attention then turned to the Primer. This will be a Group Note and is more or less ready for First Public Working Draft now. The group discussed the existing content and what had to be put in place before FPWD. This amounts to an update of the various examples to match changes in the other documents and adding some place holders for further sections. POWDER has evolved into a complex technology and so the Primer has to tread a fine line between making the it overly complex (and so off-putting for potential users) and over simplified (and so becoming less useful). Over the course of the 2 days it was agreed that the group home page should contain links to other resources produced by group members and others - the Primer is there to stimulate interest in what POWDER can do and how it can fit in with real-world situations and then point readers to the relevant specification documents. The hope and expectation is that a first public working draft can be ready at the time of the Last Call announcement on the Rec Track documents. The final document reviewed by the group was the Test Suite. Again, this is very close to FPWD standard and should be published alongside the Last Call announcement. The aim is to provide tests to match each MUST/MUST NOT statement in the Rec Track documents. Many are already defined in the Test Suite. Like the Primer, this document will end up as a Group Note. The face to face meeting finally turned its attention to the timeline for the remainder of its charter (the end of 2008), Candidate Recommendation Exit Criteria and forthcoming key dates. In brief, it is expected that the formal exit criteria will be two independent implementations of a POWDER Processor and one implementation of the Semantic Extension. The latter will be created in Java and offered as an extension to Jena. Several WG members are planning implementations of various kinds that will demonstrate POWDER's key features: DRs DR lists External Descriptors Links to POWDER documents Trust mechanisms Resource Grouping The Semantic Extension The transformation Important dates in the calendar are: Last Call period on the three Rec Track documents will end on 29 August 16th September - POWDER outreach meeting at Yahoo!'s Mission College Campus in Santa Clara, California. This is open to anyone but registration will be essential. 25th September - demonstration of POWDER in the European Quatro Plus project at the Safer Internet Forum , Luxembourg. The group does not expect to have met its CR exit criteria by then but to have made significant progress towards it. 20 October - TPAC . The WG hopes to use its meeting time to review CR Exit criteria and generally tidy up before seeking transition to Proposed Recommendation. This is a tight schedule but reflects the fact that the group has already well exceeded its original charter and is anxious to complete its work within its extended charter period. Finally, the group thanked its host (Kevin Smith of Vodafone). It was a productive meeting! Phil ARCHER Friday, July 11th 2008 01:43:38 pm, Categories: Meeting summaries Meeting Summary 7th July The group reviewed the current status of its publications. The DR and Grouping doc have both been updated recently and the Formal Semantics document has been published as a first working draft. The various XML schemas are in place too but not (yet) the POWDER-S vocabulary. It is anticipated that all the namespace documents will be in place before next week's face to face meeting in London. The group then looked at and resolved some relatively minor outstanding issues. 1. Taking note of comments from Masahide Kanzaki and Ivan Herman the erroneous references to rdf:nodeIDs in all POWDER-S examples will be fixed. 2. That there will be a possible child element of descriptorset will be typeof that will take a ref attribute, the value of which is the URI of a Class. The semantics will be that all members of the IRI set will be instances of the referred to class. 3. A longer discussion centred on whether and how to transport XML metadata within POWDER. It was eventually resolved that the group does not need to change or add anything to make this possible since RDF already supports the concept of an XML Literal as an object of a triple. However, an example will be given in the Primer for how to do this. The group also discussed the Primer and Test Suite documents - progress is being made with both of those. The target for the face to face meeting in London next week is that the Grouping, DR and Formal doc will be ready for Last Call, and all other documents, including the Test Suite and Primer will be in the public domain as drafts....
https://www.w3.org/blog/powder/2008/07/
CC-MAIN-2016-50
refinedweb
1,930
57.81
Use the MVVM design pattern In this topic - Prerequisites - Create a WPFapp - Add a map - Move the map out of the page - Create a view model class - Bind the view model to the view using XAML - Handle a button click in the view model - Handle other events in the view model - Bind extent values to the view Model-View-ViewModel (MVVM) is a popular design pattern for XAML-based app development. MVVM helps you separate the user interface components of your app from the data and logic components.To accomplish this, the pattern divides components into the following three categories. - Model—Classes to represent data consumed in the app - View—User interface (UI) elements with which the user interacts - ViewModel—Classes that wrap data (coming from a model) and provide business logic for the UI (views) In a pure MVVM implementation, components should fall exclusively within one of these categories. A UI class, such as a XAML page, should not contain code for the controls it contains. Instead, all code that is not directly related to the View (such as code for setting a page's data context) must be provided by a ViewModel. The data binding support provided in WPF, Windows Store, and Windows Phone apps allows you to define your UI with XAML and to bind controls in the View to ViewModel classes for required data and functionality Generally, when creating an app using MVVM, most of your time is spent working within ViewModels. These classes provide data (either coming from a Model or within the ViewModel) in a form that is usable for the View. ViewModels may also contain code to handle events for controls on the page or other logic required by the View. The MVVM pattern is well documented. The purpose of this tutorial is not to teach the details of the pattern, but rather to introduce it and illustrate its use with ArcGIS Runtime SDK for .NET. To learn more about the MVVM design pattern, refer to the links at the end of the tutorial. Prerequisites This tutorial requires a supported version of Microsoft Visual Studio and ArcGIS Runtime SDK for .NET. Refer to the appropriate topics in the guide for information on installing the SDK and system requirements. Familiarity with Visual Studio, XAML, and C# is recommended.. - Choose a folder location for your new project and name it MvvmApp. - Click OK to create the project. Your project opens in Visual Studio and contains a single WPF window called MainWindow.xaml. - Right-click the References node under the MvvmApp project listing in the Visual Studio Solution Explorer window, and click Add Reference in the context menu. - Check the listing for the Esri.ArcGISRuntime assembly under Assemblies > Extensions. - Click OK to add the reference to ArcGIS Runtime for .NET. Add a map Now that you've created your project and added a reference to ArcGIS Runtime, you're ready to add a map to your app. In your map, you'll display a satellite imagery basemap from a public map service provided by ArcGIS Online and a point layer showing (fictional) community submitted incident reports. You'll define a custom renderer for the points and set an initial extent centered on Euro="" - Add the following XAML inside the <Grid> element to define a new MapView control on the page containing a Map control with a few layers. <esri:MapView x: > </esri:MapView> - Run your app. A map similar to the one shown below appears. Note: The Incidents layer is based on an editable feature service. The number and location of features (red triangles) can vary day to day. If features (red triangles) don't appear in your initial map extent, try zooming to another location. Move the map out of the page As a step towards separating the UI from data and app logic, move the XAML that defines the contents of the MapView control (the map) out of the page. You will store it in the application's resource dictionary and use data binding to display it in the MapView control. - In your MainWindow.xaml file, select the entire contents of the MapView control (but not the map view itself). Choose Cut from the Edit menu to remove this XAML from its containing MapView. Your page should now contain an empty MapView, as shown in the following example: <Grid> <esri:MapView x: </esri:MapView> </Grid> - Open the App.xaml file in your project. - If it doesn't exist, create an Application.Resources element, as shown in the following example. <Application.Resources> </Application.Resources> - Place your mouse cursor inside the Application.Resources element in the page, and choose Paste from the Edit menu. The XAML that defines your map is added to the application's resource dictionary as shown in the example below. <Application.Resources> > </Application.Resources> - Add the same XML namespace reference you added earlier to the Application element of the App.xaml page, as shown in bold in the following example. <Application x: - Add an x:Key property to the Map element and give it a value of IncidentMap. <esri:Map x: Note: The x:Key property is used to identify a resource and should be unique within the scope of the object (in this case, the entire application). - Return to the XAML for your Window and add XAML to bind the IncidentMap resource to the MapView element, as shown below. <esri:MapView x: </esri:MapView> A data binding for the Map property is specified using the binding markup extension in XAML. Bindings can be made to resources in the application, other elements on the page, or to a class in the app that provides data. More information about data binding in a XAML app can be found at Data binding overview (MSDN). - Run your app. The map displays as it did when it was defined directly in the MapView. Is your app implementing the MVVM pattern? Not in its purest form, but the App.xaml page is acting as a view model by providing a Map to which the MapView can bind. This illustrates one of the benefits of using the pattern, which is clearly separating the UI from implementation details where possible. The MapView is a UI control, which belongs on the page. The Map and the layers it contains are data displayed in the control, which is subject to change and should be managed outside the UI. With this architecture, the page and the map are loosely coupled, which means you can easily make changes to one without affecting the other. To change the map displayed in the page, for example, you could point the binding to another resource available in your application. To implement the more traditional form of the MVVM pattern, you will create a view model class. The class acts as the data context for the page, and exposes data and functionality to which the UI can bind. Create a view model class A view model provides data and functionality that a view can access through data binding. It's common for a view to use a single view model, but it's not unusual for several view models to be associated with a single view. You can also have one view model that is used by several different views in your app. You will create a single view model (MapViewModel) to provide all data and functionality needed by your view. - Right-click the MvvmApp node in the Solution Explorer and choose Add > Class from the context menu to add a new class to your project. Name the class MapViewModel.cs. - At the top of the MapViewModel code module, add the following using statements. using Esri.ArcGISRuntime.Controls; using Esri.ArcGISRuntime.Layers; - Add a new property to the class called IncidentMap, as shown in the following example. private Map map; public Map IncidentMap { get { return this.map; } set { this.map = value; } } - Add a constructor to the MapViewModel class that initializes the map, as shown in the following example. You could build the map programmatically by creating all of the required layers, symbolizing them, and setting the initial extent, and so on. Instead, reference the Map that you defined earlier in the App.xaml resources. public MapViewModel() { // when the view model initializes, read the map from the App.xaml resources this.map = App.Current.Resources["IncidentMap"] as Map; } Bind the view model to the view using XAML To associate a view model with a view, you set the view's DataContext property to an instance of a view model. You can set the data context using XAML or by adding code to the view's code behind. In the following steps, you will use XAML to set the data context and bind the IncidentMap property in the view model to the Map property of the page's MapView. Note: If you want to set the view's data context programmatically, you can add the following two lines of code to the constructor in your code behind for (MainWindow.xaml.cs). If you add this code, proceed to step 4 to update the data binding for the map. this.DataContext = new MapViewModel(); - Open your App.xaml page. At the top of the page, inside the Application element, add an XML namespace reference for the local assembly (MvvmApp), as shown in the following example. <Application x: - Inside the Application.Resources element (below where the IncidentMap is defined), add XAML to define a new MapViewModel object (from the local namespace) with the key MapVM. <local:MapViewModel x: - Open the MainWindow.xaml page. Set the data context for the entire page to the MapVM object, as shown in bold in the following example. <Window x: Since the view model is set as the data context for the entire page, any control on the page can bind to properties exposed by the view model class. You will bind the IncidentMap property to the Map property of the MapView control. - Change the binding statement for the MapView to point to the IncidentMap property. Because the page's data context is set to a MapViewModel instance, it's implied that the binding comes from one of the properties of that object. <esri:MapView x: </esri:MapView> - Run your code. Again, the map should appear as it did when it was defined directly on the page. What is the advantage of using such a scheme to display a map? The app looks the same regardless of where you define the map, yet using a view model seems to add a lot of complexity. For a basic app like this, there really is no advantage to using the MVVM pattern. As your app becomes more complex, however, you'll find that using an MVVM architecture makes your app much easier to maintain and promotes sharing of code between apps. Handle a button click in the view model Keeping data in a view model and binding it to the UI is straightforward, but what about view functionality? To properly implement the MVVM pattern, all code logic should be contained in a view model. The code behind for UI classes should not contain event handling code for controls in the page. Fortunately, data binding can also be used to bind control events in the view to code in the view model. Implement a command Some controls, such as Button, CheckBox, RadioButton, and MenuItem provide a Command property. Commands are custom classes that implement the ICommand interface and define what happens when a control is clicked (Execute method), and determine when it should be enabled (CanExecute method). Commands can be created in your view model, and bound to the Command property of the appropriate control. Tip: There are several MVVM frameworks available that provide an implementation for ICommand. When using these frameworks, you can instantiate command objects in your view model without the intermediate step of creating your own command class, as described in this section. If you plan to use MVVM extensively, consider using a framework when developing your apps, such as MVVM Light. - Add a new class to your project. Name it DelegateCommand.cs. - The ICommand interface is in the System.Windows.Input namespace. Add a using statement at the top of the class for this namespace. using System.Windows.Input; - Implement the System.Windows.Input.ICommand interface in your class. The complete implementation for the class is provided in the following example for you to paste into your class. class DelegateCommand : System.Windows.Input.ICommand { // a var to store the command's execute logic (button click, for example) private readonly Action<object> execute; // a var to store the command's logic for enabling/disabling private readonly Func<object, bool> canExecute; // an event for when the value of "CanExecute" changes (not implemented) public event EventHandler CanExecuteChanged; // constructor: store the logic for executing and enabling the command public DelegateCommand(Action<object> executeAction, Func<object, bool> canExecuteFunc = null) { this.canExecute = canExecuteFunc; this.execute = executeAction; } // if it was passed in, execute the enabling logic for the command public bool CanExecute(object parameter) { if (this.canExecute == null) { return true; } return this.canExecute(parameter); } // execute the command logic public void Execute(object parameter) { this.execute(parameter); } } When you need to bind a button to a command, create an instance of your DelegateCommand and pass in the behavior for the command execution and for enabling the control to the constructor. The parameter for the execute action is of type object to give maximum flexibility to the command. - Save and close DelegateCommand.cs, since you no longer need to work in this class. Create a command property in the view model Anything you need to bind in your view must be defined as a public property in your view model. In this step, you'll create a new property in the MapViewModel that returns a DelegateCommand object. The command is defined with execution and enabling logic so it can be bound to a button in your view. - Open your MapViewModel class. Define a public property called ToggleLayerCommand that returns a DelegateCommand object, as shown in the following example. public DelegateCommand ToggleLayerCommand { get; set; } - Create a new function to handle command execution, called ToggleLayer. This function toggles the visibility of a layer in the map. The input parameter specifies the name of the layer to toggle. private void ToggleLayer(object parameter) { var lyr = this.map.Layers[parameter.ToString()]; lyr.IsVisible = !(lyr.IsVisible); } - Create another new function to determine the CanExecute state of the command. Since the command toggles a specific layer, it should only execute (be enabled, in other words) if that layer exists in the current map. The same parameter, the layer's name, is passed to this function. private bool OkToExecute(object parameter) { var lyr = this.map.Layers[parameter.ToString()] as FeatureLayer; return (lyr != null); } If a feature layer with the specified name does not exist in the map, OkToExecute returns false, which disables the associated control. - In the MapViewModel constructor, add the following line of code to instantiate the ToggleLayerCommand. Pass in theToggleLayer function as the command's execution logic and OkToExecute as the enabling logic. public MapViewModel() { // when the view model initializes, read the map from the App.xaml resources this.map = MvvmApp.App.Current.Resources["IncidentMap"] as Map; ToggleLayerCommand = new DelegateCommand(ToggleLayer, OkToExecute); } Note: The CanExecute parameter for the DelegateCommand constructor was defined as optional. You don't need to provide a value if you want the command to always be enabled. Bind a button to the view model command Buttons provide a Click event that you can handle to execute code. Assigning a Command defines code for a button click, but has the advanatage of also containing logic to indicate when the button should be enabled or disabled. A Button object's Command property can be bound to an object that implements the ICommand interface. In this section, you'll create a new button and bind the DelegateCommand object you created to its Command property. - Add a new button to your MainWindow.xaml page below the existing XAML for the MapView control. <Button Height="30" Width="70" HorizontalAlignment="Left" VerticalAlignment="Bottom" Content="Toggle" /> - Set the new button's Command property by binding it to the ToggleLayerCommand property of the view model. <Button Height="30" Width="70" HorizontalAlignment="Left" VerticalAlignment="Bottom" Content="Toggle" Command="{Binding ToggleLayerCommand}"/> - Provide the incident layer's name as the command parameter by setting a value for the button's CommandParameter property. <Button Height="30" Width="70" HorizontalAlignment="Left" VerticalAlignment="Bottom" Content="Toggle" Command="{Binding ToggleLayerCommand}" CommandParameter="Incidents"/> - Run your app. Click the Toggle button to verify that the code in the view model executes to turn the incidents layer on and off. Commands work well for binding functionality in your view model to certain controls in your view, such as buttons and menu choices. But what if you need to handle other events, such as a selection change in a combo box, or mouse and touch events on the map view? If you're interested in binding other event handlers from your view model, continue to the following section. Handle other events in the view model The process of binding to commands in the view model described previously works well for controls that provide the required Command property. What if you need to handle other events that cannot be directly bound using a command property? Fortunately, .NET provides additional classes that enable you to bind an event in the view to a command or function in your view model. In the following steps, you will handle the ExtentChanged event of the MapView control using classes in the System.Windows.Interactivity assembly. - Choose Project > Add Reference. In the Reference Manager dialog box, on the Extensions tab, check the listing for System.Windows.Interactivity. You'll find it under Assemblies > Extensions. Note: If this assembly is not available, you'll need to install the Microsoft Expression Blend SDK. You can also find a NuGet package with these assemblies by searching the Visual Studio NuGet package manager for "blend". After updating the project references, you must add an XML namespace reference to your MainWindow.xaml page so those classes can be used in your XAML. - Inside your page's Window element, add the XML namespace reference shown in the following example. xmlns:interactivity="" - Add the following XAML inside the MapView element to define an EventTrigger for the map view's ExtentChanged event. <esri:MapView x: <interactivity:Interaction.Triggers> <interactivity:EventTrigger </interactivity:EventTrigger> </interactivity:Interaction.Triggers> </esri:MapView> You can respond to the EventTrigger using a function (method) or with a Command object. Because you need a parameter (the MapView), use a Command and provide a CommandParameter to pass the current map view to the view model code. - Add the XAML in the following example to define an InvokeCommandAction in response to the event. The ExtentChangedCommand command does not yet exist in your view model; however, you will create it soon. <interactivity:InvokeCommandAction The binding syntax shown in the previous code provides an example of binding to a XAML element. The command parameter is being set with an element on the page called MyMapView, which is your map view control. The map view object is needed to read current extent values. Next, you will create the ExtentChangedCommand to handle the event. - Open MapViewModel.cs and add the following code to define a new DelegateCommand object called ExtentChangedCommand. public DelegateCommand ExtentChangedCommand { get; set; } - Create the function shown in the following example to respond to an extent change. For now, verify that you can get the extent object from the map view. public void MyMapViewExtentChanged(object parameter) { var mv = parameter as MapView; var extent = mv.Extent; } - In the constructor for MapViewModel, add the code shown in the following example to create the ExtentChangedCommand. public MapViewModel() { // when the view model initializes, read the map from the App.xaml resources this.map = MvvmApp.App.Current.Resources["IncidentMap"] as Map; ToggleLayerCommand = new DelegateCommand(ToggleLayer, OkToExecute); ExtentChangedCommand = new DelegateCommand(MyMapViewExtentChanged); } The CanExecute logic for DelegateCommand is optional, which is why you didn't need to specify it. - Set a breakpoint on the last line of the MyMapViewExtentChanged function and run your app. You hit the breakpoint as soon as the app starts. - When you are done testing, remove the breakpoint. The binding successfully handles the extent changed event in the view model. The process described here could be used to handle any event raised by UI controls, including those that don't provide a Command property. More work is required to show the extent values in the UI. If you're interested in completing this functionality, proceed to the next section. Bind extent values to the view To show the current extent coordinates in the app, you need to create a new public property on the view model to expose that information. Your property can then be bound to a UI element, such as a TextBox. - Open MapViewModel.cs and add the following code to define a CurrentExtentString property. private string extentString; public string CurrentExtentString { get { return this.extentString; } set { this.extentString = value; } } - Add the following code to set the value of CurrentExtentString in the extent changed event handler. public void MyMapViewExtentChanged(object parameter) { var mv = parameter as MapView; var extent = mv.Extent; CurrentExtentString = string.Format("XMin={0:F2} YMin={1:F2} XMax={2:F2} YMax={3:F2}", extent.XMin, extent.YMin, extent.XMax, extent.YMax); } Extent coordinates are formatted in a string that looks like this: XMin=-2598746.47 YMin=4253523.55 XMax=5130888.16 YMax=8976345.50. - Open your view, MainWindow.xaml, and add the following XAML to define a new TextBlock to display the extent string. Place this XAML below the XAML that defines your map view. <TextBlock Height="30" Width="Auto" FontSize="16" Foreground="AliceBlue" HorizontalAlignment="Center" VerticalAlignment="Bottom" Text="{Binding CurrentExtentString}"/> The text block appears along the bottom center of the application and is bound to the property containing the extent values. - Run your app. The extent string does not appear as expected. For data binding to work as expected, you must raise a notification event when bound properties change. In this case, the extent string is being bound to the text block before it is given a value (it contains an empty string). When the value updates in the extent changed handler, the view is never notified that it should get an updated value. If you provided a default value for CurrentExtentString, you would see that value display when the app starts, but the value would not change. If that's the case, why does the binding for the map work? The IncidentMap property of the view model is set in the class constructor, which executes before the view model is set as the page's data context. To raise the required notification when a property changes, you must implement the INotifyPropertyChanged interface in the System.ComponentModel namespace. - Add the following using statements to the top of your MapViewModel class using System.ComponentModel; using System.Runtime.CompilerServices; - Implement System.ComponentModel.INotifyPropertyChanged in your MapViewModel class, as shown in the following example. class MapViewModel : INotifyPropertyChanged { ... - Implement PropertyChangedEventHandler by adding the code shown in the following example. public event PropertyChangedEventHandler PropertyChanged; private void RaiseNotifyPropertyChanged([CallerMemberName]string propertyName = null) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(propertyName)); } } - Raise the change notification in your CurrentExtentString property's setter by calling the RaiseNotifyPropertyChanged function. private string extentString; public string CurrentExtentString { get { return this.extentString; } set { this.extentString = value; this.RaiseNotifyPropertyChanged(); } } - Run your app again. The extent information appears at the bottom of the map. Zoom and pan the display to see the information update. You've completed the tutorial, nice work. You now have an understanding of how to construct an app using the MVVM design pattern, including: data binding, commands, event triggers, and property change notification. To appreciate the usefulness of the MVVM pattern, create a new project (perhaps for another .NET platform) and reuse your ViewModel and Command classes for a different UI (View). To learn more about these topics, consult the following resources:
https://developers.arcgis.com/net/10-2/desktop/guide/use-the-mvvm-design-pattern.htm
CC-MAIN-2017-09
refinedweb
4,001
55.54
Frozen-Flask¶ Frozen-Flask freezes a Flask application into a set of static files. The result can be hosted without any server-side software other than a traditional web server. Note: This project used to be called Flask-Static. Installation¶ Install the extension with one of the following commands: $ easy_install Frozen-Flask or alternatively if you have pip installed: $ pip install Frozen-Flask or you can get the source code from github. Context¶ This documentation assumes that you already have a working Flask application. You can run it and test it with the development server: from myapplication import app app.run(debug=True) Frozen-Flask is only about deployment: instead of installing Python, a WGSI server and Flask on your server, you can use Frozen-Flask to freeze your application and only have static HTML files on your server. Getting started¶ Create a Freezer instance with your app object and call its freeze() method. Put that in a freeze.py script (or call it whatever you like): from flask_frozen import Freezer from myapplication import app freezer = Freezer(app) if __name__ == '__main__': freezer.freeze() This will create a build directory next to your application’s static and templates directories, with your application’s content frozen into static files. Note Frozen-Flask considers it “owns” its build directory. By default, it will silently overwrite files in that directory, and remove those it did not create. The configuration allows you to change the destination directory, or control what files are removed if at all. This build will most likely be partial since Frozen-Flask can only guess so much about your application. Finding URLs¶ Frozen-Flask works by simulating requests at the WSGI level and writing the responses to aptly named files. So it needs to find out which URLs exist in your application. The following URLs can be found automatically: - Static files handled by Flask for your application or any of its blueprints. - Views with no variable parts in the URL, if they accept the GETmethod. - New in version 0.6: Results of calls to flask.url_for()made by your application in the request for another URL. In other words, if you use url_for()to create links in your application, these links will be “followed”. This means that if your application has an index page at the URL / (without parameters) and every other page can be found from there by recursively following links built with url_for(), then Frozen-Flask can discover all URLs automatically and you’re done. Otherwise, you may need to write URL generators. URL generators¶ Let’s say that your application looks like this: @app.route('/') def products_list(): return render_template('index.html', products=models.Product.all()) @app.route('/product_<int:product_id>/') def product_details(): product = models.Product.get_or_404(id=product_id) return render_template('product.html', product=product) If, for some reason, some products pages are not linked from another page (or these links are not built by url_for()), Frozen-Flask will not find them. To tell Frozen-Flask about them, write an URL generator and put it after creating your Freezer instance and before calling freeze(): @freezer.register_generator def product_details(): for product in models.Product.all(): yield {'product_id': product.id} Frozen-Flask will find the URL by calling url_for(endpoint, **values) where endpoint is the name of the generator function and values is each dict yielded by the function. You can specify a different endpoint by yielding a (endpoint, values) tuple instead of just values, or you can by-pass url_for and simply yield URLs as strings. Also, generator functions do not have to be Python generators using yield, they can be any callable and return any iterable object. All of these are thus equivalent: @freezer.register_generator def product_details(): # endpoint defaults to the function name # `values` dicts yield {'product_id': '1'} yield {'product_id': '2'} @freezer.register_generator def product_url_generator(): # Some other function name # `(endpoint, values)` tuples yield 'product_details', {'product_id': '1'} yield 'product_details', {'product_id': '2'} @freezer.register_generator def product_url_generator(): # URLs as strings yield '/product_1/' yield '/product_2/' @freezer.register_generator def product_url_generator(): # Return a list. (Any iterable type will do.) return [ '/product_1/', # Mixing forms works too. ('product_details', {'product_id': '2'}), ] Generating the same URL more than once is okay, Frozen-Flask will build it only once. Having different functions with the same name is generally a bad practice, but still work here as they are only used by their decorators. In practice you will probably have a module for your views and another one for the freezer and URL generators, so having the same name is not a problem. Testing URL generators¶ The idea behind Frozen-Flask is that you can use Flask directly to develop and test your application. However, it is also useful to test your URL generators and see that nothing is missing, before deploying to a production server. You can open the newly generated static HTML files in a web browser, but links probably won’t work. The FREEZER_RELATIVE_URLS configuration can fix this, but adds a visible index.html to the links. Alternatively, use the run() method to start an HTTP server on the build result, so you can check that everything is fine before uploading: if __name__ == '__main__': freezer.run(debug=True) Freezer.run() will freeze your application before serving and when the reloader kicks in. But the reloader only watches Python files, not templates or static files. Because of that, you probably want to use Freezer.run() only for testing the URL generators. For everything else use the usual app.run(). Flask-Script may come in handy here. Controlling What Is Followed¶ Frozen-Flask follows links automatically or with some help from URL generators. If you want to control what gets followed, then URL generators should be used with the Freezer’s with_no_argument_rules and log_url_for flags. Disabling these flags will force Frozen-Flask to use URL generators only. The combination of these three elements determines how much Frozen-Flask will Configuration¶ Frozen-Flask can be configured using Flask’s configuration system. The following configuration values are accepted: FREEZER_BASE_URL - Full URL your application is supposed to be installed at. This affects the output of flask.url_for()for absolute URLs (with _external=True) or if your application is not at the root of its domain name. Defaults to ''. FREEZER_RELATIVE_URLS If set to True, Frozen-Flask will patch the Jinja environment so that url_for()returns relative URLs. Defaults to False. Python code is not affected unless you use relative_url_for()explicitly. This enables the frozen site to be browsed without a web server (opening the files directly in a browser) but appends a visible index.htmlto URLs that would otherwise end with /. New in version 0.10. FREEZER_DEFAULT_MIMETYPE The MIME type that is assumed when it can not be determined from the filename extension. If you’re using the Apache web server, this should match the DefaultTypevalue of Apache’s configuration. Defaults to application/octet-stream. New in version 0.7. FREEZER_IGNORE_MIMETYPE_WARNINGS If set to True, Frozen-Flask won’t show warnings if the MIME type returned from the server doesn’t match the MIME type derived from the filename extension. Defaults to False. New in version 0.8. FREEZER_DESTINATION - Path to the directory where to put the generated static site. If relative, interpreted as relative to the application root, next to the staticand templatesdirectories. Defaults to build. FREEZER_REMOVE_EXTRA_FILES If set to True(the default), Frozen-Flask will remove files in the destination directory that were not built during the current freeze. This is intended to clean up files generated by a previous call to Freezer.freeze()that are no longer needed. Setting this to Falseis equivalent to setting FREEZER_DESTINATION_IGNOREto ['*']. New in version 0.5. FREEZER_DESTINATION_IGNORE A list (defaults empty) of fnmatchpatterns. Files or directories in the destination that match any of the patterns are not removed, even if FREEZER_REMOVE_EXTRA_FILESis true. As in .gitignorefiles, patterns apply to the whole path if they contain a slash /, to each slash-separated part otherwise. For example, this could be set to ['.git*']if the destination is a git repository. New in version 0.10. FREEZER_STATIC_IGNORE A list (defaults empty) of fnmatchpatterns. Files served by send_static_file that match any of the patterns are not copied to the build directory. As in .gitignorefiles, patterns apply to the whole path if they contain a slash /, to each slash-separated part otherwise. For example, this could be set to ['*.scss']to stop all SASS files from being frozen. New in version 0.12. FREEZER_IGNORE_404_NOT_FOUND If set to True(defaults False), Frozen-Flask won’t stop freezing when a 404 error is returned by your application. In this case, a warning will be printed on stdout and the static page will be generated using your 404 error page handler or flask’s default one. This can be useful during development phase if you have already referenced pages which aren’t written yet. New in version 0.12. FREEZER_REDIRECT_POLICY The policy for handling redirects. The default is 'follow'which means that when a redirect response is encountered it will follow it to get the content from the redirected location. 'ignore'will not stop freezing, but no content will appear in the redirected location. 'error'will raise an exception if a redirect is encountered. New in version 0.13. FREEZER_SKIP_EXISTING If set to True(defaults False), Frozen-Flask will skip the generation of files that already exist in the build directory, even if the contents would have been different. Useful if your generation takes up a very long time and you only want to generate new files. New in version 0.14. Filenames and MIME types¶ For each generated URL, Frozen-Flask simulates a request and saves the content in a file in the FREEZER_DESTINATION directory. The filename is built from the URL. URLs with a trailing slash are interpreted as a directory name and the content is saved in index.html. Query strings are removed from URLs to build filenames. For example, /lorem/?page=ipsum is saved to lorem/index.html. URLs that are only different by their query strings are considered the same, and they should return the same response. Otherwise, the behavior is undefined. Additionally, the extension checks that the filename has an extension that matches the MIME type given in the Content-Type HTTP response header. In case of mismatch, the Content-Type that a static web server will send will probably not be the one you expect, so Frozen-Flask issues a warning. For example, the following views are both wrong: @app.route('/lipsum') def lipsum(): return '<p>Lorem ipsum, ...</p>' @app.route('/style.css') def compressed_css(): return '/* ... */' as the default Content-Type in Flask is text/html; charset=utf-8, but the MIME types guessed by the Frozen-Flask as well as most web servers from the filenames are application/octet-stream and text/css. This can be fixed by adding a trailing slash to the URL or serving with the right Content-Type: # Saved as `lipsum/index.html` matches the 'text/html' MIME type. @app.route('/lipsum/') def lipsum(): return '<p>Lorem ipsum, ...</p>' @app.route('/style.css') def compressed_css(): return '/* ... */', 200, {'Content-Type': 'text/css; charset=utf-8'} Alternatively, these warnings can be disabled entirely in the configuration. Character encodings¶ Flask uses Unicode everywhere internally, and defaults to UTF-8 for I/O. It will send the right Content-Type header with both a MIME type and encoding (eg. text/html; charset=utf-8). Frozen-Flask will try to preserve MIME types through file extensions, but it can not preserve the encoding meta-data. You may need to add the right <meta> tag to your HTML. (You should anyway). Flask also defaults to UTF-8 for URLs, so your web server will get URL-encoded UTF-8 HTTP requests. It’s up to you to make sure that it converts these to the native filesystem encoding. Frozen-Flask always writes Unicode filenames. API reference¶ - class flask_frozen. Freezer(app=None, with_static_files=True, with_no_argument_rules=True, log_url_for=True)¶ all_urls()¶ Run all generators and yield URLs relative to the app root. May be useful for testing URL generators. register_generator(function)¶ Register a function as an URL generator. The function should return an iterable of URL paths or (endpoint, values)tuples to be used as url_for(endpoint, **values). root¶ Absolute path to the directory Frozen-Flask writes to, ie. resolved value for the FREEZER_DESTINATIONconfiguration. flask_frozen. walk_directory(root, ignore=())¶ Recursively walk the root directory and yield slash-separated paths relative to the root. Used to implement the URL generator for static files. flask_frozen. relative_url_for(endpoint, **values)¶ Like url_for(), but returns relative URLs if possible. Absolute URLs (with _external=Trueor to a different subdomain) are unchanged, but eg. /foo/barbecomes ../bar, depending on the current request context’s path. (This, of course, requires a Flask request context.) URLs that would otherwise end with /get index.htmlappended, as Frozen-Flask does in filenames. Because of this behavior, this function should only be used with Frozen-Flask, not when running the application in app.run()or another WSGI sever. If the FREEZER_RELATIVE_URLSconfiguration is True, Frozen-Flask will automatically patch the application’s Jinja environment so that url_forin templates is this function. Changelog¶ Version 0.14¶ Released on 2017-03-22. - Add the FREEZER_SKIP_EXISTINGconfiguration to skip generation of files already in the build directory. (Thanks to Antoine Goutenoir.) - Add shared superclass FrozenFlaskWarningfor all warnings. (Thanks to Miro Hrončok.) Version 0.12¶ Released on 2015-11-05. Version 0.11¶ Released on 2013-06-13. - Add Python 3.3 support (requires Flask >= 0.10 and Werkzeug >= 0.9) - Drop Python 2.5 support - Fix #30: relative_url_for()with a query string or URL fragment. Version 0.10¶ Released on 2013-03-11. - Add the FREEZER_DESTINATION_IGNOREconfiguration (Thanks to Jim Gray and Christopher Roach.) - Add the FREEZER_RELATIVE_URLSconfiguration - Add the relative_url_for()function. Version 0.9¶ Released on 2012-02-13. Add Freezer.run(). Version 0.8¶ Released on 2012-01-17. - Remove query strings from URLs to build a file names. (Should we add configuration to disable this?) - Raise a warning instead of an exception for MIME type mismatches, and give the option to disable them entirely in the configuration. Version 0.7¶ Released on 2011-10-20. - Backward incompatible change: Moved the flaskext.frozenpackage to flask_frozen. You should change your imports either to that or to flask.ext.frozenif you’re using Flask 0.8 or more recent. See Flask’s documentation for details. - Added FREEZER_DEFAULT_MIMETYPE - Switch to tox for testing in multiple Python versions Version 0.6¶ Released on 2011-07-29. - Thanks to Glwadys Fayolle for the new logo! - Frozen-Flask now requires Flask 0.7 or later. Please use previous version of Frozen-Flask if you need previous versions of Flask. - Support for Flask Blueprints - Added the log_url_forparameter to Freezer. This makes some URL generators unnecessary since more URLs are discovered automatically. - Bug fixes. Version 0.5¶ Released on 2011-07-24. - You can now construct a Freezer and add URL generators without an app, and register the app later with Freezer.init_app(). - The FREEZER_DESTINATIONdirectory is created if it does not exist. - New configuration: FREEZER_REMOVE_EXTRA_FILES - Warn if an URL generator seems to be missing. (ie. if no URL was generated for a given endpoint.) - Write Unicode filenames instead of UTF-8. Non-ASCII filenames are often undefined territory anyway. - Bug fixes. Version 0.4¶ Released on 2011-06-02. - Bugfix: correctly unquote URLs to build filenames. Spaces and non-ASCII characters should be %-encoded in URLs but not in frozen filenames. (Web servers do the decoding.) - Add a documentation section about character encodings. Version 0.3¶ Released on 2011-05-28. - URL generators can omit the endpoint and just yield valuesdictionaries. In that case, the name of the generator function is used as the endpoint, just like with Flask views. Freezer.all_urls()and walk_directory()are now part of the public API. Version 0.2¶ Released on 2011-02-21. Renamed the project from Flask-Static to Frozen-Flask. While we’re at breaking API compatibility, flaskext.static.StaticBuilder.build() is now flaskext.frozen.Freezer.freeze() and the prefix for configuration keys is FREEZER_ instead of STATIC_BUILDER_. Other names were left unchanged.
https://pythonhosted.org/Frozen-Flask/index.html
CC-MAIN-2017-22
refinedweb
2,683
51.44
Create a random tree generator. This hack creates a natural-looking tree using the usual suspects (fractals/recursion/repeat-and-scale algorithms). In the next hack, we'll create movement using an embedded hierarchy of movie clips. For those of us who prefer speaking English, we're going to grow a tree and make it sway in a breeze [Hack #7]. We will do this by re-creating natural phenomena in code. The first time I went to Flash Forward (), Josh Davis talked about what made him tick. To paraphrase his 45-minute presentation into a single sentence, he said, "Look at nature, and see what it throws up at you, and then think what you can do with the result." The Web is full of such experiments, and no hacks book would be complete without one or two such excursions. To get the following information on nature, I had a short conversation with my girlfriend Karen. We have a neat division of labor: she deals with the garden, and I deal with the computer. Here's what I learned without having to set foot outside. Trees follow a very basic pattern, and this is usually regular. A branch will be straight for a certain length and will then split. The thickness of the parent branch is usually related to the branches that grow from it?normally the cross-section is conserved (the total thickness of the trunk is roughly the same as, or proportional to, the thickness of the branches that sprout from it). This means that a twig grows and splits in exactly the same way as a main branch: the relative dimensions are the same. You know that this self-same process between tree and twig is going on because, if you plant a twig (well, if Karen plants it; mine always dies), you end up with a tree. With this in mind, I created a random tree generator. Two example results are shown in Figure 1-28. Both trees (and many more) were created using the same code. Here is treeGen.fla, which is downloadable from the book's web site: function counter( ) { if (branchCounter == undefined) { branchCounter = 0; } return (branchCounter++); } function grow( ) { // Grow this limb... this.lineStyle(trunkThickness, 0x0, 100); this.moveTo(0, 0); this.lineTo(0, trunkLength); // If this isn't the trunk, change the angle and branch size if (this._name != "trunk") { this._rotation = (Math.random( )*angle) - angle/2; this._xscale *= branchSize; this._yscale *= branchSize; } // Grow buds... var seed = Math.ceil(Math.random( )*branch); for (var i = 0; i < seed; i++) { if (counter( ) < 3000) { var segment = this.createEmptyMovieClip("segment" + i, i); segment.onEnterFrame = grow; segment._y = trunkLength; } } delete (this.onEnterFrame); } // Define the trunk position and set the onEnterFrame handler to grow( ) this.createEmptyMovieClip("trunk", 0); trunk._x = 200; trunk._y = 400; trunk.onEnterFrame = grow; // Tree parameters var angle = 100; var branch = 5; var trunkThickness = 8; var trunkLength = -100; var branchSize = 0.7; The basic tree shape is defined by the parameters in the last few lines of the listing: The maximum angle a branch makes with its parent The maximum number of buds (daughter branches) any branch can have The thickness of the tree trunk The tree trunk's length The ratio between the daughter branch and the parent branch (which makes branches get smaller as you move away from the trunk) First, we create the trunk and set its position. We then attach grow( ) as its onEnterFrame event handler. As its name suggests, grow( ) makes our empty movie clip grow by doing two things. First, it creates our branch by drawing a vertical line of height trunkLength and thickness trunkThickness. If we are currently drawing the trunk, we leave it as-is, resulting in scene 1. If we are not drawing the trunk, we also rotate it by +/- angle, as seen in scene 2, and scale it by branchSize; as seen in scene 3; all are shown in Figure 1-29. The code then creates between 1 and branch new buds. The hacky part is that these buds are given the same onEnterFrame event handler as the current one, namely grow( ), so in the next frame, the buds grow their own buds and so on. Here is the portion of grow( ) that spawns a new movie clip for each bud and assigns the onEnterFrame event handler. Our tree could create new branches forever, but we need a limit; otherwise, Flash slows down and eventually crashes. To prevent this, the function counter( ) is used to limit the total number of branches to 3,000. var seed = Math.ceil(Math.random( )*branch); for (var i = 0; i < seed; i++) { if (counter( ) < 3000) { var segment = this.createEmptyMovieClip("segment" + i, i); segment.onEnterFrame = grow; segment._y = trunkLength; } } Finally, grow( ) deletes itself, as it needs to run only once per branch. So, we are using a self-calling function (or, rather, one that creates copies of branches that contain the same function attached to them) to create our fractal tree. Not only do we have a tree consisting of branches and sub-branches, but we also have the same hierarchy reflected in the movie clip timelines. You can see this by using the debugger (although you will have to set the maximum number of branches down from 3000; otherwise, you will be in for a long wait!). The result is vaguely oriental in its simplicity. However, it doesn't involve motion graphics, and static Java tree generators are a dime a dozen. Therefore, we add movement to our tree in the next hack.
http://etutorials.org/Macromedia/Flash+hacks.+100+industrial-strength+tips+tools/Chapter+1.+Visual+Effects/Hack+6+A+Tree+Grows+in+Brooklyn/
CC-MAIN-2018-51
refinedweb
925
73.68
A DESCRIPTION OF THE REQUEST : Sometimes we must mix generics and non-generics code. The simplest example: we have two objects o1 and o2, declared as "Object", and need to compare them via Comparable: ((Comparable)o1).compare(o2). The most "popular" and annoying compiler warning here is "unchecked cast", when we need to cast some raw type to a generics one. Of course, we may use @SuppressWarnings("unchecked") annotation any time when we really need such casting. But it does not beautify the code. Moreover, some old IDEs do not "understand" it, because there was no this annotation in the first releases of Java 1.5. Could you add the following static method into System or Class: @SuppressWarnings("unchecked") public static <T> T castType(Object object) { return (T)object; } It allows to avoid "unchecked cast" warning or using @SuppressWarnings in most cases while compiling the user's code. JUSTIFICATION : Often using @SuppressWarnings("unchecked") chokes the Java code and does not help in old IDEs (though these IDEs allow compilation with JDK 1.7). CUSTOMER SUBMITTED WORKAROUND : I've implemented such a method in my package-private library and call this always instead unsafe type casing.
https://bugs.java.com/bugdatabase/view_bug.do?bug_id=6542176
CC-MAIN-2020-24
refinedweb
195
51.07
GIMP goes SVG 370 An anonymous reader writes "The GIMP developers released a new snapshot in the development series. Version 1.3.21 (aka the path to excellence release) features an improved path tool with superb path stroking and adds SVG support. You can now export your GIMP paths to SVG and the new SVG import plug-in not only renders Scalable Vector Graphics for you at the desired resolution, it also imports SVG paths as GIMP paths." LAMENS TERMS (Score:2, Informative) Gimp now Works like Photoshop AND Illustrator. Re:LAMENS TERMS (Score:5, Funny) Outstanding! (Score:3, Informative) First, it's "layman's." (Lamens? Is that a brand of ramen noodles or something?) Secondly, no, this announcement does NOT mean GIMP works like Photoshop AND Illustrator. Nothing of the sort, not even close. ALL this means is that GIMP can now save into a scalable vector format designed for the web. The decidedly low- to mid-tier GIMP project still has a long way to go before it even touches Photoshop, let alone Illustrator (although, so as not to seem like *too* Re:Outstanding! (Score:2) Re: INACCURATE TERMS (Score:2) Re: INACCURATE TERMS (Score:2) Re: INACCURATE TERMS (Score:2) I can't. And for most non-graphical artists, TheGimp is not only completely sufficient, but also quite powerful. And, of course, Free Software, which is a good enough reason for me to use it over proprietary alternatives. Re: INACCURATE TERMS (Score:2) Should they also compile everything for Windows for free too? Not at all, im willing to pay. When did i say i wasnt? I just want 1.4stable... (Score:2, Insightful) 1.3.20. I periodically try out the development releases and admire the pretty widgets if the thing will compile or load. As a matter of fact, I built one this morning; compiled fine but crashed within 5 minutes of loading. Personally, I would be much more impressed if the developers decided on a feature-freeze and cleaned up their mess. I can't remember off-hand how long stable has been at 1.2.5, but it's beginning to look a bit incongruous with GTK-1.x wi Re:I just want 1.4stable... (Score:2) Apart from the odd crash when I try to edit graphics at the same time as having Epiphany, XMMS, two gVims and GAIM open when I'm doing web design, it runs like a dream. Re:I just want 1.4stable... (Score:2, Informative) If it crashed on you, you should have tried to obtain a stack trace and file a bug-report then at bugzilla.gnome.org. If you don't do that, the GIMP developers have no chance to fix the bug that hit you. BTW, the next version will be called 2.0, not 1.4. Re: INACCURATE TERMS (Score:3, Informative) Re: INACCURATE TERMS (Score:3, Interesting) Goy does, though, and she agrees with me. Re: INACCURATE TERMS (Score:2, Interesting) GUIs are about organization (Score:4, Insightful) I then installed The GIMP. I found its GUI confusing. I then bought Paint Shop Pro on the recommendation of a digital photography book I respected. Again, I found the GUI confusing, but at least the book got me started on what I should be looking for. I think if I had read such a book about Photoshop earlier I'd probably be a Photoshop fan right now. I now switch back and forth between The GIMP and Paint Shop Pro. The GIMP does some things better (script-fu is really slick) and PSP does some things better (scratch and dust repair, contrast enhancement.) But I still have to hunt through confusing menus, pop-up toolbar things, etc. I've simply come to expect that any powerful photo editing program is going to have a confusing interface, and that any program is going to take an investment of time to learn. Paint Shop Pro has some hand-holding tutorials that I found to be excellent at getting me up to speed. I'm sure these things exist for the other programs as well. Anyway, it's all still easier than the old days with a camelhair brush and hand spotting negatives and prints. Re: INACCURATE TERMS (Score:2, Interesting) Re:LAMENS TERMS (layman's terms?) (Score:3, Interesting) Anyone? Note: there seems to be no agreement here, but I'd assume the users' community (or better the project's developers) would have it right - I'm not trying to start a war. I have always pronounced it with a hard G, both becase it is the G(uh)nu Image Manipulation Program and because the word gimp is pronounced that way. That said, it is yet another example of why the free software movement suffers from poor marketing. Gimp Re: Soft vs Hard G (Score:3, Funny) A hard G is like the G in "garage", whereas a soft G is like the G in "garage". SVG a Huge plus (Score:4, Informative) Way to go Gimp! If doing practically everything photoshop can do for free didn't put Gimp on the map. The addition of SVG ought to. Re:SVG a Huge plus (Score:2) Then I would be happy Re:SVG a Huge plus (Score:2) Re:SVG a Huge plus (Score:2) convert lion.svg lion.png eog lion.png Pops up eye of gnome with the png version the cute little lion cub. No problems at all. I have had some problems with a few graphics in the past, but most seem to work fine. Also, for most SVGs PNG should be the preferred raster format to convert them as they are usually more solid colors than a photo, which is what JPEG is great for. This is done with the version of ImageMagick that ships with RedHat 9 (or mayb Re:SVG a Huge plus (Score:5, Informative) Also, if you want a good vector graphics editor for free, try SodiPodi. It's good. Especially for a 0.3 level program. P.S. This isn't meant to be rude to GIMP. It's being compared only to THE BEST. They actually have a better interface than most other programs that compete with Photoshop (that is, programs that I've tried). Re:SVG a Huge plus (Score:3, Insightful) Re:SVG a Huge plus (Score:2, Insightful) I'm constantly amazed by this argument. As if there was an objective way of comparing user interfaces. The only real measure of how good an interface is is how comfortable people feel while using it. There's nothing wrong in liking a GUI because you're used to it. However, trying to coerce people to start using "a better GUI" (be it Gimp vs. Photoshop or X desktop vs. Win GUI) is wrong. There's no "better GUI" than th Re:SVG a Huge plus (Score:3, Insightful) Re:SVG a Huge plus (Score:2, Insightful) In this case, though, your parent post is merely saying that the GIMP's interface is good in its own right -- not that everyone should switch over to it. The grandparent, on the other hand, is basically saying that the Gimp should be changed to be more familiar to Photoshop users. That may be a valid response to Everyone Should Switch Over arguments, but if trying to coerce someone Re:SVG a Huge plus (Score:2) No. It depends on the application. For applications that are a part of daily life, then the measure of how good an interface is is how fast a user can accomplish tasks, and how complete their interaction with the tool is. "Feeling comfortable" is really only useful for applications that you don't use very often. Emacs, for example, is a wonderful interface for programmers, but a horrible interface for peopl Re:SVG a Huge plus (Score:2) There are indeed objective mesurements of an interface. The better interface is one that takes less time to complete your task, is less prone to errors, and once learned does not require one's attention to be taken from the task at hand. There are laws that allow one to estimate these factors for a given interface, and there are tests that can prov Re:SVG a Huge plus (Score:3, Interesting) Everyone listening? Photoshop is a massive pain in the arse, people! It's not that great! There is a reason I choose to use the gimp at home! Any volunteers to join my new 'Photoshop Sucks' club? Re:SVG a Huge plus (Score:2) Also, is it possible yet to make the cursors be actually the same size as the brush? I can't really work any other way. Re:SVG a Huge plus (Score:2) Re:SVG a Huge plus (Score:2) MDI interface for the GIMP (Score:3, Informative) Offering some kind of MDI interface for the GIMP has been suggested several years ago. This may be a good solution, as long as it is optional because some people prefer the current interface. You can find some discussion about that in bug report #7379 [gnome.org]. The feature may be implemented in GIMP 3.0, or earlier if I find enough spare time to implement it or (more likely) if someone else takes the job and implements this feature. Note that version 1.3.x and the upcoming version 2.0 offer the option of displa Re:SVG a Huge plus (Score:2) I just installed this program and I've been playing with it for like 30 seconds. Wow. I've been looking for something like this for linux for quite a while. Thanks! Re:SVG a Huge plus (Score:2) Correct me if I'm wrong, though.. haven't looked at The Gimp in many months. S Re:SVG a Huge plus (Score:3, Informative) Re:SVG a Huge plus (Score:2) As I understand it there are several problems surrounding CMYK and you're blurring them a bit: Re:SVG a Huge plus (Score:2) GIMP website interface... (Score:3, Informative) Re:GIMP website interface... (Score:2) OT, but does anybody know what software they are using on mmmaybe.gimp.org ? I looked at the HTML output and concluded that it might be plone, but maybe that's just my prejudice showing. Re: the sig (Score:2) Or are they to allow special interest groups and politicians to get what they want? SVG is the future (Score:4, Informative) Get an SVG enabled Mozilla build [mozilla.org] and start playing with it. It's fun. Re:SVG is the future (Score:2) Re:SVG is the future (Score:2) Are you sure? [toso-digitals.de] SVG is not the future (Score:2, Interesting) This is not a troll, this is the truth. Joe Average doesn't care; a new vector graphics format is only exciting to geeks. Joe Average only cares about "images", regardless of the underlying technology. Unless either IE supports SVG natively, or everybody has an SVG plugin, SVG will never become popular. Re:SVG is not the future (Score:2) Re:SVG is not the future (Score:5, Interesting) Don't look for any new features in IE for the next several years. By integrating it tightly into the OS and killing it as a standalone product, Microsoft has effectively eliminated all potential innovation in the browser area, since browser releases now equals OS releases. IE 7 won't be out until Longhorn (at least a year away), and even then it won't be widely used as most people will never migrate off XP for the life of their machines. This is an unprecedented opportunity for Mozilla to win the browser war. Being a standalone installable app (that can run on win98 and up), Mozilla can add new features and support new standards. Just spread the word. Tell your friends. Talk to your favorite web developers. Re:SVG is not the future (Score:2) Re:There is an SVG plugin for IE (Score:3, Informative) Since it don't install itself automatically you have to copy the files from some "shared files/adobe" directory to "Mozilla/plugins", just search for "NPSVG6.dll" and "NPSVG6.zip" Re:SVG is not the future (Score:2) Wow, that is such a great insult! It is almost as if I have to go find/create an enemy to try it on. What does this mean for Sodipodi? (Score:5, Interesting) Re:What does this mean for Sodipodi? (Score:2) Re:What does this mean for Sodipodi? (Score:5, Informative) One of the goals of adding SVG support in the GIMP was to allow better cooperation between the GIMP and Sodipodi or other vector-based applications. Until recently, if you were using Sodipodi, you had to convert your SVG file to a bitmap format (such as PNG) before being able to load it in the GIMP. Now it is possible to import the SVG file directly into the GIMP and make some minor adjustments before creating the final image. You can also convert some parts of the SVG (imported as paths in the GIMP) to selections and apply more complex effects that what SVG would allow. Note that the SVG support in the GIMP is only due to the integration of the SVG plug-in that had been available since a while as part of libsvg. So it's nothing really new, although including it as part of the default GIMP distribution seems to make a significant difference. Re:What does this mean for Sodipodi? (Score:2, Interesting) Three Questions (Score:5, Interesting)) 3) Does it make this simple? I've tried to figure a way to do both Vector and Raster editing in one program before, and had some ideas, but nothing that would truly make it easy. The reason Illustrator and Photoshop are separate is not for the chance to sell two products (although I suspect that influences the idea a bit) but because there isn't a way to do vector and raster editing in a well mixed manner. At best, you end up with something that changes back and forth between being a vector editor and a raster editor depending on what is selected. Re:Three Questions (Score:2) As for UI, well, I just had to click a swap view button and the tool bars would change to something appropriate. So yeah, I think its relatively simple. Re:Three Questions (Score:3, Informative) All that was added is the ability to import and export raster files encapsulated as SVG - AND import and export Gimp vector - The Bezier Paths existing in gimp 1.2.x. Re:Three Questions (Score:4, Interesting) Re:Three Questions (Score:5, Insightful) Re:Three Questions (Score:2) Re:Three Questions (Score:5, Informative) We (or rather Sven) used rsvg to read and render the SVG as a bitmap. >) It just imports SVG to a rastermap, and exports paths to SVG. There is no support for the funky stuff like gradient fills, object groups, etc. This is not a vector graphics program. > 3) Does it make this simple? Yes. You load your SVG, specifying the size of the bounding box, and there you go. Cheers, Dave. Re:Three Questions (Score:3, Informative) That program is the one reason I have to boot a Windows machine now and then. There is nothing I have found that is faster for producing web interface mock-ups. It doesn't have the same range of power as Photoshop + Illustrator, or for raster even The Gimp, but I can do basic work, and 90% of non-print stuff is basic work, in about 1/10th the time. If The Gimp gets decent vector editing capabilities, I can final SVG rendering engine? (Score:4, Insightful) So what does the GIMP use to render SVG and how good is it? In particular, is it different from the libart that Mozilla has been using? The world really needs a high quality open source SVG renderer. Adobe's plugins don't exist for every platform and Batik, AFAIK, relies on Java 2D. Re:SVG rendering engine? (Score:5, Informative) (Maintained by my good friend and fellow AbiWord developer Dom Lachowicz) Martin Re:SVG rendering engine? (Score:3, Informative) Yes, the Gimp has stolen my RSVG plugin. No doubt Sven and Yosh have since souped it up. Best regards, Dom OpenGL options? (was Re:SVG rendering engine?) (Score:2) I just googled [google.com] for some OpenGL [opengl.org]-based options, and Amaya [w3.org] looks interesting. Anyone used Amaya or have other recommendations for an open source, OpenGL based SVG rendering API? JPG properties (Score:4, Interesting) V1.2.4 does not support this which make it an inconvenient choice to edit pictures taken with a digital camera. All JPG properties like date the picture was shot and other parameters get lost when saving. Re:JPG properties (Score:5, Informative) rudimentary CMYK separation also (Score:4, Informative) Nice, but where's the color calibration? (Score:2) There are solutions for Windows and Mac [pantone.com] but not for Linux/BSD. Maybe someone could start an open color matching standard at some Re:Nice, but where's the color calibration? (Score:4, Informative) (Sorry, unnecessary snarkyness. I agree that there is no good UI, nor tools, for color management in X11. However it should be noted that X11 has complete color management support built-in. It's just that nobody uses it on Linux. I bet if I peeked in SGI's X distribution, it would be loaded with color management features.) The GIMP New Web Site (Score:4, Informative) Do not miss the new GIMP site, taht will soon replace the contents in: mmaybe.gimp.org [gimp.org] . Re:The GIMP New Web Site (Score:2) OT, but does anybody know what software they are using? I looked at the HTML output and concluded that it might be plone, but mmmaybe that's just my prejudice showing. Re:The GIMP New Web Site (Score:2) Well, I can tell you from a reliable source that the software used for maintaining the web site is a combination of CVS for storing all source files (module gimp-web in gnomecvs), good old hand-written XHTML for the contents of a pages, and a custom set of Python scripts for wrapping the page contents into the templates (header, footer, menu bar). Until a few weeks ago, the last part was done using Apache SSI, but now the pages are pre-processed by the scripts instead of requiring the web server to do all Other goodies to look foward to in gimp 1.3.x (Score:4, Interesting) CMYK support! Now uses GTK 2, no more ugly fonts, no more GREY, its all in the colour you want! Hundreds of new plugins, and there is the excellent plug in registry as well. If there isn't a filter you wan't then it can easily be created due to the GIMP's API Support for standards from the freedesktop project, including thumnails. The new Docking gui, which allows you to reduce your screen clutter! Just drag and drop those tabs! Much faster, starts in around 3 seconds, and it uses MMX extentions to accelerate your graphics filters. Simply put, gimp 1.3.x is really powerful, and Adobe should start to become worried. Remember, if the feature you wan't isn't there, it will be soon due to the extremly rapid development. Even a 0.01 increment == TONS of features! Also, the "gimp" himself looks a lot cuter in SVG. Re:Nice, but... (Score:2) Use rectangular selection for n-pixels wide and l-pixels long, then fill with a solid color. Use a 1x1 brush and click on a pixel to color it any color you want. Select the transform tool, which is set to rotation by default, then drag your image. This pops up another window with the rotation properties, and that allows you to either check your dragging or manually set the rotation. No secret key combinations required. Sure you need to SVG support? (Score:2, Interesting) Re:SVG support? (Score:2) Visio 2003 [microsoft.com] has SVG support. I didn't get a chance to beta test it though, so I can't comment on it's capabilities and integration into the rest of M$* 2003 applications. I don't mean to gripe but.... (Score:5, Informative) Adjustment layers! (with masks) and no, you can't really 'emulate' them with the currently available toolset unfortunately (remember that they have masks and are non-destructive). I love GIMP but the absence of this feature (which is not exactly a new thing, even PSPro has it!) is really killing me... Re:I don't mean to gripe but.... (Score:2) Re:I don't mean to gripe but.... (Score:2) [gnome.org] a better bug (marked as a dupe of this one) with links to tutorials etc. is [gnome.org] and from the various comments in them it doesn't seem the gimp developers really either understand what this functionality is really about or how useful it is. I am really surprised that the gimp team doesn't have some pro PhotoShop users on board to gi Re:Even more basic... (Score:2) 1: How hard would it be to add a straght line tool? 2: How are users supposed to know that you use the shift key to draw straight lines? That is a UI flaw. Drawing straight lines is a common thing that users do. Including it as a non-apparent feature (which makes the user hunt down the documentation) is just plain stupid. Re:Even more basic... (Score:2) It should be rather obvious that 'straight' is not a tool in itself, but a modifier for the movement of existing tools. More like using a straight-edge accessory with your pen, brush or eraser than having a seperate pen, brush or eraser that only move in straight lines. 2: How are users supposed to know that you use the shift key to draw straight lines? Since 'straight' is a modifer of movement rather than a tool, its not a huge leap to use a modifier Re:Even more basic... (Score:2) Personally, I guessed based on the way the selection tools work. That's known as a consistant UI. Also, you could read the manual. TWW acronyms? (Score:2, Funny) COOL! (Score:3, Interesting) There was an OS/2 program (forget its name) which mixed vectors and layers, and also had the unique ability to layer EFFECTS...for example, I could do black text, put a blur effect layer over that, and then colored text over that to achieve a drop shadow with very little effort. Of course, you could then put an effect layer over the text for texturizing, etc. You could combine effects to your hearts content, and if you didn't like the way it worked, it was trivial to back out, or move the effect elsewhere. Vector support seems like the necessary first step to this type of thing and I hope that the GIMP developers discover this cool and unique way to manipulate images. KDE 3.2 will have SVG too (Score:2) so when is.. (Score:2) SVG-Export.scm (Score:2, Informative) If anyone would like, I'm making it available here [eastfarthing.com]. Save it in your shared/scripts directory with the other scm files. Then flatten your images to indexed and go Script-Fu -> File -> Export SVG. Enjoy! (And if any of you have any weight with the GIMP team and still want to include it in the distro, you're welcome.) Off-topic: The Slashdot Gimp icon (Score:2, Funny) Re:Off-topic: The Slashdot Gimp icon (Score:2) SWEET (Score:2) -bill! (Tux Paint dude) I'm already using GIMP 1.3 (Score:2) If you run Debian, "apt-get install gimp-1.3" and try it out. P.S. My biggest wish right now would be for XSane support for GIMP 1.3. Debian doesn't seem How about (Score:2) 2. 48-bit color support (and don't point me to buggy cinepaint) 3. COLOR MANAGEMENT. 4. L*a*b color space Sheesh. Re:OFFTOPIC - Alternate story (Score:2) Re:Almost there (Score:3, Informative) has more info. Re:Almost there (Score:2) Re:Almost there (Score:2) If you need more, take a look at our list of Free and other predominantly non-commercial software at. Re:Am I missing something? GIMP sucks - for me (Score:3, Informative) You can certainly make GIMP palettes with pantone color names for RGB approximations, but don't distribute them to anyone or Pantone, Inc.'s lawyers will come down on you HARD.
https://slashdot.org/story/03/10/07/130202/gimp-goes-svg
CC-MAIN-2018-17
refinedweb
4,150
72.97
I have a feature class with four fields (A1, A2, B1, B2) of type Double - each field has values as well as nulls. I want to be able to populate a second set of fields (A1A2, B1B2) with the sum of the values e.g. A1A2 = A1+A2, B1B2=B1+B2.. A simple addition using Field Calculator fails where A1 and/or A2 are nulls (which I expected). I can copy values to another field using a python expression to deal with nulls - Prelogic Script Code def updateValue(value): if value == None: return '0' else: return value and expression A1A2 = (updateValue(!A1!)) (this copies A1 to A1A2 field and changes the nulls to 0 in the process. If I change the expression to try and add two values ( i.e. A1A2 = (updateValue(!A1!)) + (updateValue(!A2!)) ), this works when both A1 and A2 are not nulls, but falls over if either or both fields are null (gives a value in A1A2 of null). Can someone point me towards a calculator expression (or other method) of achieving this result. I want to have the expression run within a model, with the intent to have a process that I am presently doing part manually setup as a model so that a basic user (one of my managers) can run herself when requiredwithout bothering me.all the time. I should be able to set it up so that I have new fields e.g. tmpA1 and tmpA2 which are populated with the A1 and A2 values with nulls changed to 0, and then perform the addition (and will implement this if required), but to me this is a bit of a kludge and would like to have a more elegant solution. I want to maintain the original fields with their null values as these are meaningful ( null is not 0). I feel a solution should be possible within field calculator but understand that this may or may not be possible, in which case I am willing would try something else. I have some modelbuilder skills (more like - will this work yes/no if no then try something else) without necessarily a deep understanding of what I am doing, and no real python knowledge or coding background. If I read you correctly. You never check to see if something is equal to None, you check to see if something IS None. So if both are not None, then do something with them, like add them def updateValue(v0, v1): if (v0 is not None) and (v1 is not None): return v0 + v1 else: return 0 You could try something like this I suppose: def main(): a1 = 2.5 a2 = None b1 = None b2 = 1.23 print(SumValues([a1, a2])) print(SumValues([b1, b2])) print(SumValues([a1, b2])) print(SumValues([a2, b1])) def SumValues(lst_val): lst_val = [a for a in lst_val if a != None] try: return sum(lst_val) except: return 0 if __name__ == '__main__': main() This will yield: 2.5 1.23 3.73 0
https://community.esri.com/t5/data-management-questions/calculations-with-null-values/td-p/755081
CC-MAIN-2022-21
refinedweb
499
66.67
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo It looks like using the %shared_ptr directive on a class breaks the iterators of STL containers of that class, unless those are also containers of shared_ptrs. The iterators "forget" about the non-shared_ptr definition of the class, yielding generic and useless SwigPyObjects instead of instances of the wrapped class (along with the familiar "no destructor found" warning). It only appears to happen if the class is defined in a namespace. I think this means it's presently impossible to use both vector<T> and shared_ptr<T> in the same project, if T is in a namespace. I've attached what I think is an example that demonstrates the problem; note that everything works if you comment out either the %shared_ptr directive, or the remove the namespace. Any ideas for a workaround? Thanks! Jim
http://sourceforge.net/p/swig/mailman/message/30065197/
CC-MAIN-2015-22
refinedweb
153
58.52
# Announcing TypeScript 3.4 RC Some days ago we announced the availability of our release candidate (RC) of TypeScript 3.4. Our hope is to collect feedback and early issues to ensure our final release is simple to pick up and use right away. To get started using the RC, you can get it [through NuGet](https://www.nuget.org/packages/Microsoft.TypeScript.MSBuild), or use npm with the following command: ``` npm install -g typescript@rc ``` You can also get editor support by * [Downloading for Visual Studio 2017](https://marketplace.visualstudio.com/items?itemName=TypeScriptTeam.TypeScript-340rc-vs2017) (for version 15.2 or later) * Following directions for [Visual Studio Code](https://code.visualstudio.com/Docs/languages/typescript#_using-newer-typescript-versions) and [Sublime Text](https://github.com/Microsoft/TypeScript-Sublime-Plugin/#note-using-different-versions-of-typescript). Let’s explore what’s new in 3.4! ![](https://habrastorage.org/r/w780q1/webt/rh/0u/3d/rh0u3dh_1qkmnyia12ksyuuozso.jpeg) [This article in our blog.](https://devblogs.microsoft.com/dotnet/net-core-container-images-now-published-to-microsoft-container-registry/) Faster subsequent builds with the `--incremental` flag ------------------------------------------------------ Because TypeScript files are compiled, it introduces an intermediate step between writing and running your code. One of our goals is to minimize built time given any change to your program. One way to do that is by running TypeScript in `--watch` mode. When a file changes under `--watch` mode, TypeScript is able to use your project’s previously-constructed dependency graph to determine which files could potentially have been affected and need to be re-checked and potentially re-emitted. This can avoid a full type-check and re-emit which can be costly. But it’s unrealistic to expect *all* users to keep a `tsc --watch` process running overnight just to have faster builds tomorrow morning. What about cold builds? Over the past few months, we’ve been working to see if there’s a way to save the appropriate information from `--watch` mode to a file and use it from build to build. TypeScript 3.4 introduces a new flag called `--incremental` which tells TypeScript to save information about the project graph from the last compilation. The next time TypeScript is invoked with `--incremental`, it will use that information to detect the least costly way to type-check and emit changes to your project. ``` // tsconfig.json { "compilerOptions": { "incremental": true, "outDir": "./lib" }, "include": ["./src"] } ``` By default with these settings, when we run `tsc`, TypeScript will look for a file called `.tsbuildinfo` in our output directory (`./lib`). If `./lib/.tsbuildinfo` doesn’t exist, it’ll be generated. But if it does, `tsc` will try to use that file to incrementally type-check and update our output files. These `.tsbuildinfo` files can be safely deleted and don’t have any impact on our code at runtime – they’re purely used to make compilations faster. We can also name them anything that we want, and place them anywhere we want using the `--tsBuildInfoFile` flag. ``` // front-end.tsconfig.json { "compilerOptions": { "incremental": true, "tsBuildInfoFile": "./buildcache/front-end", "outDir": "./lib" }, "include": ["./src"] } ``` As long as nobody else tries writing to the same cache file, we should be able to enjoy faster incremental cold builds. ### Composite projects Part of the intent with composite projects (`tsconfig.json`s with `composite` set to `true`) is that references between different projects can be built incrementally. As such, composite projects will **always** produce `.tsbuildinfo` files. ### `outFile` When `outFile` is used, the build information file’s name will be based on the output file’s name. As an example, if our output JavaScript file is `./output/foo.js`, then under the `--incremental`flag, TypeScript will generate the file `./output/foo.tsbuildinfo`. As above, this can be controlled with the `--tsBuildInfoFile`flag. ### The `--incremental` file format and versioning While the file generated by `--incremental` is JSON, the file isn’t mean to be consumed by any other tool. We can’t provide any guarantees of stability for its contents, and in fact, our current policy is that any one version of TypeScript will not understand `.tsbuildinfo` files generated from another version. Improvements for `ReadonlyArray` and `readonly` tuples ------------------------------------------------------ TypeScript 3.4 makes it a little bit easier to use read-only array-like types. ### A new syntax for `ReadonlyArray` The `ReadonlyArray` type describes `Array`s that can only be read from. Any variable with a handle to a `ReadonlyArray` can’t add, remove, or replace any elements of the array. ``` function foo(arr: ReadonlyArray) { arr.slice(); // okay arr.push("hello!"); // error! } ``` While it’s often good practice to use `ReadonlyArray` over `Array`for the purpose of intent, it’s often been a pain given that arrays have a nicer syntax. Specifically, `number[]` is a shorthand version of `Array`, just as `Date[]` is a shorthand for `Array`. TypeScript 3.4 introduces a new syntax for `ReadonlyArray` using a new `readonly` modifier for array types. ``` function foo(arr: readonly string[]) { arr.slice(); // okay arr.push("hello!"); // error! } ``` ### `readonly` tuples TypeScript 3.4 also introduces new support for `readonly` tuples. We can prefix any tuple type with the `readonly` keyword to make it a `readonly` tuple, much like we now can with array shorthand syntax. As you might expect, unlike ordinary tuples whose slots could be written to, `readonly` tuples only permit reading from those positions. ``` function foo(pair: readonly [string, string]) { console.log(pair[0]); // okay pair[1] = "hello!"; // error } ``` The same way that ordinary tuples are types that extend from `Array` – a tuple with elements of type `T``1`, `T``2`, … `T``n` extends from `Array<` `T``1` | `T``2` | … `T``n` `>` – `readonly` tuples are types that extend from `ReadonlyArray`. So a `readonly` tuple with elements `T``1`, `T``2`, … `T``n` extends from `ReadonlyArray<` `T``1` | `T``2` | … `T``n` `>`. ### `readonly` mapped type modifiers and `readonly` arrays In earlier versions of TypeScript, we generalized mapped types to operate differently on array-like types. This meant that a mapped type like `Boxify` could work on arrays and tuples alike. ``` interface Box { value: T } type Boxify = { [K in keyof T]: Box } // { a: Box, b: Box } type A = Boxify<{ a: string, b: number }>; // Array> type B = Boxify; // [Box, Box] type C = Boxify<[string, boolean]>; ``` Unfortunately, mapped types like the `Readonly` utility type were effectively no-ops on array and tuple types. ``` // lib.d.ts type Readonly = { readonly [K in keyof T]: T[K] } // How code acted \*before\* TypeScript 3.4 // { readonly a: string, readonly b: number } type A = Readonly<{ a: string, b: number }>; // number[] type B = Readonly; // [string, boolean] type C = Readonly<[string, boolean]>; ``` In TypeScript 3.4, the `readonly` modifier in a mapped type will automatically convert array-like types to their corresponding `readonly` counterparts. ``` // How code acts now *with* TypeScript 3.4 // { readonly a: string, readonly b: number } type A = Readonly<{ a: string, b: number }>; // readonly number[] type B = Readonly; // readonly [string, boolean] type C = Readonly<[string, boolean]>; ``` Similarly, you could write a utility type like `Writable` mapped type that strips away `readonly`-ness, and that would convert `readonly` array containers back to their mutable equivalents. ``` type Writable = { -readonly [K in keyof T]: T[K] } // { a: string, b: number } type A = Writable<{ readonly a: string; readonly b: number }>; // number[] type B = Writable; // [string, boolean] type C = Writable; ``` ### Caveats Despite its appearance, the `readonly` type modifier can only be used for syntax on array types and tuple types. It is not a general-purpose type operator. ``` let err1: readonly Set; // error! let err2: readonly Array; // error! let okay: readonly boolean[]; // works fine ``` `const` assertions ------------------ When declaring a mutable variable or property, TypeScript often *widens* values to make sure that we can assign things later on without writing an explicit type. ``` let x = "hello"; // hurray! we can assign to 'x' later on! x = "world"; ``` Technically, every literal value has a literal type. Above, the type `"hello"` got widened to the type `string` before inferring a type for `x`. One alternative view might be to say that `x` has the original literal type `"hello"` and that we can’t assign `"world"` later on like so: ``` let x: "hello" = "hello"; // error! x = "world"; ``` In this case, that seems extreme, but it can be useful in other situations. For example, TypeScripters often create objects that are meant to be used in discriminated unions. ``` type Shape = | { kind: "circle", radius: number } | { kind: "square", sideLength: number } function getShapes(): readonly Shape[] { let result = [ { kind: "circle", radius: 100, }, { kind: "square", sideLength: 50, }, ]; // Some terrible error message because TypeScript inferred // 'kind' to have the type 'string' instead of // either '"circle"' or '"square"'. return result; } ``` Mutability is one of the best heuristics of intent which TypeScript can use to determine when to widen (rather than analyzing our entire program). Unfortunately, as we saw in the last example, in JavaScript properties are mutable by default. This means that the language will often widen types undesirably, requiring explicit types in certain places. ``` function getShapes(): readonly Shape[] { // This explicit annotation gives a hint // to avoid widening in the first place. let result: readonly Shape[] = [ { kind: "circle", radius: 100, }, { kind: "square", sideLength: 50, }, ]; return result; } ``` Up to a certain point this is okay, but as our data structures get more and more complex, this becomes cumbersome. To solve this, TypeScript 3.4 introduces a new construct for literal values called *`const`* assertions. Its syntax is a type assertion with `const` in place of the type name (e.g. `123 as const`). When we construct new literal expressions with `const`assertions, we can signal to the language that * no literal types in that expression should be widened (e.g. no going from `"hello"` to `string`) * object literals get `readonly` properties * array literals become `readonly` tuples ``` // Type '10' let x = 10 as const; // Type 'readonly [10, 20]' let y = [10, 20] as const; // Type '{ readonly text: "hello" }' let z = { text: "hello" } as const; ``` Outside of `.tsx` files, the angle bracket assertion syntax can also be used. ``` // Type '10' let x = 10; // Type 'readonly [10, 20]' let y = [10, 20]; // Type '{ readonly text: "hello" }' let z = { text: "hello" }; ``` This feature often means that types that would otherwise be used just to hint immutability to the compiler can often be omitted. ``` // Works with no types referenced or declared. // We only needed a single const assertion. function getShapes() { let result = [ { kind: "circle", radius: 100, }, { kind: "square", sideLength: 50, }, ] as const; return result; } for (const shape of getShapes()) { // Narrows perfectly! if (shape.kind === "circle") { console.log("Circle radius", shape.radius); } else { console.log("Square side length", shape.sideLength); } } ``` Notice the above needed no type annotations. The `const`assertion allowed TypeScript to take the most specific type of the expression. ### Caveats One thing to note is that `const` assertions can only be applied immediately on simple literal expressions. ``` // Error! // A 'const' assertion can only be applied to a string, number, boolean, array, or object literal. let a = (Math.random() < 0.5 ? 0 : 1) as const; // Works! let b = Math.random() < 0.5 ? 0 as const : 1 as const; ``` Another thing to keep in mind is that `const` contexts don’t immediately convert an expression to be fully immutable. ``` let arr = [1, 2, 3, 4]; let foo = { name: "foo", contents: arr, }; foo.name = "bar"; // error! foo.contents = []; // error! foo.contents.push(5); // ...works! ``` Type-checking for `globalThis` ------------------------------ It can be surprisingly difficult to access or declare values in the global scope, perhaps because we’re writing our code in modules (whose local declarations don’t leak by default), or because we might have a local variable that shadows the name of a global value. In different environments, there are different ways to access what’s effectively the global scope – `global` in Node, `window`, `self`, or `frames` in the browser, or `this` in certain locations outside of strict mode. None of this is obvious, and often leaves users feeling unsure of whether they’re writing correct code. TypeScript 3.4 introduces support for type-checking ECMAScript’s new `globalThis` – a global variable that, well, refers to the global scope. Unlike the above solutions, `globalThis` provides a standard way for accessing the global scope which can be used across different environments. ``` // in a global file: let abc = 100; // Refers to 'abc' from above. globalThis.abc = 200; ``` `globalThis` is also able to reflect whether or not a global variable was declared as a `const` by treating it as a `readonly`property when accessed. ``` const answer = 42; globalThis.answer = 333333; // error! ``` It’s important to note that TypeScript doesn’t transform references to `globalThis` when compiling to older versions of ECMAScript. As such, unless you’re targeting evergreen browsers (which already support `globalThis`), you may want to [use an appropriate polyfill](https://github.com/ljharb/globalThis) instead. Convert to named parameters --------------------------- Sometimes, parameter lists start getting unwieldy. ``` function updateOptions( hue?: number, saturation?: number, brightness?: number, positionX?: number, positionY?: number positionZ?: number) { // .... } ``` In the above example, it’s way too easy for a caller to mix up the order of arguments given. A common JavaScript pattern is to instead use an “options object”, so that each option is explicitly named and order doesn’t ever matter. This emulates a feature that other languages have called “named parameters”. ``` interface Options { hue?: number, saturation?: number, brightness?: number, positionX?: number, positionY?: number positionZ?: number } function updateOptions(options: Options = {}) { // .... } ``` The TypeScript team doesn’t just work on a compiler – we also provide the functionality that editors use for rich features like completions, go to definition, and refactorings. In TypeScript 3.4, our intern [Gabriela Britto](https://github.com/gabritto) has implemented a new refactoring to convert existing functions to use this “named parameters” pattern. [![A refactoring being applied to a function to make it take named parameters.](https://habrastorage.org/getpro/habr/post_images/f33/305/761/f3330576172c94151d7e13408c82a418.gif)](https://camo.githubusercontent.com/921efb48cfe5c0bbc123119e0ee805d663553583/68747470733a2f2f646576626c6f67732e6d6963726f736f66742e636f6d2f747970657363726970742f77702d636f6e74656e742f75706c6f6164732f73697465732f31312f323031392f30332f7265666163746f72546f4e616d6564506172616d734f626a6563742d332e342e676966) While we may change the name of the feature by our final 3.4 release and we believe there may be room for some of the ergonomics, we would love for you to try the feature out and give us your feedback. Breaking changes ---------------- ### Top-level `this` is now typed The type of top-level `this` is now typed as `typeof globalThis`instead of `any`. As a consequence, you may receive errors for accessing unknown values on `this` under `noImplicitAny`. ``` // previously okay in noImplicitAny, now an error this.whargarbl = 10; ``` Note that code compiled under `noImplicitThis` will not experience any changes here. ### Propagated generic type arguments In certain cases, TypeScript 3.4’s improved inference might produce functions that are generic, rather than ones that take and return their constraints (usually `{}`). ``` declare function compose(f: (arg: T) => U, g: (arg: U) => V): (arg: T) => V; function list(x: T) { return [x]; } function box(value: T) { return { value }; } let f = compose(list, box); let x = f(100) // In TypeScript 3.4, 'x.value' has the type // // number[] // // but it previously had the type // // {}[] // // So it's now an error to push in a string. x.value.push("hello"); ``` An explicit type annotation on `x` can get rid of the error. What’s next? ------------ TypeScript 3.4 is our first release that has had an [iteration plan](https://github.com/Microsoft/TypeScript/issues/30281)outlining our plans for this release, which is meant to align with [our 6-month roadmap](https://github.com/Microsoft/TypeScript/issues/29288). You can keep an eye on both of those, and on our rolling [feature roadmap page](https://github.com/Microsoft/TypeScript/wiki/Roadmap) for any upcoming work. Right now we’re looking forward to hearing about your experience with the RC, so give it a shot now and let us know your thoughts!
https://habr.com/ru/post/443996/
null
null
2,600
55.54
: buildLabel = ENV["GO_PIPELINE_LABEL"] || "LOCAL" There isn’t a function in Scala that does that so we initially ended up with this: def buildLabel() = [{ System.getenv("GO_PIPELINE_LABEL") match { case null => "LOCAL" case label => label } } My colleague Mushtaq suggested passing the initial value into an Option like so… def buildLabel() = { Option(System.getenv("GO_PIPELINE_LABEL")).getOrElse("LOCAL") } …which I think is pretty neat! I tried to see what the definition of an operator to do it the Ruby way would look like and ended up with the following: class RichAny[A](value:A ) { def || (default:A ) = { Option(value).getOrElse(default) } } implicit def any2RichAny[A <: AnyRef](x: A) = new RichAny(x) Which we can use like so: def buildLabel() = { System.getenv("GO_PIPELINE_LABEL") || "LABEL" } I imagine that’s probably not the idiomatic Scala way to do it so I’d be curious to know what is. From (Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.) Erik Post replied on Mon, 2011/06/13 - 6:05am Erwin Mueller replied on Mon, 2011/06/13 - 2:30pm I choose the Java way: That way I can still understand it 2 weeks later. I'm all for new languages and stuff, but do Scala developers have nothing better to do as to try to invent the most cryptic code possible? Who can get more cryptic code? Cloves Almeida replied on Mon, 2011/06/13 - 7:24pm in response to: Erwin Mueller Scala have an unnatural tendency to use symbols where words would be a better match. Why not the SQLish: Erik Post replied on Thu, 2012/05/24 - 5:51pm in response to: Erwin Mueller
http://java.dzone.com/articles/scala-setting-default-value
CC-MAIN-2014-35
refinedweb
281
67.69
Search Criteria Package Details: mosesdecoder 3.0.0-2 Dependencies (11) - python2 - xmlrpc-c (xmlrpc-c-svn) - boost-libs (boost171) (make) - cmph (make) - gcc (gcc-multilib-x32, gcc6-gcccompat, fastgcc, gcc-git, gcc-multilib-git) (make) - git (git-git) (make) - libtool (libtool-git) (make) - zlib (zlib-static, zlib-git, zlib-asm, minizip-asm, zlib-ng-git) (make) - boost>=1.48 (make) - giza-pp-git (optional) – for training models - mgiza (optional) – multithreaded giza for training models Latest Comments panosk commented on 2018-04-16 08:15 The package will be updated soon for release 4. A change is needed in the cmph package though, so once I have sorted this out with the cmph maintainer I will update -- or I will remove the dependency. panosk commented on 2017-01-29 16:33 Updated the package to use a newly created fork of the stable release branch that includes fixes to allow compiling with c++14. Also the xmlrpc-c package has been updated and now it links fine, so everything should work as expected now. thephoenix97 commented on 2017-01-27 22:51 @panosk: Yes. Downgrading the xmlrpc-c to penultimate version (1.43.03-1) and then compiling with c++98 fixed everything. The fact that it was a penultimate might be helpful in pinning down the problem. Good for now, although I am hoping that people upstream will fix this issue soon since its a stable release. Thank you for support, dear sir. You have a good day. :) panosk commented on 2017-01-27 21:29 I've already notified moses' developers to backport a few fixes from master to stable release 3 so it can compile with c++11 (and c++14 for that matter). It shouldn't be a trouble, the relative fixes apply only to 2 files, phrase-extract/extract-mixed-syntax/Main.cpp and biconcor/Vocabulary.cpp -- too bad I spent hours to fix them and found afterwards these fixes were applied to master already... Lets hope they will apply the fixes, otherwise there's no point maintaining the stable release anymore. As for the linking error, I have already contacted Alexander Rødseth, the maintainer of the xmlrpc package. Version 1.41.01 links fine, later versions have changed ABI, hence the problem. I also can't compile current version of xmlrpc using the PKGBUILD, libxml2 seems broken. Maybe relevant: So, until everything settles down, the solution is to make the change for c++98 in makepkg.conf, downgrade xmlrpc, or completely uninstall the package (if bjam doesn't find xmlrpc, it won't try to compile mosesserver -- in that case, edit the PKGBUILD when asked and remove the "--with-xmlrpc-c" line). thephoenix97 commented on 2017-01-27 20:54 The compilations proceeds normally when -std=c++98 is given in /etc/makepkg.conf. However, linking still fails. The problem in this case appears to be with xmlrpc as linker tries to link to std::__cxx11::... namespaces from xmlrpc. There are two such fails reported, both related to xmlrpc. It would seem that updated xmplrpc-c from the community db might now be using c++11. Wasn't the case long before, as I remember building everything cleanly a month ago. panosk commented on 2017-01-27 14:40 @thephoenix97: Thanks, I think I spotted the problem, ostream classes cannot implicitly convert to void* after C++11, so I'll try to fix the code locally and then notify the devs upstream. thephoenix97 commented on 2017-01-27 12:01 with gcc 6.3, std=gnu++14 is default. Therefore compiling fails. Please account for the fact in the packaging, that gcc has now been updated to 6.3 on arch linux.
https://aur.archlinux.org/packages/mosesdecoder/
CC-MAIN-2020-24
refinedweb
618
56.55
Docs | Forums | Lists | Bugs | Planet | Store | GMN | Get Gentoo! Not eligible to see or edit group visibility for this bug. View Bug Activity | Format For Printing | XML | Clone This Bug Thanks Created an attachment (id=98510) [edit] ebuild Created an attachment (id=98513) [edit] Patch for check.py Created an attachment (id=98514) [edit] Patch for Makefile I created an 'work-around' ebuild. It's my first own ebuild, based on version 0.4.3. Please review it! dev-python/elementtree should be added to rdepend or you'd get an error like this onip @ Hal9000 ~ $ listen Traceback (most recent call last): File "/usr/lib/listen/listen.py", line 209, in ? ListenApp() File "/usr/lib/listen/listen.py", line 136, in __init__ from widget.listen import Listen File "/usr/lib/listen/widget/listen.py", line 35, in ? from widget.player_ui import PlayerUI File "/usr/lib/listen/widget/player_ui.py", line 32, in ? from widget.dynamic_playlist import DynamicPlaylist File "/usr/lib/listen/widget/dynamic_playlist.py", line 29, in ? from lastfm import lastfm_info File "/usr/lib/listen/lastfm.py", line 23, in ? from elementtree.ElementTree import fromstring as XMLFromString ImportError: No module named elementtree.ElementTree Created an attachment (id=98610) [edit] updated ebuild ebuild with elementtree added Another patch is needed (but I don't know hot to do it). As reported by syntaxerrormmm ( ) listen.py script isn't correct and needs this to start sed -i 's/LD_LIBRARY_PATH=\([\/a-z-]*\) \([\/a-z]*\)/LD_LIBRARY_PATH="\1 \2"/' /usr/bin/listen Thanks (In reply to comment #7) > Another patch is needed (but I don't know hot to do it). Well, it can be done in pkg_install with: sed -i 's/LD_LIBRARY_PATH=\([\/a-z-]*\) \([\/a-z]*\)/LD_LIBRARY_PATH="\1 \2"/' ${D}/usr/bin/listen But I will check first if the script file is static or dinamically generated by the ebuild :) Created an attachment (id=98752) [edit] Makefile1.patch Substitute this Makefile patch with the other one previously posted. This will fix also the launching script issue. HTH, please feedback. Hi to all! I enounter this error: [..] Checking for DBUS: found Checking for PyGSt >= 0.10: found Checking for python-libgpod: Traceback (most recent call last): File "./check.py", line 100, in ? import gpod File "/usr/lib/python2.4/site-packages/dbus/__init__.py", line 1, in ? from _dbus import * File "/usr/lib/python2.4/site-packages/gpod.py", line 46, in ? sw_get_playlists = _gpod.sw_get_playlists AttributeError: 'module' object has no attribute 'sw_get_track' make: *** [check] Error 1 !!! ERROR: media-sound/listen-0.5 failed. Call stack: ebuild.sh, line 1546: Called dyn_compile ebuild.sh, line 937: Called src_compile listen-0.5.ebuild, line 82: Called die Thanks for all i changed a few lines in the ebuild and i'm testing it. nice job with the patchs :) Created an attachment (id=99471) [edit] listen ebuild with optional libsexy support I've added support for sexy-python, via a USE flag. Ebuilds can be found in bugzilla or in sunrise overlay. I found those to work correctly now in cvs enjoy :) so it need test. i have problems with musicbrainz. you need to unmask some package and it doenst works to well.
http://bugs.gentoo.org/148931
crawl-002
refinedweb
525
53.37
Given a list I need to return a list of lists of unique items. I'm looking to see if there is a more Pythonic way than what I came up with: def unique_lists(l): m = {} for x in l: m[x] = (m[x] if m.get(x) != None else []) + [x] return [x for x in m.values()] print(unique_lists([1,2,2,3,4,5,5,5,6,7,8,8,9])) [[1], [2, 2], [3], [4], [5, 5, 5], [6], [7], [8, 8], [9]] >>> L=[1,2,2,3,4,5,5,5,6,7,8,8,9] >>> from collections import Counter >>> [[k]*v for k,v in Counter(L).items()] [[1], [2, 2], [3], [4], [5, 5, 5], [6], [7], [8, 8], [9]]
https://codedump.io/share/rEGmdc9G8gJb/1/unique-lists-from-a-list
CC-MAIN-2017-47
refinedweb
126
87.45
If you like this tool, support it by voting, if you don't like it, make your vote verbose... The XsdTidy tool has been entirely rebuild from scratch using CodeDom which is much easier to handle than Emit. The new version is called Refly and available at the Refly article. XsdTidy is a refactoring tool to overcome some silly limitations of the exceptional Xsd.exe (see [1]) tool provided with the .NET framework. More specifically, XsdTidy addresses the following problems: Addor Remove. XsdTidy uses ArrayListfor more flexibility. XsdTidy achieves refactoring by recreating new classes for each type exported by the Xsd.exe tool using the System.Reflection.Emit namespace. It also takes care of "transferring" the Xml.Serialization attributes to the factored classes. Hence, the factored classes are more .NET-ish and still outputs the same XML. Moreover, there is no dependency between the refactored code and the original code. As a nice application of the tool, a full .NET wrapper of the DocBook schema (see [3]) is provided with the project. This .NET wrapper lets write or generate DocBook XML easily with the help of Intellisense.. There is much room for improvement on the list of words and the algorithm to split the name, any contribution welcome. Arrays are replaced by System.Collection.ArrayList which are much more flexible. Moreover, array fields are created by default using their default constructor. This is to economize you the hassle of creating a collection before using it. Fields are hidden in properties, which is more convenient to use. Moreover, collection fields do not have set property according to FxCop rule. public class testclass { [XmlElement("values",typeof(int)] public int[] values; } becomes: public class TestClass { private ArrayList values; [XmlElement("values",typeof(int)] public ArrayList Values { get { return this.values; } } } The System.Reflection.Emit namespace is truly and amazingly powerful, it enables you to create new types at runtime and execute them or store them to assemblies for further use. Unfortunately, there are not much tutorials and examples on this advanced topic. In this chapter, I will try to explain my limited understanding of this tool. The Emit namespace gives you the tools to write IL (Interpreted Language) instructions and compile them to types. Hence, you can basically do anything with Emit. A typical emit code will look like this: // emit ILGenerator il = ...; il.Emit(OpCodes.Ldarg_0); il.Emit(OpCodes.Ldarg_1); il.Emit(OpCodes.Stfld, fb); il.Emit(OpCodes.Ret); // C# equivalent this.fb = value; If you are a newcomer, it can look cryptic, but we'll try to explain the above a bit. The problem with Emit is that debugging is complicated: if you generate wrong IL code, the framework will not execute it, throwing an error without giving any clue.. Moreover, you usually don't have the time to learn the dozens of the code that are part of the OpCodes class. Therefore, it would be nice to always have some "model" IL and then try to implement it with Emit. Hopefully, creating this model is easy! It is possible, using decompilers such as Reflector (see [4]), to read the IL code of any .NET assembly. The idea is simple: open a dummy project where you create the model class that needs to be factored, compile and use a decompiler to read the IL of your model and there you go...you have IL code! I will cover some very basic facts about using Emit. As mentioned above, the most efficient way to learn is to work with a dummy project and Reflector on the side. We will see here how to make a basic C# statement in some instance method where value is the first argument and field is a member of the class. if (value==null) throw new ArgumentNullException("value"); this.field=value; Usually, you start by creating an AssemblyBuilder, then a ModuleBuilder, then a TypeBuilder and finally you can add methods to the TypeBuilder using TypeBuilder.DefineMethod which returns a MethodBuilder. This instance is then used to retrieve an ILGenerator object which we use to output IL code: MethodBuilder mb = ...; ILGenerator il = mb.GetGenerator(); The OpCodes class contains all the IL operations. It has to be used in conjunction with ILGenerator.Emit as we will see in the following. Each time you call a method (static or non-static), the method arguments are accessible through OpCodes.Ldarg_0, OpCodes_1, ... In an instance method, OpCodes.Ldarg_0 is the " this" address. Labels are used to make jumps in the IL code. You need to set up Labels if you want to build instructions such as if... else.... A Label is defined as follows: Label isTrue = il.DefineLabel(); Once the Label is defined, it can be used in an instruction that makes a jump. When you reach the instruction that the Label should mark, call MarkLabel: il.MarkLabel(isTrue); Comparing a value to null is done using the OpCodes.Brtrue_S. This instruction makes a jump to a Label if the value is not null. Label isTrue = il.DefineLabel(); il.Emit(OpCodes.Ldarg_1); // pushing value on the stack il.Emit(OpCodes.Brtrue_S,isTrue); // if non null, jump to label // IL code to throw an exception here ... // marking label il.MarkLabel(isTrue); ... To create object, you must first retrieve the ContructorInfo of the type, push the constructor arguments on the stack and call the constructor using OpCodes.NewObj. If we use the default constructor of ArgumentNullException, we have: ConstructorInfo ci = typeof(ArgumentNullException).GetConstructor(Type.EmptyTypes); Label isTrue = il.DefineLabel(); il.Emit(OpCodes.Ldarg_1); il.Emit(OpCodes.Brtrue_S,isTrue); il.Emit(OpCodes.NewObj,ci); // creating new exception il.Emit(OpCodes.Throw); // throwing the exception il.MarkLabel(isTrue); ... You can clearly see the "jump across the exception" with the label isTrue. The last step is to assign the field with the value (stored in the first argument). To do so, we need to push the " this" address on the stack ( OpCodes.Ldarg_0), push the first argument ( OpCodes.Ldarg_1) and use OpCodes.Stdfld: // Type t is the class type FieldInfo fi = t.GetField("field"); il.Emit(OpCodes.Ldarg_0); il.Emit(OpCodes.Ldarg_1); il.Emit(OpCodes.Stdfld,fi); To close a method, use OpCodes.Ret: il.Emit(OpCodes.Ret); The refactoring is handled by the XsdWrappedGenerator class. The main factoring steps are: AssemblyBuilderand define a new ModuleBuilder. TypeBuilderin the ModuleBuilder. During the process of factoring, special care is taken about nullable/non nullable types and collection handling: Once the factoring is finished, the types are created and saved to an assembly. The XsdWrappedGenerator encapsulates all the "wrapping" functionalities: create a new instance, add the types you need to be refactored and save the result to a file: XsdWrapperGenerator gen = new XsdWrapperGenerator( "CodeWrapper", // output namespace and assembly name new Version(1.0), // outputed assembly version ); // adding types gen.AddClass( typeof(myclass) ); ... // refactor gen.WrapClasses(); // save to file, this invalidates gen. gen.Save(); The name passed to the constructor is used as default namespace and output assembly name. XsdWrapperGenerator comes with a minimal console application that loads an assembly, searches to types, refactors them and output the results. Calling convention is as follows: XsdTidy.Cons.exe AssemblyName WrappedClassNamespace OutputNamespace Version where AssemblyNameis the name of the assembly to scan (without .dll) WrappedClassNamespaceis the namespace from which the types are extracted OutputNamespaceis the factored namespace Versionis the version number: major.minor.build.revision DocBook is an XML standard to describe a document. It is a very powerful tool since the same XML source can be rendered in almost all possible output formats: HTML, CHM, PDF, etc... This richness comes to a price: DocBook is complicated for the beginner and it tends to be XML-ish. This was the starting of the article for me: I needed to generate DocBook XML to automatically generate code in GUnit (see [5]) but I wanted to take advantage of VS intellisense. The first step was to generate the .NET classes mapping the DocBook schema using the Xsd.exe tool. The generated code had some problems that would make it unusable: non-nullable fields where not initialized automatically and this would lead to a lot of manual work. Hence, the second was to write XsdTidy and apply it to DocBook. So here's an example of use: // creating a book object Book b = new Book(); // title is nullable, so we must allocate it b.Title = new Title(); // text is a collection, preallocated b.Title.Text.Add("My first book"); // nullable b.Subtitle = new Subtitle(); b.Subtitle.Text.Add("A subtitle"); Toc toc = new Toc(); b.Items.Add(toc); toc.Title = new Title(); toc.Title.Text.Add("What a Toc!"); Part part = new Part(); b.Items.Add(part); part.Title = new Title(); part.Title.Text.Add("My first page"); // generate xml using XmlSerialization tools using (StreamWriter sw = new StreamWriter("mybook.xml")) { XmlTextWriter writer = new XmlTextWriter(sw); writer.Formatting = Formatting.Indented; XmlSerializer ser = new XmlSerializer(typeof(Book)); ser.Serialize(writer,b); } The output of this snippet looks like this: <?xml version="1.0" encoding="utf-8"?> <book xmlns: <title>My first book</title> <subtitle>A subtitle</subtitle> <toc> <title>What a Toc!</title> </toc> <part> <title>My first page</title> </part> </book> Now, with Intellisense on our side, I am much more comfortable with DocBook... System.Reflection.Emit is a powerful tool that deservers more attention than it currently has. It can be used to generate optimized parsers (like Regex is doing), runtime typed DataSets, etc... General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/cs/xsdtidy.aspx
crawl-002
refinedweb
1,574
59.3
Class in C# can be declared using sealed keyword, using sealed keyword enables you to prevent the inheritance of a class or certain class members that were previously marked virtual. The sealed modifier is used to prevent derivation from a class. An error occurs if a sealed class is specified as the base class of another class. Sealed class and class members in C# Classes can be declared as sealed by putting the keyword sealed before the class definition. For example: public sealed class SealedClassDemo { // Class members here. } C# sealed class cannot be derived by any class, here are the few points to remember for sealed classes : - A sealed class cannot be used as a base class. For this reason, it cannot also be an abstract class. - Sealed classes prevent derivation. - Because they can never be used as a base class, some run-time optimizations can make calling sealed class members slightly faster. - Sealed class is the last class in the hierarchy. Let's see an example when we try to inherit sealed class in C#, suppose here the Code using System; namespace CSharpSealedClass { public sealed class Animal { public void eat() { Console.WriteLine("eating..."); } } public class Dog : Animal { public void bark() { Console.WriteLine("barking..."); } } public class Program { public static void Main(string[] args) { Dog d = new Dog(); d.eat(); d.bark(); } } } You will get the error as shown in below in the image Now, let's take an working example for sealed class in C# using System; namespace CSharpSealedClass { public sealed class SealedClass { public int x; public int y; } class CSharpSealedClass { static void Main() { SealedClass sc = new SealedClass(); sc.x = 110; sc.y = 150; Console.WriteLine("x = {0}, y = {1}", sc.x, sc.y); Console.ReadLine(); } } } When executing above code, you will get output as below Sealed methods example: public class D : C { public sealed override void DoWork() { } } Here is the more detailed example, in the following example, Z inherits from Y but Z cannot override the virtual function F that is declared in X and sealed in Y. using System; namespace CSharpSealedClass { class X { public virtual void F() { Console.WriteLine("X.F"); } public virtual void F2() { Console.WriteLine("X.F2"); } } class Y : X { sealed public override void F() { Console.WriteLine("Y.F"); } public override void F2() { Console.WriteLine("Y.F2"); } } class Z : Y { // Attempting to override F causes compiler error CS0239. // protected override void F() { Console.WriteLine("Z.F"); } // Overriding F2 is allowed. public override void F2() { Console.WriteLine("Z.F2"); } } class CSharpSealedClass { static void Main() { Z sm = new Z(); sm.F(); sm.F2(); Console.ReadLine(); } } } Output: Y.F. That's it, hope this article cleared your concepts related to Sealed class & methods in C#.
https://qawithexperts.com/article/c-sharp/sealed-class-in-c-explanation-with-example/172
CC-MAIN-2019-39
refinedweb
446
65.73
Java Multiple Choice Questions Earlier, we have discussed Java Multiple Choice Questions . Today, we have come with Java multiple choice questions and answers. These Java multiple choice questions will help you to test yourself in Java Programming Language. Answers all these java multiple choice questions and follow the link to get a better understanding. This section focuses on the “History” of Java programming. These Java Multiple Choice Questions (MCQ) should be practiced to improve the Java programming skills required for various interviews (campus interviews, walk-in interviews, company interviews), placements and other competitive examinations. So, let’s start Java Multiple Choice Questions. Java Multiple Choice Questions for Beginners 1) If a thread goes to sleep (a) It releases all the locks it has. (b) It does not release any locks. (c) It releases half of its locks. (d)). 2) What is the default encoding for an OutputStreamWriter? (a) UTF-8 (b) Default encoding of the host platform (c) UTF-12 (d)(); } } (a) Instance block, method, static block, and constructor (b) Method, constructor, instance block, and static block (c) Static block, method, instance block, and constructor (d)). Java Multiple Choice Questions for Experience 4) _____ is used to find and fix bugs in the Java programs. (a) JVM (b) JRE (c) JDK (d)). 5) Which of the following is a valid declaration of a char? (a) char ch = ‘\utea’; (b) char ca = ‘tea’; (c) char cr = \u0223; (d) char cc = ‘\itea’; Answer: (a) char ch = ‘). 6) What is the return type of the hashCode() method in the Object class? (a) Object (b) int (c) long (d) void Answer: (b) int Explanation: In Java, the return type of hashCode() method is an integer, as it returns a hash code value for the object. Hence, the correct answer is the option (b). 7) What does the expression float a = 35 / 0 return? (a) 0 (b) Not a Number (c) Infinity (d)). 8) Evaluate the following Java expression, if x=3, y=5, and z=10: ++z + y – y + z + x++ (a) 24 (b) 23 (c) 20 (d)). 9) Which of the following is a valid long literal? (a) ABH8097 (b) L990023 (c) 904423 (d) 0xnf029L Answer: (d) 0xnf029L. For example, java multiple choice questions and answers pdf Lowercase l: 0x466rffl Uppercase L: 0nhf450L Hence, the correct answer is an option (d). 10); } } (a) 10, 5, 0, 20, 0 (b) 10, 30, 20 (c) 60, 5, 0, 20 (d) 60, 30, 0, 20, 0 Answer: (d) 60, 30, 0, 20, 0 Explanation: In the above code, there are two values of variable a, i.e., 10 and 60. Similarly, there are two values of variable b, i.e., 5 and 30.. The value of a = 10 and b = 5 are of no use. And the value of variables c and m is 0 as we have not assigned any value to them. Hence, the correct answer is an option (d). Best 50 Java Multiple Choice Questions for Beginners 11) The \u0021 article referred to as a (a) Unicode escape sequence (b) Octal escape (c) Hexadecimal (d) Line feed Answer: (a) Unicode escape sequence Explanation: In Java, Unicode characters can be used in string literals, comments, and commands, and are expressed by Unicode Escape Sequences. A Unicode escape sequence is made up of the following articles: A backslash ‘\’ ). 12) What will be the output of the following program? public class Test { public static void main(String[] args) { int count = 1; while (count <= 15) { System.out.println(count % 2 == 1 ? “***” : “+++++”); ++count; } // end while } // end main } (a) 15 times *** (b) 15 times +++++ (c) 8 times *** and 7 times +++++ (d) Both will print only once Answer: (c) 8 times *** and 7 times +++++ Explanation: In the above code, we have declared count = 1. The value of count will be increased till 14 because of the while (count<=15) statement. If the remainder is equal to 1 on dividing the count by 2, it will print (***) else print (+++++). Therefore, for all odd numbers till 15 (1, 3, 5, 7, 9, 11, 13, 15), it will print (***), and for all even numbers till 14 (2, 4, 6, 8, 10, 12, 14) it will print (+++++). Hence, an asterisk (***) will be printed eight times, and plus (+++++) will be printed seven times. 13) Which of the following tool is used to generate API documentation in HTML format from doc comments in source code? (a) javap tool (b) javaw command (c) Javadoc tool (d) javah command Answer: (c) Javadoc tool Explanation: The Javadoc is a tool that is used to generate API documentation in HTML format from the Java source files.. Hence, the correct answer is option (c). 14) Which method of the Class.class is used to determine the name of a class represented by the class object as a String? (a) getClass() (b) intern() (c) getName() (d)). 15) In which process, a local variable has the same name as one of the instance variables? (a) Serialization (b) Variable Shadowing (c) Abstraction (d)). 16) Which of the following is true about the anonymous inner class? (a) It has only methods (b) Objects can’t be created (c) It has a fixed class name (d) It has no class name Answer: (d) It has no class name. Hence, the correct answer is option(d). java Multiple Choice questions and answers pdf 17) What do you mean by nameless objects? (a) An object created by using the new keyword. (b) An object of a superclass created in the subclass. (c) An object without having any name but having a reference. (d)). Best 50 Java Multiple Choice Questions 18) An interface with no fields or methods is known as a ______. (a) Runnable Interface (b) Marker Interface (c) Abstract Interface (d)). 19) Which of these classes are the direct subclasses of the Throwable class? (a) RuntimeException and Error class (b) Exception and VirtualMachineError class (c) Error and Exception class (d) IOException and VirtualMachineError class Answer: (c) Error and Exception class Explanation: According to the class hierarchy of Throwable class, the Error and Exception classes are the direct subclasses of the Throwable class, as shown below. Java Multiple Choice Questions The RuntimeException, IOException, and VirtualMachineError classes are the subclasses of the Exception and Error classes. Hence, the correct answer is option (c). 20) What do you mean by chained exceptions in Java? (a) Exceptions occurred by the VirtualMachineError (b) An exception caused by other exceptions (c) Exceptions occur in chains with discarding the debugging information (d)). 21) In which memory a String is stored, when we create a string using new operator? (a) Stack (b) String memory (c) Heap memory (d) Random storage space Answer: (c) Heap memory. Hence, the correct answer is option (c). java multiple choice questions quiz source code 22) What is the use of the intern() method? (a) It returns the existing string from memory (b) It creates a new string in the database (c) It modifies the existing string in the database (d) None of the above Answer: (a) It returns the existing string from the memory. Hence, the correct answer is option (a). 23) Which of the following is a marker interface? (a) Runnable interface (b) Remote interface (c) Readable interface (d)). 24) Which of the following is a reserved keyword in Java? (a) object (b) strictfp (c) main (d)). Top 50 Java Multiple Choice Questions for Beginners 25) Which keyword is used for accessing the features of a package? (a) package (b) import (c) extends (d) export Answer: (b), Java Multiple Choice Questions java programming questions and answers). 26) In java, jar stands for_____. (a) Java Archive Runner (b) Java Application Resource (c) Java Application Runner (d)). 27) What will be the output of the following program? public class Test2 { public static void main(String[] args) { StringBuffer s1 = new StringBuffer(“Complete”); s1.setCharAt(1,’i’); s1.setCharAt(7,’d’); System.out.println(s1); } } (a) Complete (b) Iomplede (c) Cimpletd Coipletd Answer: (c) Cimpletd.” Hence, the correct answer is option (c). 28) Which of the following is false? (a) The rt.jar stands for the runtime jar (b) It is an optional jar file (c) It contains all the compiled class files (d)). 29) Which package contains the Random class? (a) java.util package (b) java.lang package (c) java.awt package (d) java.io package Answer: (a) java.util package Explanation: The Random class is available in the java.util package.. Hence, the correct answer is option (a). 30) What is the use of \w in regex? (a) Used for a whitespace character (b) Used for a non-whitespace character (c) Used for a word character (d)). 31) Which of the given methods are of Object class? (a) notify(), wait( long msecs ), and synchronized() (b) wait( long msecs ), interrupt(), and notifyAll() (c) notify(), notifyAll(), and wait() (d)). 32) Given that Student is a class, how many reference variables and objects are created by the following code? Student studentName, studentId; studentName = new Student(); Student stud_class = new Student(); (a) Three reference variables and two objects are created. (b) Two reference variables and two objects are created. (c) One reference variable and two objects are created. (d)). advanced java multiple choice questions and answers 33) Which of the following is a valid syntax to synchronize the HashMap? (a) Map m = hashMap.synchronizeMap(); (b) HashMap map =hashMap.synchronizeMap(); (c) Map m1 = Collections.synchronizedMap(hashMap); (d)). 34) Which of the following is a mutable class in java? (a) java.lang.String (b) java.lang.Byte (c) java.lang.Short (d)). 35) Which of the following is an immediate subclass of the Panel class? (a) Applet class (b) Window class (c) Frame class (d)). 36) Which option is false about the final keyword? (a) A final method cannot be overridden in its subclasses. (b) A final class cannot be extended. (c) A final class cannot extend other classes. (d)). 37) What will be the output of the following program? abstract class MyFirstClass { abstract num (int a, int b) { } } (a) No error (b) Method is not defined properly (c) Constructor is not defined properly (d)). 38) What is meant by the classes and objects that dependents on each other? (a) Tight Coupling (b) Cohesion (c) Loose Coupling (d)). 39) Given, int values[ ] = {1,2,3,4,5,6,7,8,9,10}; for(int i=0;i< Y; ++i) System.out.println(values[i]); Find the value of value[i]? (a) 10 (b) 11 (c) 15 (d)). 40) Given, ArrayList list = new ArrayList(); What is the initial quantity of the ArrayList list? (a) 5 (b) 10 (c) 0 (d)). 41) Which of the following code segment would execute the stored procedure “getPassword()” located in a database server? (a) CallableStatement cs = connection.prepareCall(“{call.getPassword()}”); cs.executeQuery(); (b) CallabledStatement callable = conn.prepareCall(“{call getPassword()}”); callable.executeUpdate(); (c) CallableStatement cab = con.prepareCall(“{call getPassword()}”); cab.executeQuery(); (d)). 42) How many threads can be executed at a time? (a) Only one thread (b) Multiple threads (c) Only main (main() method) thread (d)). 43) If three threads trying to share a single object at the same time, which condition will arise in this scenario? (a) Time-Lapse (b) Critical situation (c) Race condition (d)). 44) Which of the following creates a List of 3 visible items and multiple selections abled? (a) new List(false, 3) (b) new List(3, true) (c) new List(true, 3) (d) new List(3, false) Answer: (b) new List(3, true). Hence, the correct answer is option (b). 45) Which of the following for loop declaration is not valid? (a) for ( int i = 99; i >= 0; i / 9 ) (b) for ( int i = 7; i <= 77; i += 7 ) (c) for ( int i = 20; i >= 2; – -i ) (d) for ( int i = 2; i <= 20;). 46) Which of the following modifiers can be used for a variable so that it can be accessed by any thread or a part of a program? (a) global (b) transient (c) volatile (d)). 47) What is the result of the following program? public static synchronized void main(String[] args) throws InterruptedException { Thread f = new Thread(); f.start(); System.out.print(“A”); f.wait(1000); System.out.print(“B”); } (a) It prints A and B with a 1000 seconds delay between them (b) It only prints A and exits (c) It only prints B and exits (d)). 48) Which of the following option leads to the portability and security of Java? (a) Bytecode is executed by JVM (b) The applet makes the Java code secure and portable (c) Use of exception handling (d) Dynamic binding between objects). The Java programs executed by the JVM that makes the code portable and secure. Because JVM prevents the code from generating its side effects. The Java code is portable, as the same byte code can run on any platform. Hence, the correct answer is option (a). 49) Which of the following is not a Java features? (a) Dynamic (b) Architecture Neutral (c) Use of pointers (d)). java multiple choice questions quiz source code 50) In character stream I/O, a single read/write operation performs _____. (a) Two bytes read/write at a time. (b) Eight bytes read/write at a time. (c) One byte read/write at a time. (d)). java multiple choice questions and answers pdf<property> and get<property>.(){ ……………. } } 1 thought on “Best 50 Java Multiple Choice Questions in 2021”
https://expskill.com/best-50-java-multiple-choice-questions-in-2021/
CC-MAIN-2021-21
refinedweb
2,225
66.13
I have implemented two single thread process A & B with two msg queue [separate queue for send and receive]. Process A will send a message to B and wait for reply in the receive queue. I want to send a time-stamp from process A to process B. If process B receives the message after 10 second, i want to send a Error string from process B to A. Accuracy should be in milliseconds. In process A i used , struct timespec msg_dispatch_time; clock_gettime(CLOCK_REALTIME, &msg_dispatch_time); : : add_timestamp_in_msg(msg, msg_dispatch_time); : if (msgsnd(msqid, msg, sizeof(msg), msgflg) == -1) perror("msgop: msgsnd failed"); struct timespec msg_dispatch_time; struct timespec msg_receive_time; : clock_gettime(CLOCK_REALTIME, &msg_received_time); : if( !(time_diff(msg_received_time, msg_dispatch_time) >= 10 )) msgsnd(msqid, &sbuf, buf_length, msg_flag) else { /*send the error string.*/ //msgsnd(msgid,) } if( !(time_diff(msg_received_time, msg_dispatch_time) >= 10 )) /********Existing time diff code******************/ long int time_diff (struct timeval time1, struct timeval time2) { struct timeval diff, if (time1.tv_usec < time2.tv_usec) { time1.tv_usec += 1000000; time1.tv_sec--; } diff.tv_usec = time1.tv_usec - time2.tv_usec; diff.tv_sec = time1.tv_sec - time2.tv_sec; return diff.tv_sec; //return the diff in second } If you wish to keep using the struct timespec type, then I recommend using a difftime() equivalent for struct timespec type, i.e. double difftimespec(const struct timespec after, const struct timespec before) { return (double)(after.tv_sec - before.tv_sec) + (double)(after.tv_nsec - before.tv_nsec) / 1000000000.0; } However, I think there exists a better option for your overall use case. If you are satisfied for your program to work till year 2242, you could use a 64-bit signed integer to hold the number of nanoseconds since Epoch. For binary messages, it is a much easier format to handle than struct timespec. Essentially: #define _POSIX_C_SOURCE 200809L #include <stdint.h> #include <time.h> typedef int64_t nstime; #define NSTIME_MIN INT64_MIN #define NSTIME_MAX INT64_MAX nstime nstime_realtime(void) { struct timespec ts; if (clock_gettime(CLOCK_REALTIME, &ts)) return NSTIME_MIN; return ((nstime)ts.tv_sec * 1000000000) + (nstime)ts.tv_nsec; } double nstime_secs(const nstime ns) { return (double)ns / 1000000000.0; } struct timespec nstime_timespec(const nstime ns) { struct timespec ts; if (ns < 0) { ts.tv_sec = (time_t)(ns / -1000000000); ts.tv_nsec = -(long)((-ns) % 1000000000); if (ts.tv_nsec < 0L) { ts.tv_sec--; ts.tv_nsec += 1000000000L; } } else { ts.tv_sec = (time_t)(ns / 1000000000); ts.tv_nsec = (long)(ns % 1000000000); } } You can add and substract nstime timestamps any way you wish, and they are suitable for binary storage, too (byte order (aka endianness) issues notwithstanding). (Note that the code above is untested, and I consider it public domain/CC0.) Using clock_gettime() is fine. Both CLOCK_REALTIME and CLOCK_MONOTONIC are system-wide, i.e. should report the exact same results in different processes, if executed at the same physical moment. CLOCK_REALTIME is available in all POSIXy systems, but CLOCK_MONOTONIC is optional. Both are immune to daylight savings time changes. Incremental NTP adjustments affect both. Manual changes to system time by an administrator only affect CLOCK_REALTIME. The epoch for CLOCK_REALTIME is currently Jan 1, 1970, 00:00:00, but it is unspecified for CLOCK_MONOTONIC. Personally, I recommend using clock_gettime(CLOCK_REALTIME,), because then your application can talk across processes in a cluster, not just on a local machine; cluster nodes may use different epochs for CLOCK_MONOTONIC.
https://codedump.io/share/quvHL634oV1h/1/which-clock-should-be-used-for-inter-process-communication-in-linux
CC-MAIN-2017-04
refinedweb
521
58.89
In which we disassemble the help system, rethink how we present help to the user, and leave our practices laying in ruins. In which we rise from the ashes of a long-dead but still-breathing behemoth. In which we lay the foundations of tomorrow and dream of the future. You're an engineer. You have an important project in front of you that requires you to take the derivative of an exponential, but you've forgotten how. So you find a mathematician and ask him. The mathematician tells you to enroll in his semester-long calculus class, and that somewhere in the middle, you'll learn how to take the derivative of an exponential. Do you enroll in the class? Of course not. You fire up a web browser and open MathWorld or Wikipedia and move on with your life. Maybe you even buy Eric Weisstein a beer. Or maybe that's just me. Most online help for software is like our mathematician: arrogant and condescending, long-winded where it's not needed, short-winded where it is needed, and ultimately useless. If a user is looking at the help, it means he's stumped by the interface. Instead of helping him, we stump him with the help navigation. Users quickly learn that hitting F1 just isn't worth the effort. I was recently setting up a mail account in Evolution. The Evolution Account Editor dialog is six tabs thick, and has no Help button. There's all sorts of stuff in there that I don't understand. Put more bluntly, I'm a geek, and I need help with this dialog. Unfortunately, nobody has made a Wikipedia entry for "Evolution 2.4 Account Editor dialog, Receiving Options tab", and somehow I doubt anybody ever will. So what does "Override server-supplied folder namespace" do? It certainly doesn't appear to do what I thought it would do, so I'll go wade through the help book to find out. By choosing this option you can rename the folders that the server provides. If you select this option, you need to specify the namespace to use. Thanks. Reading the interface back to me is not helpful. Mark Finlay often said that good software doesn't need a manual. And in an ideal world, maybe he's right. The problem is, we're not making that software. Can we clean up our interfaces to make them more understandable? Absolutely, and we should. There's no question that we should have dedicated usability folks involved at every step of our software design. But the interfaces of today can only be so explanatory. Sometimes, you deal with software that needs to be explained. There's a certain level of complexity and configurability that our software must have to function in the real world. In an ideal world, I should be able to give Evolution my email address and have everything work. But our world doesn't work that way. As long as there are questions, our users will need help. Our documents today read like stereo instructions. They consist of a long sequence of directions, each relying on the ones before. Each one is as dry and lifeless as it can be, and generally provides the absolute minimum amount of necessary information to declare the documentation finished. Instead, our help should be topic-oriented. We should focus our help around individual, self-contained topics. As a guideline, topics should be no longer than about two pages. Their language should be casual and digestible, while still being accurate and precise. We should not fear redundancy. If two topics provide some of the same information, we should consider our documents better for it. How do we do this? Certainly, we could try to write our DocBook to be more topic-oriented. Unfortunately, it's difficult to provide the level of rendering and navigation control we really need to make this actually useful. Good topic-oriented help needs to have a distinct and reliable page output, rather than the whatever-the-stylesheets decide mechanism we use to chunk DocBook. And good topic-oriented help needs to be cross-linked up the wazoo. So forget linear documents and deep nested sections. Every topic is its own file, ready to be displayed to the user. These are digestible chunks. They're easy to write, easy to maintain, easy to modify, and easy to read. Sure, the topics themselves might have sections, maybe even subsections. But they are not themselves sections of some book. Instead, they are standalone pages. So say I've written the program Beanstalk. It's a fairly nifty little program, but my users need help with parts of it. Hey, it's not easy to make an intuitive interface for giant beanstalks. My help contains the following files: beanstalk.xml fe.xml fi.xml fo.xml fum.xml And here's fe.xml: <![CDATA[ ]]>]]> Fe Boring, right? It just looks like DocBook with a new wrapper element on top. The point is that topic is a top-level element, period. It's not nested in an article or a book. It stands on its own. So how do we get to it? <![CDATA[ ]]>]]> Beanstalk Help The top-level help file has declared that it should link to fe, fi, and fo. When rendered, it will show the introductory block-level content followed by nice links to each of the given pages. If we add a short description to each topic, say with a description element, we can show that on the top-level help page as well. But what about fum.xml? As it turns out, the fum functionality is provided by a plug-in called Fumstalk. When Fumstalk is installed, it installs a help page that looks like this: <![CDATA[ ]]>]]> Fum Page links are symmetric. Our help viewer notices that fum links to the top-level help file. That means that the top-level help file implicitly links to fum. The Beanstalk manual is pretty small and trivial. A few of the Gnome help manuals can get by with just a top-level topic list, but what about the rest? For larger works, we have node pages. These are simple pages whose only purpose is to link to other pages. Here's one: <![CDATA[ Fe & Fi]]> The page is then rendered with the introductory block-level content followed by links to fe and fi, as well as reciprocal link back to the top-level page. Remember, the top-level page will now have a link to fe-fi, because links are symmetric. The net effect is that you have an interconnected web of pages. Each page presents a single, digestible topic. Users can easily find related information without having to scan through a poorly-indexed book. Reciprocal linking eases the integration of third-party topics, including documentation for plug-ins and distro-specific documentation. Documentation becomes easier to write and maintain, easier to manage downstream, and easier to read and understand. Everybody wins. Up to now, we haven't discussed the markup of the actual content. To be clear, when I say block-level content, I mean the basic content model of DocBook. [Cue sighs of disappointment from stage left, sighs of relief from stage right.] But I don't mean the entirety of DocBook. DocBook is too big and too complex. We mere mortals can't hope to keep the entirety of DocBook's semantics in our heads, and that includes me. Something must be done. Let's look at a breakdown of the sort of markup we encounter: And now here's a short list of problems: Structural markup doesn't fit our needs. DocBook's structural markup is designed around books and articles. It has parts, chapters, appendices, and sections. It has additional structural elements for indexes and glossaries. It has very specialized structural markup for reference pages, essentially man pages created in DocBook. We don't need any of this. Our structural markup is simple. We have pages, and pages can have sections. There's no confusion as to how information is chunked. Nested block content is evil. DocBook allows block content inside of paragraphs. This presents difficulties for processing applications and for translators. It also creates content models that list dozens of elements, confusing people who just want to create a document. We don't need this. Inline markup is too semantic. DocBook contains over 70 elements that can be used as general inline markup. Often, there is an element to mark up exactly what you need, just as filename. Other times, you have to abuse an existing element, or just use something less specific. The presence of very specific semantic elements leads authors to expect to mark everything with specific elements. When they can't, long threads break out on mailing lists. We can't encode every concept you might encounter in software systems. Instead, we should provide semantic markup that has sufficiently broad meaning to include all reasonable uses. The replaceable element is useful; the structname element is not. This is only an overview. A full treatise on trimming down DocBook would burden this document. Such a discussion deserves a document of its own. The points raised in this document provide a direction, but only a team of enthusiastic and dedicated hackers can make change happen.
http://www.gnome.org/~shaunm/quack/mallard.xml
crawl-002
refinedweb
1,560
67.86
Practical programming manipulating data HW Review A comment on comments this_code_will_run = 10 + 20 # but_this_code_wont = 20 but_this_will = 200 / 20 ''' not = 10 a = 20 single = 30 line = 40 here = 50 will = 60 either = 70 ''' This is called a comment: # a single-line comment ''' A multi-line comment. ''' Lots of things days_of_week = ['S', 'M', 'T', 'W', 'Th', 'F', 'Sa'] print(len(days_of_week)) # prints out "7" print(days_of_week[3]) # prints out "W" print(days_of_week[-1]) # prints out "Sa" for day in days_of_week: if day == 'M': # Only for Mondays print('I hate Mondays') else: # Otherwise print('TGINM') days_of_week.append('Sa') # Now contains ['S', 'M', 'T', 'W', 'Th', 'F', 'Sa', 'Sa'] days_of_week.remove('M') # Now contains ['S', 'T', 'W', 'Th', 'F', 'Sa', 'Sa'] Unique New York webpage_views = ['mom', 'linda', 'mom', 'mom', 'joe', 'mom', 'elliott'] print(len(webpage_views)) # prints out "7" snowflakes = set(webpage_views) # contains {'elliott', 'joe', 'linda', 'mom'} print(len(snowflakes)) # prints out "4" for i in range(1000): snowflakes.add('mom') # still contains {'elliott', 'joe', 'linda', 'mom'} This is called a set, of the form: unique_things = set(my_list) A Webster special defintions = { 'Python': 'A really fun programming language', 'dictionary': 'A set of terms with their corresponding definitions', 'Jeremy Lucas': 'A great ninja with many skillz' } print(len(defintions)) # prints out "3" print(defintions['Python']) # prints out "A really fun programming language" defintions['data'] = 'A set of qualitative or quantitative values' print(len(defintions)) # prints out "4" This is called a dictionary, of the form: d = { 'key1': value1, 'key2': value2, ... } Skimming the dictionary defintions = { 'Python': 'A really fun programming language', 'dictionary': 'A set of terms with their corresponding definitions', 'Jeremy Lucas': 'A great ninja with many skillz' } for term, definition in definitions.iteritems(): print(term + ' = ' + definition) ''' Python = A really fun programming language dictionary = A set of terms with their corresponding definitions Jeremy Lucas = A great ninja with many skillz ''' To iterate over adictionary, use the form: for k, v in d.iteritems(): # Do stuff with each key and value E I E I/O - "Input/Output" - The essence of digital communications - Keyboard input - Display output - Working with "files" - Reading files - Writing files - In Unix systems, this could mean data streams, hardware devices, or even network sockets Open for business data_file = open('/tmp/worldcup.csv') for line in data_file.readlines(): # prints each line in the file print(line) # make sure to free up your resources data_file.close() # let Python close the file for us when we're done with open('/tmp/worldcup.csv') as data_file: for line in data_file.readlines(): # prints each line in the file print(line) # data_file is still open # now it's closed Gooooaaaaaaal data_file = open('/tmp/galaxycup.csv', 'w') # write the outcomes of the intergalactic matches data_file.write('3000-07-02,Mars,10,Jupiter,1000\n') data_file.write('3000-07-07,Uranus,2,Neptune,3\n') data_file.write('3000-07-10,Pluto,0,Earth,1\n') # make sure to free up your resources data_file.close() data_file = open('/tmp/galaxycup.csv', 'a') # write another match data_file.write('3000-07-02,Venus,4,Mercury,2\n') # make sure to free up your resources data_file.close() with open('/tmp/galaxycup.csv', 'a') as data_file: # write another match data_file.write('3000-07-02,Venus,4,Mercury,2\n') Struck sure - Delimited formats - CSV (commas) - TSV (tabs) - Self-describing formats - JSON (Javascript object notation) - YAML (yet another markup language) - HTML (hypertext markup language) - XML (eXtensible markup language) - Binary formats - Excel spreadsheets - JPEG images Imported goods import csv # now we can use the CSV library, yayyyyyy import json # now we can use the JSON library, yayyyyyy from __future__ import division # now we can use division from the future!! Use an import to include extra functionality: import my_module # use my_module from my_other_module import my_cool_thing # use my_cool_thing CSV please import csv with open('/tmp/worldcup.csv') as data_file: structured = csv.reader(data_file) for record in structured: # print out the first part of each record in the file (match date) print(record[0]) date 2014-06-12 2014-06-13 2014-06-17 2014-06-18 2014-06-23 2014-06-23 ... Headers to the rescue! import csv with open('/tmp/worldcup.csv') as data_file: structured = csv.DictReader(data_file) for record in structured: # print out the teams involved in each match print(record['team_1'] + ' vs. ' + record['team_2']) Brazil vs. Croatia Mexico vs. Cameroon Brazil vs. Mexico Cameroon vs. Croatia Cameroon vs. Brazil Croatia vs. Mexico Spain vs. Netherlands ... HW: GOOOOOOAAAAAAL Write a program to calculate the total number of goals scored by each team during the 2014 world cup (). The scores should be output to a new CSV file with the following format: team,total_goals Brazil,114 Germany,103 United States,101 ... HW (Hints) Python has a special function for treating a string as a number: score_1 = '2' score_2 = '4' total = score_1 + score_2 # oops, this is "24" total = int(score_1) + int(score_2) # much better! Practical programming: Manipulating data By Jeremy Lucas Practical programming: Manipulating data
http://slides.com/jerluc/practical-programming-manipulating-data
CC-MAIN-2018-13
refinedweb
813
53.92
On Thu, Mar 01, 2001 at 10:32:21PM -0800, rbb@covalent.net wrote: > On Thu, 1 Mar 2001, Roy T. Fielding wrote: > > On Thu, Mar 01, 2001 at 09:40:44PM -0800, rbb@covalent.net wrote: > > > We still need to APR namespace protection. We tried to not namespace > > > protect things to begin with, and Apache and APR were conflicting > > > horribly. > > > > Because the method I described was not used. Goody for you, Oh Omniscient One. We are just poor souls who don't have your wisdom, so we fucked up the code. > Doesn't your model require that all APR applications define their > configuration macros the same way? If an APR application is forced to do > all of their macros in a given way, then I am against this model. If this > works regardless of how the app defines it macro's, then cool. If an application includes an APR header, and then includes its own header, then it will receive errors/warnings. Invariably, I always structure my headers (say) in Subversion like so: #include <some_apr_header.h> #include <another_apr_header.h> #include "public_svn_header.h" #include "another.h" #include "module_private_header.h" In the above scenario, my SVN headers would create duplicates. "So fix SVN." Okay, I happen to be able to do that. Now, let's add an Expat include in there. Ooops. Hey, I have privs to fix that, too. Now SVN includes Berkeley DB headers. Fuck. autoconf headers are just not meant to be mixed. If APR is intending to export the values, then they must be namespace protected. That implies that a plain old AC_CHECK_HEADER(foo.h) is not sufficient. We need to follow that up with setting a shell variable (fooh=0/1), then using that to def/undef or 0/1 an APR-namespace symbol. > > > Add to that, that we define our own macros that no other > > > package should have. > > > > That is a separate issue -- anything that is defined by an APR-specific > > macro should be name-protected. I am only talking about the standard > > names that every autoconf package defines. We don't have a problem with the standard names. Those go into include/arch/unix/apr_private.h(.in). The issue is the generation of APR_HAVE_FOO_H in the public apr.h header. For those, we need logic beyond a simple AC_CHECK_HEADER(). > What happens with a package that doesn't use autoconf? Presumbly, they wouldn't be using HAVE_STDIO_H, so a conflict won't occur. >... > > As I mentioned when I started the build blues thread, I read the archives > > first. Most of the decisions back then were made because APR had to be > > integrated with a non-configure-based httpd. I have the benefit of > > That's not true. Most of the problems weren't even discovered until we > integrated APR into a configure-based Apache. Exactly. Mix one autoconf'd guy with another, and you're in for trouble. > > hindsight and a holistic view of the build system, so it shouldn't be > > too surprising that I can think of a solution that may not have been > > possible back then. And you're saying that we don't have that either? That only you possess that knowledge? That isn't fair, Roy. The current spate of problems was due to some M4 coding that I was doing to simplify and optimize our configuration. Unfortunately, despite M4's age, it appears to be rather non-portable and prone to flakiness. We backed off a bit, but we're still in a very good and stable spot. No more deep magic, and it should all work quite fine. We don't do anything beyond normal autoconf setups. > I hope you have found new ideas, but the current APR configuration system > is much more autoconf-like than the Apache one, and most of the problems > haven't been with the APR configure system. Exactly. This recent spate was M4. My fault. Our autoconf system has been working quite well. > > The reason I bring it up right now is because every step we have taken away > > from a standard autoconf setup has resulted in significant problems on > > one platform or another, leading to another step away and more problems, > > ad nauseum. Some steps, like not using the standard generating mechanism > > for the Makefile files, have benefits that outweigh the pain. Others do not. > > I look at the APR configuration system, and I see a system that is very > close to what autoconf expects. Exactly. It is much more complex than many because it touches "all" of an operating system's features. Many apps just don't have that kind of breadth, so their configure scripts are quite short (look at Expat). > Apache, on the other hand is completely > non-autoconf. However, this model is not completely broken, and it is > what PHP uses, which is why it was originally chosen. I am 100% for > fixing this mess, but I am concerned by the number of times that this > issue has been raised, and never resolved. When the Apache stuff was first put in, based on the PHP stuff, I wasn't paying much attention to it. It seemed to work, so that was fine. But over the past year or so (or however long it has been), I've had to deal with it more and more. Gads. It definitely does things "different" in an attempt to do things faster. The whole APACHE_SUBST and APACHE_FAST_OUTPUT and APACHE_OUTPUT was done to avoid variable substitution. Yet I find that to be a poor tradeoff (I could go either way with this; sounds like Roy likes it; with a general cleaning of the Apache autoconf, it might be more obvious and useful to use). There was also a lot of crud remaining in Apache simply because it was autoconf'd in a big whack before we settled on what should be done by APR and what should be done by Apache. And the build stuff is just messy (quick: what is the difference between library.mk ltlib.mk program.mk and special.mk?) And the whole build2.mk thang. I'm a big proponent of revamping the build system for Apache (more like APR(UTIL)), and a basic cleaning of its config system. Cheers, -g -- Greg Stein,
http://mail-archives.apache.org/mod_mbox/apr-dev/200103.mbox/%3C20010302004128.S2297@lyra.org%3E
CC-MAIN-2015-40
refinedweb
1,044
75.2
EC: Swift can return corrupted Data and be able to go data lost at isa_l_rs_vand policy with >=5 parities Bug Description This is what I have been fighting in the last week and here I'd like to summarize/describe for the relations and impact to Swift/PyECLib/ I DO NOT think this is a security risk because this doesn't cause an information leak or significant resource attack from outside, but this includes significant impact for users because of the risk of data lost. This is why I set this as private-security temporary. We can make this as open after notmyname confirm the impact and according to his decision. To Swift API: Using isa_l_rs_vand EC policy, pyeclib <= 1.3.1, liberasurecode <=1.2.1 with >= 5 ec_n_parity setting, you can see corrupted data *RARELY* *WITHOUT ANY ERRORS* at downloading. If you are using 'swift' command to download, the error appears most likely, ubuntu@saio:~$ swift download ectest vagrant/ Error downloading object 'ectest/ And one more significant point is NOTHING QUARANTINED from the fact so swift is now regarding the corrupted one as a correct object. The reason, I said "RARELY" above is that occurs due to the fragments combinations. You can see such buggy combinations with following script, that asserts isa_l_rs_vand ec_ndata=10, ec_n_parity=5 case: from pyeclib.ec_iface import ECDriver from itertools import combinations if __name__ == '__main__': driver = ECDriver( orig_data = 'a' * 1024 * 1024 encoded = driver. # make all fragment like (index, frag_data) format to feed to combinations frags = [(i, frag) for i, frag in enumerate(encoded)] bad_ check_ for check_frags in check_frags_ decoded = driver. # No No don't kidding me try: assert decoded == orig_data except AssertionError: if bad_combinations: ratio = float(len( print 'Bad combinations found: %s/%s (ratio: %s)' % ( print bad_combinations The result of the script should be: Bad combinations found: 10/3003 (ratio: 0.00333000333) [[0, 1, 2, 3, 5, 6, 8, 10, 11, 14], [0, 1, 2, 3, 5, 7, 8, 10, 13, 14], [0, 1, 2, 4, 5, 7, 9, 10, 11, 14], [0, 1, 2, 4, 6, 7, 9, 10, 13, 14], [0, 1, 3, 4, 6, 8, 9, 10, 11, 14], [0, 1, 3, 5, 6, 8, 9, 10, 13, 14], [0, 2, 3, 5, 7, 8, 9, 10, 11, 14], [0, 2, 4, 5, 7, 8, 9, 10, 13, 14], [1, 2, 4, 6, 7, 8, 9, 10, 11, 14], [1, 3, 4, 6, 7, 8, 9, 10, 13, 14]] So the bad combinations appears approximately in 1/300 of the time. The bug reason: The bug exists in liberasurecode isa-l backend implementation. I've already asked to the isa-l maintainer about the possiblity that isa-l can return such a corrupted data without errors[1]. He honestly point out the issue that gf_gen_rs_matrix (which is used in isa_l_rs_vand encode/ I made a patch[2] that fixes to enable liberasurecode to be able to return Error instead of corrupted data. With [2], Swift will be able to return Error (sort of 5xx) in the bad pattern case. Still impact to existing clusters: If you have existing cluster with >=5 parities isa-l policy, it is still problematic even if you patch the fix [2]. That is probably because you are running the *object- from pyeclib.ec_iface import ECDriver from itertools import combinations def test_ec_driver(k, m): driver = ECDriver( orig_data = 'a' * 1000 encoded = driver. # make all fragment like (index, frag_data) format to feed to combinations frags = [(i, frag) for i, frag in enumerate(encoded)] an_ok_pattern = None for check_frags in combinations(frags, k): decoded = driver. # No No don't kidding me try: assert decoded == orig_data if not an_ok_pattern: # save this ok pattern except AssertionError: # find missing index for x in range(k + m): if x not in check_frags_dict: else: print 'try reconstruct index %s with %s' % ( x, check_frags_ # try to reconstruct reconed = driver. try: except AssertionError: if x not in an_ok_pattern: # sanity check # try to decode again including the garbage reconstructed one if __name__ == '__main__': params = [(10, 5)] for k, m in params: As you can see, the bad pattern can infect the corrupted frags to a good pattern. And then, right now, we have no way to find a good pattern, including the infected garbage as a bad pattern unless we calculate the original object etag with decoding whole fragments in the fragment archive. Even if once we can find the bad pattern, I don't think the way to detect which one or both is (are) bad one(s) exist so that it means, once infected, we cannot stop the pandemic anymore and the infected object will go to the garbage in all fragment archives step by step. This is the worst problem. I think, to the new incoming objects and brand new cluster deployment, the fix [2] is enough if you can accept to see the errors in 1/300 of the time. What we can do for this pandemic: For the possible patches perspective, we should land [2] which can prevent *future pandemic* as soon as possible. For existing cluster users, sorry, what I can do is just saying "Please plan to migrate to a safer policy as soon as possible before the pandemic goes to data lost". And I'm trying to add the greedy testing [5] to ensure the result consistency in decode/reconstruct that we can know which parameters can be safe. Otherwise, we may be able to leave same pandemic possiblity to another policy. For isa-l perspective, we know isa-l is the most high-performance open sourced engine in the available backends so the constraint will be a problem to choose. To solve the bad combination, the isa-l maintainer suggested to use another matrix (gf_gen_ PyECLib/ In my opinion, we should fix [2] as soon as possible and set it as the Swift requirement to prevent the pandemic in the future. However, right now we don't have the way to set the c lang binary version constraint. To enable the requirement, my idea is as following steps: 1. [2] land at liberasurecode master 2: bump liberasurecode version 3: make hard-coded constraint for PyECLib to require the liberasurecode >= (step2) (this can be painful becuase xenial or yakkety is =< 1.2.1) 4: bump pyeclib version 5: set the pyeclib requirement in Swift requirements.txt (<- this can have a tag #closes-bug for this bug report in the commit message) Or any better idea is always welcome to me. 1: https:/ 2: https:/ 3: https:/ 4: https:/ 5: https:/ Since this report concerns a possible security risk, an incomplete security advisory task has been added while the core security reviewers for the affected project or projects confirm the bug and discuss the scope of any vulnerability along with potential solutions. I agree this is not a security bug, and thus doesn't need to be embargoed because of that. However, I think Kota is wise for keeping this private for now. Although there has been some limited public discussion of this issue in IRC, this is the bug that is tracking the issue. I'd prefer that this issue stay private until we have a better-defined (and implemented or in-progress) plan for users who may be affected by this. Kota's plan in the bug description is good. I'd add that we should consider providing a migration script for users to run if they are affected. Thanks John for the comment. I talked with Clay about the migration process and I'm realizing that re-putting all objects in the isa-l 5>=paritis containers can be mitigation from destroying other healthy fragments. That is because the reputing process can work as decode/encode and then the re-stored object is ensured the md5 value because the copy process set the etag from source to the destination. Once we installed the liberasurecode to prevent corrupted frag from bad pattern (and raise Error) with [2] in the description, and then reputing all, probably we can prevent the future corruption and just allowing 1/300 times errors on GET/reconstruct. The attachment file is a draft version of the reputting script. I don't know how we should make it much robust (i.e. catch failure cases when running the reputing process) The current version works to get all containers with a specified policy and try to reput the objects in the containers and then, if it got errors like 412 unprocessable entity (etag mismatch), 500 internal server error (will occur with [2] in the bad pattern), etc..., dump the failure objects with json format which can be feed-able to the script again. Anyway, IMO, we need [2] in the description in the liberasurecode master. This script seems to be for a single *account* at least it mostly seems to operate "given creds => list all containers; if policy_name == options. That's good. I think we also need the script to be able to take a list of account names and a set of superadmin/ That tooling ready for master + package/release the liberasurecode change [1] is all that we need for a plan for now, IMHO. 1. https:/ @Clay: Yes, that's for an account and that's why I separated the reput_objects function as callable with for each account. For another thought, I found an issue for the reputter scripts. That works *basically* for stored objects but I found a type of object deosn't. That is a delete marker in versioned write history mode including ';' in the content-type. That results in 400 bad request for re-putting. I pushed a solution for that as separated patch, here https:/ However, I'm not sure right now, if swift has another middleware to make such a content-type which can not be re-put-able. If it's only a delete-maker, it's ok to skip them because it should be 0-byte object which is not necessary to re-decode/encode. One more thing, we have to consider, how to force to use liberasruecode> I think we have a couple of ways to solve the dependency 1. use bindep requirements with version Since openstack project uses bindep, we can make the constraint like as https:/ However, I confirmed the bindep requirement refers to the packaging version (e.g. apt, yum) so even if we installed the newest libersurecode version 1.3.1 from source, bindep requirements still fails due to liberasurecoe 1.1.0 in xenial. 2. Use hard coded requirement With https:/ 2-1. we cannot ensure the liberasurecode if the .so file is overwritten from source. That is because the assertion works from the erasurecode_ 2-2. with the https:/ after today's meeting, I think we have a reasonable plan for how to get releases going: we release swift with a changelog mentioning that deployers shoudl upgrade libec. then we also release pyeclib with a warning if libec<1.3.1. then after libec is in distros, roll those hard requirements forward through the dep chain ...from http:// ---- The last thing I'd like to see before we open this bug is a script (or process) to point people to that gives a mechanism to move existing at-risk data to a safe configuration. For deployers who are using isa_l_rs_vand with more than 4 parity, you should take immediate action to protect data in your cluster. 1. Upgrade to liberasurecode 1.3.1 or later 2. For any objects that are currently in an affected storage policy, re-put those into the cluster to fix any repairable data corruption. The script on comment #4 of this bug report can be used. We expect that liberasurecode 1.3.2 will include support for isa_l_rs_cauchy (for new policies) which will not have a restriction on the number of parity bits. Reviewed: https:/ Committed: https:/ Submitter: Jenkins Branch: master commit d163972bb04faf1 Author: Kota Tsuyuzaki <email address hidden> Date: Wed Nov 16 20:42:27 2016 -0800 Add soft warning log line when using liberasurecode <1.3.1 To apply the fix for a liberasurecode issue [1], we need hard depencency of liberasurecode requires >=1.3.1. However current binary dependency maintainance tool "bindep" works only for packagers' repository. (i.e. it refers the version of apt/yum/etc...) And nothing is cared for the binary built from source. This patch provides a way to detect incompatible liberasurecode and makes a warning log line to syslog which suggest "you're using older liberasurecode which will be deprecated, please upgrade it". NOTE: - This dependency managemnet depends on erasurecode_ file in liberasurecode. i.e. it cannot care of overwritten .so library after PyECLib built once. Partial-Bug: #1639691 1: Icee788a0931fe6 Change-Id: Ice5e96f0a59096 I think the critical nature of this bug is reduced with liberasurecode 1.3.1 I don't think swift's handling of isa-l policies can change in a reasonable way to close this issue until isa-l-rs-cauchy is available. And as soon as isa-l-rs-cauchy support is released in pyeclib swift deployments can consume it - so I don't think this bug is a swift release blocker. We really only talk about isa-l and liberasure here: http:// so now that isa-l-cauchy is available this issue we should consider what Swift can do when you have isa-l-rs-vand policy with > 4 parity frags that is not acknowledged as pre-existing... I would support a change that would require such policies to be deprecated - then something in the release notes: * you must immediately deprecate isa-l-rs-vand policies with >4 parity or your swift won't start * you should migrate data out of them * you may upgrade liberasurecode/ Some openstack policy on upgrades probably says that we can't merge a fix that require a config change to upgrade - but my better senses says preventing the proliferation of the risk of data loss is a reasonable exception. I'd feel like a real piece of work if someone ended up *creating* such a policy just because it's "only a warning". Reviewed: https:/ Committed: https:/ Submitter: Jenkins Branch: master commit c7bffd6cee400b7 With the nice warning in place I think we've done a fair job here trying to dissuade new storage policies with bad configs. At some point we can enforce deprecation. Definitely will need to; may even want to have some "double secret deprecation" where you can't even create new objects in the affected containers. Reviewed: https:/ Committed: https:/ Submitter: Jenkins Branch: stable/newton commit bd75a97ad47fa28 Reviewed: https:/ Committed: https:/ Submitter: Jenkins Branch: master commit 2c3ac543f410617 Author: Tim Burke <email address hidden> Date: Thu May 25 09:40:47 2017 -0700 Require that known-bad EC schemes be deprecated We said we were going to do it, we've had two releases saying we'd do it, we've even backported our saying it to Newton -- let's actually do it. Upgrade Consideration === Erasure-coded storage policies using isa_l_rs_vand and nparity >= 5 must be configured as deprecated, preventing any new containers from being created with such a policy. This configuration is known to harm data durability. Any data in such policies should be migrated to a new policy. See https:/ information. UpgradeImpact Related-Change: I50159c9d19f238 Change-Id: I8f9de0bec01032 Closes-Bug: 1639691 This issue was fixed in the openstack/swift 2.15.0 release. Set this as private because it seems sensitive but i don't think this is security issue. Just to leave the decision how we can handle this to notmyname.
https://bugs.launchpad.net/swift/+bug/1639691
CC-MAIN-2021-21
refinedweb
2,598
57.2
My tsconfig.json: { "compilerOptions": { "module": "es6", "target": "es6", "lib": ["dom", "es2016"], "moduleResolution": "node", "baseUrl": "src", "allowSyntheticDefaultImports": true, "noImplicitAny": false, "sourceMap": true, "jsx": "preserve" }, "files": [ "typings/Core.d.ts" ] } My project structure: - node_modules/ - scripts/ - src/ - components/ - services/ - typings/ - www/ - tsconfig.json Whenever I create a component and try to import a service like this: import { logout } from 'services/authentication/index'; I get an error: TS2307: Cannot find module 'services/authentication/index' Even though simply compiling the Typescript through the CLI works fine. You can also see error like: Warning:Cannot find parent tsconfig.json Right? WebStorm does support 'baseURL' and correctly generates/resolves imports using it. The problem is that you have "files":[] section that only includes a file from typings. If there is a 'files' section in tsconfig.json, WebStorm applies config settings to a file only if it is included in this section (tsc, BTW, works in the same way - see): So,it doesn't use settings in your tsconfig.json when compiling your files, it uses default TS options instead. if you like to use TypeScript service, you have to set up your tsconfig.json accordingly. Please see the answer in for possible solutions I'm experiencing the same issue using the latest version of idea. I don't have the issue when compiling using the CLI or when using vscode. I have a very simple tsconfig.json file. The error I see in idea is "cannot resolve directory 'features'" when doing an import like `import { foo } from 'features/foo'` please attach a screenshot of the error. Do you have TypeScript service enabled in Settings | Languages & Frameworks | TypeScript? This is the problem I see in my project as described by others. The project run just fine, it's just IntelliJ complain about this. Please provide your tsconfig.json This is my tsconfig.json file. Please note that tsconfig.json file is stored not in the root of the project but in ./app/ui directory - if that matters. you will see the same issue if you cd to your ./app/ui directory and run `tsc -p .` there... the problem is that your are using " module": " system" in your tsconfig.json TypeScript has two strategies to resolve module names: Strategy can be specified via moduleResolutioncompiler option. If this option is omitted compiler will use nodewhen target module kind is commonjsand classicotherwise So, in your case classic resolution is used, and modules from node_modules are not found. I'd suggest specifying resolution explicitly in your config: does it help? Unfortunately still seeing this issue. I have `moduleResolution: node` set as well. It seems that Webstorm is struggling to re-index and find modules - it happens at strange times, even on modules that I haven't been rebuilding. Every time something like this crops up I have to restart my project and then the "error" goes away. Could have something to do with the fact that these modules are symlinked in a parent node_modules folder (I use yarn workspaces). But it works fine after a restart, so I'm thinking that Webstorm just doesn't refresh itself properly and recheck those symlinked modules again. Its been a rather common occurrence actually with Webstorm and Typescript - whenever something goes wrong with the indexing and Webstorm complains, I restart and it goes away. I must reset my IDE at least 20 times a day... File -> Synchronize does nothing btw. I've looked around, seems restart is the only thing that works. I wish there was a "Do whatever happens during restart that makes things work again" button... >It seems that Webstorm is struggling to re-index and find modules - it happens at strange times, even on modules that I haven't been rebuilding TSC errors don't normally have any relation to re-indexing. What Typescript version do you use? Please check Settings (Preferences) | Languages & Frameworks | TypeScript >Could have something to do with the fact that these modules are symlinked in a parent node_modules folder Typescript service does indeed have issues with symlinks - see and linked tickets. But such issues normally are not intermittent >Its been a rather common occurrence actually with Webstorm and Typescript - whenever something goes wrong with the indexing and Webstorm complains, I restart and it goes away If the errors come from WebStorm own parser, try invalidating caches (File | Invalidate caches, Invalidate and restart) I'm using the latest version of Typescript, I try keep it up to date. So that would be 2.6.1 as of now. Unfortunately I can't access that issue you linked. I've tried invalidating caches, it just closed all my WebStorm windows (I usually have multiple open for all the different modules I'm working on) and restarted, and then worked - but pretty much exactly the same as what I was doing before, restarting when there was an issue. It's still running into issues with finding my symlinked modules in the index after a bit of work / re-building of those modules. Another problem that's popping up is that for some reason it refuses to find some exported members from within my symlinked modules at all now, I just have to type them out completely (no indexed autocomplete while typing) and then it suddenly finds them and works. In this case, restarting actually doesn't help. It's a really strange error, because sometimes it will find exported members within a module no problem, and other exported members it will ignore and I will have to manually add. Upon further investigation into my second issue there, it appears that if I change the exported members from regularly exported functions, such as: "export function someFunction() {}" into: "export const someFunction = () => {}" then it works and I get auto-complete for "someFunction" in my other module code. So there is clearly some issue with the indexing of exported functions, I've also noticed this with TypeScript enum objects. >I'm using the latest version of Typescript, I try keep it up to date. So that would be 2.6.1 as of now. That's the issue. Typescript service integration doesn't work with TypeScript 2.6.x due to breaking API changes. Please try using bundled TypeScript instead. Problem is addressed in upcoming 2017.3 Completion for exported functions works fine for me. If changing TypeScript version doesn't make things any better, please provide exact code snippets/files I can use to recreate the issue when will the problem be solved? what problem? If you have problems, please don't add comments to old threads not related to your issue, create a new one. Webstorm doesn't find the tsconfig.json, which is in the rootdir the message indicates that the file being compiled is not included in tsconfig.json. See above It seems this issue was never solved. I read the all comments and it just goes on circles. I have a similar issue. Using IntelliJ idea ultimate. I have added my d.ts file using : However, for some reason Intellij does not pickup the type definitions from here. TS7016: Could not find a declaration file for module 'luciad/view/Map'. 'C:/Luciad/LuciadRIA_2018.0.03/web/luciad/view/Map.js' implicitly has an 'any' type. The webpack runs perfectly well but IntelliJ IDE keeps reporting the TS7016 error. >It seems this issue was never solved. Most issues in this thread are configuration problems, not bugs, and all them are resolved. > I have a similar issue doesn't look similar to any issue reported in this thread... Also, using for adding type definitions is definitely not a recommended practice. Please provide a sample project that can be used to recreate the. Restarting the typescript service sort of worked for me. I clicked Restart TypeScript Service. That killed it, but it wouldn't restart by itself. I had to close and reopen WebStorm, then it started back up. Caches.. Never forget to clear your caches and restart. @Elena, brilliant! So, I'm using PHPStorm 2020.2 and TypeScript 3.9.7 and I STILL see this same issue. I've been trying to find a way to fix this for a couple of days now with no success. Invalidatinc caches, restarting the TypeScript service, nothing works for me. Saying that this issue is not a bug but a configuration issue doesn't seem right to me. If I can compile my project perfectly fine with Webpack, if PHPStorm ACTUALLY finds the file (becuase it's doing code completion on it) BUT the internal typescript errors out, this means something is wrong with the way PHPStorm interprets tsconfig.json files within the context of the service. Here's my tsconfig for reference: { "compilerOptions": { "outDir": "./public/js/", "strict": true, "moduleResolution": "node", "module": "esnext", "target": "es5", "sourceMap": true, "baseUrl": "./assets", "paths": { "styles/*": ["scss/*"], "@/*": ["vue/*"], "vue": ["@vue/runtime-dom"] } }, "exclude": [ "node_modules" ] } I don't know what to post here... The same issue is posted over and over and over again with similar code examples... tsconfig is in the root directory (it was under /assets/ts, then moved up to /assets then moved up to root... Nothing works... I'd appreciate a workaround while you guys re-open any tickets related to this bug and fix it. Also, if the solution is include the files within the tsconfig as I read here: WS2016.3 applies config settings to a file only if the file is included in 'files' or 'include' tsconfig.json section. [More info about tsconfig.json] This is just plain wrong. This is not the way webpack uses the typescript compiler at all. Adding this to the tsconfig: "include": [ "assets/ts/*", "assets/vue/*" ], removed errors for imports: import FormInput from '@/Components/contact-form/form-input'; under "assets/vue" but I'm still seeing errors for: import ContactForm from '@/Components/contact-form'; under "assets/ts" crazy... [edit] Finally, changing the import this: import ContactForm from '@/Components/contact-form/index.vue'; Fixed the issue in the ts file. My opinion is that the way typescript runs as a service in PHPStorm is so different than the way it runs under Webpack that it will break in most common configurations. This should be addressed. >My opinion is that the way typescript runs as a service in PHPStorm is so different than the way it runs under Webpack that it will break in most common configurations. This should be addressed. when using webpack, ts files are piped through webpack loaders and not passed to the Typescript compiler directly; and with tsserver the files are processed according to the compiler logic. It's not a bug, and it's not going to be addressed Hello, just wanted to say that it's 2021, latest Intellij version and this issue is still there: That is my config, 99% is auto-generated by Nativescript schematics.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/205979284-Typescript-does-not-resolve-modules-through-tsconfig-json-s-baseUrl-
CC-MAIN-2021-17
refinedweb
1,803
64.81
First. Thanks for posting this. I would like to see either python-stddeb or python-mkdebian become blessed, and the other one go away. Pitti worked on python-mkdebian, I think it is in the python-distutils-extra package, and I had been making improvements there, but if python-stddeb is the future it would be nice to consolidate. Wow, I never even knew about python-mkdebian. It's pretty well hidden (no manpage, even kind of hidden in the source branch). Thanks for the pointer, I will have a look at it. I definitely agree there should be OOWTDI. :) Elliot, fwiw, python-mkdebian has problems with namespace packages (or it might be a more general problem): See addendum 2 above. stdeb seems like the answer. What is missing from it that python-mkdebian gives you? I have just released stdeb 0.6.0 which has the debianize command. Thanks for the testing, feedback, and ideas. See also 'bzr dh-make' in newer versions of the bzr-builddeb plugin. 'bzr merge-upstream' is also a useful command. I think is an OT :) but... take a look on quicky! Cheers! Hi, great post - helped me a lot! I got stuck near the end, though. How do you properly include icons in the Ubuntu package? I added the files as package_data in my setup.py (googling told me that was the OK way of doing it) and it works if I pip install my application. When I try to package it, though, it breaks with a message dpkg-source: error: cannot represent change to popcorn_data/popcorn.png: binary file contents changed It points me at adding this stuff manually using dpkg-source and some quilt patches that add more files and folders, which just doesn't seem like a really elegant solution. I tried looking at different existing packages, but most of them have custom-made functions or imported modules (such as DistutilsExtra in case of Quickly apps) in setup.py. Is there some nice, canonical way of doing this? You can see my code so far at:
http://www.wefearchange.org/2010/05/from-python-package-to-ubuntu-package.html
CC-MAIN-2015-06
refinedweb
346
76.22
Chatlog 2012-01-19 From RDFa Working Group Wiki See CommonScribe Control Panel, original RRSAgent log and preview nicely formatted version. 14:36:08 <RRSAgent> RRSAgent has joined #rdfa 14:36:08 <RRSAgent> logging to 14:36:10 <trackbot> RRSAgent, make logs world 14:36:10 <Zakim> Zakim has joined #rdfa 14:36:12 <trackbot> Zakim, this will be 7332 14:36:12 <Zakim> ok, trackbot; I see SW_RDFa()10:00AM scheduled to start in 24 minutes 14:36:13 <trackbot> Meeting: RDF Web Applications Working Group Teleconference 14:36:13 <trackbot> Date: 19 January 2012 14:48:11 <MacTed> MacTed has joined #rdfa 15:00:29 <niklasl> niklasl has joined #rdfa 15:00:41 <ShaneM> ShaneM has joined #rdfa 15:00:42 <Zakim> SW_RDFa()10:00AM has now started 15:00:49 <Zakim> +scor 15:00:52 <Zakim> +OpenLink_Software 15:01:01 <MacTed> Zakim, OpenLink_Software is temporarily me 15:01:01 <Zakim> +MacTed; got it 15:01:04 <MacTed> Zakim, mute me 15:01:04 <Zakim> MacTed should now be muted 15:01:04 <scor> scor has joined #rdfa 15:01:23 <Zakim> + +1.540.961.aaaa 15:01:27 <Zakim> - +1.540.961.aaaa 15:01:34 <Zakim> +??P25 15:01:54 <Zakim> +??P27 15:01:58 <manu1> zakim, I am ??P27 15:01:58 <Zakim> +manu1; got it 15:02:07 <niklasl> zakim, I am ??P25 15:02:07 <Zakim> +niklasl; got it 15:02:20 <manu1> zakim, who is on the call? 15:02:20 <Zakim> On the phone I see scor, MacTed (muted), niklasl, manu1 15:02:34 <Zakim> + +1.612.217.aabb 15:02:40 <ShaneM> zakim, I am aabb 15:02:40 <Zakim> +ShaneM; got it 15:02:56 <ivan> zakim, dial ivan-voip 15:02:56 <Zakim> ok, ivan; the call is being made 15:02:58 <Zakim> +Ivan 15:03:51 <manu1> Agenda: 15:04:03 <MacTed> Zakim, unmute me 15:04:21 <Zakim> MacTed should no longer be muted 15:04:25 <MacTed> scribenick: MacTed 15:04:28 <MacTed> scribe: Ted 15:09:06 <MacTed> Agenda review, one change, discuss HTML+RDFa document conformance at end of call 15:09:07 <manu1> Topic: ISSUE-84: fragment identifiers 15:09:07 <MacTed> manu1: This issue has been re-opened because the TAG has new language that they'd like to use. Do we want to use the new language as requested by the WWW TAG? 15:09:52 <ShaneM> The spec now says:. 15:10:12 <MacTed> No concerns about new language. 15:10:12 <manu1> PROPOSAL: Adopt the WWW TAG proposed language on fragment identifiers and place it into RDFa Core 1.1 15:10:14 <manu1> +1 15:10:16 <niklasl> +1 15:10:16 <ShaneM> +1 15:10:23 <ivan> +1 15:10:33 <MacTed> MacTed: +1 15:10:35 <scor> +1 15:10:37 <ivan> RESOLVED: Adopt the WWW TAG proposed language on fragment identifiers and place it into RDFa Core 1.1 15:10:51 <manu1> Topic: ISSUE-125: Refine CURIE syntax 15:11:04 <manu1> 15:12:47 <MacTed> manu: SPARQL & Turtle use a different CURIE definition than current RDFa; request is that RDFa be aligned to the others 15:13:51 <niklasl> q+ 15:14:11 <MacTed> ... concern is that requested revision differs from Turtle and SPARQL by explicitly disallowing http:// (and possibly some other scheme patterns) 15:14:39 <manu1> ack niklasl 15:15:24 <MacTed> niklasl: not quite clear on what's allowed and not, by these syntaxes... forward slashes seem to require backslash-escaping 15:15:52 <MacTed> ... Gavin appears to have misinterpreted some docs 15:17:29 <MacTed> ivan: I sympathize with the goal of same syntax across RDFa, SPARQL, & Turtle; but if request is to put an extra restriction in RDFa, not sure that's needed 15:18:34 <manu1> Here's the definition they'd like us to use: 15:18:42 <MacTed> manu: I think this group would agree that 1 syntax for CURIEs across all specs/languages would be good. I'm concerned about ways that the requested definition differs from Turtle and SPARQL. 15:20:26 <scor> does the current RDFa syntax definition for CURIEs allow http:// ? 15:21:12 <ShaneM> I am not in favor of making changes to this at this time. 15:21:14 <niklasl> scor: I'm afraid so 15:21:23 <MacTed> ivan: sees 2 concerns from Gavin. 1 = discrepancy between RDFa and SPARQL/Turtle CURIE def; 2 = prefix restriction(s) 15:24:26 <MacTed> (...discussion...) suggested syntax BNF in would disallow many currently acceptable CURIEs 15:25:41 <ShaneM> this BNF is making my eyes bleed 15:28:07 <MacTed> (...discussion...) schema.org extensions via '/' would be invalidated by this syntax... so would dbpedia. That is if someone wanted to do this: schema:Person/Engineer/ElectricalEngineer they couldn't without escaping the slashes like so: schema:Person\/Engineer\/ElectricalEngineer ... people using schema.org are not going to understand why they have to backslash escape that stuff... and they don't have to with RDFa today. 15:29:39 <scor> ivan: it's only for external parties willing to extend these types 15:29:44 <ivan> 15:29:47 <scor> not on schema.org itself 15:30:45 <scor> we should bounce the / issue back to Gavin and the RDF WG 15:31:03 <scor> re the use of / in DBpedia and schema.org extensions 15:31:09 <MacTed> manu: We've now confirmed two real-world use cases requiring unescaped slashes in CURIEs, with third-party extensions of schema.org and dbpedia:/resource/Albert_Einstein - if we limit the syntax, we limit the ability to shorten URIs, and that's the whole purpose of CURIEs. We can't accept a solution that requires characters like /, & and ? to be backslash escaped... it's too restrictive. 15:31:39 <manu1> PROPOSAL: Accept the SPARQL 1.1 Query Language definition of PN_LOCAL and PN_PREFIX for CURIEs in RDFa 1.1 15:31:50 <manu1> -1 15:31:53 <scor> -1 15:32:03 <ivan> -1 15:32:05 <ShaneM> -1 15:32:21 <ShaneM> I am a big fan of dbpedia reference 15:32:23 <niklasl> +0 15:32:47 <MacTed> Ted: +0 15:33:00 <manu1> RESOLVED: Reject the SPARQL 1.1 Query Language definition of PN_LOCAL and PN_PREFIX for CURIEs in RDFa 1.1 15:34:43 <ShaneM> q+ about why this is all a bad idea 15:35:12 <niklasl> q+ 15:35:32 <manu1> ack niklasl 15:35:39 <niklasl> I still have concerns about this, outlined here: 15:37:45 <ShaneM> BTW we published the CURIE spec as a note when the XHTML working group shut down: 15:40:40 <MacTed> (...discussion...) colon as separation character is where the trouble lies, but fixing that is an enormous task touching many specs, the discussion would be unending and all to solve a problem that we've never heard anybody complain about. The proposed solution is worse than what we have in RDFa right now. 15:41:32 <ShaneM> I am actually not opposed to preventing a slash as a first character of a reference 15:44:39 <manu1> PROPOSAL: Prevent a slash as the first character in the reference part of a CURIE. 15:42:56 <ivan> 0 15:44:41 <manu1> -1 15:44:47 <ivan> 0 15:45:02 <ivan> q+ 15:45:16 <MacTed> (...discussion...) past specs allowed what we're considering disallowing... which means we're doing something that is backwards incompatible 15:45:28 <manu1> ack ivan 15:46:02 <MacTed> (...discussion...) current refinement suggestions are further departure from CURIE specs in SPARQL and Turtle, not alignment... if we are to align these specifications, resolving to not allow a slash as the first character in the reference part of a CURIE is a bad way to start the discussion. We should coordinate more on RDF WG on this, but this is not for RDFa 1.1. 15:47:34 <ShaneM> 0 15:47:39 <niklasl> +1 15:47:39 <ivan> -0 15:47:55 <MacTed> Ted: +0 15:48:01 <scor> -0 15:48:55 <MacTed> Manu: We don't have consensus on this, we've discussed all of this before, no new information... let's move on. 15:49:59 <manu1> Topic: Use of RDFa in XML-based languages 15:50:11 <ivan> 15:50:11 <MacTed> manu: Yes, this was my comment. I don't think that we should limit that RDFa attributes in XML documents MUST be in the XHTML namespace... authors are just not going to do that. We should instead say that RDFa attributes can be in the 'no namespace'. 15:54:11 <MacTed> Shane: What about clashes w/ languages that use stuff like @href and @src? Maybe we should allow them in XHTML namespace as well, so attributes can be used in both 'no namespace' and xhtml namespace... processors much check both. 15:55:16 <MacTed> General agreement on changes, no opposition. 15:56:24 <MacTed> Niklas: We should warn authors about conflicts in 'no namespace'. 15:58:59 <manu1> PROPOSAL: Change XML+RDFa such that RDFa attributes are defined in 'no namespace' and XHTML namespace and also caution authors that they must pay attention to XML-based languages where the RDFa attributes and Host Language attributes may overlap. 16:00:16 <ivan> +1 16:00:19 <manu1> +1 16:00:22 <MacTed> Ted: +1 16:00:24 <ShaneM> +1 16:00:25 <niklasl> +1 16:00:36 <scor> +1 16:00:38 <manu1> RESOLVED: Change XML+RDFa such that RDFa attributes are defined in 'no namespace' and XHTML namespace and also caution authors that they must pay attention to XML-based languages where the RDFa attributes and Host Language attributes may overlap. 16:01:23 <manu1> Topic: @resource in RDFaLite 1.1 16:03:53 <MacTed> Ivan: We need to make a decision on whether or not we're going to replace @about with @resource in RDFa Lite 1.1 16:04:53 <MacTed> Manu: To avoid another LC, we should note it clearly in the document that we /may/ do this after LC, so folks should weigh in on it during LC. That'll cover us from having to do another LC. 16:04:53 <MacTed> Ivan: We need to make this decision eventually. 16:06:53 <MacTed> RRSAgent, draft minutes 16:06:53 <RRSAgent> I have made the request to generate MacTed 16:06:56 <MacTed> RRSAgent, make logs public 16:06:57 <Zakim> -Ivan 16:06:59 <Zakim> -manu1 16:07:04 <MacTed> trackbot, end meeting 16:07:04 <trackbot> Zakim, list attendees 16:07:04 <Zakim> As of this point the attendees have been scor, MacTed, +1.540.961.aaaa, manu1, niklasl, +1.612.217.aabb, ShaneM, Ivan 16:07:07 <Zakim> -scor 16:07:07 <trackbot> RRSAgent, please draft minutes 16:07:07 <RRSAgent> I have made the request to generate trackbot 16:07:08 <trackbot> RRSAgent, bye 16:07:08 <RRSAgent> I see no action items 16:07:09 <Zakim> -ShaneM 16:07:10 <Zakim> -niklasl # SPECIAL MARKER FOR CHATSYNC. DO NOT EDIT THIS LINE OR BELOW. SRCLINESUSED=00000140
http://www.w3.org/2010/02/rdfa/wiki/Chatlog_2012-01-19
CC-MAIN-2014-52
refinedweb
1,868
67.79
0 Hi there, I'm back again and this time I have another question: So I figured out that I have to use Array Lists for my last question, but now, I have another problem: How do I get all the user inputs separated into different strings so that the user may be able to input data about those separate strings? The code below is what I have so far, I'm sorry if my stupidity is over 9000 lol: import java.util.Scanner; import java.util.ArrayList; public class SheetCounter { static Scanner userin = new Scanner (System.in); static ArrayList <String> bleh = new ArrayList <String>(); static ArrayList <Integer> numberofsheets = new ArrayList <Integer>(); public static void main (String[] args) { System.out.println("Welcome to the (Creative Name Here)"); String test = ""; while (test != "n") { System.out.println("Please input the name of a user you wish to count:"); bleh.add(userin.next()); test = userin.nextLine(); System.out.println("Do you wish to add any other names?"); System.out.println("Type 'y' if you do, type 'n' if you do not"); test = userin.nextLine(); if (test.equals("n")) { break; } } System.out.println("These are all the users you have added: " + bleh); } } Edited by PratikM
https://www.daniweb.com/programming/software-development/threads/461716/array-list-input-question
CC-MAIN-2018-43
refinedweb
201
66.54
08 May 2012 07:34 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> The base consists 17 tanks with a capacity of 100,000 cbm each and construction of the facility was completed on 26 April, according to a statement on Sinopec's online newswire. The company has been operating a 1.1m cbm Phase I facility which comprise 11 tanks of a capacity of 100,000 cbm each at the same site since 29 September 2011, the statement said. Crude oil stored at both the Phase I and II facilities will be primarily distributed to refineries in the east coast of China, it said. The Phase II crude storage base is the final part of the Chinese oil and petrochemical major’s commercial crude storage project at the site in Rizhao,
http://www.icis.com/Articles/2012/05/08/9557113/chinas-sinopec-to-operate-phase-ii-rizhao-crude-storage-in-june.html
CC-MAIN-2014-41
refinedweb
131
67.59
A few days ago, Google released the official YouTube Chromeless Player AS3 API that gives us an exciting perspective of building flash applications integrating seamless Youtube videos. Check out their announcement and the YouTube ActionScript 3.0 Player API Reference Now let’s see a simple example that integrates a YouTube video into an ActionScript 3 Flash application with this API. 1. Create a new flash file (actionscript 3) and save it as youtube.fla. Set the stage size to 640×360 pixels. Rename “layer 1 ” to “actions” and open the actions panel. 2. Following the YouTube ActionScript 3.0 Player API Reference, copy and paste the following code: import flash.system.Security; Security.allowInsecureDomain("*"); Security.allowDomain("*"); // This will hold the API player instance once it is initialized. var player:Object; var loader:Loader = new Loader(); loader.contentLoaderInfo.addEventListener(Event.INIT, onLoaderInit);; player.loadVideoById("nxrmJtaZBA0"); player.setSize(640,); } We display the video by calling the loadVideoById(videoId:String) method that loads and plays the specified video. 3. That’s it, test the movie to see it in action. very usefull but i when i publish it i get some error player ready: player state: -1 video quality: medium Erreur d’ouverture de l’URL ‘’ player state: 3 player state: 1 Erreur d’ouverture de l’URL ‘ &fexp=903900,904510&ps=chromeless&cfps=0&bd=17424&bc=19472&len=601&hbd=6766308&ns=yt&hbt=7.151&plid=&splay=1&scoville=1&h=360&bt=0.328&docid=uLEMqN-Za_U&w=480&rt=1.211&vid=_LO0hpCkOM-jfg_3mTXcofS6u7mu_SgkC&fv=WIN 10,0,2,54&el=embedded&et=0.067&fmt=5’ Erreur d’ouverture de l’URL ‘’ and some ID doesnt work ID of the video i used : uLEMqN-Za_U How do i do into have my swf exported ? when i compil it and i play my swf out of flash i have no video which playing but those error SecurityError: Error #2028: Le fichier SWF local-système de fichiersétaire/Bureau/MYSPACE%20ALL/Mon%20myspace/youtube.swf ne peut pas accéder à l’URL Internet. at flash.display::Loader/_load() at flash.display::Loader/load() at youtube_fla::MainTimeline/frame1() thanks Thanks, that will be helpful for my future work Pingback: 45+ Advanced Adobe Flash Actionscript Trainings | Master Design Seek and yea shall find! This is an incredible. You just saved me days of anquish! Thanks you very much! Digital D Pingback: Pambaa – Develop with the Official YouTube Chromeless Player AS3 API awesome thanks! One question… i’m trying to do is to allow a user to paste any youtube video URL into the input text field and when the user presses a button the video starts to play Pingback: Advanced Adobe Flash Actionscript Trainings « Flash Criminals You should change the youtube video ’cause the one you have is restricted, it does not show up in Italy. @ GIUSEPPE: Hi, thanks for your remark, we’ve changed the video. Indeed the old one has been restricted since. Thanks again. This is great just what i needed. Hope it works the way i think. Pingback: The Best of 2009 in Flash Web Design - Flash Web Design and Design Photography | DesignOra I have done this and it works perfectly, but i was wondering if it was possible to have this on a different frame other than frame one. I have tried it but the video just won’t play then. Any help is highly appreciated. Hello, I’m not clear as to whether or not the new AS3 API still requires Javascript like the AS2 version did. lee Hi, this is the first time that I have been able to get YouTube integrated into my flash site. I took your formula and placed it in one of my pages, adjusted the x and y coordinates, now when you go to that page it plays automatically. Now my problem is getting it to stop. What code do I use to have the movie stop when the user clicks another frame? Rob. Pingback: Free I Share 分享资源 分享快乐 » Blog Archive » 45个高级Flash Actionscript应用 Pingback: Create a 3D Flipping Youtube Player | Web design studio Pingback: 45+ Advanced Adobe Flash Actionscript Trainings - Flash24h.com | Thế giới Flash của bạn! does anyone know why Im getting a blank screen then an output message saying Error opening URL ‘’ Error #2044: Unhandled IOErrorEvent:. text=Error #2035: URL Not Found. Great work!! thanks! One more question… i’m trying to do a function that stop the video (and sound in background) I could remove the instance but the sound keep playing… The YouTube ActionScript 3.0 Player API Reference tells us to use player.destroy() ??? Forget my last post, player.destroy() works !!!!! Hi! Awesome, thx! Is it possible to play private videos? Can I pass some account credentials to get a private video running? Any help is appreciated! Thx Sab Pingback: 45+ Advanced Tutorials of Adobe Flash ActionScript « CSS Tips Pingback: 45+ Advanced Tutorials of Adobe Flash ActionScript | JS Tips great tutotial…..great work…… 🙂
http://www.riacodes.com/flash/develop-with-the-official-youtube-chromeless-player-as3-api/
CC-MAIN-2019-09
refinedweb
830
64.41
Struts Projects learning easy Using Spring framework in your application Project in STRUTS Framework using MYSQL database as back end Struts Projects are supported be fully... Struts Projects Easy Struts Projects to learn and get into development struts Hi, i want to develop a struts application,iam using eclipse... as such. Moreover war files are the compressed form of your projects... projects so you need not worry about them. have you read document struts struts shopping cart project in struts with oracle database connection shopping cart project in struts with oracle database connection Have a look at the following link: Struts Shopping Cart using MySQL ms access ms access how to delete the autonumber from ms access using java delete code Struts - Struts are looking for Struts projects to learn struts in details then visit at http...Struts What is Struts Framework? Hi,Struts 1 tutorial with examples are available at Struts 2 Tutorials...("jdbc:oracle:thin:@localhost:1521:orcl","system","system Struts Articles ) framework, and has proven itself in thousands of projects. Struts was ground-breaking... can be implemented in many ways using Struts and having many developers working... experience using Struts in a Servlet environment and that you want to take advantage struts struts <p>hi here is my code in struts i want to validate my... }//execute }//class struts-config.xml <struts..."/> </plug-in> </struts-config> validator... in nature because: ? Building configuration using maven are portable to another java projects java projects i have never made any projects in any language. i want to make project in java .i don't know a bit about this .i am familar with java.please show me the path please...... Hi, You can develop Outsourcing PHP Projects, Outsource PHP Projects , design, develop and deliver your PHP projects using Agile and Scrum development... and personal websites using PHP and MySQL technologies. Outsource PHP projects...Outsourcing PHP Projects - Outsource your PHP development projects Connecting Oracle database with struts - Struts Connecting Oracle database with struts Can anyone please provide me some solutions on Connection between Oracle database and struts Java projects Struts Struts How to retrive data from database by using Struts How to build a Struts Project - Struts How to build a Struts Project Please Help me. i will be building a small Struts Projects, please give some Suggestion & tips Open Source projects open source projects remain secret. Lots of companies are using our products...Open Source projects Mono Open Source Project Mono provides...; SGI Open Source Project List The following projects Marketing projects Java Marketing projects Java Marketing struts Hi how struts flows from jsp page to databae and also using validation ? Thanks Kalins Na excel to ms access excel to ms access hello sir i am developing a application which requires extracting data from excel sheet and place in the ms access database using java programming for example: in excel sheet i have 2 columns and more than Struts Books for applying Struts to J2EE projects and generally accepted best practices as well... Struts, and it is quite easy to implement the MVC architecture using the standard... using Struts, and his focus on the 1.1 version of the Framework Upload file to MS Access Upload file to MS Access Iam beginner and need to upload file to msaccess using jsp . Iam using netbeans platform. Kindly explain do I need to mention Attachment format in the JAva Projects - Java Magazine JAva Projects I need Some Java Projects struts struts how to generate bar code for continuous numbers in a loop using struts ... or atleast for a given number struts struts Hi,... please help me out how to store image in database(mysql) using struts problem in jsp using ms-access problem in jsp using ms-access after starting server(tomcat) wen v... and select the driver Microsoft Access Driver(*.mdb). 3)After selecting... the driver Microsoft Access Driver(*.mdb). 3)After selecting the driver, click struts struts hi i would like to have a ready example of struts using"action class,DAO,and services" so please help me Struts Tutorial: Struts 2 Tutorial for Web application development, Jakarta Struts Tutorial of Jakarta Struts shows you how to develop Struts applications using ant and deploy... Struts 1 is reached end of life of the project. So, if you are still using... Struts 2 Tutorials - Jakarta Struts Tutorial Learn Struts Threads in realtime projects Threads in realtime projects Explain where we use threads in realtime projects with example struts technologies like servlets, jsp,and struts. i am doing one struts application where i doing validations by using DynaVAlidatorForm in that have different fields... the following links: java &ms access - JDBC java &ms access Seasons Greetings, Am, creating an application for a food festival using Java swing as front end and MS access as Backend... ve managed to populate the JComboBox using MS Access. The problem is only 8 per MS Access connct using flex with java MS Access connct using flex with java Hi All, Can anybody help me how to connect to the MS access database and getting the data using flex in java technology with tomcat server what is struts? - Struts what is struts? What is struts?????how it is used n what... of the Struts framework is a flexible control layer based on standard technologies like Java... Commons packages. Struts encourages application architectures based on the Model 2 java projects - Java Beginners java projects hi, im final yr eng student.plz give me latest java or web related topics for last yr integration with EJB in JBOSS3.2 is to write about EJB2.0 in JBOSS3.2 using STRUTS FRAMEWORK. The EJB... projects group. Struts is a frame work for building really complex..., Struts can interact with standard data access technologies, like JDBC Hi - Struts know it is possible to run struts using oracle10g....please reply me fast its...:// Thanks. Hi Soniya, We can use oracle too in struts... ,usr and pwd.There r no problems in using oracle in struts.U can go ahead easily projects on cyber cafe projects on cyber cafe To accept details from user like name Birth date address contact no etc and store in a database Hi Friend, Try this: import java.awt.*; import java.sql.*; import javax.swing.*; import 2 project samples struts 2 project samples please forward struts 2 sample projects like hotel management system. i've done with general login application and all. Ur answers are appreciated. Thanks in advance Raneesh struts database program struts database program Can u show me 1 example of Struts jsp using oracle 10g as database! with struts config file Struts 2 Tutorial big projects. Struts 2 Actions Struts 2 Actions.... Validations using Struts 2 Annotations... that are available for Struts 2 Development using Eclipse IDE.   Apache Ant - Building Simple Java Projects Apache Ant - Building Simple Java Projects  ... will introduce you to Apache's Ant technology, which is a mean to build java projects... to build his/her own java projects; and if you are already exposed to Ant, then Current Running Projects on roseindia.net communication tutorial design using ASP.NET - Call Flow 1 SS7 real message communication using TCP/IP Sockets using C++ - Call Flow 1 For complete list of Projects... Current Running Projects on roseindia.net   how can i get dynavalidation in my applications using struts... : *)The form beans of DynaValidatorForm are created by Struts and you configure in the Struts config : *)The Form Bean can be used jdbc warning regarding to ms access jdbc warning regarding to ms access shows warning msg while compiling using ms access : warning: sun.jdbc.odbc.JdbcOdbcDriver is Sun proprietary API..., it will give you the warning. You can suppress the warning using following code access data from mysql through struts access data from mysql through struts I am Pradeep Kundu. I am making a program in struts in which i want to access data from MySQL through struts. I am using Strut 1.3.8 , Netbean 6.7.1 and MySQL 5.5. In this program ,I want Struts 2 - History of Struts 2 ; Strut2 contains the combined features of Struts Ti and WebWork 2 projects... Struts 2 History Apache Struts... taken over by the Apache Software Foundation in 2002. Struts have provided how to insert values from jsp into ms access how to insert values from jsp into ms access how to insert values using jsp into ms access database Struts - Framework using the View component. ActionServlet, Action, ActionForm and struts-config.xml... Access and Java - Java Beginners who work in projects, also computer based systems that uses following programs: MS Access, A employee database using Access and SQL which keeps the records...Access and Java A construction company has specialized in planning MS access MS access how do i access my Ms-Access file placed in the same jar file where my application code/class file r present??? Want to access it via Code. Can anyone help me ? Please give reply urgent... give me Struts hi can anyone tell me how can i implement session tracking in struts? please it,s urgent........... session tracking? you mean session management? we can maintain using class HttpSession. the code follows Java - Struts . It is serialize, has no-argument constructor, and allows access to properties using... be done in Struts program? Hi Friend, 1)Java Beans are reusable software Ms Access Ms Access How to get query for Primary key from MsAccess? SELECT column_name FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE WHERE table_name = 'data'; SELECT column_name FROM INFORMATION_SCHEMA.KEY_COLUMN
http://www.roseindia.net/tutorialhelp/comment/93708
CC-MAIN-2014-52
refinedweb
1,588
65.12
Content HAL API Introduction The Content HAL API add-on enables you to provide JSON Hypertext Application Language based REST APIs from your delivery applications. JSON Hypertext Application Language is a specification draft for HATEOAS (Hypermedia as the Engine of Application State), a constraint of the REST application architecture. Responses of HAL services are in the application/hal+json media type, and the HAL specification mostly uses Plain Old JSON as its primary design goals are generality and simplicity. Swagger API Document Support The Content HAL API Add-on supports Swagger API Document URL by default. For example, if you installed the Content HAL API on /api mount (e.g,), then the Swagger API Document URL will be available automatically at /api/api-docs/swagger.json (e.g,) by default. So, if you installed a Swagger UI web application, then you can navigate the Content HAL API by explorering the Swagger Document URL (e.g,). Content HAL API URL Patterns See the API page how to access the REST APIs. Every Response Body is a HAL Resource In a HAL API like the Content HAL API Add-on, every response must be a HAL resource. Minimal HAL Resource Example A minimal HAL resource can be an empty JSON object: { } Typical HAL Resource Example However, an empty HAL resource wouldn't be very useful, so it is very common to have links including "self" at least in a reserved property, "_links", like the following example: { "_links": { "self": { "href" : "/orders/523" } } } So, it represents a resource with the link to itself at least. But, in practical use cases, a resource should have more fields like the following example: { "_links": { "self": { "href": "/orders/523" }, "invoice": { "href": "/invoices/873" } }, "currency": "USD", "status": "shipped", "total": 10.20 } A HAL resource can have properties with any names (currency, status, total, etc.) with links (in a reserved property, "_links") as long as it is a valid JSON object. HAL Resource Example Embedding Other Resource(s) A HAL resource can embed other resources, and the embedded resource may include all the fields or parts of its fields: { "_links": { "self": { "href": "/orders" } }, "_embedded": { "orders": [{ "_links": { "self": { "href": "/orders/523" } }, "total": 30.00 },{ "_links": { "self": { "href": "/orders/524" } }, "total": 20.00 }] } "currentlyProcessing": 14, "shippedToday": 20 } So, suppose you get the above response from. The resource in this response may include all the order collection data inside the reserved property, "_embedded", as well as other fields, like shown above. How Hippo Document is mapped to HAL Resource Now, the question is how a Hippo document is mapped to a HAL resource. That's where the Content HAL API is expected to contribute. The Content HAL API Add-on takes a straightforward approach. It takes the field names which were already defined through document type (a.k.a "namespace") designs. Let's see the following document type example ("Event" document type). The "Event" document type contains multiple fields including "title", "introduction", "content", "startdate", "enddate", "location", etc. In "Path" configuration, it is defining the logical field names ("title", "introduction", "content", "startdate", "enddate", "location", etc.) which are different from the captions such as "Title", "Introduction", "Content", etc. By the way, if you look at those in CMS Console, you will realize that those "Path" configurations are actually stored in hipposysedit:nodetype/hipposysedit:nodetype configuration in a document type (a.k.a "namespace") definition. As the captions are only for displaying purpose, it makes sense to take the logical field names configured in "Path" configurations and use those for JSON properties in HAL resource representations like the following example: So, an "Event" document can be represented as a JSON HAL resource by converting all the fields of the document into JSON properties by adopting the same logical field names (as configured in "Path") like the example shown above. Also, it can have links under the reserved "_links" property in the resource object to represent its own link ("self") that can be generated by HST-2 API. Now, how about a compound field like "content" field which contains RichText data? The same rule applies to that, too! A "Event" document has the "content" field which is type of "hippostd:html". It means that we can convert the "hippostd:html" node to another JSON object by converting each property of "hippostd:html" node type. An example is shown below: Since "hippostd:html" node type contains a "content" property, the "content" field is expanded to a new JSON object containing "content" property that is expanded to a string JSON property. By the way, it is just a coincidence to have two "content" fields in the example; one is defined in "Event" document, and the other is defined in "hippostd:html" compound node type itself. If you named the field in "Event" document to something else like "body", the first one would have been "body" instead. Now, how about linked images/assets or documents in a document? For example, an "Event" document can have "image" field that has a link data to other CMS image asset node. How can we represent those links in a document in a HAL resource? Since the "image" field contains only link information only ("hippo:docbase") which is very CMS-specific, it makes sense to include those metadata in a HAL resource, in a special property, "_meta". All the CMS-specific metadata are included in a reserved property, "_meta", like the example shown above. The "_meta" property may contains JCR primary type name ("type"), and it may contain the link information in "mirror" (including both node UUID and node path for developer's convenience) JSON object if there's any link data. By the way, unlike "_links", "_meta" is introduced by Content HAL API Add-on as a HAL extension. How to Include Site Link? By default, a HAL resource includes a "self" link for a document (e.g, a document at content/documents/myproject/events/2019/01/introduction-speech) like the following example: { "_links": { "self": { "href": "" } } } Suppose you want to include "site" link as well like the following: { "_links": { "self": { "href": "" }, "site": { "href": "" } } } In this case, you should configure HST-2 mount alias mapping properly. Here are the configuration steps you should follow: - Set hst:alias property of the parent mount (e.g, /hst:root) of Content HAL API mount (e.g, /hst:root/api) to "site". That is, /hst:root/@hst:alias="site". - Add hst:mountsite property (String) with a value, "site", to the Content HAL API mount (e.g, /hst:root/api). That is, /hst:root/api/@mountsite="site" - Then Content HAL API can find the parent mount through the mapping (@hst:mountsite="site"), in its mount configuration, to the specific mount having @hst:alias="site" (i.e. /hst:root), and generate the "site" link to include it in _links property.
https://documentation.bloomreach.com/14/library/enterprise/services-features/content-hal-api/content-hal-api.html
CC-MAIN-2021-31
refinedweb
1,130
52.09