text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
[Raymond Hettinger] I'm quite pleased with the version already in CVS. It is a small masterpiece of exposition, sophistication, simplicity, and speed. A class based interface is not necessary for every algorithm. [David Eppstein] It has some elegance, but omits basic operations that are necessary for many heap-based algorithms and are not provided by this interface. I think Raymond was telling you it isn't intended to be "an interface", rather it's quite up-front about being a collection of functions that operate directly on a Python list, implementing a heap in a very straightforward way, and deliberately not trying to hide any of that. IOW, it's a concrete data type, not an abstract one. I asked, and it doesn't feel like apologizing for being what it is <wink>. That's not to say Python couldn't benefit from providing an abstract heap API too, and umpteen different implementations specialized to different kinds of heap applications. It is saying that heapq isn't trying to be that, so pointing out that it isn't falls kinda flat.. Then some of those will want a different implementation of a heap. The algorithms in heapq are still suitable for many heap applications, such as maintaining an N-best list (like retaining only the 10 best-scoring items in a long sequence), and A* on a search tree (when there's only one path to a node, decrease-key isn't needed; A* on a graph is harder). To see how important the lack of these operations is, I decided to compare two implementations of Dijkstra's algorithm. I don't think anyone claimed-- or would claim --that a heapq is suitable for all heap purposes., A heapq *is* a list, so you could loop over the list to find an old object. I wouldn't recommend that in general <wink>, but it's easy, and if the need is rare then the advertised fact that a heapq is a plain list can be very convenient. Deleting an object from "the interior" still isn't supported directly, of course. It's possible to do so efficiently with this implementation of a heap, but since it doesn't support an efficient way to find an old object to begin with, there seemed little point to providing an efficient delete-by-index function. Here's one such: import heapq def delete_obj_at_index(heap, pos): lastelt = heap.pop() if pos >= len(heap): return # The rest is a lightly fiddled variant of heapq._siftup. endpos = len(heap) # Bubble up the smaller child until hitting a leaf. childpos = 2*pos + 1 # leftmost child position while childpos < endpos: # Set childpos to index of smaller child. rightpos = childpos + 1 if rightpos < endpos and heap[rightpos] <= heap[childpos]: childpos = rightpos # Move the smaller child up. heap[pos] = heap[childpos] pos = childpos childpos = 2*pos + 1 # The leaf at pos is empty now. Put lastelt there, and and bubble # it up to its final resting place (by sifting its parents down). heap[pos] = lastelt heapq._siftdown(heap, 0, pos). It surprised me that you tried using heapq at all for this algorithm. I was also surprised that you succeeded <0.9 wink>. -). Rest easy, it's not.. Depends on the specific algorithms in question, of course. No single heap implementation is the best choice for all algorithms, and heapq would be misleading people if, e.g., it did offer a decrease_key function -- it doesn't support an efficient way to do that, and it doesn't pretend to. - In Dijkstra's algorithm, it was easy to identify and ignore outdated heap entries, sidestepping the inability to decrease keys. I'm not convinced that this would be as easy in other applications of heaps. All that is explaining why this specific implementation of a heap isn't suited to the task at hand. I don't believe that was at issue, though. An implementation of a heap that is suited for this task may well be less suited for other tasks. -. You can wrap any interface you like around heapq (that's very easy to do in Python), but it won't change that heapq's implementation is poorly suited to this application. priorityDictionary looks like an especially nice API for this specific algorithm, but, e.g., impossible to use directly for maintaining an N-best queue (priorityDictionary doesn't support multiple values with the same priority, right? if we're trying to find the 10,000 poorest people in America, counting only one as dead broke would be too Republican for some peoples' tastes <wink>). OTOH, heapq is easy and efficient for *that* class of heap application. Overall, while heapq was usable for implementing Dijkstra, I think it has significant shortcomings that could be avoided by a more well-thought-out interface that provided a little more functionality and a little clearer separation between interface and implementation. heapq isn't trying to separate them at all -- quite the contrary! It's much like the bisect module that way. They find very good uses in practice. I should note that I objected to heapq at the start, because there are several important heap implementation techniques, and just one doesn't fit anyone all the time. My objection went away when Guido pointed out how much like bisect it is: since it doesn't pretend one whit to generality or opaqueness, it can't be taken as promising more than it actually does, nor can it interfere with someone (so inclined) defining a general heap API: it's not even a class, just a handful of functions. Useful, too, just as it is. A general heap API would be nice, but it wouldn't have much (possibly nothing) to do with heapq.
https://mail.python.org/archives/list/python-dev@python.org/message/C4I7V774OQPZAU6R7NFMWXLEYFBWS4M5/
CC-MAIN-2021-21
refinedweb
962
61.77
16×2 LCD Interface With Arduino Today I am going to tell you how to interface 16×2 LCD with Arduino. Circuit Diagram LCD Display has very importance in embedded systems. It is used to interact humans with system. Instructions and device function results are shown on displays. Without display we cannot see what operation is being performed inside the arduino/microcontroller. LCDs are most widely used in embedded systems to provide user interface. In this example I am using HD44780 based 16×2 LCD. This LCD can be interfaced using 8 bit or 4 bit mode. In 8 bit mode all 8 data pins are used while in 4 bit mode only 4 pins of LCD are used. In this project I have used 4 bit mode of operation. 16×2 means it has 16 columns and 2 rows. HD44780 controller is installed on other sizes of LCDs like 16×1, 16×4, 20×2, 20×4 etc. Pin-Out We can read & write data to LCD but to keep things simple we have hardwired R/W line to ground for only writing. It means we can only print on LCD but cannot read back content written in LCD RAM. Software In arduino we use builtin library for LCD interfacing “LiquidCrystal.h”. And this makes LCD interfacing with arduino simple and easy. #include <LiquidCrystal.h> // Define object of LiquidCrystal class. LiquidCrystal lcd(7, 6, 5, 4, 3, 2); void setup() { // put your setup code here, to run once: lcd.begin(16, 2); // Print welcome message // to LCD. lcd.print("Welcome!"); // Set cursor to next line and // first column. lcd.setCursor(0,1); lcd.print("micro-digital.net"); } void loop() { // put your main code here, to run repeatedly: }
http://www.micro-digital.net/16x2-lcd-interface-with-arduino/
CC-MAIN-2017-51
refinedweb
288
67.25
/* ******************************************************************************* * * Copyright (C) 1999-2008, International Business Machines * Corporation and others. All Rights Reserved. * ******************************************************************************* * file name: ucol_wgt.h * encoding: US-ASCII * tab size: 8 (not used) * indentation:4 * * created on: 2001mar08 * created by: Markus W. Scherer */ #ifndef UCOL_WGT_H #define UCOL_WGT_H #include "unicode/utypes.h" #if !UCONFIG_NO_COLLATION /* definitions for CE weights */ 00026 typedef struct WeightRange { uint32_t start, end; int32_t length, count; int32_t length2; uint32_t count2; } WeightRange; /** * Determine heuristically * what ranges to use for a given number of weights between (excluding) * two limits. * * @param lowerLimit A collation element weight; the ranges will be filled to cover * weights greater than this one. * @param upperLimit A collation element weight; the ranges will be filled to cover * weights less than this one. * @param n The number of collation element weights w necessary such that * lowerLimit<w<upperLimit in lexical order. * @param maxByte The highest valid byte value. * @param ranges An array that is filled in with one or more ranges to cover * n weights between the limits. * @return number of ranges, 0 if it is not possible to fit n elements between the limits */ U_CFUNC int32_t ucol_allocWeights(uint32_t lowerLimit, uint32_t upperLimit, uint32_t n, uint32_t maxByte, WeightRange ranges[7]); /** * Given a set of ranges calculated by ucol_allocWeights(), * iterate through the weights. * The ranges are modified to keep the current iteration state. * * @param ranges The array of ranges that ucol_allocWeights() filled in. * The ranges are modified. * @param pRangeCount The number of ranges. It will be decremented when necessary. * @return The next weight in the ranges, or 0xffffffff if there is none left. */ U_CFUNC uint32_t ucol_nextWeight(WeightRange ranges[], int32_t *pRangeCount); #endif /* #if !UCONFIG_NO_COLLATION */ #endif
http://icu.sourcearchive.com/documentation/4.8.1-2/ucol__wgt_8h_source.html
CC-MAIN-2018-13
refinedweb
265
57.87
The QSharedMemory class provides access to a shared memory segment. More... #include <QSharedMemory> Inherits QObject. This class was introduced in Qt 4.4.: Remember to lock the shared memory with lock() before reading from or writing to the shared memory, and remember to release the lock with unlock() after you are done. Unlike QtSharedMemory, QSharedMemory automatically destroys the shared memory segment when the last instance of QSharedMemory is detached from the segment, and no references to the segment remain. Do not mix using QtSharedMemory and QSharedMemory. Port everything to QSharedMemory. Constructs a shared memory object with the given parent and with its key set to key. Because its key is set, its create() and attach() functions can be called. See also setKey(), create(), and attach(). Constructs a shared memory object with the given parent. The shared memory object's key is not set by the constructor, so the shared memory object does not have an underlying shared memory segment attached. The key must be set with setKey() before create() or attach() can be used. See also setKey().(). Attempts to attach the process to the shared memory segment identified by the key that was passed to the constructor or to a call to set(). Creates a shared memory segment of size bytes with the key passed to the constructor or set with setKey(), attaches to the new shared memory segment with the given access mode, and returns true. If a shared memory segment identified by the key already exists, the attach operation is not performed, and false is returned. When the return value is false, call error() to determine which error occurred. See also error(). Returns a pointer to the contents of the shared memory segment, if one is attached. Otherwise it returns null. Remember to lock the shared memory with lock() before reading from or writing to the shared memory, and remember to release the lock with unlock() after you are done. See also attach(). This is an overloaded member function, provided for convenience.(). Returns a value indicating whether an error occurred, and, if so, which error it was. See also errorString(). Returns a text description of the last error that occurred. If error() returns an error value, call this function to get a text string that describes the error. See also error(). Returns true if this process is attached to the shared memory segment. See also attach() and detach(). Returns the key assigned to this shared memory. The key is the identifier used by the operating system to identify the shared memory segment. When QSharedMemory is used for interprocess communication, the key is how each process attaches to the shared memory segment through which the IPC occurs. See also setKey(). This is a semaphore that locks the shared memory segment for access by this process. If another process has locked the segment, lock() will block until the lock is released. It returns true if it obtains the lock. It should always return true. If it returns false, it means you have a program bug. If your code calls lock() when you already have the lock, your process will block forever. See also unlock() and data(). Sets a new key for this shared memory object. If key and the current key are the same, the function returns without doing anything. If the shared memory object is attached to an underlying shared memory segment, it will detach from it before setting the new key. This function does not do an attach(). See also key() and isAttached(). Returns the size of the attached shared memory segment. If no shared memory segment is attached, 0 is returned. See also create() and attach(). Releases the lock on the shared memory segment and returns true, if the lock is currently held by this process. If the segment is not locked, or if the lock is held by another process, nothing happens and false is returned. See also lock().
http://doc.qt.nokia.com/4.4/qsharedmemory.html
crawl-003
refinedweb
653
67.45
Java SE 7 type inference I taught an introductory Java session on generics, and of course demonstrated the shorthand introduced in Java SE 7 for instantiating an instance of a generic type: // Java SE 6 List<Integer> l = new ArrayList<Integer>(); // Java SE 7 List<Integer> l = new ArrayList<>(); This inference is very friendly, especially when we get into more complex collections: // This Map<String,List<String>> m = new HashMap<String,List<String>>(); // Becomes Map<String,List<String>> m = new HashMap<>(); Not only the key and value type of the map, but the type of object stored in the collection used for the value type can be inferred. Of course, sometimes this inference breaks down. It so happens I ran across an interesting example of this. Imagine populating a set from a list, so as to speed up random access and remove duplicates. Something like this will work: List<String> list = ...; // From somewhere Set<String> nodup = new HashSet<>(list); However, this runs into trouble if the list could be null. The HashSetconstructor will not just return an empty set but will throw NullPointerException. So we need to guard against null here. Of course, like all good programmers, we seize the chance to use a ternary operator because ternary operators are cool. List<String> list = ...; // From somewhere Set<String> nodup = (null == list) ? new HashSet<>() : new HashSet<>(list); And here’s where inference breaks down. Because this is no longer a simple assignment, the statement new HashSet<>() can no longer use the left hand side in order to infer the type. As a result, we get that friendly error message, “ Type mismatch: cannot convert from HashSet<capture#1-of ? extends Object> to Set<String>”. What’s especially interesting is that inference breaks down even though the compiler knows that an object of type Set<String> is what is needed in order to gain agreement of types. The rules for inference are written to be conservative by doing nothing when an invalid inference might cause issues, while the compiler’s type checking is also conservative in what it considers to be matching types. Also interesting is that we only get that error message for the new HashSet<>(). The statement new HashSet<>(list) that uses the list to populate the set works just fine. This is because the inference is completed using the listparameter. Here’s the constructor: public class HashSet<E> extends ... implements ... { ... public HashSet(Collection<? extends E> c) { ... } ... } The List<String> that we pass in gets captured as Collection<? extends String> and this means that E is bound to String, so all is well. As a result, we wind up with the perfectly valid, if a little funny looking: List<String> list = ...; // From somewhere Set<String> nodup = (null == list) ? new HashSet<String>() : new HashSet<>(list); Of course, I imagine most Java programmers do what I do, which is try to use the shortcut and then add the type parameter when the compiler complains. Following the rule about not meddling in the affairs of compilers (subtle; quick to anger), normally I would just fix it without trying very hard to understand why the compiler liked or didn’t like things done in a certain way. But this one was such a strange case I figured it was worth a longer look. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/generics-and-capture
CC-MAIN-2017-26
refinedweb
560
61.16
): #include "sizecbar.h" #include "scbarg.h" 3. Derive a class from CSizingControlBarG (you have an example in mybar.* files). 4. In mainfrm.h, include your class' header: #include "mybar.h"then add a member variable. Window IDs:, controls on dialog bars, panes of status bar, etc.). However, I found it very useful to update the look of the "x" flat button in CSizingControlBarG and the color of the caption in CSizingControlBarCF ).. Precompiler flags: There are 2 symbols which can be defined to cause the floating bars to have different appearance and functionality: _SCB_REPLACE_MINIFRAMEcan be used to plug in CSCBMiniDockFrameWnd, which is a custom miniframe class. The main gain of using this class is that the floating bars can be resized dynamically, like all other windows. The other advantage is that the miniframe caption can be turned off, allowing the bar to display its own gripper, for increased functionality and/or custom designs. m_pFloatingFrameClassmember of the main frame (see the advanced example above). _SCB_MINIFRAME_CAPTIONcan be defined only if the previous flag is also defined. It causes the custom miniframe to keep the tool window caption. CSizingControlBarGand CSizingControlBarCFclasses do not display a gripper when floating if this flag is set. See also for class reference, a full changelog, FAQ, a dedicated message board and more. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/toolbars/sizecbar.aspx
crawl-002
refinedweb
220
56.96
Hi! Jan Nieuwenhuizen <address@hidden> skribis: > Ludovic Courtès writes: > > Hello! > >> Jan Nieuwenhuizen <address@hidden> skribis: >> >>>>> +#if !__GNU__ >>>>> int status = pid.wait(true); >>>>> if (status != 0) >>>>> throw Error(format("cannot kill processes for uid `%1%': %2%") % >>>>> uid % statusToString(status)); >>>>> +#endif >>>> >>>> Do you know what the rationale was? It looks like it could leave >>>> zombies behind us. >>> >>> No, maybe Manolis knows? What I do know is why I used the patch: before >>> applying this patch I could only build up to binutils-boot0. >>> binutils-boot0 would always fail like so >>> >>> ./pre-inst-env guix build -e '(@@ (gnu packages commencement) >>> binutils-boot0)' --no-offload >>> XXX fails: Workaround for nix daemon >>> phase `compress-documentation' succeeded after 0.4 seconds >>> error: cannot kill processes for uid `999': Operation not permitted >>> guix build: error: cannot kill processes for uid `999': failed with exit >>> code 1 >> >> But is the build process actually running as UID 999? If you pass >> ‘--disable-chroot’, then I think build users are not used at all, right? > > It seems that they are; I'm running Oh, OK. […] >> Other options: >> >> 1. Implement clone(2) with CLONE_NEW* in libc on GNU/Hurd. >> >> 2. Add a “sandbox” abstraction in the daemon, with OS-specific >> implementations of the abstraction (the Nix daemon did that at some >> point, with the goal of supporting proprietary macOS etc.) >> >> For GNU/Linux, it’d use chroot(2)+clone(NEWNS) etc. as root. >> >> On GNU/Hurd, it could spawn the process in a sub-Hurd, i.e., with >> its own proc server, root file system server, and without a pfinet >> server running. >> >> Option #2 can be fun to implement and probably easier and less >> controversial than Option #1. However, it does mean adding more code of >> the C++ code base, which is sad. > > I'm assuming that 1.is what Manolis wanted to support with his > libhurdutil? In fact, I forward ported (minimal effort) the patch > > > > > but haven't tried linking against this yet. That would be a nice first > step. 2. sounds fun, but it would need more getting familiar with the > Hurd for me :-) You never know.. I suppose the commit you link to could have been used by libc to implement #1? Oh, actually, IIRC, Manolis was working on implementing mount(2) and umount(2) in libc (which would also be needed), and probably the settrans utilities were part of that effort. Thanks, Ludo’.
https://lists.gnu.org/archive/html/bug-guix/2020-03/msg00154.html
CC-MAIN-2022-27
refinedweb
396
66.74
Unanswered: Object Persistence / Lifecycle Question Hello friends - I'm struggling a bit with changing values on labels after an event and having that change stick across page loads/hides/shows. For example, I'd like to have a label on a toolbar that shows me the name of a user after they've logged in. I like to share this value on a toolbar that is linked across several views; however, a change at controller action time doesn't seem to make the change persistent. I can get the value to change (i.e., 'Welcome', to 'Welcome, Brian') at one page load, but when I move on to another page, I'm stuck back with the initial html for the object. Here is a short example I built showing what I'm experiencing. Views: [Note: sharing toolbar] Code: Ext.define('MyApp.view.MyToolbar', { extend: 'Ext.Toolbar', alias: 'widget.MyToolbar', config: { docked: 'bottom', html: 'Initial HTML!s', id: 'MyToolbar', itemId: 'MyToolbar', items: [ { xtype: 'label', html: 'Fun Label', id: 'MyLabel', itemId: 'MyLabel', width: 150 }, { xtype: 'button', html: 'Push here', id: 'MyButton', itemId: 'MyButton', text: 'MyButton' } ] } });Code: Ext.define('MyApp.view.MyPanel', { extend: 'Ext.Panel', requires: [ 'MyApp.view.MyToolbar' ], config: { html: 'First Panel', id: 'MyPanel', itemId: '', items: [ { xtype: 'MyToolbar' }, { xtype: 'button', id: 'ButtonTwo', itemId: 'ButtonTwo', text: 'Click me!' } ] } });Code: Ext.define('MyApp.view.anotherPanel', { extend: 'Ext.Panel', alternateClassName: [ 'anotherPanel' ], requires: [ 'MyApp.view.MyToolbar' ], config: { html: 'Another Panel', id: 'anotherPanel', itemId: 'anotherPanel', items: [ { xtype: 'MyToolbar' } ] } }); Code: Ext.define('MyApp.controller.myPanelController', { extend: 'Ext.app.Controller', config: { refs: { anotherPanel: '#anotherPanel', myPanel: '#MyPanel', toolbarButton: '#MyLabel', ButtonTwo: '#ButtonTwo' }, control: { "button#ButtonTwo": { tap: 'onButtonTap' } } }, onButtonTap: function(button, e, options) { console.log("button hit"); var anotherPanel = this.getAnotherPanel(); var firstPanel = this.getMyPanel(); var buttonTwo = this.getButtonTwo(); var theButton = this.getToolbarButton(); theButton.setHtml("super test deluxe"); console.log("the html should be rendering differently"); console.log(theButton.getHtml()); // LOGS UPDATED VALUE IN THE CONSOLE console.log(theButton); Ext.Viewport.add(anotherPanel); anotherPanel.show(true); // AT SHOW TIME, THE BUTTON RENDERS ORIGINAL VALUE firstPanel.hide(); }, }); Any insight is GREATLY appreciated. brian - Join Date - Mar 2007 - Location - Gainesville, FL - 38,669 - Vote Rating - 1146 - Answers - 3731 I would store the user info in a model instance somewhere like on the application namespace and have the toolbar use that to generate the text to've got a model set up to store the data, but what event can I tie to at controller action time to retrieve and reset the value? I've read elsewhere that the painted event is not accessible through a controller action (even though Architect will allow that choice to be made). Similarly, tying console.logs to a controller action at show() and activate() times fail to fire. So having a model is fine and good. What event do I tie to for a toolbar style object that is shared across views to populate the correct value everytime it renders? I know that my query is correct, b/c if I log in the console at initialize time, the logger goes wild. Unfortunately, I can't figure out what event to tie to in order to allow appropriate rendering once I have a value (which doesn't happen until long after init)? On a more general note, where can I go to get an understanding of the lifecycle of an object? init makes plenty of sense? But does activate fire before show()? This data is somewhere. Where? brian
https://www.sencha.com/forum/showthread.php?258493-Object-Persistence-Lifecycle-Question&p=947245&viewfull=1
CC-MAIN-2015-48
refinedweb
569
50.23
Check which ports are open to the outside world. Helps make sure that your firewall rules are working as intended. One of a set of tools we are providing to everyone as a way of saying thank you for being a part of the community. Type type = ev.GetType(); FieldInfo[] fields = type.GetFields(); public class Event : iEvent { #region iEvent Members private string identifierName; public string IdentifierName { get { return identifierName; } set { identifierName = value; } } private int subclassId; public int SubclassId { get { return subclassId; } set { subclassId = value; } } private string name; public string Name { get { return name; } set { name = value; } } private DateTime startTime; public DateTime StartTime { get { return startTime; } set { startTime = value; } } } Are you are experiencing a similar issue? Get a personalized answer when you ask a related question. Have a better answer? Share it in a comment. Open in new window FieldInfo[] fields = type.GetFields(BindingFlag Open in new window
https://www.experts-exchange.com/questions/27303330/c-reflection-GetFields-not-returning-values.html
CC-MAIN-2018-17
refinedweb
147
58.28
Nit: by and large this list looks to be in alphabetical based on the checker name (except the last one), not sure if thats by accident or design fixed ordering of cppcoreguidelines module checks This looks good to me but I would wait for one of @JonasToth or @alexfh to perhaps take a look, maybe you should add some nested scope example too using the same name, I know the compiler should warn about a shadow variable anyway but std::mutex m; m.lock(); { std::mutex m; m.lock(); m.unlock(); } m_unlock(); and what about std::mutex m1; std::mutex m2; m1.lock(); m1.unlock(); m2.lock(); m2.unlock(); and std::mutex m1; std::mutex m2; m1.lock(); m2.lock(); m2.unlock(); m1.unlock(); std::mutex m1; std::mutex m2; m1.lock(); m2.lock(); m1.unlock(); m2.unlock(); I'll improve the tests :) +1 i was just going to comment - this needs much more tests. Also, it would be really great if you could supply the differential's description.T This should be a config param Terminology: *this* doesn't match anything. It's a matcher, yes, but it's just a lambda. The actual match happens at the end. Separate with newline fixed nested use case removed unneccesary include handle other basiclockable types renamed option for cppcoreguidelines-use-raii-locks fixed documentation fixed documentation again fixed documentation formatting I was trying to describe its intent, not its action. Does anyone have any suggestions for a clearer comment? it seems that most checks are this way. it was autogenerated by the `add_new_check.py` script. improved tests support try_lock_for and try_lock_until remove support for try_lock_for and try_lock_until Ugh, could you please avoid doing lots a tiny changes every 5 minutes ? This causes spam on cfe-commits /: If we allow boost, pre c++11 is ok as well. In general, plz use proper grammar, punctuation and full sentences in comments. please add proper punctutation in comments. we aim for correct text you can read and understand, with proper spelling and grammar. please add the * for pointers to emphasize the difference between values and pointers. In general we do not add const to values (as i believe is done in later lines), but only for pointers and references. Please don't retrieve the name like this. Too error prone and compilatcated. You can compare use DeclRefExpr () for your MutexExpr instead. From there you go to the Decl and compare on pointer equality. It might be a good idea to add the boost types as well? I believe they are interface-compatible, given the std version is derived from them. Please add more tests What happens for this? void foo() { std::mutex m; m.lock(); m.unlock(); m.lock(); m.unlock(); m.try_lock(); m.lock(); m.unlock(); } refactor and improved tests added support for boost lockable types I've added a test case for your example, templates, macros and loops. I can't catch the case std::mutex m1; std::mutex &m2 = m1; // usage, but i can catch trivial cases. Yes, your not supposed to catch those. But i feel things like this should be documented. In theory catching this particular case is possible (we do similar analysis for const. But it is totally acceptable to leave as is! I got another one that I think defeats the analysis: while (true) { my_label: m.lock(); if (some_condition) goto my_label; m.unlock(); } Usage of goto can interfer and make the situation more complicated. I am not asking you to implement that correctly (not sure if even possible with pure clang-tidy) but I think a pathological example should be documented for the user. Please use static for functions instead of anonymous namespaces. See LLVM coding guidelines. Please don't use auto where return type is not obvious. Unnecessary empty line. Missing space before ``std::mutex Please synchronize first statement with Release Notes. Please use single ` for options values. added example in docs and explicitly specified types for some variables why would this defeat the analysis? Because goto allows you to reorder you code-locations, so the ordering of what comes before what is not so ez. match lock() and unlock() calls by decl let me restate: You are comparing the occurences of lock and unlock by linenumber, so physical appearance in the source code. goto allows you to jump wildly through your code, so that physical appearance does not match actual control flow. I am not saying that you need to resolve that (not easily done within clang-tidy), but documenting it is worth it. And if you mix goto and locking mechanisms, you get what you deserve, which is no help from tooling ;) IMHO the check is close to being finished. please address the notes and mark them as done if finished with it. So its clear to see whats outstanding. In my opinion the user-facing documentation misses a "Limitations" sections that shows the artificial goto example, that would show that the used mechanism doesn't handle it.
https://reviews.llvm.org/D58818?id=189157
CC-MAIN-2021-17
refinedweb
834
65.32
One of the projects I’m working on for a client makes heavy use of Backbone.js to run functionality in the browser. The back end system is built on ASP.NET MVC with an evolving CQRS based architecture. In order to keep the front end as flexible as possible and be able to work with the various back end pieces that are being put in place, I need to follow many of the same patterns and principles in my JavaScript. Functional Areas Of A JavaScript App As our applications get larger and larger, it’s important to decouple the various functional areas. Decoupling them – or rather, coupling them correctly – allows much greater flexibility by letting us change how each functional works, without affecting the other areas. As an example of functional areas, here’s a wireframe of something similar to what I’m currently working on: The application shown here is a very simple item management app. On the left, you have a tree with a hierarchy of items. On the top right, you have a grid view to show the child items of the item that was selected in the tree. On the bottom right, you have an add/edit form. The important thing to note here is that each of the primary control groups is a functional area of the app and needs it’s own modularized JavaScript code. That is, the treeview, the grid and the add/edit form are all separate areas of functionality within this application. Each of these areas of functionality should be placed in their own file and modularized / decoupled so that none of them knows about the other directly. Doing this will make the over all application easier to manage and maintain, allowing greater flexibility in adding and removing features. There are likely hundreds of ways that you could build this application, but many of them would lead to large monolithic beasts of unmaintainable code. In recent years though, maintainable and scalable JavaScript has become a hot topic, and there are some good patterns and architectures that have emerged as a result. I’ve only just begun to explore these patterns and architectures in the last few months, but I wanted to share my perspective and what I’m currently doing. JavaScript Modules A JavaScript module isn’t a special construct or keyword in the language itself. Rather, it’s a way to take advantage of functions closures, to create scope. The core of a JavaScript module is usually an “immediate function”: The use of the parenthesis around the function definition allows the function to be returned, ready to go (if you forget the parenthesis around the function, you’ll get a syntax error).The second pair of parenthesis immediately execute the returned function. The result can be assigned to a variable, which becomes a reference to the module’s public API (if anything was returned). There’s been a lot of talk around the web about JavaScript modules already, so I won’t go in to much more detail here. Two of the more recent articles are focused specifically on Backbone, and I highly recommend reading them: Organizing Your Backbone.js Applications With Modules – This article from Bocoup.com covers the basics of file organization and creating a way to easily reference your modules from other modules, using strings as names. Organzing your application using Modules (Require.js) – This article from BackboneTutorials covers much of the same ground, but does it using the Require.js framework. Of course, there’s more than these two methods of modularizing your JavaScript code. You don’t have to stick with the patterns that someone else shows you (include what I’m going to show you) but it’s always a good idea to learn from what others are doing, even if you don’t stick with it. External / Shared Code In spite of our desire to separate each of the functional areas of the application, there is a high likelihood that they will need to talk to each other and have access to some of the same code. You’ll also want to have access to some external libraries, such as jQuery, Underscore.js, Backbone.js, or any of a number of other libraries and tools that your code uses. The usual method of doing this with a plain JavaScript module is to pass the dependencies into the module function. For example, if you need jQuery and Backbone in your module, you might add a $ and Backbone parameters to your module definition. Then you would pass the references to them when you execute the module function: The same thing applies to your own modules and libraries. If you have a module defined somewhere else and you know it will be available before the module your currently working on is loaded, you can pass in a reference like this. (You should also know that once you start heading down this path, you may want to think about using asynchronous module definitions: AMDs. Tools such as RequireJS make AMDs easier to deal with by handling the dependency management, module definition and other common tasks for you.) Module Initialization Writing a bunch of modules in a decoupled manner is great, but it introduces another problem: initialization. Most JavaScript applications have some public API that you call when you want to initialize the app and make everything run. If you’re building your app in a modular manner, you don’t want to have to call a separate public API for every module in your system. This would become a nightmare over time by requiring a lot of app initialization code, and worse. To work around this problem, modules need to have some sort of initialization code in them. Module frameworks like RequireJS have this built in to them. In the app I’m writing, I’m not using RequireJS or any other module frameworks, so I wrote my own registration. It’s pretty simple when it comes down to it. Since every module I write for my app receives an app object, I attached a `addInitializer` function to it. This allows each module to add it’s own initializer code as a callback function, like this: The application object keeps track of all the callbacks that were passed in to the `addInitializer` function. When the overall application is being initialized, a call to the app object’s `initialize` method will loop through each of the registered module initializers and call them. Each module gets to do it’s own thing and spin itself up. Event Driven Architecture Modules are great for organizing your code, but present a challenge when you realize that you need these modules to communicate with each other. Your code is now separated into different files, encapsulated in different modules, and generally unable to make direct calls in to your other modules and objects like you may be used to. Don’t worry, though. You’re only bumping in to the next steps of decoupling your applications correctly. There are many different ways that you can solve this problem, of course. One of my favorite ways is the use of an event aggregator. I’ve blogged about the use of an event aggregator with Backbone already. If you need an introduction to the idea, check out that post and some of my many other Winforms / Application Controller posts. The benefit of an event aggregator in this case, is that is gives you a simple, decoupled way to facilitate communication between your modules. To get started, you need to have a module or other object that is defined prior to any other modules being defined. This object needs to be passed in to each of the modules that you’re defining, so that these modules can have access to the event aggregator. In my app, I put the event aggregator directly on the top level application namespace, and then pass the namespace object into each of my modules: Now each of my modules can bind to and trigger events from the event aggregator. This allows one module to send notification of something that happened, without having to know specifically which parts of the application are going to respond to that notification, or how. One thing you need to think about, in advance, is what events you’re going to be triggering through the event aggregator. It’s a good idea to have thought through this at least a little bit, so that you don’t end up with a giant mess of similar and undocumented events. I made this mistake and paid for it when I couldn’t figure out which events needed to be fired, when. By taking the time to think through which events needed to be fired when, though, I was able to plan the communication between my modules, document the catalog of events, and clean up the number of events that were being used. Having this documented is important so that other developers (or you, a month from now) can see if there’s already an event that has the semantics they need, when writing new code. Runtime Packaging Once you have all of these pieces in place, you need to pull them all together and send them down to the browser. You could go ahead and reference each individual JavaScript file from your HTML. This would technically work, but would cause performance problems for the browser and end user. Each file that your reference requires another request to the server, which requires more time to download and potentially blocks another part of the app from loading or running (once you reach the magic limits of how many requests a browser is allowed to make). To solve this problem, you need to use a packaging system. These systems will take all of the files that you specify, bundle them into a single file, optionally minify the whole thing and then provide that one file as a single resource for the browser. RequireJS has an associated optimizer to handle this. Rails 3.1 has the asset pipeline to do this. There’s the ruby gem “Jammit” which also does this for Ruby / Rails app (and Sinatra, etc). In .NET land, there’s several available including SquishIt. And there are still more in these and in other languages and environments. If you don’t make use of one of these packaging tools, you’re going to cause problems for you users. Pick one, learn it, use it. So Much More: Resources At this point, you should be able to wire together a very simple, composite JavaScript application. This is only the tip of the iceberg, though. There’s so much more to writing well organized, scalable, modularized and composite JavaScript applications. If you would like to continue down this path, be sure to read the posts I’ve linked to. You’ll also want to check out these resources: - Scalable JavaScript Application Architecture – and the framework that was produced with it, on GitHub: - (Thanks to Aaron Mc Adam for the tip on the slides!) - Patterns For Large Scale JavaScript Application Architecture – a great resource for getting ideas on architecture and why you want to - JavaScript Patterns – my favorite JavaScript book. It has provided more valuable information on JS patterns and implementation ideas than any other book or resource I’ve read (note: it’s not specifically about architecture, but you need to know these patterns to create good architecture). - Essential JavaScript Design Patterns For Beginners – this is a very comprehensive, but very large single page, list of patterns and implementations - JavaScript Architecture – Aaron Hardy has great series on JS architecture. It starts simple and gets more and more detailed, providing a tremendous amount of links and information. I’m sure there are additional resources as well. If you have any favorite resources for JavaScript architecture and composite application design, please post them in the comments!
https://lostechies.com/derickbailey/2011/11/17/introduction-to-composite-javascript-apps/
CC-MAIN-2016-44
refinedweb
1,993
59.03
Create Animated React Apps With React Spring One thing that is pivotal to creating great interactive web applications is animations. Animations add life to your applications and improve the overall user experience. In this tutorial, we'll be looking at how to create simple yet lovely animations in your React apps using an npm package called react-spring, specifically, the Spring component of the package. React Spring is a great animation package that has been endorsed by some of the React core team including Dan Abramov, and is generally considered one of the best animation packages for React apps out there. It utilises spring like physics in its core animations making it easy to configure. In this tutorial we'll be focused on the Spring component which is one of react-spring's easier to use and more flexible components. With Spring we are able to: Manipulate values(numbers)of any sort from measurement units to actual data. Manipulate HTML attributes Manipulate SVG paths Adjust CSS And much more! Springs are cumulative, meaning they'll remember all values passed to them. Let's look at how we can get started with Springs, complete with an example making use of the newly announced React Hooks. Examples What We'll Build We'll be building a simple sliding and fading animation to show you how easily you can achieve animations.\-spring\-demo Setting Up We'll be setting up our React environment with create-react-app which will also generate some boilerplate code that will allow us to get started. To install it run npm install -g create-react-app Now you can use it to create your app. Run: create-react-app react-spring-demo A folder named react-spring-demo will be created containing our project. Cd into that directory and install our primary dependency, the react-spring package by running: yarn add react-spring You will notice we're using yarn as the package manager for this project as it is the default package manager used by create-react-app. Make sure to have yarn installed by running npm install -g yarn We are now set up to go, let's create our first animated page. Animating Styles Spring can be used to animate styles, to do this, we'll use it to animate the transition into a newly loaded page. To do this we'll wrap the jsx context of App.js in a Spring component. The Spring component will take two props, from and to which represents the values to be interpolated by our animation. In our case we want to create the effect of a page dropping down from above and fading in. To do this, we'll set the initial top margin of the page elements to be a negative value and bring it to 0 during the animation creating a dropping motion. To create the fade in effect, we'll set the initial value of the opacity to 0 and bring that value to 1 at the end of the animation. Luckily for us, the boilerplate generated by create-react-app has the perfect background to show this effect at work so we won't need to change it for now. This is what it will look like in our App.js file: // import React, { Component } from 'react'; import logo from './logo.svg'; import './App.css'; import { Spring } from 'react-spring';> </header> </div> </div> ) } </Spring> ); } } export default App; Now fire up your application by running this command. yarn start Your browser will open up and you should see the page load with the contents having the desired drop and fade in animations. Nice, isn't it? You can use spring to create any even more style animations adjusting a variety of styles. It is however advised to stick to animating opacity and translations to keep your app light. Animating innerText Animating styles is great but we can also use Spring to animate the value of contents shown on the screen. To show this, we'll be creating a counter that starts at 0 and ends at 10 using Spring. As expected, from will hold our initial value and to will hold the final value to be displayed. Under the src directory, create a folder called components and in it a file called Counter.jsx. Add the following code to Countrt.jsx : //src/Counter.jsx import React from 'react'; import { Spring } from'react-spring'; const counter = () => ( <Spring from={{ number: 0 }} to={{ number: 10 }} {props => <div>{props.number.toFixed()}</div>} </Spring> ) export default counter; Now import our counter into App.js and add it under the header element to render it in our app. //App.js ... import Counter from './components/Counter';> <Counter /> </header> </div> </div> ) } </Spring> ); } } export default App; Opening up your browser you will notice the counter just under the Learn React text, like this: Just one catch, our animation is happening so soon that we are missing most of it as it occurs while our initial page is animating into visibility. Luckily, we can delay our animation by adding a delay prop which will be equated to a value in milliseconds, this is the amount of time our animation will wait before starting. Adding a 1 second delay, the counter function will now look like this: const counter = () => ( <Spring from={{ number: 0 }} to={{ number: 10 }} {props => <div>{props.number.toFixed()}</div>} </Spring> ) Checking the browser, the counter now starts after the page animations are finished. Another method we can use to add this delay is through the config prop which we'll come to when discussing the Spring configurations shortly. Spring config As mentioned before,Springs are physics based. This means we don't have to manually deal with durations and curves. This is great as it takes away some of the heavy math we may have to cover. However, we can still adjust the behaviour of our Spring by tweaking it's tension, friction, delays, mass and other behaviour through the config prop. Don't wish to deal with this but still want to adjust your animations? Don't worry, react-spring comes with some inbuilt presets that we can use to tweak our Springs. All we have to do is import config from the react-spring package and feed them to the config prop of the Spring. Before we get confused about which config is which, let's take a look at an example. import React from 'react'; import { Spring, config } from'react-spring'; const counter = () => ( <Spring from={{ number: 0 }} to={{ number: 10 }} delay= '1000' config = { config.molasses }> {props => <div>{props.number.toFixed()}</div>} </Spring> ) export default counter; In the example above, we've used the molasses preset which is a high tension, high friction preset provided by react-spring . The presets typically define the tension and friction properties of out Spring. These presets include molasses, default, slow, stiff and wobbly. While the presets only define the tension and friction, you can manually configure other properties of the Spring animation which include but are not limited to delay, mass, velocity and duration. For a full list of properties you can configure, along with other options that can be passed as props, check out this page. Usage With Hooks The React team recently introduced React Hooks, this allows us to create functional components that can permanently store data and cause effects, basically adding state to functional components. Hooks are currently only available in React 16.7 alpha as we await a stable release. To use hooks you will need to upgrade to the 16.7 alpha versions of react and react-dom. To do this, run the following commands: yarn remove react-dom && yarn add [email protected] yarn remove react && yarn add [email protected] We can use hooks out of the box with react-spring which exports a hook called useSpring. This hook allows us to define and update data and will generally consist of the same values you would pass as props and useSpring will turn it into animated data. To showcase this, let's look at how we can have more text rendered after our previous animations are done animating. Here's how we can do that, let's create a new component file called Hooks.jsx and add the following code. //Hooks.jsx import React from 'react'; import { useSpring, animated } from 'react-spring'; const HookedComponent = () => { const [props] = useSpring({ opacity: 1, color: 'white', from: { opacity: 0 }, delay: '2000' }) return <animated.div style={props}>This text Faded in Using hooks</animated.div> } export default HookedComponent; We pass the spring settings as an object of arguments to useSpring which will then pass these values to the animated element that then creates our animated spring. We've set our delay to 2000ms to ensure the text from our hooked component fades in after the counter is finished. Now let's import this into App.js and use the HookedComponent in our app. After cleaning up some of the initial boilerplate code from `create-react-app`, it should end up looking like this: Fire up your final application an see the magic. You now have the tools to get started using react-spring. While it's the easiest to use component of react-spring, Spring provides a simple yet effective means of animating React applications while taking away a huge amount of the workload from the developer. Here's the CodeSandbox:\-spring\-demo You can build on Spring by making use of react-spring's other components such as Transition which animates component lifecycles and Trail which animates the first element of an array and has the rest follow it in a natural trail. Overall, react-spring is a great package with a variety of options depending on your animation needs.
http://brianyang.com/create-animated-react-apps-with-react-spring/
CC-MAIN-2019-04
refinedweb
1,626
60.95
Mach Overview The fundamental services and primitives of the OS X kernel are based on Mach 3.0. Apple has modified and extended Mach to better meet OS X functional and performance goals. Mach 3.0 was originally conceived as a simple, extensible, communications microkernel. It is capable of running as a stand–alone kernel, with other traditional operating-system services such as I/O, file systems, and networking stacks running as user-mode servers. However, in OS X, Mach is linked with other kernel components into a single kernel address space. This is primarily for performance; it is much faster to make a direct call between linked components than it is to send messages or do remote procedure calls (RPC) between separate tasks. This modular structure results in a more robust and extensible system than a monolithic kernel would allow, without the performance penalty of a pure microkernel. Thus in OS X, Mach is not primarily a communication hub between clients and servers. Instead, its value consists of its abstractions, its extensibility, and its flexibility. In particular, Mach provides object-based APIs with communication channels (for example, ports) as object references highly parallel execution, including preemptively scheduled threads and support for SMP a flexible scheduling framework, with support for real-time usage a complete set of IPC primitives, including messaging, RPC, synchronization, and notification support for large virtual address spaces, shared memory regions, and memory objects backed by persistent store proven extensibility and portability, for example across instruction set architectures and in distributed environments security and resource management as a fundamental principle of design; all resources are virtualized Mach Kernel Abstractions Mach provides a small set of abstractions that have been designed to be both simple and powerful. These are the main kernel abstractions: Tasks. The units of resource ownership; each task consists of a virtual address space, a port right namespace, and one or more threads. (Similar to a process.) Threads. The units of CPU execution within a task. Address space. In conjunction with memory managers, Mach implements the notion of a sparse virtual address space and shared memory. Memory objects. The internal units of memory management. Memory objects include named entries and regions; they are representations of potentially persistent data that may be mapped into address spaces. Ports. Secure, simplex communication channels, accessible only via send and receive capabilities (known as port rights). IPC. Message queues, remote procedure calls, notifications, semaphores, and lock sets. Time. Clocks, timers, and waiting. At the trap level, the interface to most Mach abstractions consists of messages sent to and from kernel ports representing those objects. The trap-level interfaces (such as mach_msg_overwrite_trap) and message formats are themselves abstracted in normal usage by the Mach Interface Generator (MIG). MIG is used to compile procedural interfaces to the message-based APIs, based on descriptions of those APIs. Tasks and Threads OS X processes and POSIX threads (pthreads) are implemented on top of Mach tasks and threads, respectively. A thread is a point of control flow in a task. A task exists to provide resources for the threads it contains. This split is made to provide for parallelism and resource sharing. A thread is a point of control flow in a task. has access to all of the elements of the containing task. executes (potentially) in parallel with other threads, even threads within the same task. has minimal state information for low overhead. A task is a collection of system resources. These resources, with the exception of the address space, are referenced by ports. These resources may be shared with other tasks if rights to the ports are so distributed. provides a large, potentially sparse address space, referenced by virtual address. Portions of this space may be shared through inheritance or external memory management. contains some number of threads. Note that a task has no life of its own—only threads execute instructions. When it is said that “task Y does X,” what is really meant is that “a thread contained within task Y does X.” A task is a fairly expensive entity. It exists to be a collection of resources. All of the threads in a task share everything. Two tasks share nothing without an explicit action (although the action is often simple) and some resources (such as port receive rights) cannot be shared between two tasks at all. A thread is a fairly lightweight entity. It is fairly cheap to create and has low overhead to operate. This is true because a thread has little state information (mostly its register state). Its owning task bears the burden of resource management. On a multiprocessor computer, it is possible for multiple threads in a task to execute in parallel. Even when parallelism is not the goal, multiple threads have an advantage in that each thread can use a synchronous programming style, instead of attempting asynchronous programming with a single thread attempting to provide multiple services. A thread is the basic computational entity. A thread belongs to one and only one task that defines its virtual address space. To affect the structure of the address space or to reference any resource other than the address space, the thread must execute a special trap instruction that causes the kernel to perform operations on behalf of the thread or to send a message to some agent on behalf of the thread. In general, these traps manipulate resources associated with the task containing the thread. Requests can be made of the kernel to manipulate these entities: to create them, delete them, and affect their state. Mach provides a flexible framework for thread–scheduling policies. Early versions of OS X support both time-sharing and fixed-priority policies. A time-sharing thread’s priority is raised and lowered to balance its resource consumption against other time-sharing threads. Fixed-priority threads execute for a certain quantum of time, and then are put at the end of the queue of threads of equal priority. Setting a fixed priority thread’s quantum level to infinity allows the thread to run until it blocks, or until it is preempted by a thread of higher priority. High priority real-time threads are usually fixed priority. OS X also provides time constraint scheduling for real-time performance. This scheduling allows you to specify that your thread must get a certain time quantum within a certain period of time. Mach scheduling is described further in Mach Scheduling and Thread Interfaces. Ports, Port Rights, Port Sets, and Port Namespaces With the exception of the task’s virtual address space, all other Mach resources are accessed through a level of indirection known as a port.. In most cases, the resource that is accessed by the port (that is, named by it) is referred to as an object. Most objects named by a port have a single receiver and (potentially) multiple senders. That is, there is exactly one receive port, and at least one sending port, for a typical object such as a message queue. The service to be provided by an object is determined by the manager that receives the request sent to the object. It follows that the kernel is the receiver for ports associated with kernel-provided objects and that the receiver for ports associated with task-provided objects is the task providing those objects. For ports that name task-provided objects, it is possible to change the receiver of requests for that port to a different task, for example by passing the port to that task in a message. A single task may have multiple ports that refer to resources it supports. For that matter, any given entity can have multiple ports that represent it, each implying different sets of permissible operations. For example, many objects have a name port and a control port (sometimes called the privileged port). Access to the control port allows the object to be manipulated; access to the name port simply names the object so that you can obtain information about it or perform other non-privileged operations against. Port rights can be copied and moved between tasks via IPC. Doing so, in effect, passes capabilities to some object or server. One type of object referred to by a port is a port set. As the name suggests, a port set is a set of port rights that can be treated as a single unit when receiving a message or event from any of the members of the set. Port sets permit one thread to wait on a number of message and event sources, for example in work loops. Traditionally in Mach, the communication channel denoted by a port was always a queue of messages. However, OS X supports additional types of communication channels, and these new types of IPC object are also represented by ports and port rights. See the section Interprocess Communication (IPC), for more details about messages and other IPC types.. Tasks acquire port rights when another task explicitly inserts them into its namespace, when they receive rights in messages, by creating objects that return a right to the object, and via Mach calls for certain special ports ( mach_thread_self, mach_task_self, and mach_reply_port.) Memory Management As with most modern operating systems, Mach provides addressing to large, sparse, virtual address spaces. Runtime access is made via virtual addresses that may not correspond to locations in physical memory at the initial time of the attempted access. Mach is responsible for taking a requested virtual address and assigning it a corresponding location in physical memory. It does so through demand paging. A range of a virtual address space is populated with data when a memory object is mapped into that range. All data in an address space is ultimately provided through memory objects. Mach asks the owner of a memory object (a pager) for the contents of a page when establishing it in physical memory and returns the possibly modified data to the pager before reclaiming the page. OS X includes two built-in pagers—the default pager and the vnode pager. The default pager handles nonpersistent memory, known as anonymous memory. Anonymous memory is zero-initialized, and it exists only during the life of a task. The vnode pager maps files into memory objects. Mach exports an interface to memory objects to allow their contents to be contributed by user-mode tasks. This interface is known as the External Memory Management Interface, or EMMI. The memory management subsystem exports virtual memory handles known as named entries or named memory entries. Like most kernel resources, these are denoted by ports. Having a named memory entry handle allows the owner to map the underlying virtual memory object or to pass the right to map the underlying object to others. Mapping a named entry in two different tasks results in a shared memory window between the two tasks, thus providing a flexible method for establishing shared memory. Beginning in OS X v10.1, the EMMI system was enhanced to support “portless” EMMI. In traditional EMMI, two Mach ports were created for each memory region, and likewise two ports for each cached vnode. Portless EMMI, in its initial implementation, replaces this with direct memory references (basically pointers). In a future release, ports will be used for communication with pagers outside the kernel, while using direct references for communication with pagers that reside in kernel space. The net result of these changes is that early versions of portless EMMI do not support pagers running outside of kernel space. This support is expected to be reinstated in a future release. Address ranges of virtual memory space may also be populated through direct allocation (using vm_allocate). The underlying virtual memory object is anonymous and backed by the default pager. Shared ranges of an address space may also be set up via inheritance. When new tasks are created, they are cloned from a parent. This cloning pertains to the underlying memory address space as well. Mapped portions of objects may be inherited as a copy, or as shared, or not at all, based on attributes associated with the mappings. Mach practices a form of delayed copy known as copy-on-write to optimize the performance of inherited copies on task creation. Rather than directly copying the range, a copy-on-write optimization is accomplished by protected sharing. The two tasks share the memory to be copied, but with read-only access. When either task attempts to modify a portion of the range, that portion is copied at that time. This lazy evaluation of memory copies is an important optimization that permits simplifications in several areas, notably the messaging APIs. One other form of sharing is provided by Mach, through the export of named regions. A named region is a form of a named entry, but instead of being backed by a virtual memory object, it is backed by a virtual map fragment. This fragment may hold mappings to numerous virtual memory objects. It is mappable into other virtual maps, providing a way of inheriting not only a group of virtual memory objects but also their existing mapping relationships. This feature offers significant optimization in task setup, for example when sharing a complex region of the address space used for shared libraries. Interprocess Communication (IPC) Communication between tasks is an important element of the Mach philosophy. Mach supports a client/server system structure in which tasks (clients) access services by making requests of other tasks (servers) via messages sent over a communication channel. The endpoints of these communication channels in Mach are called ports, while port rights denote permission to use the channel. The forms of IPC provided by Mach include The type of IPC object denoted by the port determines the operations permissible on that port, and how (and whether) data transfer occurs. There are two fundamentally different Mach APIs for raw manipulation of ports—the mach_ipc family and the mach_msg family. Within reason, both families may be used with any IPC object; however, the mach_ipc calls are preferred in new code. The mach_ipc calls maintain state information where appropriate in order to support the notion of a transaction. The mach_msg calls are supported for legacy code but deprecated; they are stateless. IPC Transactions and Event Dispatching When a thread calls mach_ipc_dispatch, it repeatedly processes events coming in on the registered port set. These events could be an argument block from an RPC object (as the results of a client’s call), a lock object being taken (as a result of some other thread’s releasing the lock), a notification or semaphore being posted, or a message coming in from a traditional message queue. These events are handled via callouts from mach_msg_dispatch. Some events imply a transaction during the lifetime of the callout. In the case of a lock, the state is the ownership of the lock. When the callout returns, the lock is released. In the case of remote procedure calls, the state is the client’s identity, the argument block, and the reply port. When the callout returns, the reply is sent. When the callout returns, the transaction (if any) is completed, and the thread waits for the next event. The mach_ipc_dispatch facility is intended to support work loops. Message Queues Originally, the sole style of interprocess communication in Mach was the message queue. Only one task can hold the receive right for a port denoting a message queue. This one task is allowed to receive (read) messages from the port queue. Multiple tasks can hold rights to the port that allow them to send (write) messages into the queue. A task communicates with another task by building a data structure that contains a set of data elements and then performing a message-send operation on a port for which it holds send rights. At some later time, the task with receive rights to that port will perform a message-receive operation. A message may consist of some or all of the following: pure data copies of memory ranges port rights kernel implicit attributes, such as the sender’s security token The message transfer is an asynchronous operation. The message is logically copied into the receiving task, possibly with copy-on-write optimizations. Multiple threads within the receiving task can be attempting to receive messages from a given port, but only one thread can receive any given message. Semaphores Semaphore IPC objects support wait, post, and post all operations. These are counting semaphores, in that posts are saved (counted) if there are no threads currently waiting in that semaphore’s wait queue. A post all operation wakes up all currently waiting threads. Notifications Like semaphores, notification objects also support post and wait operations, but with the addition of a state field. The state is a fixed-size, fixed-format field that is defined when the notification object is created. Each post updates the state field; there is a single state that is overwritten by each post. Locks A lock is an object that provides mutually exclusive access to a critical section. The primary interfaces to locks are transaction oriented (see IPC Transactions and Event Dispatching). During the transaction, the thread holds the lock. When it returns from the transaction, the lock is released. Remote Procedure Call (RPC) Objects As the name implies, an RPC object is designed to facilitate and optimize remote procedure calls. The primary interfaces to RPC objects are transaction oriented (see IPC Transactions and Event Dispatching) When an RPC object is created, a set of argument block formats is defined. When an RPC (a send on the object) is made by a client, it causes a message in one of the predefined formats to be created and queued on the object, then eventually passed to the server (the receiver). When the server returns from the transaction, the reply is returned to the sender. Mach tries to optimize the transaction by executing the server using the client’s resources; this is called thread migration. Time Management The traditional abstraction of time in Mach is the clock, which provides a set of asynchronous alarm services based on mach_timespec_t. There are one or more clock objects, each defining a monotonically increasing time value expressed in nanoseconds. The real-time clock is built in, and is the most important, but there may be other clocks for other notions of time in the system. Clocks support operations to get the current time, sleep for a given period, set an alarm (a notification that is sent at a given time), and so forth. The mach_timespec_t API is deprecated in OS X. The newer and preferred API is based on timer objects that in turn use AbsoluteTime as the basic data type. AbsoluteTime is a machine-dependent type, typically based on the platform-native time base. Routines are provided to convert AbsoluteTime values to and from other data types, such as nanoseconds. Timer objects support asynchronous, drift-free notification, cancellation, and premature alarms. They are more efficient and permit higher resolution than clocks. Copyright © 2002, 2013 Apple Inc. All Rights Reserved. Terms of Use | Privacy Policy | Updated: 2013-08-08
https://developer.apple.com/library/archive/documentation/Darwin/Conceptual/KernelProgramming/Mach/Mach.html
CC-MAIN-2018-47
refinedweb
3,181
53.61
On 1/18/06, Kristen Accardi <kristen.c.accardi@intel.com> wrote:> On Wed, 2006-01-18 at 23:23 +0100, Pavel Machek wrote:> <snip>> > Device GDCK looks like dock to my untrained eye. Unfortunately its> > type>>> so the problem that I see is that this dsdt defines two separate dock> devices, one outside the scope of pci, and one within it. The one> outside the scope of pci defines the _EJ0 and _DCK methods. So, when> acpiphp loads, it scans the pci slots for ejectable slots, finds none> (because _EJ0 is defined in the dock device that is outside the scope of> pci) and exits. This dsdt is different from the others I've used in> that most of them define all methods related to docking under the actual> dock bridge (within the scope of pci). perhaps some acpi people can> shed some light on the best way to handle this - otherwise I'm sure I> can hack something up that will be less than acceptable :).>ACPI has (had?) a braindamage - it drops devices that do not presentwhen initially scanning ACPI namespace. So if you boot undocked - toobad. Driver won't ever see your docking station.--Dmitry-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2006/1/19/140
CC-MAIN-2014-52
refinedweb
226
72.16
As we already knew, Django is a web-based framework developed and it is using Python as its main programming language to be used for building web application. It is also becoming quite make sense in order to start building web application based on Django, there is a prerequisite for Python installation before start programming and using it. First of all, type the following command in the bash prompt or any command line available in the operating system after successfully installing Python as follows : python Successfully executing the command above will actually redirect the bash prompt into Python Interpreter which is presented in a form of shell. In this shell, it is possible to check the version of Django installed as long as Django has already been installed in the local host or workstation. Just type the following syntax in the Python Interpreter shell : import django; The above is a syntax of Python used as a basic import statement which is used to import Django which is in this context Django itself is considered as a module. So, in order to be able to check the version of Django installed, first of all, it is needed to be imported. print(django.get_version()); After importing Django module, the next thing is to type the above syntax to be able to check the version of the currently installed Django. The above execution syntax in the Python Interpreter in form of a shell is shown as the output result below : user@hostname:~$ python Python 2.7.11+ (default, Apr 17 2016, 14:00:29) [GCC 5.3.1 20160413] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import django; >>> print(django.get_version()); 1.9.4 >>> quit Use quit() or Ctrl-D (i.e. EOF) to exit >>> Beside the execution shown of the above syntax inside in a Python Interpreter in form of a shell in order to retrieve the Django version, there is also another method that can be executed as follows : user@hostname:~$ python -m django --version 1.9.4 user@hostname:~$ python One thought on “Checking Django Version in a Command Prompt”
http://www.dark-hamster.com/programming/checking-django-version-in-a-command-prompt/
CC-MAIN-2020-05
refinedweb
354
55.68
Mobile first, cloud first We all understand the importance of mobile and cloud strategy and that is why the Microsoft Dynamics CRM development team have released native mobile applications to accommodate all the major mobile platforms today – Windows Store, Windows Phone, Android, iOS, Office Outlook on Windows, and we have CRM Mobile Express for other platforms. However, it is also true that how each customer CRM usage varies depending on industry, work style, and other factors. Being a user of mobile apps ourselves, we know that many successful mobile apps enable specific tasks centered around the user and what they are looking to accomplish. For some organizations a successful app may be a simple approval app allowing a user to easily view, annotate, approve, or reject records and for others it could be a more integrated solution using location and other sensors to create and augment data in the field – or a company may want both experiences based on the role of the user. While the existing Dynamics CRM mobile apps usually work very well for many general CRM purposes your company may need more – for that 3rd parties have CRM mobile applications meeting specific demands but what if you want need to build your own applications to meet specific requirements like the ones mentioned above? If you’ve asked yourself this question or are curious as to how you can enable new ways to engage your Dynamics CRM users then please read on! Below we’ll outline some challenges and decision points and we will also show you a new way to help developers build mobile apps for Dynamics CRM. Development Methods This is the most exciting time for developers as there are so many choices out there how to develop an application. C# or any .NET Framework languages with Visual Studio? Sure! (and that’s my favorite). HTML5/CSS for Windows Universal App? Of course. Xcode and Objective-C for iOS? or Java and Android Studio for Android OS? Why not! When you see cross platform development, you still have many choices like Xamarin, Cordova, etc – and new ones are coming out all the times it seems. Dynamics CRM: SOAP vs. REST endpoint In addition to selecting programming languages, IDEs and platforms, there are several technology choices to build an application targeting Microsoft Dynamics CRM. A fully featured SOAP (or also referred to as web) endpoint or the lightweight and easy to develop REST endpoint. Most who have developed in Dynamics CRM understand the REST endpoint has some limitations in which only supports CRUD operations though we do have ways to enable Execute actions with some clever configuration and programming (we can cover that in a future blog post). While the REST endpoing covers most of the common operations, you still need Execute method for several operations like “Send” emails, “Book” appointments or “SetState records. Even “Assign” records to other users. Direct service connection vs. host middleware service When you build a mobile application, you can write a code to directly consume Microsoft Dynamics CRM Web services, or you can let middleware services to consume CRM web services and your mobile application consumes the middleware services. Microsoft Azure Mobile Service is one of the popular choice, as you can use C# to write business logic, then use Authentication or offline capability which Azure Mobile Service offers without writing your own code. A Solution: Introducing Mobile Development Helper Code for Dynamics CRM I hear developers won’t consume SOAP endpoint directly, because there was no library for such purpose and constructing and parsing SOAP request/response is not the most enjoyable task for developers. We have good news for developers, the Dynamics CRM SDK content publishing team released a library called “Mobile Development Helper Code for Dynamics CRM”, which you can directly use in your mobile solution to consume SOAP endpoint just like normal Dynamics CRM SDK. You can also use “CRM Service Utility for Mobile Development” for early bound development. Please refer to the library page for overview. How to use the library What we will cover in this post: - Create a Visual Studio solution - Add ADAL NuGet Packages - Add the Mobile Helper Library - Add Json.NET NuGet Packages - Add code to acquire AccessToken from Azure AD - Register your app to Azure AD - Run the app to obtain AccessToken - Update the code - Demo! The easiest way to try the library is to use it in existing SDK sample app, the sample already has a basic setup for obtaining an AccessToken. However, I will explain step by step here how to use the library for you. Following are prerequisites. - Visual Studio 2013 installed on Windows 8.1+ - Microsoft Dynamics CRM Online instance. (You are able to use trial version) - Windows Azure Subscription (You are able to use trial version) - Download the library from here, and extract it. Create a Visual Studio solution 1. Open Visual Studio 2013. 2. Click FILE | New | Project 3. Select Visual C# | Windows Apps | Blank App *In this example, I use Windows Store app template, but you can use Windows Phone or Universal template if you need to. 4. Give any name. I name it “Sample CRM Mobile App” The easiest way to obtain AccessToken for OAuth2 from Azure AD is to use ADAL (Azure AD Authentication Library). If you have preferred way to obtain AccessToken, you can use your own way – in fact we’d love to hear how you’re doing this today. 1. In Visual Studio, Right click the project folder in Solution Explorer pane, and click the Manage NuGet Package menu item. 2. In the manage package window, Select Online from left pane, and type ADAL in the search box and press Search. 3. Selected Active Directory Authentication Library from the results and click Install. 4. Complete the installation, then click Close. Add the Mobile Helper Library 1. Right click the solution folder in the Solution Explorer pane, and click Add | Existing Project. 2. Browser to the mobile helper folder which can be downloaded and extracted from here:. 3. Select Microsoft.Crm.Sdk.Mobile.csproj file, and click Open. 4. In the Solution Explorer pane, under your new project (not the mobile helper), right click References | Add Reference. 5. In the add reference dialog, select Solution | Project | and select Microsoft.Crm.Sdk.Mobile and click OK. Add Json.NET NuGet Packages Both your project and mobile helper project requires Json.NET. 1. Right click solution folder in the Solution Explorer pane, and click Manage NuGet Packages for Solution menu. 2. Select Online from left pane, and type Json.net for search box and search. 3. Selected Json.NET from the result and click Install. 4. Complete the installation by check all projects. 5. Click Close when install completes. Add code to acquire AccessToken from Azure AD 1. Open MainPage.xam.cs file. 2. Replace inside MainPage class with following code. const string aadInstance = "{0}"; const string tenant = "[Enter tenant name, e.g. contoso.onmicrosoft.com]"; const string clientId = "[Enter client ID as obtained from Azure Portal, e.g. 82692da5-a86f-44c9-9d53-2f88d52b478b]"; static string authority = String.Format(CultureInfo.InvariantCulture, aadInstance, tenant); private AuthenticationContext authContext = null; private Uri redirectURI = WebAuthenticationBroker.GetCurrentApplicationCallbackUri(); private string ResourceId = "[Your CRM Online Org Address. e.g.]"; public MainPage() { this.InitializeComponent(); GetAccessToken(); } private async void GetAccessToken() { authContext = new AuthenticationContext(authority); // Use ADAL to get an access token. AuthenticationResult result = await authContext.AcquireTokenAsync(ResourceId, clientId, redirectURI); if (result.Status == AuthenticationStatus.Success) { var AccessToken = result.AccessToken; } } 3. Update tenant and ResouceId to match to your environment. Register your app to Azure AD You need to register your application to Azure AD. 1. First of all, obtain RedirectUri value which you use to register your application, Run the existing application by setting breakpoint after getting redirectURI. 2. Note the redirectURI value and stop debugging. 3. Go to Microsoft Azure portal page. () 4. Select Active Directory on the left pane. 5. Click APPLICATIONS menu on the top, then click ADD button in the bottom. 6. Select “Add an application my organization is developing. 7. Enter any name and select NATIVE CLIENT APPLICATION. 8. Enter RedirectUri you just obtained. 9. Once registration completed, click CONFIGURE menu. 10. Note a CLIENT ID, which you will update the code by it. 11. Navigate to bottom to setup impersonation. 12. Select “Dynamics CRM” from Select application dropdown, and select “Access CRM Online as organization users.” 13. Click Save button. 14. Go back to Visual Studio 2013, then update clientid const string. Run the app to obtain AccessToken 1. Set breakpoint at line 51 where you obtain AccessToken from result. Run the app. You will see Sign In page soon. 2. Enter username and password by which you can connect to CRM Online. 3. You will see consent screen only if you register your app in Azure Ad for separate instance. In such case, please OK to accept it. 4. Breakpoint will be hit and you can confirm AccessToken is retrieved successfully. Finally use mobile helper library to issue SOAP request. 1. Add following as main class member. private OrganizationDataWebServiceProxy proxy; 2. To resolve the above code, add following at using section. using Microsoft.Xrm.Sdk.Samples; 3. Add following code at constructor to initialize proxy. 4. Assign proxy.AccessToken after obtaining an AccessToken, then call ExecuteWhoAmI method, which you implement next. 5. Add ExecuteWhoAmI method as below. 6. Add necessary using statement to resolve names. 1. Set break point right after Execute method above. 2. Run the application. 3. When the application breaks into break point, confirm the returned value. What’s Next? I hope you get the idea how to use the library to consume SOAP endpoint. The next step is to use CrmSvcUtil.exe to generate file for Early Bound development. I will explain how to use the tool next time. Ken Premier Mission Critical/Premier Field Engineer Microsoft Japan Which ADAL library did you exactly use for this sample? AuthenticationResult is missing the status and the method AcquireTokenAsync(ResourceId, clientId, redirectURI) seems to be wrong from its parameters. Best Regards Tobias, Thanks for your comment. I am using ADAL 2.x for Windwos Store. If you are using it against Windows Phone, then method parameters are different. In that case, please refer to sample app below. code.msdn.microsoft.com/Activity-Tracker-Plus-f62d80a5 Ken Hi all have anybody else try to authenticate with IFD(ADFS) Hi Ismail, Yes I did, with Windows Server 2012 R2 AD FS, which suports OAuth. Any issue? Ken Hello Kenichiro, Thanks for your great post. I got the authentication running within my Project. Could you may give an update for the generation of early bound classes in a mobile Project? I used the following statement in the command line: CrmSvcUtil.exe /url:…com/XRMServices/2011/Organization.svc /out:GeneratedCode.cs /username:"me@blabla.onmicrosoft.com" /password:"mine" /codeCustomization:"Microsoft.Crm.Sdk.Samples.CodeCustomizationService, CrmSvcMobileUtil" /namespace:Xrm However there are still some references to the Microsoft Dynamics SDK which is as far as I know not supported within Windows RT. Best Regards, Tobias Hi Tobias, Thanks for your comment. This is a sneak peak. 1. Download a tool from code.msdn.microsoft.com/CRM-Service-Utility-for-4ca0c93b. 2. Extract the zip. 3. Open CrmSvcMobileUtil.sln In the extracted foloder. 4. Add latest CrmSvcUtil.exe and Microsoft.Xrm.Sdk.dll as Reference to the project. 5. Open FilteringService.cs and modify GenerateEntity method. Specify entities you want to use, or simply return true to generate all. 6. Build the project. 7. Go to folder where dll is generated. 8. Execute command below. >CrmSvcUtil.exe /codecustomization:"Microsoft.Crm.Sdk.Samples.CodeCustomizationService,CrmSvcMobileUtil" /codewriterfilter:"Microsoft.Crm.Sdk.Samples.FilteringService,CrmSvcMobileUtil" /url:<your org endpoing> /username:<user> /password:<password> /out:XrmData.cs /namespace:<ns> Hope this helps. Ken Hello Kenichiro, Thank you so much! That was exactly what I needed to continue my project. Looking forward to hear more from your blogposts. Best Regards, Tobias Hello Kenichiro, do you have experience with CRM and Phonegap/cordova? I need a application which authenticate me with CRM and gets a few data sets. Kind regards Sid Hi Sid, Unfortunately, I don't have much experience with Cordova. For HTML5/Javascript, please consider using REST endpoint or refer to Soap.js as reference if you need SOAP. blogs.msdn.com/…/new-microsoft-crm-sdk-sample-sdk-soap-js.aspx Ken Hi Kenichiro, Thank you for the answer, but msdn says about Sdk.Soap.js -> "This library does not provide code to authenticate." 🙁 code.msdn.microsoft.com/SdkSoapjs-9b51b99a sid Sid, You are right, as the Soap.js is supposed to use in Webresource, thus no need to authenticate. Therefore please just refer to the library for the sake of construct and parse SOAP part. Or you may want to stick with REST endpoint as it is way easier. Yet you need to authenticate. For authentication, with CRMOL, you may want to consider checking ADAL for JavaScript. github.com/…/dev I didn't test this personally, so I cannot guarantee this fits to your solution. yet good to know tool. Ken Hey Kenichiro, thank you for the great link. Do you know, if your code work with xamarin forms? Sid Hi sid, Yes it does for both normal Xamarin app as well as Xamarin Forms, because its C# at the end. However, you need to change a bit for the code. 1. Open Microsoft.Xrm.Sdk.Samples.cs 2. Comment out line 34 using Windows.ApplicationModel; 3. Comment out inside EnableProxyTypes method as it depends on Windows runtime. 4. if you want to use EarlyBound for Xamarin, then feed your data type to typeList like below. Please also note that I changed method signature as it doesn't need async/await anymore. public void EnableProxyTypes() { List<TypeInfo> typeList = new List<TypeInfo>(); typeList.Add((typeof(Account).GetTypeInfo())); // Add typeinfo of Entity you want to use for Early Bound. types = typeList.ToArray(); } 5. Compile. 6. When you instantiate the proxy, then call proxy.EnableProxyTypes(); method. Now you can use Earlybound like below. Account myaccount = (Account) await proxy.Retrieve(Account.EntityLogicalName, <guid>, new ColumnSet(<columns>)); Or var results = await proxy.RetrieveMultiple(query); foreach(Account account in results.Entities) { // do work. } Actually this is good question, and I will write another post for more detail, but you can go ahead to implement by using this information. Hope this helps. ken Thank you Ken 🙂 it works! sid I keep trying this solution and I can never authenticate. The ADFS Screen show up and when I authenticate it always shows Can't Connect to the service then it just sit there if I click the back arrow then I get the status of the request. Any ideas? The browser based authentication dialog failed to complete. The system cannot locate the resource specified. (Exception from HRESULT: 0x800C0005)" Here is the authority I am using for ADFS https://[crm ifd web url]/adfs/ls/XRMServices/2011/Organization.svc/web Thanks I figured it out I needed to use the ADFS oauth URL not the CRM IFD URL. I was using that because of the CRM documentation. msdn.microsoft.com/…/dn531009.aspx Anyway hope this helps someone Fernando, Great share! I will post another article regarding OnPreimse setup. Ken Hi Ken, Could you may help me out with some questions about authentication as its currently part of my bachelorthesis? It would be great if you could explain the authentication flow of a windows-store-app to dynamics crm 2015 online in more detail. If I understand it correctly the authentication to dynamics crm 2015 online is done via OAuth 2.0 authorization code grant type with a public client?I did find a link from azure active directory: msdn.microsoft.com/…/dn499820.aspx and i think the scenario of: native application to web api would be the right one. But somehow it is hard to make a connection between the scenario of a mobile app authentication with dynamics crm online and the oauth authorization flow. I guess that the web-api in this case would be the organizationwebservice of dynamics crm right? I also have the feeling that a lot of complexity is gone by using adal. While this is pretty nice for implementing it doesn't help with explaining the concept of authentication with a mobile client to dynamics crm 2015 online. I hope you can help me out. Best Regards, Tobias Hi Fernando, sorry for late replay. Did you register your application to ADFS and using the same Client Id to your app? I received similar question how to use this to OnPremise IFD several times. The auth url seems correct. i will write an article how to do so by using existing sample soon. Ken Hi Tobias, There is a great article how the OAuth 2.0 authorization works with Azure AD here.…/adal-for-windows-phone-8-1-deep-dive Another great way to learn how the authentication work if you don't mind reading JavaScript is to use connected service for Office 365 in Visual Studio. When you add Office 365 connected service, it will generate JavaScript to get code and then accessToken. (and refreshToken). By reading the script, it is very helpful to understand all the detail flow. This is how i get the script. 1. Open Visual Studio 2013 (or 2015) 2. Create project under JavaScript (Store or Cordova) 3. Right click project and Add > Connected Service. 4. Sign In to your Office 365 and register your application. 5. Select any privilege you need. I just register them all. 6. Then you see helper codes will be added. Read o365auth.js file under services|office365|scripts. Though a bit old, this blog helps you navigate through the flow. blogs.office.com/…/office-365-api-tool-visual-studio-2013-summer-update Ken Thanks for the article! Am I correct in thinking this is the wrong approach to use for building a general purpose app, thats published on the store for instance? The problem I have is that it requires the end user to go to their azure portal, and add "an application we are developing for our organisation" to obtain a client id, then go back to the app they have downloaded, and enter in the client id? The flow I really am after is that: 1. End user downloads some app from an app store. 2. When the app starts up, they set up their connection with CRM (Miminal fuss) 3. They use the app – which now has access to their CRM. I'd love any pointers here – does the same approach you have discusses in your article apply here or is your approach just for in house applications developed for one organisation? Thanks for all the helpful information thus far. Darrell Hi Darrell, I see you point and I have good answer to it. If you want to distribute you app to customers who uses CRM online out of your control, you can use cconsent feature. You simple register you app to your azure ad, and when use download and run your app and tried to login, they see login screen and then they see consent screen to let them allow app to access their user information. You only need to register the app directly to their azure ad when they don't want to see this consent screen. blogs.msdn.com/…/dynamics-crm-developers-build-your-own-mobile-apps-for-windows-ios-and-android-part-3.aspx The article above has brief expansion but if that's not enough, please let me know so that I can describe it very detail. Ken Hi Kenichiro, in the "Register your app to Azure AD" step, you said in 11th and 12th point that we have to do this : "11.Navigate to bottom to setup impersonation. 12. Select “Dynamics CRM” from Select application dropdown, and select “Access CRM Online as organization users.” but i can't find how to do it could you please explain me how to do it. thanks Hi aziz, Thanks for your comment. Do you mean you don't see Dynamics CRM service in the list of services? In that case, your AD tenant does not have CRM Online service. One workaround is to register your app directly to your existing CRM Online tenant. You can related the tenant from your current Azure AD so that you dont have to sign up for Azure subscription for each org. Please read my latest blog for how to do it. blogs.msdn.com/…/dynamics-crm-developers-build-your-own-mobile-apps-for-windows-ios-and-android-part-3.aspx Hello, I try to use your instructions to create an application with xamarin.forms. I used a project with portable Library and I want use it for centralized code to authentication. When I import ADAL I have this error : Installation de « Microsoft.IdentityModel.Clients.ActiveDirectory 2.16.204221202 ». Installation de « Microsoft.IdentityModel.Clients.ActiveDirectory 2.16.204221202 » terminée. Ajout de « Microsoft.IdentityModel.Clients.ActiveDirectory 2.16.204221202 » à AppMobile2. Désinstallation de « Microsoft.IdentityModel.Clients.ActiveDirectory 2.16.204221202 ». Désinstallation de « Microsoft.IdentityModel.Clients.ActiveDirectory 2.16.204221202 » terminée. Échec de l’installation. Restauration… Impossible d’installer le package « Microsoft.IdentityModel.Clients.ActiveDirectory 2.16.204221202 ». Vous essayez d’installer ce package dans un projet ciblant « portable-net40+sl50. Same error when I try target with Framework 4.5. Thanks for help Thomas, You need to use ADAL v3 for Xamarin, which is still preview. Please see this article for more detail.…/adal-v3-preview-march-refresh I am using the latest preview release for ADAL. Ken Thanks for answer. I try to install this version of ADAL on my portable class and receive this error : PM> Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory -Version 3.1.203031538-alpha -Pre Installation de « Microsoft.IdentityModel.Clients.ActiveDirectory 3.1.203031538-alpha ». Vous téléchargez Microsoft.IdentityModel.Clients.ActiveDirectory à partir de Microsoft Corporation. Le contrat de licence est disponible à l’emplacement go.microsoft.com/fwlink. Vérifiez si le package contient des dépendances susceptibles de faire l’objet de contrats de licence supplémentaires. Votre utilisation du package et des dépendances confirment votre acceptation de leurs contrats de licence. Si vous n’acceptez pas les contrats de licence, supprimez les composants correspondants de votre ordinateur. Installation de « Microsoft.IdentityModel.Clients.ActiveDirectory 3.1.203031538-alpha » terminée. Ajout de « Microsoft.IdentityModel.Clients.ActiveDirectory 3.1.203031538-alpha » à CrmSample. Désinstallation de « Microsoft.IdentityModel.Clients.ActiveDirectory 3.1.203031538-alpha ». Désinstallation de « Microsoft.IdentityModel.Clients.ActiveDirectory 3.1.203031538-alpha » terminée. Échec de l’installation. Restauration… Install-Package : Impossible d’installer le package «Microsoft.IdentityModel.Clients.ActiveDirectory 3.1.203031538-alpha». Vous essayez d’installer ce package dans un projet ciblant « portable-net45. Au caractère Ligne:1 : 1 + Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory -Version 3.1.203 … + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : NotSpecified: (:) [Install-Package], InvalidOperationException + FullyQualifiedErrorId : NuGetCmdletUnhandledException,NuGet.PowerShell.Commands.InstallPackageCommand To try your solution, I have create a new project in VS2013 and write command install ADAL V3. Where is my mistake ? Thomas, Thanks for update and I figured the issue. The thing is i am using shared project for Xamarin.Forms and that's because ADAL v3 does not support Siliverlight yet. So what i am doing is to create Xamarin.Forms project by using Share Project, then add ADAL 3.x for iOS and Android project, ADAL v2 for Siliverlight project. In addition, i make Siliverlight project from 8.0 to 8.1 and install HttpClient nuget package so that it works with CRM mobile library. Xamarin.Forms 1.4 now supports WinRT rather than Silverlight and in this case, on the other hand, you need to use PCL as it does not support Shared Project yet. I will write blog article later for Xamarin.Forms if you think that would benefit the community. Ken Ok, I get ADAL 3 for Xamarain Droid/iOS and 2 for WinPhone (Windows Phone 8.0) I add Microsoft.Crm.Sdk.Mobile. Before I have compil it in other instance of VS2013 to be safe. 1) VS2013 don't want include the project like ref to winphone projet, target not matching. Try to move both in 8.1 but same error. 2) When paste your code in MainPage.xaml.cs, IdentityModel is not recognized. Thanks for help Thomas Thomas, I am sorry but I do not know what is causing the issue from the description, and it is really before CRM thing I believe. As there are many steps you need to take until you get the Authentication done for Xamarin.Forms even before calling WhoAmI message and I think the easiest way to explain that is to write step by step blog article. Its going to be long article but I will find my time to write it up for all of us 🙂 My question for you are: 1. Do you prefer PCL? 2. Do you prefer WinRT compare to Silverlight? CRM part doesn't change though Ken Hello, Not sorry, I'm new in Xamarin and Xamarin.Forms and CRM. I'm web dev .Net and try to understand and make application can connect to CRM for my new job. I think I jump over some steps 🙁 I prefer PCL to reuse in different projects, and don't know difference between WInRT and Silverlight. I have think that windows phone run in 8 or 8.1 was ligth version of desktop version, not different. Thanks for your help and your work. Thomas Can I develop mobile app for CRM2011? Do I need to build WCF server as middle layer and mobile app consume WCF Can you please guide me to build Mobile/Surface Tablet app for CRM2011 Thanks Udayan Hi Udayan, The difficult part of Dynamics CRM 2011 is that it does not support OAuth 2.0, so you have to authenticate user with SAML. There is a sample code you can refer to here: code.msdn.microsoft.com/…/CRM-Online-2011-WebServices-14913a16 However it is easier for sure to put middle tier like you mentioned. Ken Hi Kenichiro, Thank you for detailed article. Did you happen to write article on CRM ADFS onpremise authentication? I couldn't find good resource around it. Can you please point me few? Thanks, Amol Thanks for your comment. The white paper is here.…/details.aspx What kind of information you are looking for? Ken Hey ken, thank you for this great article. I’m wondering if ADAL approach is the only way to implement CRM mobile? I’m looking at Xamarin to develop and as you mentioned in the previous replies. Do you know if there is different approach rather than Azure AD? Thanks, Jason I’m using CRM Online 2016 Update 1 For online, you really need to use OAuth2.0. You don’t have to use ADAL but that’s the easiest. Please let me know if you have any issue. I’m looking for a way to authenticate without registering in Azure AD. For my own knowledge, ADAL is the way to authenticate for mobile (to use Web API/REST). What scenario you have exactly? Thank you for your reply. My scenario is simply log in to CRM and have a button that fires work flow from CRM. Hello Kenichiro, Thanks for your great post. That was exactly what I looking for my project. Looking forward to hear more from your side. Best Regards, Arpita Singh Thanks Arpita, This article is a bit old one. Still valid for authentication, but not for CRM as SOAP endpoint is officially deprecated. I will write another blog articles regarding to Xamarin and Xamarin.Forms with CRM Web API. Ken Hello Kenichiro , Thank you so much for your kind reply.
https://blogs.msdn.microsoft.com/crminthefield/2015/01/12/dynamics-crm-developers-build-your-own-mobile-apps-for-windows-ios-and-android/
CC-MAIN-2018-34
refinedweb
4,633
59.09
#. Before we jump into C# code we have to choose the best way to build solution of our Windows application.(Thanks, thanks DOT.NET ! Now we have inheritance, user controls, and so on. And we must use that!).I strongly recommend to include in our solution at least two projects with different "Output Type": Before using any control in the first project, you have to create a control in the second project (for example, to create Button_C control that inherits from standard Button control ) and to use only controls from the second project on your forms. This process takes no more than 20 seconds but advantage is obvious: at any time just for some seconds you can add to your control all that you want ( properties, events, color, and so on ). And just in seconds all addings and changes work on every your form (ten forms or hundred - it doesn't matter!). Let's create our test project. Step 1. Make a new solution named " WinProjects_Test".Step 2. Add a "Windows Application" project named "CloseForms".Step 3. Add a "Windows Control Library" project named "CloseForms_Controls".Step 4. Delete "UserControl1" from "CloseForms_Controls" project (we just don't need it).Step 5. Add Windows Form named "FormBasic" to "CloseForms_Controls" project. Change property BackColor to "Cornsilk" (just to see that it is our form with our BackColor). Step 6. Build "CloseForms_Controls" project (right-click and select Build). After every changes it is recommended to rebuild the project ! Step 7. Build "CloseForms" project. Step 8. To References of "CloseForms" project add a reference to CloseForms_Controls.dll (Right-click References of "CloseForms" project, select Add Reference, from a tabbed dialog box select .NET Framework and with the help of "Browse ...". Find out the physical CloseForms_Controls.dll on the disk. Double-click CloseForms_Controls.dll to add it to the selected components list. Click OK to add this reference to the project). Step 9. Rebuild solution. Step 10. All our forms have to be inherited from FormBasic form. So far there is only one form in our "CloseForms" project : this is Form1. Open code of Form1, find out the line public class FormBasic : System.Windows.Forms.Form and change it to public class Form1 : CloseForms_Controls.FormBasic Step 11. OK. Now our ship is ready for sailing off. We just have to add a few forms and user controls to make our travel more interesting. Add Form2, Form3, Form4 to "CloseForms" project and make them inherited from FormBasic. (To add inherited form to "CloseForms" project you can just right-click and select Add|Add Inherited Form ,then from Add New Item dialog box select Inherited Form and then from Inheritance Picker dialog box select our FormBasic). Step 12. Add User Control , named Button_C, to CloseForms_Controls project. Open code of this control, find out line public class Button_C : System.Windows.Forms.UserControl public class Button_C : Button Now we have our own Button_C control with all properties, events of Button control. "To prove" that this is ours control let's change default property BackColor : double-click on Button_C.cs, right-click on the page (Button_C.cs[Design]) and select Properties, set BackColor to, for example, "AliceBlue". (Do you remember ?! After every changes it is recommended to rebuild the solution). Step 13. Add one more User Control, named Buttons, to CloseForms_Controls project. Double-click on Button_C.cs, inside Toolbox click on "My User Controls" tab and drag and drop Button_C. Name it "button_CHome" with the text "Home". Drag and drop two more Button_C : button_CBack with the text "Back" and button_CNext with the text "Next". Pay attention ! We have created Buttons control with "Home" button. But, in fact, we could add "Home" button at any moment ("on demand" of the boss) without any serious problems (Don't forget to rebuild solution !). Step 14. You should force to work buttons according to your desire on any form. With this purpose add to class Buttons three event Handlers: #region "forClass" public event EventHandler ClickNext; public event EventHandler ClickBack; public event EventHandler ClickHome; #endregion Open design of the Buttons.cs and double-click on the button_CNext button. Add the next code (to "private void button_CNext_Click"): private void button_CNext_Click(object sender, System.EventArgs e){ if (ClickNext != null) { ClickNext(this, e); } } Similar "process" have to be done for the button_CBack and button_CHome controls: private void button_CBack_Click(object sender, System.EventArgs e) { if (ClickBack !=null) { ClickBack(this,e); } } private void button_CHome_Click(object sender, System.EventArgs e) if (ClickHome !=null) ClickHome(this,e); } Our control is now ready to act. And our solution you can see on Figure 1. Figure 1. Step 15. Now we can add some "actions" to our project. Inside CloseForms project from "My User Controls" tab drag and drop on every form Buttons control. Double-click on Form1.cs , right-click on Buttons1 control and select Properties, click "Events" button, double-click on "ClickBack", then - on "ClickHome", and then - on "ClickNext". Now Form1.cs code page has three new functions: buttons1_ClickBack, buttons1_ClickHome, buttons1_ClickNext. Similar "clicks" have to be done for the Form2, Form3, Form4. Step 16. Add a few lines of code to Form1.cs code page (to above mentioned functions): private void buttons1_ClickBack(object sender, System.EventArgs e) //Just to see that our ClickBack event can do something MessageBox.Show ("This is ClickBack. There is no any 'Back' way ! " + "We are on the Home Form ! ","CloseForms Project"); private void buttons1_ClickHome(object sender, System.EventArgs e) //Just to see that our ClickHone event can do something MessageBox.Show ("This is ClickHome. There is no any 'Home' way ! " + private void buttons1_ClickNext(object sender, System.EventArgs e) //To open the next form ( in this case : Form2 ) Form2 fOpen = new Form2 (); fOpen.ShowDialog (); To Form2 (only for "buttons1_ClickNext"): //To open the next form ( in this case : Form3 ) Form3 fOpen = new Form3 (); To Form3 (only for "buttons1_ClickNext"): //To open the next form ( in this case : Form4 ) Form4 fOpen=new Form4(); fOpen.ShowDialog (); To Form4 (only for "buttons1_ClickNext"): Now you have possibilities to travel from form to form without any restrictions : Form1-Form2-Form3-Form4-Form2-Form3-... But ... only forward ( not "backward and forward"). Step 17. In order to return "step by step" to the previous forms add to Buttons.cs code page function "closeForm()" private void closeForm() if (ParentForm.Name!= "Form1") ParentForm.Close (); and now add this function to "button_CBack_Click" : closeForm (); Step 18. We closely have approached to our goal: there are tens (or may be hundreds) of opened forms and you should close all forms except for the Form1 (just to return to the "Home" form) by means only one click. OK ! We are going to do that by adding a few lines of code (without using collection of opened forms, and so on).First of all you have to add some "command" to close all form. Certainly this command should be clear and visible for all forms. Let it be some variable : public static bool closeForms. Of course while loading any form we have to command : "Don't close now !" (closeForms = false;). Well. While closing any form (I don't mean Form1) the previous one is activated . OK ! We can catch this and , if there is command " To close forms " (closeForms = true;), we can close this form too ... and so on. And at last we should define who will give a command "Home". Of course it will be our "Buttons" control and "button_CHome". Well, now we shall pass to action. Double-click on FormBasic.cs, right-click on FormBasic[Design] and select Properties, click "Events" button, double-click on "Activated". Add to FormBasic.cs code page the next code : public static bool closeForms ;//command to close all forms private void FormBasic_Load(object sender, System.EventArgs e) closeForms = false;//command : "Don't close now !" //just to inform where we are and to hide ControlBox if (this.Name != "Form1") this.Text = " My name is " + this.Name ; this.ControlBox =false; else this.Text = "Home Form. Where you would not sail you can " + "come back to me by means of one click !!!" ; this.ControlBox =true; //to make control width depended on length of the text this.Width += this.Text.Trim().Length*4 ; private void FormBasic_Activated(object sender, System.EventArgs e) //to catch command "To close forms" if (this.Name != "Form1" && closeForms ) this.Close (); Open Buttons.cs code page and add command "Close forms ! " (that is "Go home ! ") to button_CHome_Click : FormBasic.closeForms = true;//command : "Close Forms ! " That is all ! Now you may test the project. Just run your project and open (click on the "Next" button) as many forms as you want. Click on "Home" button and ... All your forms will have been closed and you will have arrived "home". CONCLUSION In order to return to your "Home" form in some Windows Forms project all that you need are only a few things . You have to Good luck in programming ! Using static variable and Activated Event for Building "Home" Button of Windows application Building Control in Visual Studio 2005 with XML as Data Source. Great Article good use of user controls,but i am not able to get the closeform() functiontionality when we r using "back". What r we getting from "ParentForm.Name". Michael, thank you for writing the article! It helped me in my work. Thanks again.
http://www.c-sharpcorner.com/UploadFile/LivMic/Static_Activated_Home04082006112159AM/Static_Activated_Home.aspx
crawl-003
refinedweb
1,536
68.36
From Prefix ability, but from something it calls “Syntax lowering” In addition to minification, Parcel CSS handles compiling CSS modules, tree shaking, automatically adding and removing vendor prefixes for your browser targets, and transpiling modern CSS features like nesting, logical properties, level 4 color syntax, and much more. Yep that’s right, with Parcel CSS you can use things like native CSS Nesting, the HWB() color function, etc. ~ The package — quite obviously — integrates nicely with Parcel itself. It can also be used in standalone mode. To get started with this, you can check out this example/demo project that I’ve created. It compiles a source src/styles.css file using Parcel CSS to build/styles.css. The build file has support for Nesting enabled, and looks like this: import css from "@parcel/css"; import * as fs from "fs"; let { code, map } = css.transform({ filename: "src/styles.css", // Needed for sourcemap code: fs.readFileSync("src/styles.css"), // Read contents from src/styles.css minify: true, sourceMap: true, targets: { safari: (13 << 16) | (2 << 8), // Safari 13.2.0 }, drafts: { nesting: true, // Nesting FTW! }, }); // Write all to ./build/… fs.writeFileSync("build/styles.css", code.toString()); fs.writeFileSync("build/styles.css.map", map.toString()); Note that it’s the code property that reads the source file contents. The filename value just above it is used in the stylemap. I’ve also included a watch command in the example project so that you can simply do an npm run watch to re-build the file whenever a change is detected. ~ Parcel CSS → Announcing Parcel CSS → Parcel CSS Example Project →
https://www.bram.us/2022/01/16/parcel-css-a-new-css-parser-compiler-and-minifier-written-in-rust-example-project/
CC-MAIN-2022-40
refinedweb
265
66.84
This is RenderMan 21 Documentation you are viewing. The Rmanwiki home page will redirect you to the current documentation. Contents "it" scripting is based on the Python scripting language (see). Python is an easy to use language for typing one line commands as well as being a popular programming language for scripts and larger programs. It also comes with a large library of modules. Familiarity with Python is assumed throughout this document. "it" Console The best place to start learning "it" scripting is through the Console window. To start the Console window, go to Window->Console; this should bring up a window similar to the image below: "it" and the IceMan Image Computing Environment There are two main parts to the scripting environment; the IceMan imaging commands and the "it" application objects. These parts are separated into two python modules (it and ice) and preloaded into the python interpreter. IceMan Please refer to the "it" - IceMan documentation page. "it" Application Access to the "it" application functions is through the it python module. This gives the script writer access to the Catalog and Image Element objects (covered in detail later) that organize the images presented in the UI. This means, for example, that operations such as typing in notes into the Inspector panel for an image that a user can perform in the UI can also be performed from a script calling a method on the appropriate ImageElement object. Secondly inside the it module the objects that make up the Qt framework (see) are available. Principally the QApplication object appears as it.app an so users can write extensions to "it" that interact directly with the event loop, widgets, menus etc of Qt. Qt is a very large, sophisticated framework but as we'll see later very little effort is needed to add simple command dialogs to "it". The last category of things in the it module are functions that perform common operations with both images and "it". For instance some useful operations for creating and manipulating images can be found in the sub-module it.util. These can be thought of as macros that just perform a sequence of common steps which are convenient both when interactively type commands to "it" and while writing new extensions. For example it.util.Ramp is a method for generating ramp images providing a easy-to-use wrapper to the ice.RampIceMan operator. It is encouraged to both use the python builtin help() function, by typing help(it.util.Ramp) in the Console window for general use, and also reading the source code of the factory supplied extensions (in $RMSTREE/lib/.../it/util.py) if you want to learn how to add your own extensions. Extensions to "it" can come in a few different forms. Python extensions can be just be functions called from the interactive Console window as with a normal interactive python interpreter. Extensions may also register themselves with the "it" application to be included on the Commands menu and interact with the user by constructing UI's using all of Qt4's powerful widget set and interacting with both existing "it" UI and its application scripting API. There are also special "event" handlers that are executed when the renderer opens and closes images. Simple Examples In the Console window, to navigate to a directory full of images type: it.os.chdir('/path/to/images') Open the image by typing the following command in the Console window: it.AddImage( ice.Load('image.tif') ) it.AddImage() returns an It3ImageElement instance, which can be saved to a variable for later use. If you look at the "it" window title or Inspector window, the image will appear with an e0 label. You can specify a label by passing in a second parameter to it.AddImage after the file name, e.g. it.AddImage( ice.Load('image.tif'), label='_it1' ) The console features an autocompletion feature. As you type, the console will present a list of candidate commands. You can use the "up" and "down" arrow keys to select the appropriate command and then press the Enter key to have it autocomplete the command.You can also use the "up" and "down" keys to cycle through all the commands you've typed. Previous commands can be edited this way. Playing with Images Let's say you have two images and you want to find the differences between them. First, load two images, and assign the ice.Image instances to variables img1 and img2: img1 = ice.Load('image1.tif') img2 = ice.Load('image2.tif') Unlike the previous example we don't call it.AddImage so while the images are loaded into "it" they aren't displayed. Go ahead and call it.AddImage with the images if you want to view them. To compute the difference, from the console type: diff = img2.Subtract(img1) Subtract() returns an ice.Image instance. You can then call it.AddImage() to add the resultant image to the catalog: diff = img2.Subtract(img1) it.AddImage(diff) You should see something similar to the image below: Let's say that the differences were really small, and you can't quite see them, so you want to multiply them by some factor. This is where using the "up" arrow becomes useful. Press that precious little button and then edit the previous command to read: diff = img2.Subtract(img1).Multiply(ice.Card(ice.constants.FLOAT, [10])) Again, call it.AddImage() to add the new diff image to the catalog and view it, and you should see something similar to: You might be wondering what ice.Card(...) did above. The short answer is it made a special kind of ice.Image called a Card which is conceptually infinite in size and has the same value at every pixel. In this case the value of the pixels in the created card is 10.0. This is necessary as the Multiply method requires an ice.Image instance as input. Advanced Example Once you've run through the basics, you're ready to delve a little deeper into scripting "it". In this section we'll go over the steps to make a script that will create a Web page out of a catalog, including making thumbnails and images notes. This will demonstrate the two major objects available in the scripting environment; the Catalog and the Image. For this tutorial, we'll need a directory full of images to play with. First, let's set up a new extension script file. This is a file that will get sourced into "it"'s brain each time "it" starts. "it" uses initialization files (.ini) that can be customized by the user. We strongly recommend that you do not edit the .ini files in the installation; instead, create supplemental .ini files and place them in a directory that you can point at with the $RMS_SCRIPT_PATHS environment variable. In this case, if you don't already have one, create an it.ini file and add the following line: LoadExtension python Web.py Previous versions of "it" would require a full pathname to the file Web.py which you can still specify. However LoadExtension will look for the file Web.py relative to the it.ini file being processed so it is much simpler just to put that file in the same directory. Next, let's verify that the extension is getting loaded properly. Create the aforementioned Web.py file and put the following in it: it.app.Notice("Web Page Extension") Make sure you have saved both files (the .ini and .py) and then launch "it". Once "it" is open, go to the Message Log window ( Window > Message Log...). Change the Message Filter to its most verbose setting, which is "Debug". You will see all the files that were loaded as "it" started, ending with our new one: Web.py. You can also see that Web.py produced something of its own: the message "Web Page Extension". Getting Your Script On Now lets make our script actually make a small Web page. The important tasks this script is performing are: - Getting the current catalog - Getting a list of all the images in that catalog - For every image we find out its name Here's what our simple Web page script looks like: import it import ice def makeWebPage(filename): f = open(filename, 'w') f.write('<html>\n') f.write('<ul>\n') cat = it.app.GetCurrentCatalog() for i in range (0, cat.GetChildCount()): child = cat.GetChild(i) name = it.os.path.basename( child.GetFilename() ) f.write('<li>%s</li>\n' % (name) ) f.write('</ul>\n') f.write('</html>\n') f.close() To run this, first load a few images in "it". Open the "it" console and run the script with the following command: it.extensions.makeWebPage('/tmp/index.html') The script will generate a simple bulleted list of the images loaded in our catalog. You can use the Save Session... and Restore Session... functions here to help you get a catalog set up quickly. Web Bling What say we trick out our little Web page? Let's make a dazzling four column table with thumbnail images of the contents. This will demonstrate how to perform some basic image processing, including resizing the images in our catalog and saving the thumbnails to disk. To create the thumbnail images we use the handy Reformat operator. Reformat can change and/or resize an image in many different ways, depending on how you want to crop or squeeze your images into the new shape. For the Web page we want to create uniform-sized images that are letterboxed if they are not the right shape. In Reformat terminology that means "preserve aspect ratio and don't crop". As it happens, "it" images can be annotated. Go to Window > Inspector and to the right of the image window, you should a large text box for you to add notes. When you save a session the notes get saved along with it. Before you run the example, add notes to a few of the images in your catalog to see how that will come out. When you run the script this time you'll see that a directory called thumbs is created alongside the html file, and in the small jpeg versions of the images are saved therein. You'll also see in the "it" Catalog window that the Catalog now has a new image for each thumbnail that was made. You can delete them if you like. Here's the updated script: import it import ice it.app.Info('Defining Web Page Extension') def makeWebPage2 (filename): f = open(filename, 'w') topDir = it.os.path.dirname(filename) thumbDir = topDir + '/thumbs' if it.os.path.lexists(thumbDir) is False: it.os.mkdir(thumbDir) f.write('<html>\n') f.write('<table cellspacing="10" align="center>\n') cat = it.app.GetCurrentCatalog() col = 0 for i in range(0, cat.GetChildCount()): child = cat.GetChild(i) if col == 0: f.write('<tr>\n') f.write('<td valign="top">\n') name = it.os.path.basename( child.GetFilename() ) h = it.os.path.basename( child.GetLabel() ) thumbFile = thumbDir + '/' + h + '.jpg' thumbIceImage = child.GetImage() reformat = thumbIceImage.Reformat([0,200,0,200], True, False) reformat.Save( thumbFile, ice.constants.FMT_JPEG) f.write('<img src="thumbs/%s.jpg" alt="%s">\n' % (h, name)) f.write('<br><b>%s</b>' % (name)) notes = child.GetNotes() if notes != '': f.write('<small><pre>\n%s</pre></small>\n' % (notes)) f.write('</td>\n') col = col + 1 if col >= 4: f.write('</tr>\n') col = 0 if col != 0: f.write('</tr>\n') f.write('</table>\n') f.write('</html>\n') f.close() Replace the original script with the script above, save the file, and execute as you did before. This time you'll get a slightly nicer looking page with thumbnails, notes, and labels. Here is some more information about developing scripts
https://rmanwiki.pixar.com/pages/viewpage.action?pageId=11468981
CC-MAIN-2020-45
refinedweb
1,975
67.35
The: fixed (int* p = arr) ... // equivalent to p = &arr[0] fixed (char* p = str) ... // equivalent to p = &str[0] You can initialize multiple pointers, as long as they are all of the same type: fixed (byte* ps = srcarray, pd = dstarray) {...} To initialize pointers of different type, simply nest fixed statements: fixed (int* p1 = &p.x) { fixed (double* p2 = &array[5]) { // Do something with p1 and p2. } } After the code in the statement is executed, any pinned variables are unpinned and subject to garbage collection. Therefore, do not point to those variables outside the fixed statement. Pointers initialized in fixed statements cannot be modified. In unsafe mode, you can allocate memory on the stack, where it is not subject to garbage collection and therefore does not need to be pinned. For more information, see stackalloc. // statements_fixed.cs // compile with: /unsafe using System; class Point { public int x, y; } class FixedTest { //); } } 25 6 For more information, see the following sections in the C# Language Specification: 18.3 Fixed and moveable variables 18.6 The fixed statement
http://msdn.microsoft.com/en-us/library/f58wzh21(VS.80).aspx
crawl-002
refinedweb
174
66.94
This tutorial applies to STAMP-P。 You need to connect a downloader for STAMP-PIOC before burning. USB-TTL Recording board , And connect according to the screen printing of the board. The driver corresponding to the burning board is installed on the PC side. The most convenient way is Choose the package version of STAMP-PICO with downloader , The wire sequence of the matching downloader is the same as that of STAMP-PICO, and it can be plugged and recorded directly without wiring. Click here to go to the following page to download the driver for the matching downloader CP210x & CH9102 1.Double-click to open the Burner Burning tool①Select the corresponding device type in the left menu STAMP,Select the firmware version you need STAMP-PICO,Click the download button to download. 2.Then connect the downloader to the computer through the Type-C data cable, select the corresponding COM port, the baud rate can use the default configuration in M5Burner, click "Burn" to start burning, and fill in WIFI Configuration Information(This information will be used for devices to connect to the network, and in this tutorial, we will program in USB mode, which is not required.). 3.>When the burning log prompts Successfully, it means that the firmware has been burned. If you keep pressing the button when the power is off and then power on, you will enter the mode switch state. In this state, LED and others will cycle between Green', Blue , Yellow andPurple `, and different colors represent different modes. Release the button when LED switches the corresponding color to enter the corresponding mode. The detailed functional model is described below. Green: Online programming mode, used to connect online version of UIFlow, You need to configure WIFI before you can connect。 Blue: Offline programming mode, connected via USB cable Yellow: WIFI configuration mode, the device will automatically enable AP, and users can connect to the AP through the mobile device and access the 192.168.4.1 page to configure WIFI Purple: APP mode. Default is to run the last downloaded program. Download. VSCode IDE: Install the M5Stack plug-in: Search the plug-in market for M5Stack and install the plug-in, as shown below. Keep pressing the button when the power is off, then connect to the PC and turn on the power, wait for the LED cycle to switch to blue and release, you can enter the USB programming a small yellow light. If the device is reset, click the refresh button to reopen the file tree. from m5stack import * from m5ui import * from uiflow import * rgb.setColorAll(0xffff33)
https://docs.m5stack.com/en/quick_start/stamp_pico/mpy
CC-MAIN-2021-43
refinedweb
436
57.81
Groovin' with Webwork2 For the uninitiated Groovy is a new scripting language for the JVM. There is even a movement afoot to make it the 'standard' scripting language for Java. This Article Covers Languages RELATED TOPICS Rod Cope's recent talk on Groovy at our last Boulder JUG meeting inspired me to do something Groovy. For the uninitiated Groovy is a new scripting language for the JVM. There is even a movement afoot to make it the 'standard' scripting language for Java (see JSR # 241. Groovy is the most semantically dense scripting language I have ever seen; there are shortcuts for everything. I've talked to a few active Groovy users who say that Groovy = 50% java code and 50% the development time for simple tasks (your mileage may vary). There is one particular aspect of Groovy that really caught my attention at Rod's talk : the Java language is (99%) a sub-set of Groovy. Rod explained that there are still a few Java language features that aren't part of Groovy yet, but the Groovy evangelists don't seem to miss them. But basically 99% of the time you can take a .java, rename it .groovy, run it through the groovy interpreter and get expected results. This feature got me to thinking about writing Webwork actions in Groovy. Webwork actions seem like a perfect candidate for scripting because a) they are pretty simple b) I want to change them a lot while I develop and not have to recompile and redeploy my web application. For this to work well there has to be a smooth mechanism for interacting with groovy scripts from Java. Turns out there is. You can invoke a method on an object which you build from a .groovy script by doing the following: 1) create a plain old .java interface for the class you want to write in groovy. 2) write the implementation for the class just as you would in java and save it to a .groovy file. 3) use something like the following java code to instantiate an object from the .groovy script and invoke a method: ClassLoader cl = Thread.currentThread().getContextClassLoader(); GroovyClassLoader groovyCl = new GroovyClassLoader(cl); Class groovyClass = groovyCl.parseClass( cl.getResourceAsStream("FooImplementation.groovy")); FooInterface foo = (FooInterface) groovyClass.newInstance(); foo.callSomeMethod(); So, to write Webwork actions in Groovy, a little surgery has to be done on the mechanism that loads action classes in xwork. In xwork (the guts of Webwork) there is a simple ObjectFactory with a getClassInstance(String classname) method. This method simply loads the specified class by name and returns it. This only works for .java classes. A simple addition allows this to work with .groovy scripts too using the classloading method above. The final com.opensymphony.xwork.ObjectFactory.getClassInstance(String classname) method looks like this : public Class getClassInstance(String className) throws ClassNotFoundException { Class clazz; // check className if (className.matches(".*.groovy")) { ClassLoader cl = Thread.currentThread().getContextClassLoader(); GroovyClassLoader groovyCl = new GroovyClassLoader(cl); try { clazz = groovyCl.parseClass(cl.getResourceAsStream(className)); } catch (CompilationFailedException e) { throw new ClassNotFoundException("Error parsing " + className, e); } catch (IOException e) { throw new ClassNotFoundException("Error reading " + className, e); } } else { clazz = (Class) classes.get(className); if (clazz == null) { clazz = ClassLoaderUtil.loadClass(className, this.getClass()); classes.put(className, clazz); } } return clazz; } A groovy action configured in the xwork.xml file looks like this : <action name="TestGroovyAction" class="TestGroovyAction.groovy"> <param name="foo">bar</param> <result name="success">/index.jsp</result> <result name="error">/hello.jsp</result> </action> The TestGroovyAction.groovy looks like this : import com.opensymphony.xwork.Action; import org.apache.log4j.Logger; public class TestGroovyAction implements Action { Logger logger = Logger.getLogger("com.vitagroup.TestGroovyAction"); String foo; public TestGroovyAction() { } public String execute () { logger.debug("TestGroovyAction is executing foo is " + foo); return "error"; } public void setFoo (String foo) { this.foo = foo; } } Notice there is no throws Exception clause on the execute method. This is one of the Java language features that hasn't been added to Groovy yet. The only piece left to work out is reloading the groovy classes automatically when the .groovy script is changed. As it stands the web application has to be reloaded in order for the action class to be re built. Hopefully that won't be to hard to solve. This is working well for me so far and I'm enjoying writing webwork actions in groovy. Check back soon to see the solution to the reloading problem. ;) Part Two Scripting webwork actions is down right groovy. Earlier I posted about modifying webwork to allow actions to be written in Groovy. The modification is actually to xwork which is used by webwork. After the first pass the basic functionality was working fine except that modifications to the .groovy scripts while the application was deployed in the servlet container had no effect. I suspected xwork was caching the actions somewhere. Actually Tomcat was the culprit. Tomcat uses it's own classloader implementation called WebappClassloader. This classloader caches calls to getResourceAsStream() and the cache doesn't get invalidated until the webapp is reloaded. However calls to getResoruce which return URLs are not cached. So changing to using getResource solved the caching problem. Now it's possible to load a page, edit the .groovy behind it, reload the page, and see it change. Pretty slick for speedy developing. But it's not necessary to re-compile the .groovy each time the webpage is reloaded. I took a tip from Brian McCallister and looked at Nanocontainer at Codehaus. Nanocontainer has a feature called nanoweb which is an ultra light WW like action framework that allows actions to be implemented in groovy. Instead of recompiling the .groovy action each time instead it's easy to add the compiled groovy class to a cache using the file's timestamp as a key. With each page reload the timestamp of the .groovy is compared against the cache timestamp and the .groovy is recompiled only if it's newer than the cached version. With this in place there is no difference performance-wise between .groovy actions and .java actions. Now it's on to sorting out how to implement a base action in groovy. CP. Dig Deeper on Software programming languages PRO+ Content Find more PRO+ content and other member only offers, here. Start the conversation
http://www.theserverside.com/news/1376868/Groovin-with-Webwork2
CC-MAIN-2018-09
refinedweb
1,041
59.6
Machine Learning is a branch of Artificial Intelligence (AI) which deals with the computer algorithms being used on any data. It focuses on automatically learning from the data being fed into it and it gives us results by improving on the previous predictions every time. Top Machine Learning Algorithms Used in Python Below are some of the top machine learning algorithms used in Python, along with code snippets shows their implementation and visualizations of classification boundaries. 1. Linear Regression Linear regression is one of the most commonly used supervised machine learning technique. As its name suggests, this regression tries to model the relationship between two variables using a linear equation and fitting that line to the observed data. This technique is used to estimate real continuous values like total sales made, or cost of houses. The line of best fit is also called the regression line. It is given by the following equation: Y = a*X + b where Y is the dependent variable, a is the slope, X is the independent variable and b is the intercept value. The coefficients a and b are derived by minimizing the square of the difference of that distance between the various data points and the regression line equation. # synthetic dataset for simple regression from sklearn.datasets import make_regression plt.figure() plt.title( ‘Sample regression problem with one input variable’ ) X_R1, y_R1 = make_regression( n_samples = 100, n_features = 1, n_informative = 1, bias = 150.0, noise = 30, random_state = 0 ) plt.scatter( X_R1, y_R1, marker = ‘o’, s = 50 ) plt.show() from sklearn.linear_model import LinearRegression X_train, X_test, y_train, y_test = train_test_split( X_R1, y_R1, random_state = 0 ) linreg = LinearRegression().fit( X_train, y_train ) print( ‘linear model coeff (w): {}’.format( linreg.coef_ ) ) print( ‘linear model intercept (b): {:.3f}’z.format( linreg.intercept_ ) ) print( ‘R-squared score (training): {:.3f}’.format( linreg.score( X_train, y_train ) ) ) print( ‘R-squared score (test): {:.3f}’.format( linreg.score( X_test, y_test ) ) ) Output linear model coeff (w): [ 45.71] linear model intercept (b): 148.446 R-squared score (training): 0.679 R-squared score (test): 0.492 The following code will draw the fitted regression line on the plot of our data points. plt.figure( figsize = ( 5, 4 ) ) plt.scatter( X_R1, y_R1, marker = ‘o’, s = 50, alpha = 0.8 ) plt.plot( X_R1, linreg.coef_ * X_R1 + linreg.intercept_, ‘r-‘ ) plt.title( ‘Least-squares linear regression’ ) plt.xlabel( ‘Feature value (x)’ ) plt.ylabel( ‘Target value (y)’ ) plt.show() Preparing a Common Dataset For Exploring Classification Techniques The following data is going to be used to show the various classification algorithms which are most commonly used in machine learning in Python. The UCI Mushroom Data Set is stored in mushrooms.csv. %matplotlib notebook import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.decomposition import PCA from sklearn.model_selection import train_test_split df = pd.read_csv( ‘readonly/mushrooms.csv’ ) df2 = pd.get_dummies( df ) df3 = df2.sample( frac = 0.08 ) X = df3.iloc[:, 2:] y = df3.iloc[:, 1] pca = PCA( n_components = 2 ).fit_transform( X ) X_train, X_test, y_train, y_test = train_test_split( pca, y, random_state = 0 ) plt.figure( dpi = 120 ) plt.scatter( pca[y.values == 0, 0], pca[y.values == 0, 1], alpha = 0.5, label = ‘Edible’, s = 2 ) plt.scatter( pca[y.values == 1, 0], pca[y.values == 1, 1], alpha = 0.5, label = ‘Poisonous’, s = 2 ) plt.legend() plt.title( ‘Mushroom Data Set\nFirst Two Principal Components’ ) plt.xlabel( ‘PC1’ ) plt.ylabel( ‘PC2’ ) plt.gca().set_aspect( ‘equal’ ) We will use the function defined below to get the decision boundaries of the different classifiers we’ll use on the mushroom dataset. def plot_mushroom_boundary( X, y, fitted_model ): plt.figure( figsize = (9.8, 5), dpi = 100 ) for i, plot_type in enumerate( [‘Decision Boundary’, ‘Decision Probabilities’] ): plt.subplot( 1, 2, i + 1 ) mesh_step_size = 0.01 # step size in the mesh x_min, x_max = X[:, 0].min() – .1, X[:, 0].max() + .1 y_min, y_max = X[:, 1].min() – .1, X[:, 1].max() + .1 xx, yy = np.meshgrid( np.arange( x_min, x_max, mesh_step_size ), np.arange( y_min, y_max, mesh_step_size ) ) if i == 0: Z = fitted_model.predict( np.c_[xx.ravel(), yy.ravel()] ) else: try: Z = fitted_model.predict_proba( np.c_[xx.ravel(), yy.ravel()] )[:, 1] except: plt.text( 0.4, 0.5, ‘Probabilities Unavailable’, horizontalalignment = ‘center’, verticalalignment = ‘center’, transform = plt.gca().transAxes, fontsize = 12 ) plt.axis( ‘off’ ) break Z = Z.reshape( xx.shape ) plt.scatter( X[y.values == 0, 0], X[y.values == 0, 1], alpha = 0.4, label = ‘Edible’, s = 5 ) plt.scatter( X[y.values == 1, 0], X[y.values == 1, 1], alpha = 0.4, label = ‘Posionous’, s = 5 ) plt.imshow( Z, interpolation = ‘nearest’, cmap = ‘RdYlBu_r’, alpha = 0.15, extent = ( x_min, x_max, y_min, y_max ), origin = ‘lower’ ) plt.title( plot_type + ‘\n’ + str( fitted_model ).split( ‘(‘ )[0] + ‘ Test Accuracy: ‘ + str( np.round( fitted_model.score( X, y ), 5 ) ) ) plt.gca().set_aspect( ‘equal’ ); plt.tight_layout() plt.subplots_adjust( top = 0.9, bottom = 0.08, wspace = 0.02 ) 2. Logistic Regression Unlike linear regression, logistic regression deals with the estimation of discrete values (0/1 binary values, true/false, yes/no). This technique is also called logit regression. This is because it predicts the probability of an event by using a logit function to train the given data. It’s value always lies between 0 and 1 (since it is calculating a probability). The log odds of the results is constructed as a linear combination of the predictor variable as follows: odds = p / (1 – p) = probability of event occurring or probability of event not occurring ln( odds ) = ln( p / (1 – p) ) logit( p ) = ln( p / (1 – p) ) = b0 + b1X1 + b2X2 + b3X3 + … + bkXk where p is the probability of presence of a characteristic. from sklearn.linear_model import LogisticRegression model = LogisticRegression() model.fit( X_train, y_train ) plot_mushroom_boundary( X_test, y_test, model ) 3. Decision Tree This is a very popular algorithm that can be used to classify both continuous and discrete variables of data. At every step, the data is split into more than one homogenous sets based on some splitting attribute/conditions. from sklearn.tree import DecisionTreeClassifier model = DecisionTreeClassifier( max_depth = 3 ) model.fit( X_train, y_train ) plot_mushroom_boundary( X_test, y_test, model ) 4. SVM SVM is short for Support Vector Machines. Here the basic idea is the classify the data points by using hyperplanes for separation. The goal is the find out such a hyperplane that has the maximum distance (or margin) between the data points of both the classes or categories. We choose the plane in such a way to take care of classifying unknown points in the future with the highest confidence. SVMs are famously used because they give high accuracy while taking up very less computational power. SVMs can also be used for regression problems. from sklearn.svm import SVC model = SVC( kernel = ‘linear’ ) model.fit( X_train, y_train ) plot_mushroom_boundary( X_test, y_test, model ) 4. Naïve Bayes As the name suggests, Naïve Bayes algorithm is a supervised learning algorithm based on the Bayes Theorem. Bayes Theorem uses conditional probabilities to give you the probability of an event based on some given knowledge. Where, P (A | B): The conditional probability that event A occurs, given that event B has already occurred. (Also called posterior probability) P(A): Probability of event A. P(B): Probability of event B. P (B | A): The conditional probability that event B occurs, given that event A has already occurred. Why is this algorithm named Naïve, you ask? This is because it assumes that all occurrences of events are independent of each other. So each feature separately defines the class a data point belongs to, without having any dependencies among themselves. Naïve Bayes is the best choice for text categorizations. It will work sufficiently well with even small amounts of training data. from sklearn.naive_bayes import GaussianNB model = GaussianNB() model.fit( X_train, y_train ) plot_mushroom_boundary( X_test, y_test, model ) 5. KNN KNN stands for K-Nearest Neighbours. It is a very wide used supervised learning algorithm which classifies the test data according to its similarities with the previously classified training data. KNN does not classify all data points during training. Instead, it just stores the dataset and when it gets any new data, it then classifies those data points based on their similarities. It does so by calculating the Euclidean distance of the K number of nearest neighbours (here, n_neighbors) of that data point. from sklearn.neighbors import KNeighborsClassifier model = KNeighborsClassifier( n_neighbors = 20 ) model.fit( X_train, y_train ) plot_mushroom_boundary( X_test, y_test, model ) 6. Random Forest Random forest is a very simple and diverse machine learning algorithm that uses a supervised learning technique. As you can sort of guess from the name, random forest consists of a large number of decision trees, acting as an ensemble. Each decision tree will figure out the output class of the data points and the majority class will be chosen as the model’s final output. The idea here is that more trees working on the same data will tend to be more accurate in results than individual trees. from sklearn.ensemble import RandomForestClassifier model = RandomForestClassifier() model.fit( X_train, y_train ) plot_mushroom_boundary( X_test, y_test, model ) 7. Multi-Layer Perceptron Multi-Layer Perceptron (or MLP) is a very fascinating algorithm coming under the branch of deep learning. More specifically, it belongs to the class of feed-forward artificial neural networks (ANN). MLP forms a network of multiple perceptrons with at least three layers: an input layer, output layer and hidden layer(s). MLPs are able to distinguish between data that are non-linearly separable. Each neuron in the hidden layers uses an activation function to proceed to the next layer. Here, the backpropagation algorithm is used to actually tune the parameters and hence train the neural network. It can mostly be used for simple regression problems. from sklearn.neural_network import MLPClassifier model = MLPClassifier() model.fit( X_train, y_train ) plot_mushroom_boundary( X_test, y_test, model ) Also Read: Python Project Ideas & Topics Conclusion We can conclude that different machine learning algorithms yield different decision boundaries and hence different accuracy results in classifying the same dataset. There is no way to declare anyone algorithm as the best algorithm for all kinds of data in general. Machine learning requires rigorous trial and errors for various algorithms to determine what works best for each dataset separately. The list of ML algorithms doesn’t obviously end here. There is a vast sea of other techniques which are waiting to be explored in the Scikit-Learn library of Python. Go ahead and train your datasets using all of those and have fun!.
https://www.upgrad.com/blog/most-used-machine-learning-algorithms-in-python/
CC-MAIN-2021-43
refinedweb
1,720
52.15
Arduino Forum > Using Arduino > Networking, Protocols, and Devices (Moderator: fabioc84 ) > Wireless Bluetooth Sketch Uploading Print Go Down Pages: [1] Topic: Wireless Bluetooth Sketch Uploading (Read 3049 times) previous topic - next topic 9Vortex Guest Wireless Bluetooth Sketch Uploading Jun 24, 2012, 05:04 pm Hey. I am trying to find a Bluetooth module that will allow for wireless sketch uploading. In my research I think I read that any Bluetooth module can do this if you press the reset button at the right time (right after pressing upload?). So would both of the following modules work for wireless sketch uploading and if not could anyone suggest a good module for this? And is there a different wiring configuration for this functionality or will it work wired the same way? Thanks for the help guys Claghorn Guest Re: Wireless Bluetooth Sketch Uploading #1 Jun 25, 2012, 02:48 am I got bluetooth updating to work (sometimes), but the timing is very tricky with the reset button. You need to be able to run avrdude in a command line so you can release the reset button at the same time you hit return (and even then it may not work). You also need to be sure the bluetooth module is set to communicate at the same baud rate the boot loader uses, and that avrdude is also using that same baud rate. Down near the bottom of this web page is my success story using an ITead bluetooth module: 9Vortex Guest Re: Wireless Bluetooth Sketch Uploading #2 Jun 25, 2012, 05:33 pm Thanks for the help! Reading your trials was very helpful. Confirmed and expanded upon what I had researched. I may look into a hardware solution to the problem... orangeLearner Guest Re: Wireless Bluetooth Sketch Uploading #3 Jun 25, 2012, 07:40 pm I got my Arduino Nano to remotely program with my el-cheapo bluetooth JY-MCU module, however it does require you press the reset button at the right time OR build a circuit to reset at the right time, which is what I did. My JY-MCU has an LED that blinks when it is not connected. When the bluetooth module tries to program, it will blink at about 10Hz....then remain solid on for 1200ms, then flash again at 10Hz for about 1000ms, then go solid on again. During this second solid on is when I press the button and it remotely programs, although it takes 5x longer than it would over a wire. I made an ATtiny85 circuit to monitor this blinking light and reset based on the pattern I described. The other (much more documented) option is to buy a pair of Pololu Wixels. They cost $40 but they can act as a remote serial pipe up to 200k baud and they already have a sketch with a DTR pin so you can remotely program Arduinos. Plus they are their own microcontroller so you can put whatever code on them afterward. Yet another option is to use wireless Xbee programming. Ladyada and Sparkfun have tutorials on this but it is not cheap. (probably about $50, and the XBees are not as flexible as the Wixels) Print Go Up Pages: [1]
http://forum.arduino.cc/index.php?topic=111478.0
CC-MAIN-2017-22
refinedweb
535
66.07
Java 5's DelayQueue The queue classes in the Java 5 package java.util.concurrency package provide solutions for common queuing needs. The DelayQueue class provides a blocking queue from which objects cannot be removed until they have been in the queue for a minimum period of time. Elements in the DelayQueue must be of type java.util.concurrent.Delayed, an interface that requires two methods to be defined: getDelay, a method that returns how much time is left before the delay completes, and compareTo. The Delayed interface extends the java.lang.Comparable interface, so Delayed implementations must specify how they should be ordered with respect to other Delayed objects. As an example, consider a fax server tied to a single phone line. The outgoing phone line can handle only one call at a time, and transmitting a fax takes many second or even minutes. The fax server cannot lose any incoming fax requests while the server is currently transmitting. As a simple solution, the server can place all incoming fax requests in a queue, returning immediately to the client requesting the transmission. A separate thread on the server pulls entries off the queue and processes them in the order received. When a request is initially made, it's marked to indicate that it should be sent without delay. When attempting to send a fax, sometimes the line is busy or the line drops during transmission. If a fax transmission attempt fails, the fax server must place the transmission request back into the queue. At this point, the server marks the request with a delay period (ten seconds, in the below implementation). This wait period allows for the remote connection to be reset or to become available. The wait period also allows for other waiting faxes to have a opportunity to attempt transmission. The below code demonstrates use of the DelayQueue as the core of a fax server implementation. The building block classes of this application are Fax, Dialer, and Transmitter. In fact, Dialer and Transmitter are represented here as just interfaces--we're not not concerned with the communication details. These interfaces are shown in Listing 1, as well as the definition of a simple thread utility class used by other code in this example. Listing 2 shows the Fax class, a simple data class. Listing 1. Dialer and Transmitter interfaces; ThreadUtil. // Dialer.java public interface Dialer { boolean connect(String number); } // Transmitter.java public interface Transmitter { void send(Fax fax); } // ThreadUtil.java package util; public class ThreadUtil { public static void pause(int seconds) { try { Thread.sleep(seconds * 1000L); } catch (InterruptedException e) { } } } Listing 2. The Fax class. public class Fax { private String to; private String from; private String text; public Fax(String to, String from, String text) { this.to = to; this.from = from; this.text = text; } public String to() { return to; } public String from() { return from; } public String text() { return text; } public String toString() { return String.format("[fax to: %s from: %s]", to, from); } } The meat of the application is in the FaxServer (Listing 3) and FaxTransmission (Listing 4) classes. A FaxTransmission holds onto a Fax object, and contains the logic to determine whether a Fax needs to wait. I'll provide more details on the FaxTransmission class shortly. The FaxServer encapsulates a DelayQueue that stores FaxTransmission objects. Listing 3. The FaxServer class. import java.util.concurrent.*; public class FaxServer { private DelayQueue<FaxTransmission> queue = new DelayQueue<FaxTransmission>(); private Dialer dialer; private Transmitter transmitter; public FaxServer(Dialer dialer, Transmitter transmitter) { this.dialer = dialer; this.transmitter = transmitter; } public void start() { new Thread(new Runnable() { public void run() { while (true) { try { transmit(queue.take()); } catch (InterruptedException e) { } } } }).start(); } private void transmit(FaxTransmission transmission) { if (dialer.connect(transmission.getFax().to())) { System.out.printf("sending %s.", transmission); transmitter.send(transmission.getFax()); System.out.println("completed"); } else { System.out.printf( "busy, queuing %s for resend%n", transmission); transmission.setToResend(); queue.add(transmission); } } public void send(Fax fax) { System.out.printf("queuing %s%n", fax); queue.add(new FaxTransmission(fax)); } } A client requests a Fax to be sent by calling the send method on FaxServer. The FaxServer code wraps the Fax object in a FaxTransmission, which then gets enqueued on the DelayQueue. Control returns immediately to the client. A separate thread on the server, defined in the start method, loops infinitely. The body of the loop calls the method take against the DelayQueue object. This call blocks until there is an appropriate element to remove from the queue (i.e. one that has waited the specified minimum amount of time): transmit(queue<<
https://www.developer.com/java/other/article.php/3654721/Java-5s-DelayQueue.htm
CC-MAIN-2018-17
refinedweb
755
50.02
Possessed PC Pranks for Halloween - Posted: Oct 29, 2007 at 3:03 PM - 2,226 Views - 5 Comments Something went wrong getting user information from Channel 9 Something went wrong getting user information from MSDN Something went wrong getting the Visual Studio Achievements Last year I wrote two articles on how to annoy your friends and family by dripping blood down their computer screen, or by squirting them with (hopefully) fake blood from (even more hopefully) a fake skull. Halloween is just around the corner, so here are two more applications which you can use to further bother the people around you. If you have ever seen the horror film The Ring, then you can likely guess the basic premise of this application. For those that haven't, the film is about a video tape which, when watched, will cause the viewer to die within a week of viewing it. So, what better way to annoy and terrify your friends than by playing the "cursed video" at a time you schedule without their knowing? This application will use a library from Managed DirectX to play the video. If you haven't already, grab the full DirectX SDK and install. Note that we will be creating a dependency on DirectX, so the victim's PC will need to have DirectX 9.0c installed. Almost every XP desktop has it installed at this point, but be sure to test the app on the target PC. Additionally, I have had mixed success drawing a semi-transparent video with every method I've tried (DX, WPF, etc.). It is very dependent on video drivers, so the intended effect may not be perfect on all PCs. Again, be sure to test. Next, create a new C#/VB Windows Application project and set references to the Microsoft.DirectX and Microsoft.DirectX.AudioVideoPlayback assemblies as shown: Next, we need to make some modifications to the form. The video will be played at 50% transparency against the victim's desktop, which gives a very creepy effect. The form should also be hidden from view until the video is ready to be played at the selected time. To do this, set the following properties on the form: Next, we need to provide a surface onto which the video can be drawn. Drop a PictureBox control onto the form and set its properties as follows: Now we need a method which will play the video. Bring in the Microsoft.DirectX.AudioVideoPlayback namespace with the using/Imports keyword and then add the following method to the form: C# using Microsoft.DirectX.AudioVideoPlayback; ... Video _video; public void StartShow() { // create a new video from a filename this._video = Video.FromFile("ring.wmv"); // set the drawable surface to the picturebox this._video.Owner = pictureBox1; // setup an event handler so we can pop up a msgbox when the video is over this._video.Ending += new EventHandler(video_Ending); // play it! this._video.Play(); } void video_Ending(object sender, EventArgs e) { // called when the video ends MessageBox.Show("Happy Halloween!"); this.Close(); } VB Imports Microsoft.DirectX.AudioVideoPlayback ... Private _video As Video Public Sub StartShow() ' create a new video from a filename Me._video = Video.FromFile("ring.wmv") ' set the drawable surface to the picturebox Me._video.Owner = pictureBox1 ' setup an event handler so we can pop up a msgbox when the video is over AddHandler _video.Ending, AddressOf video_Ending ' play it! Me._video.Play() End Sub Private Sub video_Ending(ByVal sender As Object, ByVal e As EventArgs) ' called when the video ends MessageBox.Show("Happy Halloween!") Me.Close() End Sub This simply creates a new Video object by loading a specific filename (ring.wmv), then assigns the video's Owner property to the PictureBox, which is where the video will be drawn. Next, an event is setup so we are notified when the clip has ended, and finally, the video is played with the Play method. The video_Ending event handler displays a message box and closes down the application. x86/x64 The Managed DirectX assemblies are compiled to run in x86 mode only. If you are compiling the application on an x64 OS, you will have to ensure that the main application is built as an x86 target. To do so, choose Configuration Manager from the build menu. In the dialog that appears, choose <New...> from the Active solution platform drop down: Finally, choose x86 from the Type or select the new platform drop down: Get back to the project and rebuild and you will have an x86-native application which will run successfully on both x86 and x64 platforms. The code which handles the configuration, scheduling and testing will be the same for both of these applications, so please jump to that section by clicking here. If you have seen The Shining, you can probably guess where this one is heading, too. The basic plot is that an author and his family retreat to an isolated hotel for the winter, and some paranormal activity causes the father to go a bit insane, among other things. In the film, as the author goes crazy, he starts typing "All work and no play makes Jack a dull boy" over and over again on the typewriter. So, this application will mimic that by, at the appointed time, popping up an instance of Notepad and having the computer type the phrase over and over again on its own. Once again, create a new VB/C# Windows Application project. As with the previous project, we want the main form to be hidden, but this time, it should be hidden at all times. So, set the form properties as follows: Next, drag a timer onto the form named tmrType and create an event handler for the Tick event which we will fill in later. Create a method named StartShow as follows: C# using System.Diagnostics; ... private Process _process; public void StartShow() { _process = Process.Start("notepad.exe"); tmrType.Start(); } VB Imports System.Diagnostics ... ' the Notepad process Private _process As Process Public Sub StartShow() _process = Process.Start("notepad.exe") tmrType.Start() End Sub This code starts an instance of Notepad and then starts the tmrType ticking. That tick event will send the appropriate letter to the Notepad window from the phrase: C# using System.Runtime.InteropServices; ... [DllImport("user32")] public static extern int SetForegroundWindow(IntPtr hwnd); // counter variables private int i, j; // the message to type out private const string msg = "All work and no play makes Jack a dull boy."; ... private void tmrType_Tick(object sender, EventArgs e) { // make sure the Notepad window is at the front SetForegroundWindow(_process.MainWindowHandle); // send the next letter to the window if(i < msg.Length) SendKeys.Send(msg[i++].ToString()); else { // send a carriage return if we're at the end of the line SendKeys.Send("{ENTER}"); // write the line 5 times if(++j < 5) i = 0; else { // when done, stop the timer, display the msgbox and close it up tmrType.Stop(); MessageBox.Show("Happy Halloween!"); this.Close(); } } } VB Imports System.Runtime.InteropServices ... <DllImport("user32")> _ Public Shared Function SetForegroundWindow(ByVal hwnd As IntPtr) As Integer End Function ' counter variables Private i, j As Integer ' the message to type out Private Const msg As String = "All work and no play makes Jack a dull boy." Private Sub tmrType_Tick(ByVal sender As Object, ByVal e As EventArgs) Handles tmrType.Tick ' make sure the Notepad window is at the front SetForegroundWindow(_process.MainWindowHandle) ' send the next letter to the window If i < msg.Length Then SendKeys.Send(msg.Chars(i).ToString()) i += 1 Else ' send a carriage return if we're at the end of the line SendKeys.Send("{ENTER}") ' write the line 5 times j = j + 1 If j < 5 Then i = 0 Else ' when done, stop the timer, display the msgbox and close it up tmrType.Stop() MessageBox.Show("Happy Halloween!") Me.Close() End If End If End Sub At the start of the tick, a call is made to the Win32 API function SetForegroundWindow, passing in the window handle (_process.MainWindowHandle) of the Notepad process. This ensures that the Notepad window is focused before we send the next letter to the window using the SendKeys method. When the end of the phrase is reached, a carriage return is sent using SendKeys and the string {ENTER}. This repeats 5 times at which point the timer is stopped, the message box is displayed, and the application closes. The final piece is to setup a time at which the effect will display on the victim's PC. Create a simple dialog box with a DateTimePicker control that allows the choosing of the appropriate date/time to run: Next, create a new application setting named Time as shown: Next, setup the events on the Configuration form as follows: C# private void ConfigForm_Load(object sender, EventArgs e) { // if no time has been specified, show today's date, otherwise, show the date previously selected if(Properties.Settings.Default.Time == DateTime.MinValue) dateTimePicker1.Value = DateTime.Now.Date; else dateTimePicker1.Value = Properties.Settings.Default.Time; } private void btnSave_Click(object sender, EventArgs e) { // save the currently selected date/time to the Settings collection Properties.Settings.Default.Time = dateTimePicker1.Value; Properties this.DialogResult = DialogResult.OK; this.Close(); } private void btnCancel_Click(object sender, EventArgs e) { // close it up this.DialogResult = DialogResult.Cancel; this.Close(); } VB Private Sub ConfigForm_Load(ByVal sender As Object, ByVal e As EventArgs) Handles MyBase.Load ' if no time has been specified, show today's date, otherwise, show the date previously selected If My.Settings.Default.Time = DateTime.MinValue Then dateTimePicker1.Value = DateTime.Now.Date Else dateTimePicker1.Value = My.Settings.Default.Time End If End Sub Private Sub btnSave_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnSave.Click ' save the currently selected date/time to the Settings collection My.Settings.Default.Time = dateTimePicker1.Value My Me.DialogResult = System.Windows.Forms.DialogResult.OK Me.Close() End Sub Private Sub btnCancel_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnCancel.Click ' close it up Me.DialogResult = System.Windows.Forms.DialogResult.Cancel Me.Close() End Sub The code loads the current value at start (if it exists), saves it when the Save button is clicked, and does nothing when Cancel is clicked. Back on the main form of either application, drag and drop a timer control named tmrScheduler to the main design surface. Set the properties as follows: This will start the timer immediately and call the Tick event once every minute. That event will check to see whether the requested time has passed and, if so, starts the show: C# - Cursed Video private void tmrScheduler_Tick(object sender, EventArgs e) { // if the current time is greater than the time set by the user if(DateTime.Now >= Properties.Settings.Default.Time) { // bring up the window this.WindowState = FormWindowState.Maximized; // disable this timer tmrScheduler.Enabled = false; // bring it to the top this.BringToFront(); // play the movie StartShow(); } } VB - Cursed Video.Maximized ' disable this timer tmrScheduler.Enabled = False ' bring it to the top Me.BringToFront() ' play the movie StartShow() End If End Sub C# - The Shining private void tmrScheduler_Tick(object sender, EventArgs e) { // if the current time is greater than the time set by the user if(DateTime.Now >= Properties.Settings.Default.Time) { // disable this timer tmrScheduler.Enabled = false; // start typing StartShow(); } } VB - The Shining Private Sub tmrScheduler_Tick(ByVal sender As Object, ByVal e As EventArgs) Handles tmrScheduler.Tick ' if the current time is greater than the time set by the user If DateTime.Now >= My.Settings.Default.Time Then ' disable this timer tmrScheduler.Enabled = False ' start typing StartShow() End If End Sub To make it easy to reset the launch time, and to test the effect without scheduling it, two command line parameters can be added to Program.cs/vb to re-run the configuration or test. Add the following code prior to the call to Application.Run: C# // test the effect if(Environment.CommandLine.IndexOf("test") > -1) { Form1 f = new Form1(); f.StartShow(); f.ShowDialog(); return; } // if we haven't set the time, or the user requested the config dialog, display it if(Properties.Settings.Default.Time == DateTime.MinValue || Environment.CommandLine.IndexOf("config") > -1) { if(new ConfigForm().ShowDialog() == DialogResult.Cancel) return; } VB ' test the effect If Environment.CommandLine.IndexOf("test") > -1 Then Dim f As Form1 = New Form1() f.StartShow() f.ShowDialog() Return End If ' if we haven't set the time, or the user requested the config dialog, display it If My.Settings.Default.Time = DateTime.MinValue OrElse Environment.CommandLine.IndexOf("config") > -1 Then If New ConfigForm().ShowDialog() = DialogResult.Cancel Then Return End If End If If -test is passed on the command line, the form is created and the show is started. If -config is passed on the command line, the Configuration dialog is displayed and the application continues as it would the first time it ran. Cursed Video Copy the exe, Microsoft.DirectX and Microsoft.DirectX.AudioVideoPlayback to a folder on the user's PC along with a video clip named ring.wmv. Of course, you can replace that video clip with any clip of your choosing, though if you wish to use a different filename, you will have to change the code and recompile. The Shining Copy the exe to a folder on the user's PC. For either, create a shortcut to it in the Startup program group, or set it up to run via the registry using the following key for the logged in user: Simply create a new string key with any Name, and a Data value of the path to the executable. Next, start the application once on their PC to setup the date and time for the show to begin. Once that is done, the application will remain running in the background. If the PC is restarted, and the application is setup to run at startup as described above, it will start silently and remain running, waiting for the date and time specified. Two Halloween applications for the price of one. Not that you paid for either. Be sure to have fun with them this Halloween and disturb anyone you can. Enjoy! Thanks to Giovanni Montrone for testing out these applications during development. C# 2008 Express but the configuration manager in the build menu is not active, what can be happening? @Hiram: use the contact page so I can get a better understanding of what is happening. Here is a list of my current Coding4Fun articles: WiiEarthVR Animated Musical Holiday Light Show - Version What Type do you select for the Time settings, it is unclear from the picture.. @Will If I understand what you're asking, that is a control built into the framework. Remove this comment Remove this threadclose
http://channel9.msdn.com/coding4fun/articles/Possessed-PC-Pranks-for-Halloween
CC-MAIN-2015-11
refinedweb
2,441
54.93
I wonder if the next announcement is TinyCLR OS running on the Octavo Module… Why on Earth would you want that??? @ ianlee74 - Total Control. Question: Can I stream data to a GHI F20-uSD module from a GHI G80TH using the TinyCLR OS? (i.e. is support for the GHI filesystem modules already there?) (I’m currently downloading VS2017RC, and the TinyCLR files - so I haven’t explored what is available yet with the TinyCLR namespaces) F20 accepts simple serial commands that you can send from any system, including TinyCLR. @ Gus - Great If only i had one, I could use that to log telemetry on my rocket Edit: I succeeded in getting vs2017rc installed, along with TinyCLR, and I can happily build tinyCLR apps. (No usable hardware for now, but that will change sometime soon) In other news, VS2017 looks amazing. At least the very clear integration with so many [em]fun [/em]things looks amazing. The Visual Studio Installer Tool is a nice upgrade. I think I will just be exploring these for the next few “boss-isnt-coming-into-work” days I’m excited about the TinyCLR/VS2017 combination. Does anyone have performed some performance benchmarks using TinyClr vs last .NetMF/GHI Sdk? @ leforban - I wouldn’t do that yet. You are comparing a netmf production release to TinyCLR experiential release. It still would be interesting :think: I wouldn’t expect that there would be any significant differences. It’s still NETMF, still running the interpreter, just some changed-up APIs. Gus, correct me if I’m wrong here. @ godefroi - different native compiler, different managed compiler… Just for a start. That’s why it is of a big interest for those who need low latency and/or have more or less DSP oriented applications. @ leforban - “different managed compiler” is probably not actually a net performance gain, not at this point anyway. Roslyn is still young, where the old compiler had more than a decade of work on it. The new native compiler would be a safer bet for a performance gain, but it’s evolutionary, not revolutionary. DSP wasn’t remotely possible before, and that’s not going to change until and unless the interpreter goes away. @ godefroi - Minor correction: I do all kinds of real time digital signal processing (DSP) on the G400 and previously on the Spider board. Luckily, the signals I have to deal with are all less than about 10 Hz. You can do all kinds of real time DSP on any processor you care to use if the signal bandwidth is low enough. The only reason I bring it up is I’m hoping .Net Micro and/or TinyCLR take over the entire embedded world (or at least 75% of it) and I don’t want people to get the wrong impression that TinyCLR "Can’t do DSP’ or “Can’t do real time”. It can do both quite nicely for a vast array of real world tasks. Not sonar array processing or real time video image processing or microsecond interrupt handling, but those kind of tasks are a pretty small fraction of the whole embedded world. Happy New Year @ Gene - Thank you @ Gene - You’re correct, of course, that you can do anything if either a) you’re willing to wait long enough, b) willing to use a 400 MHz microcontroller where a 16 MHz 8-bit might otherwise be sufficient, or c) are doing things so slowly that just about any hardware will be sufficient. I doubt 75% of the embedded world is willing to operate under those constraints, though, and that’s why I think that native compilation is unquestionably the way forward. @ godefroi - I disagree. .NET on PCs was a bad idea to many few years ago, now it is the way to go. Similarly, using C was a bad idea on micros as you should use assembly!!! Lose performance, gain everything else. I do native development everyday and I do hate it every single time I use it. This is my real life experience, not a theory. @ Gus - The only arguments against .NET I ever heard were from those who believed it was interpreted. Of course, it never was; it was JIT compiled from the very beginning. If it were interpreted, the criticisms would’ve been correct; there would be whole classes of performance-sensitive problems for which it would not have been an appropriate solution (unless, again, you’re willing to solve the problem very slowly, or you’re willing to overspec your hardware by an order of magnitude or two). You’re not doing your customers any favors by keeping hopes alive on significant performance increases. The reality is that, until native compilation is implemented, performance may vary by a few percentage points one way or another (or may even shift dramatically when functionality currently implemented in managed code is moved into native code), but no revolutionary changes are coming. You know that the interpreter is the limiting factor in performance, and until it’s gone, performance generally is going to look pretty close to how it does today. You owe it to your customers to be honest about that. @ godefroi - I already said, lose performance, gain everything else. Our customers, including commercial customers, rather be more productive than anything else. In case you didn’t notice, you are the only one here against this community’s vision. We all want to see a future for TinyCLR and are very excited about it. But you can use whatever fits your needs. @ Gus - Absolutely. NETMF (and therefore even more so TinyCLR) have enormous strengths, and they should be emphasized, because while not all of them are unique in isolation, they are unique in this specific combination. It’s an immensely useful platform, but performance is not one of its particular strengths. However, when one of your customers asks, The honest thing to say is, “no, because we don’t expect there to be significant differences in the performance. We do however plan to provide X feature and Y feature and Z feature that were not previously available.” When you say things like: and You’re giving false hope and setting people up for disappointment. We’re all grown-ups here, we can handle the truth. If we were looking for bleeding-edge performance, we’d be using other hardware and software combinations, after all.
https://forums.ghielectronics.com/t/introducing-tinyclr-os-a-new-path-for-our-netmf-devices/203?page=3
CC-MAIN-2018-17
refinedweb
1,067
61.77
Steps for Execution: sudo python kmean.py Pyhton Code: import sys import math import random import subprocess def main(): # How many points are in our dataset? num_points = 10 # For each of those points how many dimensions do they have? dimensions = 2 # Bounds for the values of those points in each dimension lower = 0 upper = 200 # The K in k-means. How many clusters do we assume exist? num_clusters = 3 # When do we say the optimization has 'converged' and stop updating clusters opt_cutoff = 0.5 # Generate some points points = [makeRandomPoint(dimensions, lower, upper) for i in xrange(num_points)] # Cluster those data! clusters = kmeans(points, num_clusters, opt_cutoff) # Print our clusters for i,c in enumerate(clusters): for p in c.points: print " Cluster: ", i, "\t Point :", p class Point: ''' An point in n dimensional space ''' def __init__(self, coords): ''' coords - A list of values, one per dimension ''' self.coords = coords self.n = len(coords) def __repr__(self): return str(self.coords) class Cluster: ''' A set of points and their centroid ''' def __init__(self, points): ''' points - A list of point objects ''' if len(points) == 0: raise Exception("ILLEGAL: empty cluster") # The points that belong to this cluster self.points = points # The dimensionality of the points in this cluster self.n = points[0].n # Assert that all points are of the same dimensionality for p in points: if p.n != self.n: raise Exception("ILLEGAL: wrong dimensions") # Set up the initial centroid (this is usually based off one point) self.centroid = self.calculateCentroid() def __repr__(self): ''' String representation of this object ''' return str(self.points) def update(self, points): ''' Returns the distance between the previous centroid and the new after recalculating and storing the new centroid. ''' old_centroid = self.centroid self.points = points self.centroid = self.calculateCentroid() shift = getDistance(old_centroid, self.centroid) return shift def calculateCentroid(self): ''' Finds a virtual center point for a group of n-dimensional points ''' numPoints = len(self.points) # Get a list of all coordinates in this cluster coords = [p.coords for p in self.points] # Reformat that so all x's are together, all y'z etc. unzipped = zip(*coords) # Calculate the mean for each dimension centroid_coords = [math.fsum(dList)/numPoints for dList in unzipped] return Point(centroid_coords) def kmeans(points, k, cutoff): # Pick out k random points to use as our initial centroids initial = random.sample(points, k) # Create k clusters using those centroids clusters = [Cluster([p]) for p in initial] # Loop through the dataset until the clusters stabilize loopCounter = 0 while True: # Create a list of lists to hold the points in each cluster lists = [ [] for c in clusters] clusterCount = len(clusters) # Start counting loops loopCounter += 1 # For every point in the dataset ... for p in points: # Get the distance between that point and the centroid of the first # cluster. smallest_distance = getDistance(p, clusters[0].centroid) # Set the cluster this point belongs to clusterIndex = 0 # For the remainder of the clusters ... for i in range(clusterCount - 1): # calculate the distance of that point to each other cluster's # centroid. distance = getDistance(p, clusters[i+1].centroid) # If it's closer to that cluster's centroid update what we # think the smallest distance is, and set the point to belong # to that cluster if distance < smallest_distance: smallest_distance = distance clusterIndex = i+1 lists[clusterIndex].append(p) # Set our biggest_shift to zero for this iteration biggest_shift = 0.0 # As many times as there are clusters ... for i in range(clusterCount): # Calculate how far the centroid moved in this iteration shift = clusters[i].update(lists[i]) # Keep track of the largest move from all cluster centroid updates biggest_shift = max(biggest_shift, shift) # If the centroids have stopped moving much, say we're done! if biggest_shift < cutoff: print "Converged after %s iterations" % loopCounter break return clusters def getDistance(a, b): ''' Euclidean distance between two n-dimensional points. Note: This can be very slow and does not scale well ''' if a.n != b.n: raise Exception("ILLEGAL: non comparable points") ret = reduce(lambda x,y: x + pow((a.coords[y]-b.coords[y]), 2),range(a.n),0.0) return math.sqrt(ret) def makeRandomPoint(n, lower, upper): '''Returns a Point object with n dimensions and values between lower and upper in each of those dimensions''' p = Point([random.uniform(lower, upper) for i in range(n)]) return p if __name__ == "__main__": main()
http://www.professionalcipher.com/2018/04/kmean-machine-learning-algorithm-for-clustering-task-in-big-data-analytics.html
CC-MAIN-2020-05
refinedweb
718
56.66
select match though row does not exists (1) By agraeper on 2020-11-04 10:15:37 [link] [source] building e(etage/floor). axis x-y where x is int and y char 1-A .. 12-I table axis ( r integer primary key -- is rowid e integer x integer y varchar -- i tried 'char' and 'text' before ) i check for (0,1,"A"). does not exists and i insert. check again. ok rowid=1 i check for (0,1,"B"). does not exists and i insert. check again. ok rowid=2 i check for (0,1,"C") and it finds rowid 1 and 2. so (*,*,C) is not inserted. i check for (*,*,"x") with x unequal C works. for all e,x the same. now, i use integer for y too and use something like i=ord(c) c=chr(i) to convert, but i like to know. thanks in advance, Andreas (2) By Gunter Hick (gunter_hick) on 2020-11-04 11:25:37 in reply to 1 [link] [source] please show the SQL you are executing and its results. It is not clear what you mean by "check". Changing the declared type of y is unlikely to make a difference, especially as all your attempts are interpreted exactly the same way. (4) By agraeper on 2020-11-04 12:49:34 in reply to 2 [link] [source] now: ( int = c - 'A' not precisely what ord() chr() would do. merely the 'A'-offset) snprintf(q,127,"select x from achse where e=%d and i=%d and c=%d;",e,i,ORD(c)); this works. before: snprintf(q,127,"select x from achse where e=%d and i=%d and c="%c";",e,i,c); // int e,i; char c; check (select x from t where .. and .. and ..;) : prepare step+ finalize returning the last rowid (>0) if there is at least one match. otherwise returning 0 the select finds all (e,i,*) if c is "C". xubuntu 18.04 ( and debian 10.3, sqlite3-version i cannot check this moment ) sqlite3 --version 3.22.0 2018-01-22 18:45:57 0c55d179733b46d8d0ba4d88e01a25e10677046ee3da1d5b1581e86726f2alt1 new problem : sqlite3 x.db SQLite version 3.22.0 2018-01-22 18:45:57 Enter ".help" for usage hints. sqlite> .schema kostenstelle CREATE TABLE kostenstelle ( x integer primary key -- , k integer unique not null -- ); sqlite> select * from kostenstelle ; 1|444 2|445 3|453 4|455 5|456 6|457 sqlite> // c-output query 'select x from kostenstelle where k=456;' error (17/database schema has changed) 'no such column: x' query 'insert into kostenstelle(k) values(456);' error (19/constraint failed) 'UNIQUE constraint failed: kostenstelle.k' obviously columns x,k exists and there is an entry. error-message contains result of sqlite3_exec() and sqlite3_errstr(result) and sqlite3_errmsg(handle) new question: when i inside one c-funktion open,exec,close the most thing works fine. when i use three functions : h=xopen() returning a handle and xexec(h), xclose(h) using it, then inside xexec() tables cannot be found. is there a difference ? i guess, someone has installed different versions, builds from source. i tried to uninstall everything and install debian-packages again. but still the same error. thanks andreas (5) By Gunter Hick (gunter_hick) on 2020-11-04 13:43:50 in reply to 4 [link] [source] You are misusing quotes. SQLite uses single quotes for string literals and double quotes for identifiers (sometimes, if there is no identifier, SQLite will assume you misused the quotes and silently accept a string literal in double quotes). The condition c="C" compares field c (case insensitive names) with itself, which is always true. The "new" probem is interference from several connections to the same database file. Apparently the schema has been changed (column x renamed) between preparing and executing the select statement. Your error handling is questionable, as getting "no such column:x" is not indicative of a successful result but you are inserting the value anyway, which correctly results in a constraint failure. Your new question is probably caused by not passing the database handle correctly between functions. (7) By agraeper on 2020-11-05 13:30:27 in reply to 5 [link] [source] the first problem c="C" is gone now. c='C' works fine. thanks! but the others still resist. i have only one process/no threads and i check the db-file before and after and check everything i try in c with sqlite3-client before. when i use exec() to select and use the void*udata (first argument to callback) to return the rowid (if there is a match) then it works. but prepare()/v2/v3 returns with error select x from t where .. ; -> result=17 schema changed. no column 'x' select * from t where .. ; -> result= 1 no tables specified i have dropped all 'x integer primary key' and use rowid directly select rowid from t where .. ; -> result=17 schema changed. no column 'r' (8) By Gunter Hick (gunter_hick) on 2020-11-05 15:10:38 in reply to 7 [link] [source] Changing your schema is highly unlikely to fix your broken C code; neither is describing your code as opposed to posting the parts that don't work. (9) By agraeper on 2020-11-05 16:23:18 in reply to 8 [link] [source] i take everyone`s advice seriously .. all my life. ok. i understand this as an invitation to post my code. hope, i am not totally wrong. what i do is: (T task) T1 check wether date/row exist in db T2 if not, insert (i could get rowid with lastinserted here) T3 check again and get rowid (redundant, could have been done in T2) two way to get a task done A exec() B prepare() step()+ finalize() (1) T1:A,T2:A,T3:A this works (2) T1:B,T2:A,T3:B this (B) does not work in c-code merely # if 0 <- this 0 has to be exchanged by 1 (i.e.) .. (1) .. good case # else .. (2) .. bad case # endif -c---------------------------------------------------------------------------- # include <stdio.h> # include <stdlib.h> # include <sqlite3.h> # define os(s) printf("%s\n",s) # define oe(s,e) printf("error (%s) %d/%s %s\n",s,e,sqlite3_errstr(e),sqlite3_errmsg(h)) int c(void*u,int n,char **a,char **b){ int i; for(i=0;i<n;++i){ printf(" %d %s %s\n",i,*(a+i),*(b+i)); if(u){ (*(long int *)u)=atoi(*(a+0)); } } return 0; // <>0 -> error } // sqlite3_prepare () // sqlite3_prepare_v2() // sqlite3_prepare_v3() long int t(sqlite3*h,char const *q){ sqlite3_stmt*s=0; char const *q0; int e,x; long int r=-1; if(SQLITE_OK==(e=sqlite3_prepare_v2(h,q,sizeof(q),&s,&q0))){ r=0; x=1;while(x){ switch((e=sqlite3_step(s))){ case SQLITE_BUSY : os("bsy"); oe("step",e); x=0; break; case SQLITE_DONE : os("don"); oe("step",e); x=0; break; case SQLITE_ERROR : os("err"); oe("step",e); x=0; break; case SQLITE_MISUSE : os("mis"); oe("step",e); x=0; break; case SQLITE_ROW : os("row"); r=sqlite3_column_int64(s,0); break; }} if(SQLITE_OK!=(e=sqlite3_finalize(s))){ oe("finalize",e); } } else { oe("prepare",e); } return r; } # define E 0 # define M 0 # define R 1 int main() { sqlite3*h=0; char q0[128],q1[128],q2[128]; char*m=0; int e; long int r; if((SQLITE_OK==sqlite3_initialize()) && (SQLITE_OK==sqlite3_open("x.db",&h))){ snprintf(q0,127,"select rowid from raum;"); snprintf(q1,127,"select rowid from raum where e=%d and m=%d and r=%d;",E,M,R); snprintf(q2,127,"insert into raum values(%d,%d,%d);",E,M,R); # if 0 // test with exec() only. THIS WORKS r=-2; if(SQLITE_OK!=(e=sqlite3_exec(h,q1,c,&r,&m))){ oe("A",e); } else if(r<=0) { if(SQLITE_OK!=(e=sqlite3_exec(h,q2,c,0,&m))){ oe("B",e); } else if(SQLITE_OK!=(e=sqlite3_exec(h,q1,c,&r,&m))){ oe("C",e); } else if(r<=0){ os("ne"); } else { printf("j2 r=%ld\n",r); } } else { printf("j1 r=%ld\n",r); } # else // test with prepare/step/finalize inside t(). THIS DOES NOT WORK if((r=t(h,q1))<=0){ if(SQLITE_OK!=(e=sqlite3_exec(h,q2,c,0,&m))){ oe("insert",e); } else if((r=t(h,q1))<=0){ printf("ne\n"); } else { printf("j2 %ld\n",r); } // exists after insert } else { printf("j1 %ld\n",r); } // already exists # endif sqlite3_close(h); sqlite3_shutdown(); } return 0; } thanks in advance, andreas (10) By Gunter Hick (gunter_hick) on 2020-11-05 17:13:28 in reply to 9 [link] [source] Please check what the sizeof() builtin function does. Maybe you meant strlen() instead? If so, just pass -1 and don't bother to count the string length. Using the actual length is only faster if your compiler does the counting at compile time. As in char sql[] = "SELECT 1"; sqlite3_prepare( .. sql, sizeof(sql), .. ) (6) By Gunter Hick (gunter_hick) on 2020-11-04 13:52:56 in reply to 4 [link] [source] I strongly suggest you try out your SQL in the sqlite shell first, before running it from your C program. And making sure that your c program is single threaded and each copy is running against a different database file. That way you can differentiate between incorrect SQL, wrong C programming and multiuser interference. (3) By Ryan Smith (cuz) on 2020-11-04 11:51:56 in reply to 1 [source] What does "i check for..." mean? We cannot possibly know what you are trying to do or implying even. Please show ALL the actual SQL calls that you do, both with what you expect the result should be, and what you are seeing. Also, which version of SQLite? Which OS? Alternatively, show any simple SQL script that any of us can run, which doesn't work, or fail, or return unexpected results, then we might be able to help.
https://sqlite.org/forum/info/898ae0f883076511
CC-MAIN-2022-33
refinedweb
1,639
72.26
Details New Feature - Status: Resolved Minor - Resolution: Fixed - None - - - None Description Coordinate Reference System (CRS) definitions will be provided by the EPSG database. However we need an easy for allowing users to provide their own CRS, because updating the EPSG database is uneasy. One of the easiest way is to provide a list of CRS as plain key - value pair, where the key is the code and the value is the CRS in WKT (Well Known Text) format. Such pairs can be provided in two ways: - As *.properties files - In the PostGIS spatial_refsys table. This issue is only about properties files. The above CRS codes are valid only in a namespace. The easiest way to specify the namespace would be to use the name of the properties file. So a “custom.properties” file might define the CRS in the CUSTOM namespace. One open question is where (in which directory) to put those files.
https://issues.apache.org/jira/browse/SIS-117
CC-MAIN-2021-49
refinedweb
153
65.93
Alan,#include <std_disclaimer.h>> 2.2.18pre7> o Identify chip and also handle MTRR for the (me)> Cyrix IIIlinux/arch/i386/kernel/mtrr.c fails to compile;"case X86_VENDOR_CENTAUR:" is duplicated, and boot_cpu.x86 should, Ibelieve, be boot_cpu_data.x86 in two places.I'm attaching a patch, but since there are two ways to fix thecase-tags, there's at least a 51% probability I got it wrong; at leastit compiles here... since I don't have a box with a Cyrix chip,successful compilation was my #1 priority.-- \Peter.--- linux/arch/i386/kernel/mtrr.c.orig Thu Sep 14 09:28:25 2000+++ linux/arch/i386/kernel/mtrr.c Thu Sep 14 09:38:50 2000@@ -442,10 +442,10 @@ /* Cyrix have 8 ARRs */ case X86_VENDOR_CENTAUR: /* and Centaur has 8 MCR's */- if(boot_cpu.x86==5)+ if(boot_cpu_data.x86==5) return 8; /* the cyrix III has intel compatible MTRR */- if(boot_cpu.x86==6)+ if(boot_cpu_data.x86==6) { rdmsr (MTRRcap_MSR, config, dummy); return (config & 0xff);@@ -474,7 +474,6 @@ return (config & (1<<10)); /*break;*/ case X86_VENDOR_CYRIX:- case X86_VENDOR_CENTAUR: return 1; /*break;*/ }
https://lkml.org/lkml/2000/9/14/72
CC-MAIN-2020-10
refinedweb
181
58.28
" nah now even the other pages are not working Originally Posted by Imam Gains nah now even the other pages are not working i got the same problems. Can you please help me What version of PHP are you running this under? (Anything earlier than 5.3.0, and PHP won't understand the namespace notation [or concept] being used in your code.) "Well done....Consciousness to sarcasm in five seconds!" ~ Terry Pratchett, Night Watch How to Ask Questions the Smart Way (not affiliated with this site, but well worth reading) My Blog cwrBlog: simple, no-database PHP blogging framework); Originally Posted by NogDog Do any of the files in question actually have a "namespace" declaration anywhere (normally before any other PHP code)? yh, the db.php file has namespace Blog\DB on the top line before any code.. Have you added this dbconnect funtion code with your program'; W3C MarkUp Validation | W3C CSS Validation | Document Type Definitions I suspect it's not an include problem, as require() will throw a fatal error that's pretty obvious if it can't find the specified file. I'm more inclined to think it's a namespace issue. There are currently 1 users browsing this thread. (0 members and 1 guests) Forum Rules
http://www.webdeveloper.com/forum/showthread.php?293803-Call-to-undefined-function-Blog-DB-connect&mode=hybrid
CC-MAIN-2017-26
refinedweb
211
63.09
Contents - allow 'admins' to delete/overwrite attachments? - Disallow removal of own admin permission - Is it possible to disable read on ALL pages except login? - Why is block all except login possible with the following scenario? - Administrating ACLs is cumbersome - You can't change ACLs on this page since you have no admin rights on it! - Lost Admin Rights? - attachments only for certain users - Project group members need permission to create new group pages? - How to change a protected page into a non protected page - Adding Myself As An Administrator - How to prevent user-creation? - Adding users - How do I set the limit on the size of attachments? - Prevent Editing When Not Logged In - Disable overwriting attachments for common users - Changing ACL-Rights on user-created pages without giving every user ACL-'admin' rights on the whole Wiki? - Cannot get AutoAdmin to work - Also cannot get AutoAdmin to work - NewPage macro + ACL = You are not allowed to edit this page - Groups in ACL are ignored - Groups are ignored in ACLs - After upgrade to 1.9.2, from 1.8.1, acl_hierarchic does not work the same allow 'admins' to delete/overwrite attachments? At the moment only superadmins are allowed to delete or overwrite attachments on our page, is there any way to configure a 1.8 moinmoin to make this possible for normal admins/special usergroups? Tried several searches here on that theme but wasnt very successfull as it seems. I don't think your problem is related to "superuser" configuration, but rather a standard ACL problem. For deleting or overwriting (which is deleting and writing) attachments, you need to have "write" and "delete" rights on the respective wiki page. If you are not logged in, you usually don't have "delete" rights. If it still does not work after logging in, check your acl_rights_* configuration and the ACLs on that page. Disallow removal of own admin permission Users often make mistakes editing ACL strings and inadvertantly change the page permission such that they cannot revert the change. There should be a configuration option disallow removal of your own read/write/admin permission on a page. Or maybe a confirmation dialog warning them that they are about to remove their own permission from the page. -- VitaliyShchupak 2008-07-20 11:48:22 This is why you don't give admin rights to users not capable of correctly using it. -- ThomasWaldmann 2008-07-20 12:10:37 Is it possible to disable read on ALL pages except login? I want to prevent viewing of every single page (all, even the help pages) in my wiki. Users in a certain group will be able to view pages after logging in. The problem is that many pages (Help, recent changes, etc...) specify read for All in their ACL. The only way to override that is to specifly no privs for All in acl_rights_before, but then that would block everyone from the login page too. The FAQ above for closed community says to use acl_rights_default and set All : blank, but that won't stop people from seeing help and recent changes. Is there any way to do this other than to edit every page in the system to remove the All : read line? - You have two options: - Change the ACL of all problem system pages and add All: to block anonymous users. You want to do this change with a script, so you can fix the ACLs after upgrading the system. Write custom SecurityPolicy that block anonymous users on all pages expect the login page. This should be easier and avoid upgrading issues. System / Help page ACLs were changed in moin 1.7 (they just take away "write" rights now and otherwise use Default ACLs. So if you change your default ACL, the change is also implemented by the system / help pages. Why is block all except login possible with the following scenario? In MoinMoin ver. 1.6.0, with acl_hierarchic = True and acl_rights_after = u"All:", the above (block all pages except login) is accomplished. We have acl_rights_before set to allow a specific admin group all privileges, and acl_rights_default set to allow a user group read,write,delete,revert. Only users in either the admin or user groups can view pages at all. My question is, why does this work? Is it a bug or a feature? - Sorry, I have trouble to understand your question. There is a bug in hierarchic ACL processing, see SecurityFixes. <- That's why it's behaving in that manner. Thanks. - Strictly taken, using "All:" ACL doesn't make much sense (it does no harm, but it is not necessary). Administrating ACLs is cumbersome Here's what I want to do. I want a certain user to be able to manage ACLs for a certain area of the wiki... say everything under a certain page. Without any hierarchical access control scheme, I can't just give him admin rights to the top page and let him go. It means that every single page he creates that needs any sort of access control, I have to go and manually add them to each page ONE AT A TIME! This is not only an annoyance for me, it also slows him down. So I came up with an Idea I thought might work... Create a template that contains an ACL line giving this author admin rights, and restricting read rights so that only he has access to that template. I thought this might allow him to create pages by basing them off of this template. But alas, it appears that creating a page from a template amounts to a cut and paste of the text from the template, so non-admin types are not allowed to "add" the ACL's to the new page (even though they were already granted in the template). Does anyone know of a way to give someone the ability to create certain pages with them as an admin, but without giving them admin rights to the entire wiki? -- SteveDavison 2007-03-01 22:03:43 Have a look at HelpOnAutoAdmin! -- DavidLinke 2007-03-01 23:50:18 Sure enough. Thanks David, that solves most of my problem. I'll file a FeatureRequest for the rest. You can't change ACLs on this page since you have no admin rights on it! I get this error msg when I try and save a newpage w/ ACL's on it and I the TrustedGroup has admin rights to both. Here's the ACLs on the page #acl TrustedGroup:read,write,delete,revert,admin Here's the ACLs on the template for the new pages created by this group #acl TrustedGroup:read,write,delete,revert,admin the result is the new pages fail to save w/ this error You can't change ACLs on this page since you have no admin rights on it! thanks in advance --Andrew I added acl_rights_before = u"admin:admin,read,write,delete,revert TrustedGroup:admin,read,write,delete,revert" to the wikiconfig.py and trusted group can add pages now! Seems that before doing this TrustedGroup could admin, edit pages, edit the ACL lines on the pages, just not save a new page w/ an ACL line included. Not sure if a bug or pilot error... Lost Admin Rights? I think I'm in over my head. I am completely new to wikis. But yesterday I set up a wiki page and specified myself as the administrator. I then set up a users page, and I went to edit one of the names I listed, and it said "You are not allowed to change this page." I can't figure out what happened. Thank you -- Laura 19 Dec 06 The page HelpOnAccessControlLists in your wiki describes how it works. Keep in mind that "admin" right only means "being able to change ACLs", it does not include any other right like "read" or "write". attachments only for certain users Is it possible to finetune ACL to allow anonymous and new registered users to edit the page but WITHOUT the permission to add attachments. and add an ACL rule to each user that should be allowed to add attachments. thank you for your responds. HatschMa 01/12/2006 Not with a standard moin, it just uses the page ACLs for the page's attachments. For moin 1.5 you could patch AttachFile.py to not check for "write" rights, but for "attachwrite" right and add that right to the valid ACLs list. In some future moin version (not 1.6), file attachments will be first class items with own ACLs. Project group members need permission to create new group pages? I set up some independent groups, each one with it's own adminstrator. The default permissions permit Known users to read/write/edit but not admin for default pages. I set up templates for each group that allow the group to read/write/edit their pages and their admin user to read/write/edit/admin. I find that the group users get a "you can't change the permission of this page" when they try to create a new page using the group template. Is there a trick here to make that work? Is that what AutoAdmin is for? Many thanks in advance for your help. Any suggestions about where to look on this one? 2012-05-26 20:20:53 How to change a protected page into a non protected page In my local wiki (MoinMoin Version 1.5.2 on IRIX64) the page WikiSandBox is a protected page ("Geschützte Seite"). Even though I have administrator rights I did not found a way so that any user can experiment with this page. What can I do to change the status of this page? How are ACLs configured in wikiconfig.py? acl_rights_default = u'StudentGroup:read,write,delete,revert All:read' acl_rights_before = u'AdminGroup:read,write,delete,revert,admin ' I belong to the AdminGroup but I cannot edit this page, too. Is there an ACL line in your WikiSandBox? I think it is the default WikiSandBox without any ACL and because of this problem I had no chance to modify it. (Of course you can change the source code via the operating system.) It has the following lines in the beginning of the page: {{{## Please edit system and help pages ONLY in the moinmaster wiki! For more #format wiki #language en (...)}}} - Maybe check filesystem access rights for the underlay directory. And probably who is the owner of this dir and who is the owner of the webserver process. Does it match? I made some tests varying owner and file permissions but had no success. It remains protected. I solved the problem for me by copying the directory WikiSandBox manually from underlay/pages into the data/pages directory and changed its file permissions according to other files in this directory. Beside my special problem I have some difficulties in understanding the general concept of the underlay directory. I understand that pages in this directory are normally read-only and should not be altered by users. There are only few exceptions like e.g. the WikiSandBox. But what do you have to do when you like to change such a page (in a perfect installed wiki setup). How do you change the state of a page from "protected" to "non-protected"? Adding Myself As An Administrator I just installed the latest stable version of Moin on FreeBSD + Apache in my $HOME directory on a shared web server. Everything is working very well except for the fact that I can't act as an administrator in my wiki. I found the MoinMoinQuestions/Administration "How To Become An Administrator", and followed its directions. First, I registered with the site using a username of TomPurl. I then checked the $INSTANCE/data/user directory and made sure that the TomPurl username existed. I then added the following line to $INSTANCE/wikiconfig.py: acl_rights_before = u"TomPurl:read,write,delete,revert,admin" I then logged into Firefox, but was unable to delete, rename, or add attachments to pages. To troubleshoot, I deleted all of my cookies and then logged in again. Still, no luck. I've even restarted Apache. Can anyone else think of something that I could try to get this to work? - probably you have missed to set allowed_actions = ['DeletePage', 'AttachFile', 'RenamePage'] in your wikiconfig.py -- ReimarBauer 2005-12-30 18:19:39 Thanks Reimar! I missed that direction in the HelpOnInstalling/ApacheOnUnix instructions. -- TomPurl How to prevent user-creation? I don't want to have any users except me. I'm running Version 1.5.0beta4, so that won't run. (i tried everything, but i didn't get it) Use access control lists. - I can't find anything on that page that mentions disabling user creation. The login page seems to double as an account creation page. I need access to a login page, but not account creation. How do we fix that? Please add this as FeatureRequest Strange that this is a new feature - nobody wants his Wiki to be potentially spammed by billions of bots or something. Should be a basic feature I've set the ACL on UserPreferences for All to nothing ("All: ") while keeping rights for registered users ("MoinPagesEditorGroup:read,write,delete,revert"). It seems to me that this lets people still login but disables creating new user accounts. Am I missing something? Adding users I'm doing a wiki page for a company. My boss will be the admin, got all the rights etc. This wiki will not be edited by unknown people, only the users that my boss allow. How do the admin add the users to a group or what? Thanks.~ Do read HelpOnAccessControlLists How do I set the limit on the size of attachments? MoinMoin does not offer this facility, but if you use apache >= 1.3.2 then you might like to look at the LimitRequestBody directive Prevent Editing When Not Logged In I've searched and searched for how to do this, but can find nothing. I think the ability exists, since I have seen mention in several other questions of someone not being able to edit a page because they are not logged in. Are there instructions anywhere that describe how to configure MoinMoin to require login for edits? Thanks. --SteveDavison What you search is part of access control, read HelpOnAccessControlLists. Disable overwriting attachments for common users Common users (anonymous and logged in) can't delete attachments, only administrators can do that. Which is good, because these changes can't be reverted. But I found out that anonymous users can overwrite attachments with whatever they want, it's enough to check "Overwrite existing attachment of same name" checkbox (moinmoin 1.5.7). This way evil anonymous users can destroy all my images/other attachments on my wiki in a few minutes, and I cannot revert it. How can I disable overwriting feature for non-admins? --Kamil It is a bug: MoinMoinBugs/OverwriteAttachmentShouldDependOnDeleteRight --Kamil Changing ACL-Rights on user-created pages without giving every user ACL-'admin' rights on the whole Wiki? I want to make it possible for normal Users to set ACL write-permission only for themselves on their pages, to create a simple Character-Database ( ) is this possible without giving every ACL-'admin' rights for the whole wiki, and without intervention of an Admin or TrustedUser on every Charpage? You should define "their" pages. Moin does not have the concept of page "owner". However, if you to give write access to sub pages of the user page, you can do this with a custom security policy. Subclass MoinMoin.security.Permissions, and define a write method that check the current page name against the current user name and allow only the user to write on his sub pages ignoring the page ACL. See SecurityPolicy for more info. Version 1.5 has autoadmin security policy that give admin rights to user pages, but it is not recommended for common users, because they will not be able to set ACL correctly. see HelpOnAutoAdmin. - I am confused about this as well. "their" := the page they created. This is a variation I need: I want to be able to approve the potential users when applying. However, once someone is an user, they should be able to create "private" pages (e.g. set their own permissions and edit the list of users who are 'collaborating' on the page). Is this possible? Thanks! -- ranko Cannot get AutoAdmin to work I want to enable users the ability to setup their Homepage using the HomepageTemplate with a pre-defined ACL. To do this they need 'admin' rights. I am using AutoAdmin and below is what I did - Added the following line to wikiconfig.py, using Moin version 1.6.1 from MoinMoin.security.autoadmin import SecurityPolicy Created AutoAdminGroup page Added usernames to the AutoAdminGroup page which should enable these users 'admin' rights But it does not seem to enable 'admin' rights. Am I missing something, can you help? Kim Tran - 2012-05-26 20:20:53 Also cannot get AutoAdmin to work The HelpOnAutoAdmin page is woefully inadequate. It mentions things but does not explain them fully or even tell you what they do. Are page/ReadGroup and page/ReadWriteGroup supposed to work on projects as well as users? And how do you get them to work? I can only assume that if a user is in ReadGroup or ReadWriteGroup that they are able to read all subpages by default, and the ReadWriteGroup also can edit. But I set up the ReadGroup and ReadWriteGroup pages with nothing in them, and I can still see and edit all of the subpages. Do I have to add ReadGroup, etc., to the AutoAdminGroup page as well as the page/AdminGroup in order to activate them? HelpOnAutoAdmin mentions that you can set up project/ReadGroup, project/ReadWriteGroup, etc., but doesn't say what the "etc." includes. Can I create a project/RevertGroup or project/DeleteGroup? Or what about a WriteRevertGroup? And another thing that needs explanation is, how exactly do these groups affect the ACL's of a page? Is it sort of like an automatic #acl +ReadGroup:read +ReadWriteGroup:read,write +AdminGroup:admin added to each subpage? If the subpage has its own ACL line, does this prevent the AutoAdmin ACLs from working (i.e., do the automatic rights work like "default", "before", or "after" ACL rights? Oh, I should mention that I'm using version 1.5.8; I checked the latest HelpOnAutoAdmin and it seems to be identical to the 1.5.8 page, so I assume there haven't been significant changes since. Thanks for any help... -- SteveDavison 2008-07-03 04:52:36 NewPage macro + ACL = You are not allowed to edit this page I am using Moin 1.7 and am trying to set it to use the NewPage macro to make structured page creation easier on the users. I also have ACL's (hierarchic, if this matters) enabled. Now a certain user belonging to a certain group can easily create sub pages, perform edits, and do what they need to do. However, when they try to perform this task through the NewPage macro they get an error that says "You are not allowed to edit this page." As I have said before, creating this page and loading the proper template by hand works just fine. I don't know if this is the proper avenue to have this question answered but any help would be appreciated. Cameron Groups in ACL are ignored Just updated from 1.5.5a to 1.7.1. To be more precise, old wiki was on Windows server with IIS and CGI. Then I installed 1.7.1 on Linux machine with Apache and fastcgi. Copied ./data, removed cache, removed all old macros, parsers and themes, performed pages migration. I took wikiconfig.py from the new distro and updated it accordingly. I have very tight security policy with lots of groups. Now after upgrade only users who are specified explicitly in acl_rights_before, acl_rights_default or in a page's ACL get proper authorization. If user is the member of some group then he doesn't get authorized despite this group being mentioned in ACL. Look like group membership is just ignored. I've tried switching acl_hierarchic off, searched through this site, read CHANGES and MIGRATION documents, looked in the sources, but still can't find the cause of the problem. You likely missed the part of docs/CHANGES talking about page_*_regex configuration having changed. If you just need the usual "for English" behaviour, you can just delete all those regexes from wiki config. If you need to have it recognize something complex (like e.g. matching group page names for different languages), you have to read docs/CHANGES. -- ThomasWaldmann 2008-08-11 12:19:17 - Well, I did really miss that part. But all the pages in my wiki have English names, so I didn't get any positive result after editing or disabling those regexes. Do you have stopped the server and cleaned the old dict cache? You can verify as superuser using the SystemAdmin page if all works after you had restarted the server process. -- ReimarBauer 2008-08-11 14:29:00 Yes, I tried both /etc/init.d/apache2 stop/start and apache2ctl restart. Cleaned cache manually by removing everything from ./data/cache before server restart. On SystemAdmin page I can only see user accounts and list of attachments. -- AlexanderAgibalov 2008-08-12 05:49:07 Can you pastebin your wikiconfig.py? And please check if the timestamp of the pyc file is newer as the py file. I don't know what is wrong yet but it is easier to figure out if you meet us at chat.freenode.net #moin. -- ReimarBauer 2008-08-12 18:36:28 Here it is. I removed all the commented lines. .pyc file is recreated each time after I make changes in .py -- AlexanderAgibalov 2008-08-14 06:55:40 # -*- coding: utf-8 -*- from MoinMoin.config.multiconfig import DefaultConfig class Config(DefaultConfig): sitename = u'Ext.Wiki' logo_string = u'<img src="/moin_static171/common/moinmoin.png" alt="MoinMoin Logo">' html_head = '''<link rel="shortcut icon" href="/moin_static171/favicon.ico">''' page_front_page = u"FirstPage" data_dir = '/db/extwiki/data/' data_underlay_dir = '/db/extwiki/underlay/' url_prefix_static = '/moin_static171' superuser = [u"AlexanderAgibalov", u"GrigoryBaytsur"] acl_rights_before = u"AlexanderAgibalov:read,write,delete,revert,admin +MxGroup:read,write" acl_rights_default = u"AlexanderAgibalov,MxGroup:read,write,delete,revert,admin Known,All:none" acl_hierarchic = True surge_action_limits = None # disable surge protection navi_bar = [ u'%(page_front_page)s', u'RecentChanges', u'FindPage', u'HelpContents', ] theme_default = 'modern' language_default = 'en' #page_category_regex = u'^Category[A-Z]' page_category_regex = ur'(?P<all>Category(?P<key>\S+))' page_dict_regex = u'[a-z]Dict$' page_form_regex = u'[a-z]Form$' page_group_regex = u'[a-z]Group$' page_template_regex = u'[a-z]Template$' show_hosts = 1 Remove the old rules or use the new syntax for page_dict_regex = u'[a-z]Dict$' page_form_regex = u'[a-z]Form$' page_group_regex = u'[a-z]Group$' page_template_regex = u'[a-z]Template$' see * HINT: page_*_regex processing had to be changed to fix category search. in the CHANGES file or look into MoinMoin.config.multiconfig. e.g. page_category_regex = ur'(?P<all>Category(?P<key>(?!Template)\S+))' page_dict_regex = ur'(?P<all>(?P<key>\S+)Dict)' page_group_regex = ur'(?P<all>(?P<key>\S+)Group)' page_template_regex = ur'(?P<all>(?P<key>\S+)Template)' you can remove the vars completly if they are the defaults. -- ReimarBauer 2008-08-14 09:05:37 Note that this doesn't make too much sense: acl_rights_before = u"AlexanderAgibalov:read,write,delete,revert,admin +MxGroup:read,write" acl_rights_default = u"AlexanderAgibalov,MxGroup:read,write,delete,revert,admin Known,All:none" Reasons: - there is no ACL right called "none" - what you likely want is to not give Known and All any rights - you do that by just not giving them any rights - if you don't have a conflicting ACL otherwise, it is enough to not mention Known and All - if you don't explicitely give some user or group some rights, they won't have rights (note that the internal default of acl_rights_default DOES give Known and All group some rights, but for your case this does not matter as you are overriding acl_rights_default) - if Alexander is in acl_rights_before and gets all rights there, you don't need to give him rights in acl_rights_default not sure what your MxGroup ACL settings are for. Looks like you want to give them read and write rights ever (you don't need to repeat that in default ACL), but delete,revert,admin only by default if the page ACL does not override the default. Thus, you maybe want this: acl_rights_before = u"AlexanderAgibalov:read,write,delete,revert,admin +MxGroup:read,write" acl_rights_default = u"MxGroup:delete,revert,admin" Oh. You were right since the beginning. The problem was indeed with regex. Sorry for wasting your time. -- AlexanderAgibalov 2008-08-14 13:18:49 Groups are ignored in ACLs 2008-09-09 FrankSteinhauer I have the same problem as AlexanderAgibalov (and Cameron) above - my Groups seem to be ignored. We're using MoinMoin 1.6.2. From wikiconfig.py: acl_rights_valid = ['read', 'write', 'delete', 'revert', 'admin', 'approve', 'review'] acl_rights_before = u"my-admin,AdminGroup:read,write,delete,revert,admin,approve,review" acl_rights_default = u"Known:read,write,delete,revert All:read" #commented out the page_xxx_regex lines Page TestApprovalGroup: (created by a member of the AdminGroup) #acl TestApprovalGroup:admin,read,write,delete,revert Known:read All: === All members of the ACL group "TestApprovalGroup" === * SomeUser (part of the test department) <<BR>> ---- ~-this page belongs to the CategoryAdministratorSection-~ Now SomeUser is not allowed to change the page TestApprovalGroup. Furthermore, he's not able to create a new page and set any ACLs. This looks like a "feature", since he has no admin rights yet he's not allowed to give himself the admin rights - but still it's strange. Now I have to add all "XxxApprovalGroup"s to the acl_rights_before What is the approve right? Have you restarted the server process after removing the page_*_regex and do you have cleaned the dict cache? -- ReimarBauer 2008-09-09 15:41:34 After upgrade to 1.9.2, from 1.8.1, acl_hierarchic does not work the same The pertinent lines of my wikiconfig.py file are: acl_rights_default = u"AdminGroup:read,write,delete,revert,admin DnDPlayersGroup:read DnDDMsGroup:read All:" acl_hierarchic = True I have a page called DnD, for which I want admins to have full rights and players and DMs to only be able to read. It has the ACL line: #acl DnDDMsGroup:read DnDPlayersGroup:read I have a child page called DnD/DMs, for which I want admins to have full rights, players none, and DMs to have read and write access. It has the ACL line: #acl DnDDMsGroup:read,write DnDPlayersGroup: I then have grandchild page DnD/DMs/DragonLance, for which I want admins to have full rights and players and DMs none. It has the ACL line: #acl DnDDMsGroup: Everyhing was working fine in 1.8.1. After the upgrade, admins can no longer access the child page nor the grandchild page. Oddly, players can access the grandchild page. What happened?. -- EugeneSyromyatnikov 2010-06-04 07:00:12
http://www.moinmo.in/MoinMoinQuestions/Permissions
crawl-003
refinedweb
4,538
64.91
I've found that java.lang.Integer implementation of compareTo method looks as follows: java.lang.Integer compareTo public int compareTo(Integer anotherInteger) { int thisVal = this.value; int anotherVal = anotherInteger.value; return (thisVal<anotherVal ? -1 : (thisVal==anotherVal ? 0 : 1)); } How does Java handle integer underflows and overflows? Leading on from that, how would you check/test that this is occurring? I am looking for an efficient formula working in Java which calculates the following expression: (low + high) / 2 i know this is an old question, asked many times. but i am not able to find any satisfactory answer for this, hence asking again. can someone explain what exactly happens in ... When doing this: int x = 100; int result = 1; for (int i = 1; i < (x + 1); i++) ... I understand the overflows in java, but what is called underflows? and how can java handle it ? Hello, Considering that Java does not automatically throw an exception (or otherwise object) to integer overflow (it just "wraps around"), what is the best/preferred/whatever style to handle this condition when it is unknown that the inputs and subsequent calculations will for certain stay within bounds (of Integer.MAX_VALUE and Integer.MIN_VALUE in the case of type int)? Should values first be stored as ... [Lambert Stein]: Simply checking against the same result done in long doesn't help - it's got the overhead of casting. I think this attitude is impractical - anything you do here will be slower than a single unchecked addition or subtraction. You can't reject a possibility because you think it's slower than you'd like - you can only choose the best ... On a daily base, making sure that my values don't overflow anywhere in my code gives me the biggest headache. Even if the language threw an exception in this case, it wouldn't change that you have to check everywhere whether there's a possibility for overflow. Still, using unchecked exceptions would have been much much better than silent overflow. If your program ...
http://www.java2s.com/Questions_And_Answers/Java-Data-Type/Integer/overflow.htm
CC-MAIN-2017-30
refinedweb
336
58.28
How to build a Google mockup page in Kentico CMS Boris Pocatko — Aug 14, 2013 Sometimes you need to showcase features of your software that require third-party services to be connected to your program. In our case, for example, we required a nice demo page to showcase our EMS functionality based on the Google search results page. This article describes one of the ways to achieve this in Kentico CMS. I usually don’t recommend using the ASPX development approach to create Kentico functionality, because the Portal engine allows you to do the exact same thing and usually with much less programing involved, giving you an additional set of features which otherwise would have had to have been hand-coded from scratch. In this case, however, I am using a custom ASPX template. Of course I would be able to achieve the same functionality with the Portal engine development model, but it wouldn’t make much sense to create, for example, a Google widget or web part. It’s not like you are going to re-use this functionality over and over again on your website. It’s simply one page for showcasing some built-in functionality that won’t be reused anywhere else. Additionally, this page requires specific CSS styling and other custom references. Also, use of such created web part functionality would include custom styling, which would break the standard Kentico layout . These are some of the reasons I’ve chosen to implement the given functionality as an ASPX page and not as a web part. At first I created a new ASPX page skeleton as described in our documentation. The most important thing is to specify, for example, the base class TemplatePage for this template (namespace CMS.UIControls). This will allow you to register the template in the CMSSiteManager. Once you do this, you can start creating the content of your template. First of all, I copied over the relevant HTML and CSS code from the official Google search page. The CSS code was copied into a new Kentico stylesheet with the code name GoogleMockup. This allows me to link the stylesheet the following way: <link href="~/CMSPages/GetCSS.aspx?stylesheetname=GoogleMockup" type="text/css" rel="stylesheet"/> Then I had to determine which part rendered the search box, so I could replace it with a standard .NET TextBox control. The fact that the HTML code is minified and that the IDs are shortened made this task somewhat difficult. After a closer inspection of the HTML code with FireBug, I figured out which part of the page to replace. In my case, the element to replace was in the gbfwa div element. After replacing the standard code, my HTML portion looked like this: <div id="gbfwa" class="gbqfwa "> <asp:TextBox</asp:TextBox> </div> The next thing to do was to replace the search button. To keep the design consistent, I decided to also replace the “I feel lucky” button so that my code looked like this: <div id="gbqfbwa" class="jsb"> " /> The next step was to implement the search results page. I decided to keep everything in one template so I simply created a new .NET panel (pnlResults) containing the HTML of the search results page, which is dynamically shown if the page should display search results. However, the main search page HTML must be hidden if the search results are displayed. So I’ve added another panel encapsulating the search page HTML (pnlSearch). Here is a simple skeleton for that page: <head id="Head1" runat="server"> <link href="~/CMSPages/GetCSS.aspx?stylesheetname=GoogleMockup" type="text/css" rel="stylesheet"/> </head> <body> <form id="form1" runat="server" defaultbutton="gbqfba"> <div> <ajaxToolkit:ToolkitScriptManager <cms:CMSPortalManager </div> <!-- skipped code --> <asp:Panel <!-- skipped code --> <asp:TextBox</asp:TextBox> <!-- skipped code --> "/> <!-- skipped code --> </asp:Panel> <asp:Panel <!-- skipped code --> <asp:TextBox</asp:TextBox> <!-- skipped code --> <!-- results listing --> </asp:Panel> <!-- skipped code --> </form> </body> To switch between the two views (results and search), I simply used a URL query string. If the user is in the search view and if he searches for any string, this string will be passed as a query string in the URL. So, if searching for “Kentico”, the URL will look like this: <base URL>?search=Kentico The same page would be loaded, but now it would detect the query string containing a search keyword and a search value. Now the search panel would be hidden on the page load event and the results panel would be displayed. The search value is retrieved from the query string and used to modify the search results. I’ve hard-coded the search results into the template for simplicity. An alternative approach would be to dynamically change the results and request the search results from Google, however this is not necessary in our case. To display our search keyword in the search results, I’ve simply used a .NET variable in the markup (<%= SearchString %>), which is defined in the CodeBehind as a public static string SearchString variable. (In a real-life scenario, please sanitize the input checking for XSS.) Then I implemented the body of the click event of the search button. The implementation is pretty simple; I just redirect the visitor to the same page with the already-mentioned query string and search value from the TextBox attached. Now we have a dynamically changing page and some realistic-looking search results. Next, we have to use Kentico’s built-in features to personalize the content. We know, that we can use campaigns to personalize someone’s landing page. This scenario can be used when speaking about paid ads, where you enter the target URL, so you can include a custom query string parameter. So one of the search result links will be dynamically generated to point to the landing page and to include a campaign-specific query string. For simplicity, only two query strings are supported. The “SampleCampaign” and the “Google” query string. The Google query string is used if such a campaign was created in the system. If not, the default SampleCampaign would be used. The URL in the markup is replaced the same way as the search term in the results—with a .NET variable (<%= AddURL %>). To create this URL, the campaign parameter needs to be retrieved with our settings API. Additionally, the application base URL is taken as the target URL. The second dynamic URL (<%= StandardURL %>) is a standard search result URL. Since the results aren’t directly generated by Google, I cheated a bit. I logged the external search activity manually on the page load and redirected the visitor afterwards, the same way as with the ad dynamic link. Additionally, I implemented some cleanup so these activities are deleted (together with the campaign cookie) when this mockup page is loaded without any query parameters. This is basically how this mockup page works. Additionally, you have to configure Kentico to make use of this metadata. Here are some sample macros that may be used in this case to display personalized variants of web parts or widgets: External search for the string “searchvalue”-based visit: Filter(Contact.Activities, " ActivityType == \"externalsearch\" && ActivityValue==\"searchvalue\"").Count > 0 Google campaign related visit: Cookies.Campaign == “Google” For additional details please check the attached code files. You can import the template and the stylesheet as a standard Kentico import package. Result As always, here is the download link to the export package, ready to be used on your Kentico instance. Share this article on Twitter Facebook LinkedIn Google+ Boris Pocatko Senior Solution Architect for Kentico Comments Steve Williams commented on Sep 9, 2013 Hi Boris - this is great and works really well when a customer wants to see how external search phrases can be used to personalise a landing page with targeted content that speaks to the search terms - thanks for putting together ! New subscription Leave message Your email:
https://devnet.kentico.com/articles/how-to-build-a-google-mockup-page-in-kentico-cms
CC-MAIN-2017-26
refinedweb
1,315
63.29
This tutorial will demonstrate the basic workflow. import treelite In this tutorial, we will use a small regression example to describe the full workflow. Let us use the Boston house prices dataset from scikit-learn ( sklearn.datasets.load_boston()). It consists of 506 houses with 13 distinct features: from sklearn.datasets import load_boston X, y = load_boston(return_X_y=True) print(f'dimensions of X = {X.shape}') print(f'dimensions of y = {y.shape}') The first step is to train a tree ensemble model using XGBoost (dmlc/xgboost). Disclaimer: Treelite does NOT depend on the XGBoost package in any way. XGBoost was used here only to provide a working example. import xgboost dtrain = xgboost.DMatrix(X, label=y) params = {'max_depth':3, 'eta':1, 'objective':'reg:squarederror', 'eval_metric':'rmse'} bst = xgboost.train(params, dtrain, 20, [(dtrain, 'train')]) Next, we feed the trained model into Treelite. If you used XGBoost to train the model, it takes only one line of code: model = treelite.Model.from_xgboost(bst)
https://treelite.readthedocs.io/en/latest/tutorials/first.html
CC-MAIN-2022-21
refinedweb
161
52.46
If you look into what happened during the past few years in the world of JavaScript, you can see that component thinking made it to the mainstream. Even with this, there's still some kind of a boundary between the frontend and the backend. In this interview, we'll learn about Christian Mortaro's approach to the problem. I am a 28 years old Brazilian programmer, and I recently found out that I'm on the autism spectrum. I fell in love with code back when I was 11 years old as It was one of the few things that made sense to me since my social skills are pretty bad. I prefer to spend my days in front of my computer working and testing new libs for fun since I tend to have sensory overload when I go outside. Nullstack is a full-stack framework for building progressive web applications. It connects a stateful UI layer to specialized microservices in the same component using vanilla JavaScript. Nullstack components are regular JavaScript classes but with both the frontend and backend. I want the developer to have a full-stack application by default without dealing with all the decisions. Nullstack allows you to make your application work as fast as possible, but it is also flexible enough so you can refactor it into something beautiful. Consider the example below where a stateful component uses a server function to read from a database connection saved on the server context: import Nullstack from "nullstack"; class BookPage extends Nullstack { title = ""; description = ""; static async findBookBySlug({ database, slug }) { return await database .collection("books") .findOne({ slug }); } async initiate({ page, params }) { const book = await this.findBookBySlug({ slug: params.slug, }); if (book) { page.title = book.title; Object.assign(this, book); } else { page.status = 404; } } render() { return ( <section> <h1>{this.title}</h1> <div>{this.description}</div> </section> ); } } export default BookPage; In the example, Nullstack server-side renders and returns SEO ready HTML when the user enters the application from this route. When the user navigates to this page, an API call is made to an automatically generated micro-service that returns the book as JSON and updates the DOM. Nullstack generates two bundles: one for the server and one for the client with the least dependencies possible. The framework is responsible for deciding when to use an API call or using a local function; the programmer only needs to think about the behavior of their functions. Each environment has its context, which is a proxy passed to every function. The feature makes Nullstack a horizontal structure instead of a tree, which is very important for my daily job since I often have to move code around based on customer feedback, and I wouldn't want to be locked into a structure. In the example below, we are parsing the README only when the application starts and saving it in the server context memory: import Nullstack from "nullstack"; import { readFileSync } from "fs"; import { Remarkable } from "remarkable"; class About extends Nullstack { static async start(context) { const text = readFileSync("README.md", "utf-8"); const md = new Remarkable(); context.readme = md.render(text); } static async getReadme({ readme }) { return readme; } async initiate(context) { if (!context.readme) { context.readme = await this.getReadme(); } } render({ readme }) { return <article html={readme || ""} />; } } export default About; The client invokes a server function and saves the README content in the client context that is available offline on other views. Both readFileSync and remarkable are excluded from the client bundle. There are many optimizations in this code, but the component looks almost as simple as a basic one. The nice answer is that it was, since the beginning, thought of as a complete solution that uses the same concept to solve every problem. The approach makes Nullstack very easy to learn since picking up the first steps is enough to allow you to code full-stack. I used many more complicated stacks in the past, and you could always notice where things were glued together. The not so nice answer is that it doesn't differ that much from any other web framework. All of the options have the same goal, and eventually, one inspires the other. Nowadays, the market is trending towards a "one size fits all" approach where React is the solution for everything. If you think of frameworks as shoes, Nullstack is just a shoe that fits my size and makes me comfortable. My friends and I were getting burned out of web development as it seemed like things didn't match our thought process. The first idea was to make an extension for React to make it look a bit more like Ember.js and add a server layer very similar to the server components they just announced. However, we got carried away and started modifying it so much that we eventually reset the project as its own thing. I wrote a class that would be "the ideal code for us" and reverse-engineered the idea until it worked. I'll keep developing my freelancing projects with Nullstack as I finally don't feel the need to change stacks at every project anymore. The work will result in more features being extracted into Nullstack as long as they follow the same principles. It's essential to me that Nullstack remains a single concept. Besides that, I will focus on creating content on Youtube both in English and Portuguese, so more people can understand it while I get the plus of developing my social skills. More people have the same barriers as me, and I hope to reach them, so they don't burn out of web development. I can't tell what the future is, but I can tell you what I wish it were. I prefer a more decentralized web. For the last years, I've been passionate about PWAs since it removed the centralization of the stores. The next step I'd like to see decentralized is the frameworks, so developers can pick and choose a stack that makes them happy instead of looking good on the job market. Test everything yourself, look inside the code, and don't merely use things because the community says so. Breaking stuff is the most fun part of developing, and there is no shame in figuring out what you like is not the most popular thing as long as you can deliver results. Honestly, I have no idea. I lived in a "cave" for the last 28 years; I just gathered the courage to make a Twitter account. I want to thank everyone who gave me feedback and for the opportunity of this interview. Nullstack is almost two years old, and my poor communication skills and anxiety prevented me from showing it to people. I'm thrilled that none of the catastrophic scenarios I had in my head happened so far. Thanks for the interview, Christian! I find it refreshing that there's movement to have shared logic in the same files while having transparent optimizations in place. Perhaps the division between the frontend and the backend will become blurry over time. To learn more about Nullstack, head over to the project site. You can also find the project on GitHub. There's also a brief introduction to the topic on YouTube:
https://survivejs.com/blog/nullstack-interview/
CC-MAIN-2022-40
refinedweb
1,212
61.56
Quickstart: Create an Azure Cognitive Search service in the portal Azure Cognitive Search is a standalone resource used to plug in a search experience in custom apps. Although Azure Cognitive Search integrates easily with other Azure services, you can also use it as a standalone component, or integrate it with apps on network servers, or with software running on other cloud platforms. In this article, learn how to create a resource in the Azure portal. Prefer PowerShell? Use the Azure Resource Manager service template. For help with getting started, see Manage Azure Cognitive Search with PowerShell. Subscribe (free or paid) Open a free Azure account and use free credits to try out paid Azure services. After credits are used up, keep the account and continue to use free Azure services, such as Websites. Your credit card is never charged unless you explicitly change your settings and ask to be charged. Alternatively, activate MSDN subscriber benefits. An MSDN subscription gives you credits every month you can use for paid Azure services. Find Azure Cognitive Search - Click the plus sign ("+ Create Resource") in the top-left corner. - Use the search bar to find "Azure Cognitive Search" or navigate to the resource through Web > Azure Cognitive Search. Choose a subscription Setting the subscription ID and resource group is your first step. If you have more than one subscription, choose one that also has data or file storage services. Azure Cognitive Search can autodetect Azure Table and Blob storage, SQL Database, and Azure Cosmos DB for indexing via indexers, but only for services under the same subscription. Set a resource group A resource group is required and is useful for managing resources all-up, including costs. A resource group can consist of one service, or multiple services used together. For example, if you are using Azure Cognitive Search to index an Azure Cosmos DB database, you could make both services part of the same resource group for management purposes. If you aren't combining resources into a single group, or if existing resource groups are filled with resources used in unrelated solutions, create a new resource group just for your Azure Cognitive Search resource. Over time, you can track current and projected costs all-up (as shown in the screenshot) or scroll down to view charges for individual resources. The following screenshot shows the kind of cost information you can eventually expect to see when you combine multiple resources into one group. Tip Resource groups simplify cleanup because deleting a group also deletes the services within it. For prototype projects utilizing multiple services, putting all of them in the same resource group makes cleanup easier after the project is over. Name the service In Instance Details, provide a service name in the URL field. The name is part of the URL endpoint against which API calls are issued:. For example, if you want the endpoint to be, you would enter myservice. Service name requirements: - It must be unique within the search.windows.net namespace - 2 and 60 characters in length - Use lowercase letters, digits, or dashes ("-") - Avoid dashes ("-") in the first 2 characters or as the last single character - No consecutive dashes ("--") anywhere Tip If you think you'll be using multiple services, we recommend including the region (or location) in the service name as a naming convention. Services within the same region can exchange data at no charge, so if Azure Cognitive Search is in West US, and you have other services also in West US, a name like mysearchservice-westus can save you a trip to the properties page when deciding how to combine or attach resources. Choose a location As an Azure service, Azure Cognitive Search can be hosted in datacenters around the world. The list of supported regions can be found in the pricing page. You can minimize or avoid bandwidth charges by choosing the same location for multiple services. For example, if you are indexing data provided by another Azure service (Azure storage, Azure Cosmos DB, Azure SQL Database), creating your Azure Cognitive Search service in the same region avoids bandwidth charges (there are no charges for outbound data when services are in the same region). Additionally, if you are using AI enrichment, create your service in the same region as Cognitive Services. Co-location of Azure Cognitive Search and Cognitive Services in the same region is a requirement for AI enrichment. Note Central India is currently unavailable for new services. For services already in Central India, you can scale up with no restrictions, and your service is fully supported in that region. The restriction on this region is temporary and limited to new services only. We will remove this note when the restriction no longer applies. Choose a pricing tier (SKU) Azure Cognitive Search is currently offered in multiple pricing tiers: Free, Basic, or Standard. Each tier has its own capacity and limits. See Choose a pricing tier or SKU for guidance. Basic and Standard are the most common choices for production workloads, but most customers start with the Free service. Key differences among tiers is partition size and speed, and limits on the number of objects you can create. Remember that a pricing tier cannot be changed once the service is created. If you need a higher or lower tier later, you have to re-create the service. Create your service After you've provided the necessary inputs, go ahead and create the service. Your service is deployed within minutes, which you can monitor through Azure notifications. Consider pinning the service to your dashboard for easy access in the future. Get a key and URL endpoint Unless you are using the portal, programmatic access to your new service requires that you provide the URL endpoint and an authentication api-key. On the Overview page, locate and copy the URL endpoint on the right side of the page. On the Keys page, copy either one of the admin keys (they are equivalent). Admin api-keys are required for creating, updating, and deleting objects on your service. In contrast, query keys provide read-access to index content. An endpoint and key are not needed for portal-based tasks. The portal is already linked to your Azure Cognitive Search resource with admin rights. For a portal walkthrough, start with Quickstart: Create an Azure Cognitive Search index in the portal. Scale your service After your service is provisioned, you can scale it to meet your needs. If you chose the Standard tier for your Azure Cognitive Search service, you can scale your service in two dimensions: replicas and partitions. Had you chosen the Basic tier, you can only add replicas. If you provisioned the free service, scale is not available. Partitions allow your service to store and search through more documents. Replicas allow your service to handle a higher load of search queries. Adding resources increases your monthly bill. The pricing calculator can help you understand the billing ramifications of adding resources. Remember that you can adjust resources based on load. For example, you might increase resources to create a full initial index, and then reduce resources later to a level more appropriate for incremental indexing. Important A service must have 2 replicas for read-only SLA and 3 replicas for read/write SLA. - Go to your search service page in the Azure portal. - In the left-navigation pane, select Settings > Scale. - Use the slidebar to add resources of either type. Note Per-partition storage and speed increases at higher tiers. For more information, see capacity and limits. When to add a second service Most customers use just one service provisioned at a tier providing the right balance of resources. One service can host multiple indexes, subject to the maximum limits of the tier you select, with each index isolated from another. In Azure Cognitive Search, requests can only be directed to one index, minimizing the chance of accidental or intentional data retrieval from other indexes in the same service. Although most customers use just one service, service redundancy might be necessary if operational requirements include the following: - Disaster recovery (data center outage). Azure Cognitive Search does not provide instant failover in the event of an outage. For recommendations and guidance, see Service administration. - Your investigation of multi-tenancy modeling has determined that additional services is the optimal design. For more information, see Design for multi-tenancy. - For globally deployed applications, you might require an instance of Azure Cognitive Search in multiple regions to minimize latency of your application’s international traffic. Note In Azure Cognitive Search, you cannot segregate indexing and querying operations; thus, you would never create multiple services for segregated workloads. An index is always queried on the service in which it was created (you cannot create an index in one service and copy it to another). A second service is not required for high availability. High availability for queries is achieved when you use 2 or more replicas in the same service. Replica updates are sequential, which means at least one is operational when a service update is rolled out. For more information about uptime, see Service Level Agreements. Next steps After provisioning a service, you can continue in the portal to create your first index. Feedback
https://docs.microsoft.com/en-us/azure/search/search-create-service-portal
CC-MAIN-2019-47
refinedweb
1,540
54.12
Table Of Contents Controlling the environment¶ Many environment variables are available to control the initialization and behavior of Kivy. For example, in order to restrict text rendering to the PIL implementation: $ KIVY_TEXT=pil python main.py Environment variables should be set before importing kivy: import os os.environ['KIVY_TEXT'] = 'pil' import kivy Path control¶ New in version 1.0.7. You can control the default directories where config files, modules and kivy data are located. - KIVY_DATA_DIR Location of the Kivy data, defaults to <kivy path>/data - KIVY_MODULES_DIR Location of the Kivy modules, defaults to <kivy path>/modules - KIVY_HOME Location of the Kivy home. This directory is used for local configuration, and must be in a writable location. - Defaults to: Desktop: <user home>/.kivy Android: <android app path>/.kivy iOS: <user home>/Documents/.kivy New in version 1.9.0. - KIVY_SDL2_PATH If set, the SDL2 libraries and headers from this path are used when compiling kivy instead of the ones installed system-wide. To use the same libraries while running a kivy app, this path must be added at the start of the PATH environment variable. New in version 1.9.0. Warning This path is required for the compilation of Kivy. It is not required for program execution. Configuration¶ - KIVY_USE_DEFAULTCONFIG If this name is found in environ, Kivy will not read the user config file. - KIVY_NO_CONFIG If set, no configuration file will be read or written to. This also applies to the user configuration directory. - KIVY_NO_FILELOG If set, logs will be not print to a file - KIVY_NO_CONSOLELOG If set, logs will be not print to the console - KIVY_NO_ARGS If set to one of (‘true’, ‘1’, ‘yes’), the argument passed in command line will not be parsed and used by Kivy. Ie, you can safely make a script or an app with your own arguments without requiring the – delimiter: import os os.environ["KIVY_NO_ARGS"] = "1" import kivy New in version 1.9.0. - KCFG_section_key If a such format environment name is detected, it will be mapped to the Config object. They are loaded only once when kivy is imported. The behavior can be disabled using KIVY_NO_ENV_CONFIG. import os os.environ["KCFG_KIVY_LOG_LEVEL"] = "warning" import kivy # during import it will map it to: # Config.set("kivy", "log_level", "warning") New in version 1.11.0. - KIVY_NO_ENV_CONFIG If set, no environment key will be mapped to configuration object. If unset, any KCFG_section_key=value will be mapped to Config. New in version 1.11.0. Restrict core to specific implementation¶ kivy.core try to select the best implementation available for your platform. For testing or custom installation, you might want to restrict the selector to a specific implementation. - KIVY_WINDOW Implementation to use for creating the Window Values: sdl2, pygame, x11, egl_rpi - KIVY_TEXT Implementation to use for rendering text Values: sdl2, pil, pygame, sdlttf - KIVY_VIDEO Implementation to use for rendering video Values: gstplayer, ffpyplayer, ffmpeg, null - KIVY_AUDIO Implementation to use for playing audio Values: sdl2, gstplayer, ffpyplayer, pygame, avplayer - KIVY_IMAGE Implementation to use for reading image Values: sdl2, pil, pygame, imageio, tex, dds, gif - KIVY_CAMERA Implementation to use for reading camera Values: avfoundation, android, opencv - KIVY_SPELLING Implementation to use for spelling Values: enchant, osxappkit - KIVY_CLIPBOARD Implementation to use for clipboard management Values: sdl2, pygame, dummy, android Metrics¶ - KIVY_DPI If set, the value will be used for Metrics.dpi. New in version 1.4.0. - KIVY_METRICS_DENSITY If set, the value will be used for Metrics.density. New in version 1.5.0. KIVY_METRICS_FONTSCALE If set, the value will be used for Metrics.fontscale. New in version 1.5.0. Graphics¶ - KIVY_GL_BACKEND The OpenGL backend to use. See cgl. - KIVY_GL_DEBUG Whether to log OpenGL calls. See cgl. - KIVY_GRAPHICS Whether to use OpenGL ES2. See cgl. - KIVY_GLES_LIMITS Whether the GLES2 restrictions are enforced (the default, or if set to 1). If set to false, Kivy will not be truly GLES2 compatible. Following is a list of the potential incompatibilities that result when set to true. New in version 1.8.1. - KIVY_BCM_DISPMANX_ID Change the default Raspberry Pi display to use. The list of available value is accessible in vc_dispmanx_types.h. Default value is 0: 0: DISPMANX_ID_MAIN_LCD 1: DISPMANX_ID_AUX_LCD 2: DISPMANX_ID_HDMI 3: DISPMANX_ID_SDTV 4: DISPMANX_ID_FORCE_LCD 5: DISPMANX_ID_FORCE_TV 6: DISPMANX_ID_FORCE_OTHER - KIVY_BCM_DISPMANX_LAYER Change the default Raspberry Pi dispmanx layer. Default value is 0. New in version 1.10.1. Event Loop¶ - KIVY_EVENTLOOP Which async library should be used when the app is run in an asynchronous manner. See kivy.appfor example usage. 'asyncio': When the app is run in an asynchronous manner and the standard library asyncio package should be used. The default if not set. 'trio': When the app is run in an asynchronous manner and the trio package should be used. New in version 2.0.0.
https://kivy.org/doc/master/guide/environment.html
CC-MAIN-2021-39
refinedweb
786
58.58
In this section you will learn about Instance variable in java. In java all variable must be declared before they are used. The basic form of variable declaration is : type identifier = value; The type is one of the data type in java ,identifier is one of the name of the variable. Here are the various example to declare variable as below: int a,b,c; // declaring three variable of type int a,b and c double pi=3.14; // declaration with initialization both int e=2,f,g=0; // declares three int variables and initializing only two variables byte s=54; //initializes s char k='K'; //initializes char k with value of K There are three kind of variable in java, they are as follows : Instance variable : Example : A code of program for Instance variable import java.io.*; class Employe { public String name; //Instance variable is visible in any class private int salary; //This Instance variable is visible only in current class public Employe(String name,int salary) { this.name=name; //instance varible Name and salary the above program Advertisements Posted on: June: Instance variable in java Post your Comment
http://www.roseindia.net/java/beginners/java-Instance-variable.shtml
CC-MAIN-2015-18
refinedweb
190
54.66
Automating Repetitive Tasks in Visual Studio Visual Studio.Net has added an expanded extensibility model and macros to the unified Visual Studio IDE. The macro tools allow you to quickly record repetitive tasks, enhancing productivity. The productivity gain you get depends on the weight of the repetitive tasks. Small automatic tasks, small gains. Bigger tasks, bigger gains. When you have automated the repetitive task you can open the Macro IDE and customize the macros, tapping into the extensibility model using Visual Basic code. In this article we examine the rudiments of creating, customizing, and employing macros in Visual Studio, for the benefit of our Visual Basic projects. Creating Macros An easy way to start a macro is to record the steps close to or identical to the process you want to automate. When you have recorded the macro, open the Macro IDE and customize the macro, adding the fit and finish you need. Recording and Playing Macros Macros can be recorded in the Visual Studio IDE by selecting Tools|Macros|Record TemporaryMacro (Ctrl+Shift+R is the shortcut). This displays the macro toolbar as shown in Figure 1. When you have finished recording your steps then click the centerStop Recordingtool button (shown in figure 1). Figure 1. The Visual Studio.Net IDE macro toolbar allows you to automate tasks by recording your interaction with VS.NET. Figure 2. Recorded macros are temporarily stored in the Project Explorer in the Macros IDE. (The Project Explorer is shown in the figure.) A recorded macro is temporarily stored in the module RecordModule, shown in the Project Explorer for the Macros IDE figure 2. The temporary macro is saved in a subroutine named TemporaryMacro. To record a macro to display the Breakpoints window, follow these steps: - Select Tools|Macros|Record TemporaryMacro - With the recorder on, select Debug|Windows|Breakpoints - Click the Stop Recording button on the macro tool bar (see figure 1) - Press Alt+F11 to view open the Macro IDE (Alt+F11 is the shortcut for the Tools|Macros|Macros IDE menu option) The number steps will create a module named RecordingModule, and approximately create the code shown in listing 1. Listing 1: A TemporaryMacro subroutine is created after recording a macro. Option Strict OffOption Explicit OffImports EnvDTEImports System.DiagnosticsPublic Module RecordingModule Sub TemporaryMacro() DTE.ExecuteCommand("Debug.Breakpoints") End SubEnd Module To run the temporary macro press Ctrl+Shift+P. This is the shortcut operation for the Tools|Macros|Run TemporaryMacro menu item. If you want to customize the macro, then you will need to rename the module and the temporary macro subroutine to something meaningful. Saving the Temporary Macro To save your macro, open the Microsoft Visual Studio Macros 7.0 IDE, the Macro IDE. In the macro Project Explorer, select the RecordingModule and from the right-click menu select Rename. Provide a meaningful name for the recording module and a new name for the TemporaryMacro subroutine. A good name for the macro we recorded in the numbered steps might be ViewBreakpoints. Once you have saved the macro you can run it from the Command Window in the Visual Studio IDE. Open the Command Window (Ctrl+Alt+A) and type Macros.MyMacros.mymodule.ViewBreakpoints, where mymodule is the name you gave the RecordingModule when you renamed it. (When you record a new macro, Visual Studio.Net will create a new RecordingModule automatically.) Macros is a collection containing all macros. MyMacros is a macro project I created from the Tools|Macros|New Project menu item; mymodule is the new name for the RecordingModule created by default when we record a macro. ViewBreakpoints is the name of the subroutine originally named TemporaryMacro. For the most part you can think of a macro project in the same way you would think about a VB project: the macro project contains modules and code and is defined in a .vsmacros project file. The macro code is Visual Basic.Net code, which you already know or are learning how to write. The biggest benefit of macros is the extensibility model. There are a lot of classes and tools that make up the extensibility model. In addition, you can add classes and code from the CLR to your macros, as demonstrated by listing 1. Listing 1 imports the System.Diagnostics and the EnvDTE namespaces. System.Diagnostics contains debugging and diagnostics tools. The EnvDTE namespace contains the Common Environment Object Model and the Debugger Object Model. The last section demonstrates how to use the EnvDTE namespace to create several macros that are a bit more useful than our recorded macro. Customizing the Macro Code A task you may realistically want to perform is to enable or disable all breakpoints without removing them. You could accomplish this task by open the breakpoints window, selecting each breakpoint in turn, and unchecking the checkbox next to the break point. This would have the effect of disabling or enabling a breakpoint, as the case may be, without removing the breakpoint. Note: The macros in this section are already supported by selecting the Debug|Disable All Breakpoints (or Debug|Enable All Breakpoints) menu or by typing Debug.EnableAllBreakpoints or Debug.DisableAllBreakpoints in the Command Window. The macros are provided for demonstration purposes, rather than to replace existing functionality. To create our macros open an existing macro project or create a new macro project from the Tools|Macros|New Project menu. Add a module named Debugger to your macros project. Add the code in listing 2 to the Debugger module. (A synopsis of the code follows the listing.) Listing 2: Customized macros to enable and disable all breakpoints; the macro also writes the state change to the Output window. 1: Option Strict Off2: Option Explicit Off3: 4: Imports EnvDTE5: Imports System.Diagnostics6: 7: Public Module Debugger8: 9: Function GetOutputWindow() As OutputWindow10: Return _11: DTE.Windows.Item(Constants.vsWindowKindOutput).Object12: End Function13: 14: Function GetActivePane() As OutputWindowPane15: Return GetOutputWindow.ActivePane16: End Function17: 18: Private Sub WriteState(ByVal BreakPoint As Breakpoint)19: GetActivePane.OutputString( _20: String.Format("Breakpoint ({0}) enabled={1}", _21: BreakPoint.Name, BreakPoint.Enabled) & vbCrLf)22: End Sub23: 24: Private Sub SetBreakpointState(ByVal Enabled As Boolean)25: 26: Dim Breakpoint As Breakpoint27: For Each Breakpoint In DTE.Debugger.Breakpoints28: Breakpoint.Enabled = Enabled29: WriteState(Breakpoint)30: Next31: 32: End Sub33: 34: Sub EnableBreakPoints()35: SetBreakpointState(True)36: End Sub37: 38: Sub DisableBreakpoints()39: SetBreakpointState(False)40: End Sub41: End Module Option Strict and Option Explicit are Off by default, but they should probably be turned to On as a general rule, to help us find and resolve problems in our code as soon as possible. (However, this is a discussion for another day.) The Debugger module imports the EnvDTE and System.Diagnostics namespaces on lines 4 and 5. Lines 9 through 12 and 14 through 16 define two refactored Query Methods that return an instance of the OutputWindow defined in the Windows collection and an OutputWindowPane. (Just because we are writing macros, doesnt mean we should not use good techniques like Refactoring.) Line 18 through 22 uses the query method GetActivePane, and employs the Shared method String.Format to write formatted output to the Output Window in the Visual Studio IDE. The Private method SetBreakPointState actually iterates over each Breakpoint (lines 24 through 32) and changes the Enabled property of each breakpoint object. The public interface for our macro module, Debugger, is EnableBreakPoints and DisableBreakpoints. (We can also reuse the query methods GetOutputWindow and GetActivePane in some other context if we need them. These methods were created using the Refactoring Replace Temp with Query Method; read martin Fowlers Refactoring: Improving the Design of Existing Code for more on Refactoring.) Adding a Custom Macro to the Visual Studio IDE Menu To invoke our macros we can use the Command Window and call the macro directly, as demonstrated: Macros.MyMacros.Debugger.EnableBreakpoints or Macros.MyMacros.Debugger.DisableBreakpoints. MyMacros is the name I gave to the project containing the Debugger module. Figure 3. The Customize dialog is used for, among other things, adding your custom macros to a toolbar or menu. Create a menu or toolbar shortcut for your macro by selecting Tools|Customize. With the Customize dialog displayed select the Commands tab. Find the Macros category and click on it (see figure 3). In the Commands list on the right find your custom macro and drag and drop the command onto a toolbar or menu. (If you have used macros in Microsoft Office you probably already know how to do this.) Macros have been around in Office for a while, and their integration into Visual Studio.Net works consistently with the way macros are managed in Office. There is a tremendous amount of material in the extensibility model. Experiment with the Command Window and macros to gain some experience. When you find yourself performing a repetitive task, consider making that task a macro and making the macro available to your co-workers. You will find dozens of examples in the Samples.vsmacro project which ships with Visual Studio.Net.. This article was originally published on July 30, 2001
https://www.developer.com/net/vb/article.php/856631/Automating-Repetitive-Tasks-in-Visual-Studio.htm
CC-MAIN-2020-50
refinedweb
1,509
56.86
CGI Developer's Guide Chapter 14 Proprietary Extensions CONTENTS - HTML Extensions - Server-Side Push - File Upload - Maintaining State with Cookies - Summary You might have noticed that the CGI, HTML, and HTTP standard protocols are broad, flexible, and fairly powerful. Using a fairly small set of features under a limited client/server model, you can write some very sophisticated applications. However, there remain limitations and room for improvement. Both HTML and HTTP are evolving standards, constantly changing to meet the demands of the growing number of Web users. Manipulating some of these new features requires using CGI applications in innovative ways. Although the CGI protocol itself does not seem to be changing, you can constantly find new ways to use CGI to take advantage of those features of the World Wide Web that are changing. This chapter is called "Proprietary Extensions" mainly to acknowledge the role of commercial software companies in enhancing Web technologies. Companies such as Netscape Communications, Sun Microsystems, and Microsoft Corporation have proposed many of these new extensions and features and are largely responsible for the rapid development of new technologies. However, the title "Proprietary Extensions" is somewhat of a misnomer. Many of the extensions described in this chapter are being proposed as Internet standards. HTML is basically an evolving standard, and so many of these proposed extensions are widely used, that they should be considered standards even though they are not officially acknowledged as such. This chapter describes some of the more common Web extensions. You first learn extensions to HTML, including client-side imagemaps, frames, and some other browser-specific extensions. You then learn Netscape's server-side push and how you can use server-side push to create inline animation. You learn how to maintain state using HTTP cookies. Finally, you see an example of server extension: NCSA and Apache Web servers' capability to use a special CGI program to print customized error messages. HTML Extensions Perhaps the most dynamic Web technology is HTML, which is a constantly evolving technology. Many have proposed extensions to the current standard, and a large number of these extensions are widely supported by most Web browsers. Netscape is largely responsible for many of these proposed extensions, and because the Netscape browser is the most widely used on the Web, many other browsers have adopted these extensions as well. Microsoft is also beginning to develop new extensions and has introduced a few original ones of its own, implemented in its Internet Explorer browser. Four extensions are described in this section: client-side imagemaps, HTML frames, client-side pull, and some miscellaneous extensions. Client-side imagemaps were originally proposed by Spyglass, and many browsers have since adopted this standard. HTML frames and client-side pull are both Netscape proposals; although these features have not been widely implemented on other browsers, many Web authors take advantage of these extensions because of the popularity of the Netscape browser. Finally, the miscellaneous extensions discussed are some of Microsoft's proposed HTML tags to improve the multimedia capabilities of the Web. Client-Side Imagemaps In Chapter 15, "Imagemaps," you learn the most common way to implement imagemaps: using a server-side application such as the CGI program imagemap. However, even though there is an advantage to using a server application for customized imagemap applications (such as the tictactoe program in Chapter 15), a server-based imagemap is a slow operation by nature. The imagemap CGI program works as follows: - The client sends coordinates to the CGI program. - The CGI program compares coordinates to a map file that maps imagemap coordinates to the URL of a document. The program sends the location of the document back to the browser. - The browser sends a request to the new URL and displays the new document. In order to determine where to go next, the browser needs to make two different requests. It is much more efficient to define where to go next within the HTML document so that the browser needs to make only one request, as shown in Figure 14.1. A client-side imagemap contains the mapping information within an HTML document so that the browser can figure out where to go according to where the user clicked on the image. Figure 14.1 : Using an imagemap CGI application requires the browers to make two connections to the server. A client-side imagemap requires only one connection. To specify that an image is part of a client-side imagemap, you use the parameter USEMAP with the <image> tag: <IMG SRC=" . . . " USEMAP=" . . . "> The value of USEMAP is the location of the map information. Map information is specified using the <map> tag: <MAP NAME=" . . . "> </MAP> NAME is the identifier of this map. The value of NAME is referenced by USEMAP the same way you would reference an <a name> tag, preceded by a pound sign (#). For example, the client-side imagemap <img src="buttons.gif" usemap="#buttonbar"> would correspond to the map information in the same HTML page labeled with this: <map name="buttonbar"> You can store the map information in a separate file from the actual imagemap. For example, if you had a button bar that was the same on all of your pages, you might want to store the map information in the file buttonbar.html surrounded by the tags <map name="buttonbar"> and </map>. Then, to reference your button bar in your documents, you would use this: <img src="buttons.gif" usemap="buttonbar.html#buttonbar"> Within the <map> tags, you store the definitions of your map using the <area> tag. The <area> tag relates an area on the image to another document. Here is the proper format for the <area> tag: <AREA [SHAPE=" . . . "] COORDS=" . . . " [HREF=" . . . "] [NOHREF] [ALT=" . . . "]> SHAPE defines the shape of the area. By default, if you do not specify a SHAPE parameter, <area> assumes a rectangular shape. The possible shapes you can define depend on the browser. Shapes commonly defined by browsers are RECT, CIRCLE, and POLYGON. COORDS contains a comma- delimited list of coordinates that define the boundaries of your area. A rectangular area requires four numbers to describe it: the x and y coordinates of the upper-left and lower-right corner. Thus, the COORDS value of a rectangular shape would take the following form: upperleft_x,upperleft_y,lowerright_x,lowerright_y COORDS for a circle take this format: center_x,center_y,radius Polygons take a list of coordinates of each vertex. Although there is no theoretical limit to the number of vertices you can define for your polygon, there is a practical limit. HTML does not enable parameter values larger than 1024 characters. HREF specifies where to go if the user has clicked in the area specified by that <area> tag. If you do not specify an HREF parameter or if you specify NOHREF, then the browser will ignore any clicks within that area. This is not a very useful parameter because the browser will simply ignore clicks in any undefined region. If you don't want the browser to do anything if the user clicks on a certain region, just don't define that region. ALT is a text description of the specified area and is used by text browsers that cannot view images. If you view a client-side imagemap from a text browser, you'll see a list of names (specified by the ALT parameter in each <area> tag). Clicking one of these names takes you to the URL specified in HREF. If you define two areas that intersect, the first area defined takes precedence. For example, with the following imagemap the rectangular region bounded by (30,0) and (50,50) is covered by both regions: <img src="map.gif" usemap="#mymap"> <map name="mymap"> <area coords="0,0,50,50" href="one.html"> <area coords="30,0,80,50" href="two.html"> </map> If a user clicks anywhere inside this common region, then he or she will go to one.html, because that is the first <area> tag specified. Listing 14.1 contains some sample HTML for a client-side imagemap. Figure 14.2 shows how this imagemap looks from a browser. Figure 14.2 : The rendered client-side imagemap from Listing 14.1. Listing 14.1. A sample client-side imagemap. <html> <head> <title>Pforzheimer House</title> </head> <body> <a href="/cgi-bin/imagemap/~pfoho/imagemaps/pfoho-buttons.map"> <img src="/~pfoho/images/pfoho-buttons.gif" alt="[Short Cuts]" ISMAP</a> <map name="pfoho-buttons"> <area href="" coords="31,0,65,33" alt="Harvard University"> <area href="index.html" coords="66,0,100,33" alt="Pforzheimer House"> <area href="house/" coords="101,0,177,33" alt="The House"> <area href="people/" coords="178,0,240,33" alt="People"> <area href="events/" coords="241,0,303,33" alt="Events"> <area href="orgs/" coords="304,0,403,33" alt="Organizations"> <area href="tour/" coords="404,0,453,33" alt="Tour"> </map> </body> </html> Frames The standard Web browser consists of one window that displays the HTML or other documents. Netscape has introduced extensions that enable you to divide up this single window into multiple "frames," where each frame essentially acts as a separate window. Figure 14.4 later in this chapter is an example of a standard Web page using frames. Using frames, you can keep common elements of your Web site on the browser window at all times while the user browses through the other documents on your site in a separate frame. Frames follow a very similar syntax to HTML tables. To specify a frame, you use the tag <frameset>, which replaces the <body> tag in an HTML document. <html> <head> </head> <frameset> </frameset> </html> The format of the <frameset> tag is <FRAMESET ROWS| </FRAMESET> The <frameset> tag takes either the ROWS or COLS attribute. The value of the ROWS attribute specifies how to divide the browser window into rows, just as the COLS attribute specifies how to divide the window into columns. The ROWS and COLS attributes take a list of values that describe the division of the particular frameset. You can specify the height of a frame row or the width of a frame column as a percentage of the window size, by pixel size, or by whatever is left. For example, suppose you wanted to divide up a window into three rows of equal width, as shown in Figure 14.3. If you assume that the browser window is 300 pixels high, you could use this: Figure 14.3 : Dividing the brower window into three rows. <frameset rows="100,100,100"> Unfortunately, you can almost never guarantee the height of the browser; therefore, this is not usually a good specification. (It is useful if you have a fixed-size graphic within one of the frames.) You could instead specify the percentage of the current window each row should take. <frameset rows="33%,33%,34%"> Note that the sum of the percentages in the ROWS attribute must equal 100%. If the values do not add up to 100% and there are no other types of values, then the percentages are readjusted so that the sum is 100%. For example: <frameset rows="30%,30%"> is equivalent to <frameset rows="50%,50%"> Using this tag, the size of the frames will readjust when the browser is resized. Although this method works well, there is an even simpler method. <frameset rows="*,*,*"> The asterisk (*) tells the frame to use relative sizes for determining the size of the rows. The three asterisks mean that each row should split the available height evenly. If you want to make the first row twice as big as the other two rows, you could use this: <frameset rows="2*,*,*"> You can mix different value types in the ROWS or COLS attribute. For example, the following will create one row 100 pixels high and split the remaining space in half for the remaining two rows: <frameset rows="100,*,*"> If you use the following, the first row would take up 20 percent of the window height, the second row would take up 30 percent, and the last row would use up the rest of the space: <frameset rows="20%,30%,*"> The number of values in the ROWS or COLS parameter determines the number of rows or columns within a frameset. Within the <frameset> tags, you define each frame using another <frameset> tag that will further divide that frame, or you can use the <frame> tag to specify attributes of that frame. Here is the <frame> tag's format: <FRAME [SRC=" . . . " NAME=" . . . " MARGINWIDTH=" . . . " MARGINHEIGHT=" . . . " SCROLLING="no|yes|auto" NORESIZE]> If you do not specify any attributes within the <frame> tag, you'll just see an empty frame. SRC specifies the document that goes in that frame. NAME is the name of the frame. The NAME is useful because it enables you to force the output of CGI programs to appear in specific frames. MARGINWIDTH and MARGINHEIGHT are aesthetic tags that define the width of the margins between the content of the document and the border of the frame. SCROLLING determines whether or not a scrollbar should appear within the frame. By default, SCROLLING is set to auto, meaning that a scrollbar appears only when necessary. You can set it to always appear (yes) or to never appear (no). Finally, by default, the user can change the size of the frames from his or her browser. Specifying NORESIZE disables this feature. Listing 14.2 contains a sample HTML document that defines several empty frames. Figure 14.4 shows what frames.html looks like from your browser. Figure 14.4 : Frames.html. Listing 14.2. The frames.html program. <html> <head> <title>Frames</title> </head> <frameset cols="30%,70%"> <frame> <frameset rows="80%,20%"> <frame> <frame> </frameset> </frameset> </html> You can describe an alternative HTML document within the <frameset> tags that browsers that do not understand frames will display. To do this, embed the HTML within the tags <NOFRAMES> </NOFRAMES>. These tags should go between the <frameset> tags. Listing 14.3 contains an example of a frame with alternate HTML. Listing 14.3. The alt-frames.html program. <html> <head> <title>Frames</title> </head> <frameset cols="30%,70%"> <noframes> <h1>Frames</h1> <p>This HTML document contains frames. You need a frames-enabled browser such as Netscape v2.0 or greater to view them.</p> </noframes> <frame> <frameset rows="80%,20%"> <frame> <frame> </frameset> </frameset> </html> How do you redirect output to one of these frames? There are two situations in which you might want to redirect output, and two ways to handle these situations. The first possibility is that you have clicked a link-either an <a href>, a <form> submit button, or a client-side imagemap <area>-and you want the retrieved document to appear in one of your frames or even in a new browser window. You can accomplish this using the TARGET attribute in either the <a href>, <form>, <area>, or <base> tag. You can specify either the name of a browser, the name of a frame, or a special variable (listed in Table 14.1) in the TARGET attribute. For example, the following frame document splits the screen in half and places doc1.html in the left frame, called "left," and doc2.html in the right frame, called "right": <html> <head> <title>Frames</title> </head> <frameset cols="*,*"> <frame src="doc1.html" name="left"> <frame src="doc2.html" name="right"> </frameset> </html> If doc1.html had the following tag when a user clicks "new document," new.html displays in the left frame: <a href="new.html">new document</a> If, however, doc1.html contains <a href="new.html" target="right">new document</a> then, when the user clicks "new document," new.html appears in the right frame. Similarly, if doc1.html contains the following and the user clicks "new document" or any other link on that page, the new document appears in the right frame: <html><head> <title>First Document</title> <base target="right"> </head> <body> <a href="new.html">new document</a> </body></html> Similarly, you can target CGI output by sending the HTTP header Window-target followed by the window or frame name. For example, if you wanted to send the output of a CGI program to the right frame, you could send this: Window-target: right Content-Type: text/plain output from CGI program Client-Side Pull Netscape has a feature called client-side pull that enables you to tell the browser to load a new document after a specified amount of time. This has several potential uses. For example, if you provide real-time sports scores on your Web site, you might want the page to automatically update every minute. Normally, if the user wants to see the latest scores, he or she would have to use the browser's reload function. With client-side pull, you can tell the browser either to automatically reload or load a new page after a specified amount of time. You specify client-side pull by using the Netscape CGI response header Refresh. The following is the format for the header, where n is the number of seconds to wait before refreshing: Refresh: n[; URL=url] If you want the document to load another URL after n seconds instead of reloading the current document, you specify it using the parameter URL followed by the URL. For example, if you had a CGI program called scores.cgi that sends an HTML document with the current sports scores, you could have it tell the Netscape browser to reload every 30 seconds. #!/usr/local/bin/perl # scores.cgi print "Refresh: 30\n"; print "Content-Type: text/html\n\n"; print "<html> <head>\n"; print "<title>Scores</title>\n"; print "</head>\n\n"; print "<body>\n"; print "<h1>Latest Scores</h1>\n"; # somehow retrieve and print the latest scores here print "</body> </html>\n"; When a Netscape browser calls scores.cgi, it displays the HTML document, waits 30 seconds, and then reloads the document. If you were serving scores.cgi from and you moved the service to, you might want the scores.cgi program at scores.com to send the header Refresh: 30; URL= and a message that says the URL of this service has changed. #!/usr/local/bin/perl # replacement scores.cgi for print "Refresh: 30;URL=\n"; print "Content-Type: text/html\n\n"; print "<html><head>\n"; print "<title>Scores Service Moved</title>\n"; print "</head>\n\n"; print "<body>\n"; print "<h1>Scores Service Has Moved</h1>\n"; print "<p>This service has moved to"; print "<a href=\"\">"; print "</a>.\n"; print "If you are using Netscape, you will go to that document\n"; print "automatically in 30 seconds.</p>\n"; print "</body></html>\n"; When the user tries to access, it sends the previous message and the Refresh header. If you are using Netscape, your browser waits for 30 seconds and then accesses. Although sending a Refresh header from a CGI program to specify reloading the document might seem useful, sending that header to load another document does not. There isn't a good reason to use the Refresh header for redirection rather than the Location header if you are using a CGI program. For example, you could replace the old scores.cgi program with the following, which simply redirects the browser to the new URL: #!/usr/local/bin/perl print "Location:\n\n"; This works for all browsers, not just Netscape. The Refresh header is useful, however, because Netscape properly interprets the <META HTTP-EQUIV> <head> tag. As you might recall from Chapter 3, "HTML and Forms," <META HTTP-EQUIV> enables you to embed HTTP headers within the HTML document. For example, if you had an HTML document (rather than a CGI program) that had the latest scores, you could have it automatically reload by specifying the header using the <META HTTP-EQUIV> tag. <html> <head> <title>Sports Scores</title> <meta http- </head> <body> <h1>Latest Scores</h1> <!-- have the latest scores here --> </body></html> When Netscape loads this page, it displays it and then reloads the page after 30 seconds. Similarly, you could also have the HTML page load another page after a specified amount of time. You can use client-side pull to automatically load a sound to accompany an HTML document, thereby implementing "inline" sound. For example, suppose you are the CEO of a company called Kaplan's Bagel Bakery, and you want to have an audio clip that plays automatically when the user accesses your Web page. Assuming your URL is and the audio clip is located at, your HTML file might look like this: <html><head> <title>Kaplan's Bagel Bakery</title> <meta http- </head> <body> <h1>Kaplan's Bagel Bakery</h1> <p>Welcome to our bagel shop!</p> </body></html> When you access this HTML file from Netscape, it immediately loads and plays the intro.au sound clip. You don't have to worry about the sound clip continuously loading because the sound clip will not have a Refresh header. You can create some potentially useful applications using client-side pull, but you should use it in moderation. HTML documents that constantly reload can be annoying as well as a resource drain on both the server and client side. There are more efficient and aesthetic ways of implementing inline animation than using client-side pull. Other Extensions Many of the custom extensions and techniques described in this chapter were created to improve the multimedia and visual capabilities of the World Wide Web. Microsoft provides three extensions to HTML that extend the multimedia capability of its Internet Explorer browser. The tag <bgsound> enables you to play background sounds while the user is viewing a page. <BGSOUND SRC=" . . . " [LOOP="n|infinite"]> SRC is the relative location of either a WAV or AU sound file. By default, the sound plays only once. You can change this by defining LOOP to be either some number (n) or infinite. Internet Explorer has two tags that offer some form of animation. The first, <marquee>, enables you to have scrolling text along your Web browser: <MARQUEE [BGCOLOR=" . . . " DIRECTION="RIGHT|LEFT" HEIGHT="n|n%" WIDTH="n|n%" BEHAVIOR=[SCROLL|SLIDE|ALTERNATE] LOOP="n|infinite" SCROLLAMOUNT="n" SCROLLDELAY="n" HSPACE="n" VSPACE="n" ALIGN="top|middle|bottom"]> </MARQUEE> The text between the <marquee> tags will scroll across the screen. DIRECTION specifies the direction the text moves, either left or right. HEIGHT and WIDTH can either be a pixel number or percentage of the entire browser window. BEHAVIOR specifies whether the text scrolls on and off the screen (scroll), slides onto the screen and stops (slide), or bounces back and forth within the marquee (alternate). SCROLLAMOUNT defines the number of pixels to skip every time the text moves, and SCROLLDELAY defines the number of milliseconds before each move. HSPACE and VSPACE define the margins in pixels. ALIGN specifies the alignment of the text within the marquee. In order to include inline animations in Microsoft Audio/Visual format (*.AVI) in Internet Explorer, you use an extension to the <img> tag: <IMG DYNSRC="*.AVI" [LOOP="n|infinite" START="fileopen|,mouseover" CONTROLS]> DYNSRC contains the location of the *.avi file (just as SRC contains the location of the graphic file). LOOP is equivalent to LOOP in both <bgcolor> and <marquee>. If CONTROLS is specified, video controls are displayed underneath the video clip, and the user can rewind and watch the clip again. START can take two values: fileopen or mouseover. If fileopen is specified, the video plays as soon as the file is accessed. If mouseover is specified, the video plays every time the user moves the mouse over the video. You can specify both at the same time, separating the two values with a comma. Server-Side Push As an alternative to client-side pull for generating dynamically changing documents, Netscape developed a protocol for server-side push applications. A server-side push application maintains an open connection with the browser and continuously sends several frames of data to the browser. The browser displays each data frame as it receives it, replacing the previous frame with the current one. In order to tell the browser to expect a server-side push application, the CGI application sends the MIME type multipart/x-mixed-replace as the Content-Type. This MIME type is an experimental, modified version of the registered MIME type multipart/mixed. The MIME type multipart/x-mixed-replace follows the same format as multipart/mixed. You specify the MIME type followed by a semicolon (;) and the parameter boundary, which specifies a separator string. This string separates all of the different data types in the entity, and it can be any random string containing valid MIME characters. For example: Content-Type: multipart/x-mixed-replace;boundary=randomstring --randomstring When the browser reads this header, it knows that it will be receiving several blocks of data from the same connection, so it keeps the connection open and waits to receive the data. The browser reads and displays everything following-randomstring until it reads another instance of-randomstring. When it receives this closing-randomstring string boundary, it continues to keep the connection open and waits for new information. It replaces the old data with the new data as soon as it receives it until, once again, it reaches another boundary string. Each data block within the two boundary strings has its own MIME headers that specify the type of data. This way, you can send multiple blocks of different types of data, from images to text files to sound. Each boundary string is defined as two dashes (--) followed by the boundary value specified in the multipart/x-mixed-replace header. The last data block you want to send ends with two dashes, followed by the boundary value, followed by another two dashes. However, there is no need to have a final data block. The server-side push application can continue to send information indefinitely. At any time, the user can stop the flow of data by clicking the browser's Stop button. For example, suppose you had the five text files listed in Listings 14.4 through 14.8. Listing 14.4. The first text file. | | | | | Listing 14.5. The second text file. / / / / / Listing 14.6. The third text file. - - - - - Listing 14.7. The fourth text file. \ \ \ \ \ Listing 14.8. The fifth text file. | | | | | To force the browser to display all five of these text files in succession as quickly as possible, you would write a CGI program that sends the following to the browser: Content-Type: multipart/x-mixed-replace;boundary=randomstring --randomstring Content-Type: text/plain | | | | | --randomstring Content-Type: text/plain / / / / / --randomstring Content-Type: text/plain ----- --randomstring Content-Type: text/plain \ \ \ \ \ --randomstring Content-Type: text/plain | | | | | --randomstring- Upon receiving a block of data like this, Netscape prints each text file as soon as it receives it (in this case achieving an animated twirling bar effect.) Each data type contains its own Content-Type header that specifies the type of data between that header and the string boundary. In this example, each block of data is a plain text file; thus, the Content-Type: text/plain header. Notice also that the final data block ends with two dashes, followed by the boundary value, followed by another two dashes (--randomstring--). In this example, all of the blocks of data are the same type; however, this does not have to be the case. You could replace text with images or sound. Animation A common application of server-side push is to create inline animation that sends several GIF files in succession, creating animation. For example, if you had two GIF frames of an animated sequence (frame1.gif and frame2.gif), a server-side push program that sent each of these frames might look like this: #!/usr/local/bin/perl print "Content-Type: multipart/x-mixed-replace;boundary=blah\n\n"; print "-blah\n"; print "Content-Type: image/gif\n\n"; open(GIF,"frame1.gif"); print <GIF>; close(GIF); print "\n-blah\n"; print "Content-Type: image/gif\n\n"; open(GIF,"frame2.gif"); print <GIF>; close(GIF); print "\n-blah--\n"; Writing a general animation program that loads several GIF images and repeatedly sends them using server-side push is easy in principle. All it requires is a loop and several print statements. However, in reality, you might get choppy or slow animation. In the case of server-side push animations, you want to do everything you can in order to make the connection and the data transfer between the server and client as fast as possible. For some very small animations on a very fast connection, any code improvements might not be noticeable; however, on slower connections with more frames, more efficient code greatly enhances the quality of the animation. The best way to prevent choppiness in your server-side push animations is to unbuffer the output. Normally, when you do a print in Perl or a printf() in C, the data is buffered before it is printed to the stdout. If the internal buffer size is large enough, there might be a slight delay as the program waits for the buffer to fill up before sending the information to the browser. Turning off buffering prevents these types of delays. Here's how to turn off buffering in Perl for stdout: select(stdout); $| = 1; In C: #include <stdio.h> setbuf(stdout,NULL); Normally, the server also buffers output from the CGI program before sending it to the client. This is undesirable for the same reason internal buffering is undesirable. The most portable way to overcome this buffering is to use an nph CGI program that speaks directly to the client and bypasses the server buffering. There is also another very minimal performance gain because the headers of the CGI output are not parsed, although this gain is nil for all practical purposes. I wrote two general server-side push animation programs in Perl and C (nph-animate.pl and nph-animate.c, respectively) that send a finite number of individual GIF files continuously to the browser. All of the GIF files must have the same prefix and exist in the same directory somewhere within the Web document tree. For example, if you have three GIF files, stick1.gif, stick2.gif, and stick3.gif (see Figure 14.5), located in the directory /images relative to the document root, you would include these files as an inline animation within your HTML document using this: Figure 14.5 : Three GIF files: stick1.gif, stick2.gif, and stick3.gif. <img src="/cgi-bin/nph-animate/images/stick?3"> nph-animate assumes that all of the images are GIF files and end in the prefix .gif. It also assumes that they are numbered 1 through some other number, specified in the QUERY_STRING (thus, the 3 following the question mark in the previous reference). The Perl code for nph-animate.pl (shown in Listing 14.9) is fairly straightforward. It turns off buffering, reads the location and number of files, prints an HTTP header (because it is an nph script) and the proper Content-Type header, and then sends the GIFs one-by-one, according to the previous specifications. In order to make sure the script dies if the user clicks the browser's Stop button, nph-animate.pl exits when it receives the signal SIGPIPE, which signifies that the program can no longer send information to the browser (because the connection has been closed). Listing 14.9. nph-animate.pl: a push animation program written in Perl. #!/usr/local/bin/perl $SIG{'PIPE'} = buhbye; $| = 1; $fileprefix = $ENV{'PATH_TRANSLATED'}; $num_files = $ENV{'QUERY_STRING'}; $i = 1; print "HTTP/1.0 200 Ok\n"; print "Content-Type: multipart/x-mixed-replace;boundary=whatever\n\n"; print "-whatever\n"; while (1) { &send_gif("$fileprefix$i.gif"); print "\n-whatever\n"; if ($i < $num_files) { $i++; } else { $i = 1; } } sub send_gif { local($filename) = @_; local($filesize); if (-e $filename) { print "Content-Type: image/gif\n\n"; open(GIF,$filename); print <GIF>; close(GIF); } else { exit(1); } } sub buhbye { exit(1); } I use several system-specific, low-level routines in the C version of nph-animate (shown in Listing 14.10) for maximum efficiency. It will work only on UNIX systems, although porting it to other operating systems should not be too difficult. First, instead of using <stdio.h> functions, I use lower-level input and output functions located in <sys/file.h> on BSD-based systems and in <sys/fcntl.h> on SYSV-based systems. If write() cannot write to stdout (if the user has clicked the browser's Stop button and has broken the connection), then nph-animate.c exits. Reading the GIF file and writing to stdout requires defining a buffer size. I read the entire GIF file into a buffer and write the entire file at once to stdout. Even with the inherent delay in loading the file to the buffer, it should be faster than reading from the file and writing to stdout one character at a time. In order to determine how big the file is, I use the function fstat() from <sys/stat.h>, which returns file information for files on a UNIX system. Listing 14.10. nph-animate.c: a push animation program written in C. #include <sys/file.h> /* on SYSV systems, use <sys/fcntl.h> */ #include <sys/stat.h> #include <sys/types.h> #include <string.h> #include <stdlib.h> #include <unistd.h> #define nph_header "HTTP/1.0 200 Ok\r\n" #define multipart_header \ "Content-Type: multipart/x-mixed-replace;boundary=whatever\r\n\r\n" #define image_header "Content-Type: image/gif\r\n\r\n" #define boundary "\n-whatever\n" void send_gif(char *filename) { int file_desc,buffer_size,n; char *buffer; struct stat file_info; if ((file_desc = open(filename, O_RDONLY)) > 0) { fstat(file_desc,&file_info); buffer_size = file_info.st_size; buffer = malloc(sizeof(char) * buffer_size + 1); n = read(file_desc,buffer,buffer_size); if (write(STDOUT_FILENO,buffer,n) < 0) exit(1); free(buffer); close(file_desc); } else exit(1); } int main() { char *picture_prefix = getenv("PATH_TRANSLATED"); char *num_str = getenv("QUERY_STRING"); char *picture_name; int num = atoi(num_str); int i = 1; char i_str[strlen(num_str)]; if (write(STDOUT_FILENO,nph_header,strlen(nph_header))<0) exit(1); if (write(STDOUT_FILENO,multipart_header,strlen(multipart_header))<0) exit(1); if (write(STDOUT_FILENO,boundary,strlen(boundary))<0) exit(1); while (1) { if (write(STDOUT_FILENO,image_header,strlen(image_header))<0) exit(1); sprintf(i_str,"%d",i); picture_name = malloc(sizeof(char) * (strlen(picture_prefix) + strlen(i_str)) + 5); sprintf(picture_name,"%s%s.gif",picture_prefix,i_str); send_gif(picture_name); free(picture_name); if (write(STDOUT_FILENO,boundary,strlen(boundary))<0) exit(1); if (i < num) i++; else i = 1; } } Using nph-animate, I include an inline animation of my stick figures (stick1.gif, stick2.gif, and stick3.gif) running within an HTML document, as shown in Figure 14.6. Figure 14.6 : The stick figure running within an HTML document. File Upload Perhaps one of the most popular features people want to see on the Web is the capability to upload as well as download files. The current draft of the HTTP 1.0 protocol (February, 1996) defines a means for uploading files using HTTP (PUT), but very few servers have actually implemented this function. Web develpers have proposed a means of uploading files using the form's POST mechanism. At the time of the printing of this book, the only browser that has implemented this feature is Netscape v2.0 or greater. Here, I describe how Netscape has implemented file uploading as well as how to implement this feature using CGI programs. In order to use file upload, you must define ENCTYPE in the <form> tag to be the MIME type "multipart/form-data". <FORM ACTION=" . . . " METHOD=POST This MIME type formats form name/value pairs as follows: Content-Type: multipart/form-data; boundary=whatever -whatever Content-Disposition: form-data; name="name1" value1 -whatever Content-Disposition: form-data; name="name2" value2 --whatever- This is different from the normal URL encoding of form name/value pairs, and for good reason. For regular, smaller forms consisting mostly of alphanumeric characters, this seems to send a lot of extraneous information-all of the extra Content-Disposition headers and boundaries. However, large binary files generally consist of mostly non-alphanumeric characters. If you try to send a file using the regular form URL encoding, the size of the transfer will be much larger because the browser encodes the many non-alphanumeric characters. The previous method, on the other hand, does not need to encode any characters. If you are uploading large files, the size of the transfer will not be much larger than the size of the files. In order to allow the user to specify the filename to upload, you use the new input type file: <INPUT TYPE=FILE In this case, NAME is not the filename, but the name associated with that field. For example, if you use a form such as upload.html (shown in Listing 14.11), your browser will look like Figure 14.7. Figure 14.7 : The brower prompts the user to enter the filename of the file to upload. Listing 14.11. The form upload.html. <html><head> <title>Upload File</title> </head> <body> <h1>Upload File</h1> <form action="/cgi-bin/upload.pl" method=POST <p>Enter filename: <input type=file</p> <p><input type=submit</p> </form> </body></html> You can either directly type the complete path and filename of the file you want to upload in the text field, or you can click the Browse button and select the file using Netscape's File Manager. After you enter the filename and press Submit, the file is encoded and sent to the CGI program specified in the ACTION parameter of the <form> tag (in this case, upload.pl). Suppose you have a text file (/home/user/textfile) that you want to upload. If you enter this into the file field of the form and press Submit, the browser sends something like the following to the server: Content-Type: multipart/form-data; boundary=whatever Content-Length: 161 -whatever Content-Disposition: form-data; name="filename"; filename="textfile" contents of your textfile called "textfile" located in /home/user. --whatever- Notice that the filename-stripped of its path-is located in the Content-Disposition header, and that the contents of your text file follow the blank line separating the header from the contents. When the server receives this data, it places the values of the Content-Type and Content-Length headers into the environment variables CONTENT_TYPE and CONTENT_LENGTH, respectively. It then sends all of the data following the first blank line, including the first boundary line, to the stdin. Your CGI program should be able to parse this data and perform the desired actions. The concept of any person uploading files to your server conjures up many fears about security. The file upload protocol deals with security in several ways. First, only the name of the file is sent to the browser, not the path. This is to address potential privacy concerns. Second, you must type the filename and press the Submit button in order to submit a file. The HTML author cannot include a hidden input field that contains the name of a file that is potentially on the client's machine. If this were possible, then people browsing the Web risk the danger of allowing malicious servers to steal files from the client machines. This is not possible under the current implementation because the user must explicitly type and approve any files he or she wants to upload to the server. Parsing File Upload Parsing data of type multipart/form-data is a challenging task because you are dealing with large amounts of data, and because there is no strict standard protocol yet. Only time can solve the latter problem, and if you need to write CGI programs that implement file uploading, you'll want to prepare yourself for changes in the standard. There are good strategies for dealing with the problem of large data size. In order to best demonstrate the challenges of parsing multipart/form-data encoded data and to present strategies and solutions, I present the problem as posed to a Perl programmer. The problem is much more complex for the C programmer, who must worry about data structures, dynamically allocating memory, and writing proper parsing routines; however, the same solutions apply. Forget for a moment the size of the data and approach this problem as a Perl programmer with no practical limits. How would you parse this data? You might read the CONTENT_LENGTH variable to determine how much data there is and then read the entire contents of stdin into a buffer called $buffer. $length = $ENV{'CONTENT_LENGTH'}; read(STDIN,$buffer,$length); This loads the entire data block into the scalar variable $buffer. At this stage, parsing the data is fairly simple in Perl. You could determine what the boundary string is, split the buffer into chunks of data separated by the boundary string, and then parse each individual data chunk. However, what if someone is uploading a 30MB file? This means you need at least 30MB of spare memory to load the contents of stdin into the variable $buffer. This is an impractical demand. Even if you have enough memory, you probably don't want one CGI process to use up 30MB of memory. Clearly, you need another approach. The one I use in the program upload.pl (shown in Listing 14.12) is to read the stdin in chunks and then write the data to a temporary file to the hard drive. After you are finished creating the temporary file, you can parse that file directly. Although it requires an additional 30MB of space on your hard drive, this is much more likely and more practical than needing that equivalent of RAM. Additionally, if there is some error, you can use the temporary file for debugging information. Parsing the temporary file is fairly simple. Determine whether the data you are about to parse is a name/value pair or a file using the Content-Disposition header. If it is a name/value pair, parse the pair and insert it into the associative array %input keyed by name. If it is a file, open a new file in your upload directory and write to the file until you reach the boundary string. Continue to do this until you have parsed the entire file. Listing 14.12 contains the complete Perl code for upload.pl. You need to change two variables: $TMP, the directory that stores the temporary file, and $UPLOADDIR, the directory that contains the uploaded files. upload.pl generates the name of the temporary file by appending the time to the name formupload-. It saves the data to this temporary file, and parses it. Listing 14.12. The upload.pl program. #!/usr/local/bin/perl require 'cgi-lib.pl'; $TMP = '/tmp/'; $UPLOADDIR = '/usr/local/etc/httpd/dropbox/'; $CONTENT_TYPE = $ENV{'CONTENT_TYPE'}; $CONTENT_LENGTH = $ENV{'CONTENT_LENGTH'}; $BUF_SIZ = 16834; # make tempfile name do { $tempfile = $TMP."formupload-".time } until (!(-e $tempfile)); if ($CONTENT_TYPE =~ /^multipart\/form-data/) { # save form data to a temporary file ($boundary = $CONTENT_TYPE) =~ s/^multipart\/form-data\; boundary=//; open(TMPFILE,">$tempfile"); $bytesread = 0; while ($bytesread < $CONTENT_LENGTH) { $len = sysread(STDIN,$buffer,16834); syswrite(TMPFILE,$buffer,$len); $bytesread += $len; } close(TMPFILE); # parse temporary file undef %input; open(TMPFILE,$tempfile); $line = <TMPFILE>; # should be boundary; ignore while ($line = <TMPFILE>) { undef $filename; $line =~ s/[Cc]ontent-[Dd]isposition: form-data; //; ($name = $line) =~ s/^name=\"([^\"]*)\".*$/$1/; if ($line =~ /\; filename=\"[^\"]*\"/) { $line =~ s/^.*\; filename=\"([^\"]*)\".*$/$1/; $filename = "$UPLOADDIR$line"; } $line = <TMPFILE>; # blank line if (defined $filename) { open(NEWFILE,">$filename"); } elsif (defined $input{$name}) { $input{$name} .= "\0"; } while (!(($line = <TMPFILE>) =~ /^--$boundary/)) { if (defined $filename) { print NEWFILE $line; } else { $input{$name} .= $line; } } if (defined $filename) { close(NEWFILE); } else { $input{$name} =~ s/[\r\n]*$//; } } close(TMPFILE); unlink($tempfile); # print success message print &PrintHeader,&HtmlTop("Success!"),&PrintVariables(%input),&HtmlBot; } else { print &PrintHeader,&HtmlTop("Wrong Content-Type!"),&HtmlBot; } Maintaining State with Cookies In Chapter 13, "Multipart Forms and Maintaining State," I describe three different methods for maintaining state. All three of the methods required the server to send the state information to the client embedded in the HTML document. The client returned the state back to the server either by appending the information to the URL, sending it as a form field, or sending a session ID to the server, which would use the ID to access a file containing the state information. Netscape proposed an alternative way of maintaining state-HTTP cookies-which has since been adopted by several other browsers, including Microsoft's Internet Explorer. Cookies are name/value pairs along with a few attributes that are sent to and stored by the browser. When the browser accesses the site specified in the cookie, it sends the cookie back to the server, which passes it to the CGI program. To send a cookie, you use the HTTP response header Set-Cookie. Set-Cookie: NAME=VALUE; [EXPIRES=date; PATH=path; DOMAIN=domain] The only required field is the name of the cookie (NAME) and its value (VALUE). Both NAME and VALUE cannot contain either white space, commas, or semicolons. If you need to include these characters, you can URL encode them. EXPIRES is an optional header that contains a date in the following format: Dayname, DD-MM-YY HH:MM:SS GMT If you do not specify an EXPIRES header, the cookie will expire as soon as the session ends. If the browser accesses the domain and the path specified by DOMAIN and PATH, it sends the cookie to the server as well. By default, DOMAIN is set to the domain name of the server generating the cookie. You can only set DOMAIN to a value within your own domain. For example, if your server and CGI program is on, you can set the domain to be and yale.edu, but not whitehouse.gov. Domains such as .edu or .com are too general, and are consequently not acceptable. If your server is running on a non-standard port number, you must include that port number in the DOMAIN attribute as well. When the browser connects to a server, it checks its cookies to see if the server falls under any of the domains specified by one of its cookies. If it does, it then checks the PATH attribute. PATH contains a substring of the path from the URL. The most general value for PATH is /; this will force the browser to send the cookie whenever it is accessing any document on the site specified by DOMAIN. If no PATH is specified, then the path of the current document is used as the default. To delete a cookie, send the same cookie with an expiration date that has already passed. The cookie will expire immediately. You can also change the value of cookies by sending the same NAME, PATH, and DOMAIN but a different VALUE. Finally, you can send multiple cookies by sending several Set-Cookie headers. When the browser sends the cookie back to the server, it sends it as an HTTP header of the following form: Cookie: NAME1=VALUE1; NAME2=VALUE2 The server takes the value of this header and places it in the environment variable HTTP_COOKIE, which the CGI program can then parse to determine the value of the cookies. Although HTTP cookies are an interesting and potentially useful feature, consider several factors before using them. First, because not all browsers have cookie capability, cookies are not useful for general state applications. However, if you are writing an application and you are sure the user will use a cookie-capable browser, there may be some advantage to using cookies. Finally, there are some practical limitations to cookies. Some browsers will accept only a certain number of cookies per domain (for example, Netscape will accept only 20 cookies per domain and 300 total). An additional limitation is the size of constraint of the HTTP_COOKIE environment variable. If you have a site where you must potentially send many large cookies, you are better off using other state methods. Summary Several companies have extended some of the standard Web protocols in order to provide new and useful features. Most of these extensions are visual, such as extensions to HTML and server-side push to create inline animations. Other useful features include file upload and maintaining states using HTTP cookies. Should you use these extensions? If some of these extensions provide a feature you need, and you are sure that your users will use browsers that support these features, then by all means do. However, for general use, remember that these features are not necessarily widely implemented and that the protocol is likely to change rapidly.
http://www.webbasedprogramming.com/CGI-Developers-Guide/ch14.htm
CC-MAIN-2022-05
refinedweb
8,114
54.02
of an OS plumber The GNOME Shell developers appear to have no interest at present in supporting custom theming of the GNOME Shell. Frankly I do not blame the developers for taking this position as they had more than enough work on their hands getting the first release of the GNOME Shell stabilized and out the door and have lots of work to do to complete the next major version of the GNOME Shell for GNOME 3.2. I would have done the same. According to Owen Taylor, maintainer of the GNOME Shell: From the perspective of the GNOME Shell team, GNOME Shell themes are both not interesting and not supportable … we cannot commit to any stability of the CSS class names or actor hierarchy. However that has not stopped pathfinders from developing custom themes for the GNOME Shell and tools for installing and using these custom themes. A theme is a general term for all of the artwork & the color schemes used when rendering everything you see in a GNOME desktop. Themes are mostly static collections of image files that are used to construct window frames and widgets. A theme engine is a code module that uses a theme to style the look of the widgets that are drawn in your desktop. Two different theme engines could use the same theme files but render different visual results. Engines are more complex to create but can render better visual effects. Current public support for Shell theming is limited to the user-theme extension in the GNOME Shell extensions repository which was set up by Giovanni Campagna some months ago and the GNOME Tweak Tool by John Stowers. While both of these solutions enable custom theme support, neither of these solutions is satisfactory as far as I personally am concerned. The user-theme extension requires a user to use either gsettings or gconf to specify a specific theme and does not provide a visual preview of of the theme. While the gnome-tweak-tool is an excellent tool in many respects and has a long life ahead of it, it is problematic as far as theming is concerned, does not provide a theme preview and requires you to manually reload the GNOME Shell to see the new theme. I decided that I would explore embedding theme selection functionality directly into the GNOME Shell as another option in the Activities screen. A preview and some information is provided about each of the available themes. Clicking on a theme selects and activates the theme there and then. You can quickly cycle through the supported themes, select a theme and see the resultant presentation styling changes immediately.. Here are a couple of screenshots of the themeselector extension in action using five of Half-Left‘s excellent GNOME Shell themes together with the default Adwaita theme: There is a slight problem with the Dark Glass theme. The titles are actually displayed but they are displayed in black on a black background. This all works because each theme is located in its own subdirectory under a themes directory and contains all the files and data necessary for that theme: $ ls -l themes total 28 drwxrwxr-x. 2 fpm fpm 4096 Apr 25 12:47 Adwaita drwxr-xr-x. 2 fpm fpm 4096 Apr 25 12:48 ANewHope drwx------. 2 fpm fpm 4096 Apr 25 12:48 Atolm drwxrwxr-x. 2 fpm fpm 4096 Apr 25 12:47 DarkGlass drwxrwxr-x. 2 fpm fpm 4096 Apr 25 12:48 DeviantArt drwxrwxr-x. 2 fpm fpm 4096 Apr 25 12:48 Elementary drwxrwxr-x. 2 fpm fpm 4096 Apr 25 12:48 SmoothInsert $ cd themes/Atolm $ ls -l total 396 -rw-r--r--. 1 fpm fpm 3413 Apr 13 15:04 calendar-arrow-left.svg -rw-r--r--. 1 fpm fpm 3414 Apr 13 15:04 calendar-arrow-right.svg -rw-r--r--. 1 fpm fpm 4413 Apr 11 20:59 close.svg -rw-r--r--. 1 fpm fpm 4337 Apr 11 21:01 close-window.svg -rw-r--r--. 1 fpm fpm 1315 Mar 22 03:55 corner-ripple.png -rw-r--r--. 1 fpm fpm 3013 Mar 22 03:55 dash-placeholder.svg -rw-r--r--. 1 fpm fpm 3401 Apr 11 21:15 filter-selected.svg -rw-rw-r--. 1 fpm fpm 245 Apr 25 12:48 metadata.json -rw-rw-r--. 1 fpm fpm 231713 Apr 22 23:57 preview-atolm.png -rw-r--r--. 1 fpm fpm 4097 Mar 22 03:55 process-working.png -rw-r--r--. 1 fpm fpm 10056 Mar 22 19:30 process-working.svg -rw-r--r--. 1 fpm fpm 36293 Apr 13 15:03 stylesheet.css -rw-r--r--. 1 fpm fpm 15545 Apr 9 19:21 toggle-off-intl.svg -rw-r--r--. 1 fpm fpm 14295 Apr 9 19:16 toggle-off-us.svg -rw-r--r--. 1 fpm fpm 13564 Apr 9 19:22 toggle-on-intl.svg -rw-r--r--. 1 fpm fpm 15358 Apr 9 19:19 toggle-on-us.svg -rw-r--r--. 1 fpm fpm 3409 Apr 11 18:23 ws-switch-arrow-down.svg -rw-r--r--. 1 fpm fpm 3252 Apr 11 18:22 ws-switch-arrow-up.svg $ cat metadata.json { "name": "Atolm", "author": "Half-Left", "version": "1.0", "type": "custom", "thumbnail": "preview-atolm.png", "stylesheet": "stylesheet.css", "url": "" } Other optional supported tags include disabled which when set to boolean FALSE indicates that a theme should not be displayed by this extension, and shell-version and gjs-version which can be used to restrict a theme to a particular GNOME Shell or GJS (GNOME JavaScript) version. You probably will never need to use either shell-version or gjs-version but I felt that it was better to build in this support just in case. All necessary metadata about the theme is stored in each theme’s metadata.json. Note that this enables the theme preview image and theme stylesheet to be named anything you like. The themeselector extension reads each theme’s metadata.json file and can figure where to find the theme preview image and stylesheet. By the way, the XDG Base Directory specification is silent on the issue of where theme data should reside. Both the user-theme extension and gnome-tweak-tool expect theme files to reside under $HOME/.themes/’THEMENAME/gnome-shell/ and expect the theme stylesheet to be named gnome-shell.css. I disagree with that location and think that GNOME Shell theme files should live under $XDG_DATA_HOME, for example, $HOME/.local/share/gnome-shell/themes, just as GNOME Shell extensions live under $HOME/.local/share/gnome-shell/extensions. Let the flame wars start. I will put on my asbestos suit! You can download this version of the themeselector extension here. It contains all five Half-Left themes shown above. Place the downloaded tarball in $HOME/.local/share/gnome-shell/ and unpack it. A new directory themeselector@fpmurphy.com will be created under $HOME/.local/share/gnome-shell/extensions to contain the extension code. A new directory called themes will be created under $HOME/.local/share/gnome-shell/, and under this directory a series of directories will be created, one per theme. You need to install the following schema for org.gnome.shell.user-theme in a file called /usr/share/glib-2.0/schemas/org.gnome.shell.extensions.user-theme.gschema.xml <schemalist gettext- <schema id="org.gnome.shell.extensions.user-theme" path="/org/gnome/shell/extensions/user-theme/"> <key name="name" type="s"> <default>""</default> <summary>Theme name</summary> <description>Name of the custom theme</description> </key> </schema> </schemalist> If the user-theme extension is installed, this file will already be present in /usr/share/glib-2.0/schemas. If you manually install it, you must to compile the new schema using glib-compile-schemas. See the glib-compile-schemas man page for further information if you are unfamiliar with this utility. After restarting your GNOME Shell, you should see the Themes option in the Activities overview screen. This is still beta software – so make sure you first back up $HOME/.local/share/gnome-shell/ if you have anything important that you need to preserve. Currently global custom themes are not supported, only per-user custom themes. I plan to support global custom themes in a future release in the next week or two. Enjoy! Please let me know about any problems you encounter. [28 APRIL 2011] NOTE: The location of theme files and names of files as shown in this post will almost certainly change in the final version of the themeselector extension. I am working to get consensus among interested parties on a specification for GNOME Shell theme packaging and theme selectors. Once this is achieved I will update the extension and this post. [30 APRIL 2011] NOTE: A new version (v0.9) of the themeselector extension is available here. Please use this version instead of themeselector-0.8.tar.tz. Unpack in a temporary directory. Please read the README file for installation instructions. Do not follow the above instructions. Note that the location of the themes and the theme metadata file has changed in this version. [1 NOVEMBER 2011] NOTE: This version of the themeselector extension only works with version 3.0 or version 3.1 of the GNOME Shell. P.S. If you have found this post via an Internet search, you might be interested to know that I have written a number of other posts in this blog about configuring and extending the GNOME 3 Shell. If gnome-tweak-tool requires the shell to be restarted then that is a bug (as it works for me / used to work for me). Can you please file a bug and attach a theme so I can fix it. You have a dependency on another extension being enabled and working. I do not use the user-theme extension but have the extension schema installed. You could just as easily reload the shell from within gnome-tweak-tool. Works fine for me. From tweak_shell.py: def _shell_reload_theme(self): #reloading the theme works OK, however there are some problems with reloading images. # #however, smashing the whole shell just to change themes is pretty extreme. So we #just let the user-theme extension pick up the change by itself # #self._shell.reload_theme() #self.notify_action_required( # “The shell must be restarted to apply the theme”, # “Restart”, # lambda: self._shell.restart()) pass I’ve been looking for something like this, unfortunately it crashes my desktop after logon. Extracting the tarball doesn’t automagically create the directories as explained in your instructions, however I relocated them as described. Added the snippet of source to the bottom of the /usr/share/glib-2.0/schemas/org.gnome.shell.user-theme.gschema.xml. Also did the glib-compile-schemas process for good measure. I repeated the process twice & got the same crash both times. I used the recovery console to delete the extension and was able to return to the desktop. Whenever I have attempted to install gnome-tweak-tools (this is my 4th or 5th try with gnome 3 on ubuntu) it always shows “User theme extension not installed” (something similar). In addition, since installing this extension gnome-tweak-tools will not load any more. I appreciate what you are trying to do here and hope that whatever information I provide can be of some use to you. I am worried about the fact that you modified /usr/share/glib-2.0/schemas/org.gnome.shell.user-theme.gschema.xml to add something. If this file already existed, you should not have had to touch it. I suggest you revert back to the original version of this file, re-enable the user-theme extension (because gnome-tweak-tool has a dependency on it) and test gnome-tweak-tool again. Another way would be to blow away this file and the user-theme extension directory and reinstall the user-theme-extension and then test gnome-tweak-tool again. Please let me know how you get on. Ok, restored the schema file to original condition, gnome-tweak-tool is functioning again. However, I still have the “User theme extension not enable” message. This has been a recurring problem and is probably the root problem. I am able to change themes & icons with gnome-tweak-tool despite that message. Additionally whenever I have tried to download the git and make the gnome-shell-extensions it always gives me recursive errors. I end up copying the extensions to the .local/share/gnome-shell/extensions directory manually and they all appear to work. The Fedora & Arch users seem to be further along with Gnome 3 than the Ubuntu users at this point. Could it simply be an Ubuntu problem? I haven’t found any good resources for marrying Ubuntu & Gnome 3 since everyone else seems so taken with Unity at this point. It would appear that when trying to ./autogen.sh the gnome-shell-extension gnome-desktop-3.0 is not found. When I look in my package manager it doesn’t exist. The closest thing I have is gnome-desktop3-data. Obviously I will need a way to overcome that problem to cleanly install the shell extensions before I can properly use your extension, which I am anxious to do. I apologize for wasting your time trying to debug what is obviously a fault in ubuntu. I’m damn near ecstatic, thanks to Gayan over on I was able to successfully install the gnome-shell-extensions using this: git clone cd gnome-shell-extensions ./autogen.sh –prefix=/usr make && sudo make install After that it was a simple matter of copying your extension directory to the proper location & restart the system. Your theme selector works great! Many thanks. ah, you are on Ubuntu, not Fedora! Correct, there appear to be some slight differences installing the gnome-shell-extensions in Ubuntu compared to Fedora. I consider it a “bug” in Ubuntu but it’s probably more ignorance on my part. I noticed over on Deviant Art that you will probably get this integrated with gnome-tweak-tools, I want to commend you all for applying your talents to help the community. It’s people like you that make learning & using linux such a pleasure. I maintain the GNOME Shell Extensions package in Fedora along with a couple of others and this one works well for me. Was wondering if you were planning on submitting it upstream? I would like to enable this extension for Fedora and if this remains a separate source, it has to go through a separate package review which is tedious I am happy to provide the extension to the Fedora community (and anybody else) but would like to maintain it as a separate RPM (at least for now) for a number of reasons. One – it is much bigger than the existing extensions in gnome-shell-extensions by a factor of 5 or more because of theme information. Two – it clashes with the existing user-theme extension (which should be modified or removed because it does not play fairly with other extensions that play with themes and forces tools like gnome-tweak-tool to have a dependency on it. Currently there is no agreement on shell theme packaging or how shell theme selectors should interact with shell theme packages. I am working with a number of extension and theme developers to try and agree such a specification. This will eliminate such issues in the near future. @fpmurphy, the user-theme extension might be icky, and it might be rude for gnome-tweak-tool to depend on it, but IMHO the real fix is to move the user-theme extension into gnome-shell proper, thus fixing all problems. Then theme selectors like yours, and g-t-t could just change the gsettings key at will, and the shell will pick up the change. Actually, even just moving the schema into gnome-shell would be sufficient. I suggest you file a bug, then we can discuss it with upstream there. John Good idea, John. Will do Great! You mention that the xml file is /usr/share/glib-2.0/schemas/org.gnome.shell.user-theme.gschema.xml, do you mean /usr/share/glib-2.0/schemas/org.gnome.shell.extensions.user-theme.gschema.xml as that is what is installed from gnome-shell-extensions-user-theme-3.0.1-1.f016b9git.fc15.noarch @Brian, Yes, thanks for pointing it out. Fixed in post Hi Thanks for your good work ;) The code of schema file has an error: it would be I can confirm that this also works in LMDE with experimental debian sources..I had to do exactly the same thing that Charles Bowman did with his Ubuntu install. Works very well–no surprises. I commend you on a job very well done!!!! I’m including the link to this page in my Gnome-Shell thread: Greetings again!!!! A “feature” request if I may…….I have 10 themes installed now & the selection window will not scroll to “see” the 10th theme..looks like it will only allow 9 themes to be visible. Another thought would be to allow the previews to resize smaller if there are more than 9 themes….. What do you think? THANKS!!!!! Greets again… I “mucked” about with your extension…changed the number of Columns to 6, grid spacing to 5, grid padding to 15 & resized the thumbnails to 64×48. I now have space for 24 themes….all still look good enough to see the differences for a preview… I do hope that this won’t be enough space very soon…… Cheers!!!! Last for the evening—reverted the grid spacing & padding–reduced the columns to 5–left the thumbnail sizing the same…enough for 20 themes. Sorry for my terrible english. I have a little problem. I’m on arch i686 and when I install your extension and change the theme, my dock extension has no background, even if I return to Adwaita theme. If I remove your extension everything goes normal. Nobody else? thanks for the work, after installing gnome-shell-extensions-git, alt + tab keys do not work anymore. solution exists I’ve read and applied a large portion of what you posted in this, and the older articles. Thanks again, my Gnome 3.0 experience on Ubuntu has benefited greatly from your information & extensions. I’m sharing what I’m learning and actually created a custom default Ubuntu gnome 3.0 remaster which I’m sharing with as many people as possible. (gNatty Gnome) I tried installing this on Ubuntu 11.04 with Gnome 3 installed from the ppa. When I restarted my gnome session, the desktop immediately crashed, and forced me to logout. I removed the theme selector files in recovery mode, and the desktop booted again. Any idea how I can fix this? I was using the latest version (0.9). Is there any way to remove the panel on the right of the desktop?or how can i manually remove the shell extensions? i deleted the folder with the files but the panel is still there. thanks! Colin, do you have the gnome-shell-extension-user-theme installed? If you click on my name you will be transported to a thread on ubuntu forums where you might get a little assistance oriented to your distro. While the extensions are distro agnostic ubuntu seems to need a little extra TLC at this early stage of integrating Gnome 3.0 The download 404’s. :( I really wanted to install this extension. It is now at. I just set up this directory for shell extensions and you got caught in the move. Is there a way to scroll through the themes? I have more than what is displayed on the screen and it doesn’t seem to allow me to scroll down at all. […] here. Then, that blog also has an eye catching way to change the Gnome theme, which is documented here. I thank the authors of all of these posts, which helped install and improve my Gnome Shell. I […] I have available, only 3 out of 5 themes, what to do? У Ð¼ÐµÐ½Ñ Ð´Ð¾Ñтупно только 3 темы из 5, что делать? I’m on F15. I tried theme selector from repo, it doesn’t work. However, I did try yours, it work pretty well. Thanks! Any chance for you to modified the ‘dock’ to have auto-hide features? If not, I might stick with awn for now :) I have 8 themes but gnome-shell-extension-user-theme showing only six themes, How can i fix this? I need to add scrolling to the themeselector window to support this. I plan to replace a new version of themeselector in the next few weeks. Keep an eye on http:// fpmurphy.com/gnome-shell-extensions. i’m on F15 i use theme selector from here version 0.9, it is very useful tool thanks for taking the time write such tool, if you are open to suggestion then : – themes vertically exceeds to the panel area if you have more than 6 theme. – add a scrollbar to the list something similar to the “Apllication selector”. problems : – only one problem when i choose a different theme other than Adwaita then restart the pc it return back to Adwaita and after i click the activities then quite it return back to the selected theme, regards Sounds like you have the user-theme extension installed. If you have it installed, go to /usr/share/gnome-shell/extensions and delete the user-theme@… subdirectory. Do not uninstall it using yum. You need the gsettings key/gschema component of the user-theme package. Thanks for the suggestions. I will try and incorporate them in the next version of themeselector which I plan to start work on shortly. user-theme extension deleted not using yum, it seems it did the trick it’s working like a charm thanks now. well cannot wait to get it till then hope you have a good days. definitely will keep my eye on your blog. Is there a way to change the preview size in theme selector cause i’m not be able to select another themes when it reached more than 9 themes ???? For Ubuntu 11.04 how to use gnome-shell without any problem with extensions ?!! sudo add-apt-repository ppa:gnome3-team/gnome3 && sudo apt-get update && sudo apt-get dist-upgrade sudo apt-get install gnome-shell gnome-tweak-tool sudo apt-get purge unity scrollbar* gnome-accessibility-themes sudo apt-get install gnome-themes-standard Then logout and back in using the GNOME classic (No effects) Session.(no need if you installed your 3d driver just use GNOME Session) go to additional drivers install recommended driver restart your sys choose GNOME Session then login sudo add-apt-repository ppa:ricotz/testing && sudo apt-get update && sudo apt-get dist-upgrade sudo apt-get install gnome-common gnome-shell-extensions-user-theme now you’r ready: 1. go to then download themeselector-0.9.tar.gz and extract it 2. creat new dir call it themeselector@fpmurphy.com then move extension.js and metadata.json in it 3. move themeselector@fpmurphy.com dir to $HOME/.local/share/gnome-shell/extensions/ 4. move the themes dirs to $HOME/.themes/ 5. hit alt+F2 ,r Now .. Theme selector work without any problem and shell extensions will work aether in gnome-tweak-tool I hope that will help you ,, Peace FBML Note: fpmurphy thanks for your extensions it’s great thanks 2 much How does it work for Ubuntu 11.10? regards True that the developers have much work they need to be doing and it is great the others like you pick up the slack. This is the best thing ever invented! Gnome3 + This tool is awesome! :D Does not work under Natty/Gnome Shell 3.1.90.1: Missing init function Thanks for the update. I would not expect it to work under Natty or GNOME Shell 3.1.90. It will be updated when GNOME Shell 3.2 is released on one or more distributions. Arch Linux has it now in extra repo :D Just rename the function ‘main’ to ‘init’ in the source; that error goes away. No idea why that name was changed internally. But that still does not fix the issue entirely; because another error appears: Main.overview.viewSelector is undefined. The reason for that might be explained by this: Main.overview.viewSelector is undefined Main.overview._viewSelector for GNOME Shell 3.2 Do you plan to update Theme Selector for GS 3.2 soon ? Thanks the link for extension reports 404. By the way, Aare you planning to port it to GS 3.2? Please change your download link to in your note on 30th April. The link is broken. Thanks Done. The 3.0 extensions were all moved to some time ago. I forgot about the embedded links in that particular post. Enter your email address
http://blog.fpmurphy.com/2011/04/gnome-shell-theme-selector-preview.html
CC-MAIN-2022-05
refinedweb
4,196
66.13
Opened 5 years ago Closed 5 years ago #33352 closed defect (invalid) port goes berserk uninstalling Description port 2.0.99 Mac OS X 10.5.8 MacBook Pro Intel Core Duo Why does uninstalling four py25-* ports (I no longer have python 2.5 installed and I've been gradually getting rid of its dependents) nuke all sorts of apps and even some Perl ports? Note that eventually we run into a dependency and can't uninstall some stuff. Something is seriously wrong here! I don't actually care about any of the uninstalled ports (any more), as this system is quite old and I don't use it for state-of-the-art stuff any more. But acl2, gnupg2, and aquaterm were all installed voluntarily, not as dependencies of some port in that list. wideload:src/MacPorts 21:09$ sudo port -u uninstall py25-{numpy,nose,cairo,gtk} Password: ---> Deactivating py25-gtk @2.22.0_1 ---> Uninstalling py25-gtk @2.22.0_1 ---> Deactivating py25-cairo @1.8.2_1 ---> Uninstalling py25-cairo @1.8.2_1 ---> Deactivating py25-numpy @1.6.1_1+atlas+gcc44 ---> Cleaning py25-numpy ---> Uninstalling py25-numpy @1.6.1_1+atlas+gcc44 ---> Cleaning py25-numpy ---> Deactivating py25-nose @1.1.2_1 ---> Cleaning py25-nose ---> Uninstalling py25-nose @1.1.2_1 ---> Cleaning py25-nose ---> Uninstalling acl2 @3.2_0 ---> Uninstalling acl2 @3.4_0 ---> Uninstalling aquaterm @1.0.1_4 ---> Uninstalling giflib @4.1.6_0 ---> Uninstalling gnupg2 @2.0.12_0 ---> Uninstalling octave @3.2.4_3+atlas+gcc43 ---> Uninstalling hdf5 @1.6.9_0 ---> Uninstalling hs-hashed-storage @0.4.11_1 ---> Uninstalling hs-HTTP @4000.0.9_0 ---> Uninstalling lzmautils @4.32.7_0 ---> Uninstalling p5-error @0.17016_1 ---> Uninstalling p5-locale-gettext @1.05_5 ---> Uninstalling p5-xml-simple @2.18_1 ---> Uninstalling p5-xml-sax-expat @0.40_2 ---> Uninstalling p5-xml-sax @0.96_2 ---> Uninstalling p5-xml-namespacesupport @1.11_2 ---> Uninstalling p5-xml-parser @2.40_1 ---> Uninstalling py25-bz2 @2.5.4_0 ---> Uninstalling py25-curses @2.5.4_1 ---> Uninstalling py25-hashlib @2.5.4_0 ---> Unable to uninstall py25-setuptools @0.6c11_0, the following ports depend on it: ---> py25-lxml @2.2.2_0 Error: port uninstall failed: Please uninstall the ports that depend on py25-setuptools first. Change History (1) comment:1 Changed 5 years ago by macsforever2000@… - Resolution set to invalid - Status changed from new to closed Note: See TracTickets for help on using tickets. There's no bug here. You used the -u flag which means to also uninstall inactive ports such as the old versions of acl2 that you had installed but not active because they were updated. Eventually it hit "py25-setuptools" and it could not be uninstalled because py25-lxml is active and depends on it. Follow-up to the Macports-Users mailing list if you have further questions about this.
https://trac.macports.org/ticket/33352
CC-MAIN-2017-09
refinedweb
459
72.02
Opened 11 months ago Closed 11 months ago #27415 closed defect (fixed) py3: algebras/lie_algebras Change History (12) comment:1 Changed 11 months ago by - Branch set to u/jhpalmieri/lie_algebras_py3 comment:2 Changed 11 months ago by - Commit set to f09522169f9353ddd2b32d7d3689e1b0ce1c49d2 - Status changed from new to needs_review comment:3 Changed 11 months ago by - Commit changed from f09522169f9353ddd2b32d7d3689e1b0ce1c49d2 to 7c693ea147f4ab05298c6292cb7074d16a78abd3 Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits: comment:4 Changed 11 months ago by I don't understand why you create the _sorting_key attribute? Why not just use _basis_key (which is guaranteed to exist from the category)? Nice catch on that bug in _bracket_, which then with using _basis_key would become -if key_ml < key_mr: +if self._basis_key(key_ml) < self._basis_key(key_mr): comment:5 Changed 11 months ago by If I use _basis_key, I get this: sage: d = lie_algebras.VirasoroAlgebra(QQ) sage: d._basis_key(3) 3 sage: d._basis_key('c') 'c' As a result, sorting doesn't work between them in Python 3. The _sorting_key attribute as I've defined it uses the function _basis_key in that file, which converts c to +Infinity. comment:6 Changed 11 months ago by I could instead add a new _basis_key method: def _basis_key(self, m): return _basis_key(m) comment:7 Changed 11 months ago by - Commit changed from 7c693ea147f4ab05298c6292cb7074d16a78abd3 to 6f3c8e7578463898ab964abfa985f3ad47442f15 Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits: comment:8 Changed 11 months ago by The code in categories/lie_algebras_with_basis.py already uses self._basis_key(...), so I added a _basis_key method for the Virasoro algebra. comment:9 Changed 11 months ago by - Type changed from PLEASE CHANGE to defect comment:10 Changed 11 months ago by - Reviewers set to Travis Scrimshaw - Status changed from needs_review to positive_review I think this is a much better solution. Thank you. comment:11 Changed 11 months ago by I agree, thanks for the suggestion. comment:12 Changed 11 months ago by - Branch changed from u/jhpalmieri/lie_algebras_py3 to 6f3c8e7578463898ab964abfa985f3ad47442f15 - Resolution set to fixed - Status changed from positive_review to closed New commits:
https://trac.sagemath.org/ticket/27415
CC-MAIN-2020-05
refinedweb
352
62.07
XslCompiledTransform.Load Method (String, XsltSettings, XmlResolver) Loads and compiles the XSLT style sheet specified by the URI. The XmlResolver resolves any XSLT import or include elements and the XSLT settings determine the permissions for the style sheet. Assembly: System.Xml (in System.Xml.dll) Parameters - stylesheetUri - Type: System.String The URI of the style sheet URI and any style sheets referenced in XSLT import and include elements. The XslCompiledTransform class supports the XSLT 1.0 syntax. The XSLT style sheet must use the namespace. An XmlReader with default settings is used to load the style sheet. DTD processing is disabled on the XmlReader. If you require DTD processing, create an XmlReader with this feature enabled, and pass it to the Load method. The following example loads a style sheet that is stored on a network resource. An XmlSecureResolver object specifies the credentials necessary to access the style sheet. //; // Load the style sheet. xslt.Load("", null, res.
https://msdn.microsoft.com/en-us/library/ms163426(v=vs.90).aspx
CC-MAIN-2015-14
refinedweb
156
69.18
This section demonstrates you the use of file separator. A file separator is a character that is used to separate directory names that make up a path to a particular location. It is operating system dependent. On Microsoft Windows, it is a back-slash character (\) while on Mac OS and unix based operating system, it is forward slash(/). By using the file.separator key from the system property, you can avoid checking for the OS. In the given example, we have used file.separator to make up a path. Here is the code: import java.io.*; public class PathSeparator { public static void main(String[] args) { String path = "C:"; String pathSep = path + File.separator + "Hello" + File.separator; System.out.println(pathSep); } } Output: Advertisements Posted on: June+
http://www.roseindia.net/tutorial/java/core/files/fileseparator.html
CC-MAIN-2015-27
refinedweb
125
53.27
Be the first to know about new publications.Follow publisher Unfollow publisher Widener University Delaware Law School - Info Spread the word. Share this publication. - Stack Organize your favorites into stacks. - Like Like this publication. Widener Law Magazine Spring 2007 A Magazine for Alumni & Friends - Contents: Guilty as charged? Mistaken identity, science, and criminal law. The technology of surveillance. Twelve years of service from the Criminal Defense Clinic. Mistaken eyewitness identification and wrongful convictions. WIDENER UNIVERSITY SCHOOL OF LAW VOLUME 14 NUMBER 1 SPRING 07 Guilty as charged? Mistaken identity, science, and criminal law The technology of surveillance Twelve years of service from the Criminal Defense Clinic Mistaken eyewitness identification and wrongful convictions WIDENER UNIVERSITY SCHOOL OF LAW VOLUME 14 NUMBER 1 SPRING 07 Widener University School of Law Board of Overseers Eugene D. McGurk, Esq. ’78, Chair Associate Provost and Dean, Linda L. Ammons, Ex Officio Steven P. Barsamian, Esq. ’75, Ex Officio C. Grainger Bowman, Esq. The Honorable M. Jane Brady The Honorable Peter John Daley II ’93 Michael G. DeFino, Esq. ’75 The Honorable Susan C. Del Pesco ’75 Jeff Foreman, Esq. ’94 Geoffrey Gamble, Esq. Jacqueline G. Goodwin, EdD The Honorable Philip A. Gruccio Vice Dean Russell A. Hakes, Ex Officio President James T. Harris III, DEd, Ex Officio Richard K. Herrmann, Esq. Justice Randy J. Holland Andrew McK. Jefferson, Esq. ’93 Peter M. Mattoon, Esq. Kathleen W. McNicholas, MD, JD ’06 George K. Miller Jr., Esq. ’81 The Honorable Charles P. Mirarchi Jr. The Honorable Donald F. Parsons Jr. Joanne Phillips, Esq. ’87 Vice Dean Loren D. Prescott Jr., Ex Officio Thomas L. Sager, Esq. The Honorable Thomas G. Saylor John F. Schmutz, Esq. Susan E. Schwab, Esq. ’92 The Honorable Gregory M. Sleet The Honorable Lee A. Solomon ’78 Jack M. Stover, Esq. Donald P. Walsh, Esq. The Honorable Joseph T. Walsh John A. Wetzel, Esq. ’75 Widener University School of Law Magazine Published by the Office of University Relations Executive Editor: Lou Anne Bulik Editor: Debra Goldberg Contributing Writers: Mary Allen, Jules Epstein, Debra Goldberg, Russell Hakes, Stephen Henderson, Rosemary Pall, Loren Prescott, Judith Ritter, Sandy Smith, Leonard Sosnov Photography: Mary Allen, Ashley Barton, Jim Graham, Rosemary Pall, Nancy Ravert-Ward Magazine Advisory Board: Mary Allen, Linda L. Ammons, Lou Anne Bulik, Paula Garrison, Debra Goldberg, Michael Goldberg, Susan Goldberg, Russell Hakes, Deborah McCreery, John Nivala, Rosemary Pall, Loren Prescott, Liz Simcox, Constance Sweeney Contents 28 Delaware Supreme Court Hears Arguments at Widener Law Innocent and Found Guilty How can we prevent wrongful convictions? 4 The Widener Law Criminal Defense Clinic: Twelve Years of Making a Difference 6 2 Dean’s Message 18 Faculty Publications 22 Faculty News 27 Legal Briefs CRIMINAL LAW/LITIGATION 32 Alumni Impact 38 Success Stories 46 Alumni Events 50 Class Notes The Technology of Surveillance: Will the Supreme Court’s Expectations Ever Resemble Society’s? I’ll Never Forget That Face (But I Might Not Remember It Accurately) Mistaken eyewitness identification testimony 13 10 14 Selma Hayman ’86, Giving Back to Widener Law WIDENER LAW 1 A message from the dean “The highlight of my first semester as dean was participating in the bar swearing-in ceremonies and receptions for hundreds of our students who are now members of the bar in Pennsylvania, New Jersey, and Delaware.” D E A R A L U M N I A N D F R I E N D S : My first year as your new dean has been rewarding and filled with activity here at the Law School. As you will discover as you read this issue of the magazine, Widener Law is executing its vision, which states, in part, “Widener University School of Law aspires to be a synergy of diverse and highly qualified students interacting with dedicated scholars, teachers, and practitioners in a vibrant, student-centered environment.” The theme for this edition is criminal law and litigation. Our resident scholars, Professors Len Sosnov, Jules Epstein, Stephen Henderson, and Judy Ritter, provide informative articles on topics ranging from mistaken witness identification to how courts are applying the Fourth Amendment in issues of technology and surveillance. ■ Our students are getting real-world experience in a variety of forums. Professors Ritter and Arlene Rivera Finklestein’s sojourn to New Orleans over winter break with nine Widener-Delaware law students to assist in protecting the rights of the accused is featured. Externs in Harrisburg talk about their experience at the Dauphin County District Attorney and Public Defender offices. David Sunday, a third-year student on the Harrisburg campus, shares his thoughts about the summer he spent as an intern at the United Nations. Our alumni litigators in Philadelphia are also featured, Sharon Caffrey ’87, a partner with Duane Morris; Eugene McGurk ’78, a partner at Raynes McCarty; James Golkow ’86, a partner at Cozen O’Connor; Bernard Smalley ’80, a shareholder with Anapol Schwartz; and Larry Bendesky ’87, a shareholder at Saltz, Mongeluzzi, Barrett & Bendesky, talk about their experiences in the courtroom, give advice to actors who play lawyers in the movies, and explain what it means to be a Philadelphia lawyer. ■ There is so much good news; I cannot summarize it all here. However, read more about the Dean’s Leadership Forum inaugurated this past semester. Our first participant was New Jersey alumnus George Miller ’81, a successful attorney in private practice, businessman, and community leader. George packed our Ruby Vale Courtroom on the Delaware campus and spent a riveting hour talking about the difference Widener made in his life and his subsequent successes, including dealing with none other than Donald Trump. On the Harrisburg campus, U.S. District Court Judge John Jones lectured on his decision in the “intelligent design” case, and Ann Durr Lyon, niece of Justice Hugo Black, and whose parents — along with civil rights activist E.D. Nixon — bailed out Rosa Parks when she was arrested for not giving up her seat on a Montgomery bus, joined us to celebrate Dr. Martin Luther King’s birthday. ■ After the wonderful official welcoming ceremonies on both campuses last fall attended by so many of you, the highlight of my first semester as dean was participating in the bar swearing-in ceremonies and receptions for hundreds of our students who are now members of the bar in Pennsylvania, New Jersey, and Delaware. There is much more to tell, as you will discover when you read this issue. ■ Finally, I want to thank all of you who have been so gracious in making my transition to Widener Law so seamless and successful. At Widener Law, we are living our vision, and our best days are yet to come. ■ D E A N L I N DA L . A M M O N S 2 WIDENER LAW “The campaign to name the Alfred Avins Special Collections Library has now exceeded $200,000, and a dedication ceremony to name the room in honor of Dean Avins is scheduled for spring 2007.” DEAR ALUMNI AND FRIENDS: 2007 promises to be a great year for our law school. Our alumni are establishing themselves as leaders in the legal community and reaching new heights of success and achievement. On December 11, the Alumni of the Year Award was presented to Brian Preski ’92, former chief of staff to the Speaker of the Pennsylvania House of Representatives. Brian exemplifies the success that Widener graduates now achieve. The Outstanding Service Awards recipients are Yvonne Takvorian Saville ’92 and Scott Blissman ’97, and the Outstanding Young Alumni were Robert J. Sanders ’98 and William Higgins ’99. It is certainly gratifying to read what our alumni have done in such a short period of time. The annual Philadelphia Alumni Reception took place on March 22 at The Crystal Tea Room. Over 300 attend this annual event, including judges from many benches and counties in the tri-state area. If you missed the reception, please join your fellow alumni next year for this fabulous party, which notably excludes speeches and fundraising, and is all about reuniting with classmates and networking with colleagues. The campaign to name the Alfred Avins Special Collections Library has now exceeded $200,000, and a dedication ceremony to name the room in honor of Dean Avins is scheduled for spring 2007. Dean Avins, our founder, deserves this special recognition since, without his initiative and perseverance, there would be no Widener University School of Law, nor any Widener Law graduates. If you have not contributed to this campaign, please do so now, and be sure to attend the spring ceremony, which will also mark the seventh anniversary of his passing. Everyone associated with Widener Law owes a debt of gratitude to Dean Alfred Avins. Our new dean, Linda Ammons, has begun to implement her ambitious plans to propel Widener to the forefront of law schools, and you can look forward to much greater visibility for Widener Law. Dean Ammons brings her remarkable energy and talent to the helm of the law school, and we are already enjoying the fruits of her efforts. Her ability to connect with the administration, faculty, students, and alumni is invaluable in creating a unified and focused Widener image. Please join Dean Ammons and support our school in the way you can best serve! Alumni Association A message from the executive council WIDENER UNIVERSITY S C H O O L O F L AW A L U M N I A S S O C I AT I O N EXECUTIVE COUNCIL Steven P. Barsamian ‘75, President Frank C. DePasquale Jr. ‘86, Vice President Renae B. Axelrod ‘91, Secretary D I R E C TO RS Thomas R. Anapol ‘91 The Honorable Raymond A. Batten ‘79 The Honorable Robert S. Blasi ‘75 Scott E. Blissman ‘97 John F. Brady ‘91 Michael A. Burns ‘04 Christopher Cabott ‘05 The Honorable Richard M. Cappelli ‘81 John Cirrinicione, Student Bar Association President, Ex Officio Member Representative Mark B. Cohen ‘93 Bernard G. Conaway ‘89 Michael J. D’Aniello ‘83 Anna M. Darpino ‘06 The Honorable Michael A. Diamond ‘82 Brian P. Faulk ‘02 Kenneth D. Federman ‘93 Catherine N. Harrington ‘88 W. Bruce Hemphill ‘84 Michael J. Heron ‘03 Damian S. Jackson ‘96 John F. Kennedy ‘01 M. Susan Williams Lewonski ‘98 Kathryn A. Macmillan ‘78 Anne M. Madonia ‘94 Peter V. Marks Sr. ‘77 Cecilia M. McCormick ‘91 Jeffrey W. McDonnell ‘94 David C. McFadden ‘96 Frank J. McGovern ‘95 Maria C. McLaughlin ‘92 James F. Metka ‘80 The Honorable Paul P. Panepinto ‘76 Jonathan E. Peri ‘99 Dr. Stephen R. Permut ‘85 George T. Ragsdale III ‘92 Larry S. Raiken ‘75 Karen Ann Ulmer ‘95 Meghan L. Ward ‘03 S I N C E R E LY, S T E V E N P. B A R S A M I A N WIDENER LAW 3 CRIMINAL LAW/ADVOCACY Innocent and Found Guilty In an imperfect justice system, responsiveness and disclosure can help prevent wrongful convictions. By Professor Leonard Sosnov 4 WIDENER LAW When individuals charged with crimes are factually innocent, they usually get the verdict they deserve—not guilty. For others, there are guilty verdicts, long-term incarceration, and bleak prospects for vindication. Why does the system produce such results in these cases? This is partly because the very nature of the system is imperfect. There are no video cameras on when the crime took place, or physical evidence to test for DNA to tell us conclusively who the perpetrator is. Thus, the jury must sort out the testimony of witnesses who may be lying or sincerely mistaken. Other variables come into play, including the relative skills of the attorneys for the prosecution and the defense. It is no wonder then, that guilty persons are sometimes found not guilty, and innocent individuals sometimes convicted. There are, however, other factors that adversely affect the innocent, which are not natural by-products of an imperfect justice system. One problem is the tunnel vision some police and prosecutors possess once a crime is “solved” with an arrest. Any investigation before police concluded that the defendant was the perpetrator, even when fairly conducted, usually ceases once an arrest is made. The resources of the prosecutor and police are then directed toward building a case for conviction at trial. Not infrequently, this means ignoring leads and evidence, which may show that someone else did the crime. For example, often a crime is deemed “solved” when a crime victim, attacked by a stranger, identifies a picture of an individual from an array of photographs. The identified person is arrested as a result. In some cases, the victim (or eyewitness) attends a post-arrest lineup, in which police include the defendant. On occasion, the victim positively identifies an individual other than the defendant. Countless times, I have seen the same thing happen: The individual positively identified is not investigated, the identification is treated by authorities as a “mistake,” and the prosecution proceeds. Because defense counsel is present at the lineup, or otherwise informed of the result, the jury might be apprised that someone other than the defendant was identified as the perpetrator. In other situations, however, defense counsel is kept in the dark about evidence that might exculpate the client. The United States Supreme Court has held that a prosecutor’s duty is to seek justice, and therefore the Due Process Clause requires disclosure of any material evidence tending to show the defendant is innocent or which discredits the state’s witnesses. Because this is a self-policing obligation, overzealous police officers or prosecutors can bury significant evidence they are duty-bound to disclose. Once the innocent defendant is convicted, exoneration becomes much more difficult. The United States Supreme Court has held that, unlike some trial errors, a claim of innocence does not even raise a constitutional question that can be litigated in the federal courts. Reconsideration of a case is frequently unattainable in state courts as well, once the jury “has spoken”—no matter how uninformed or misled because of lawyer incompetence, prosecutorial misconduct, or other factors. Once, in a case where I had convincing new evidence of the defendant’s innocence, the prosecutor rejected it, telling me, “We have to respect the sanctity of the verdict.” Fortunately, in a few of these cases, relief is possible because a judge is concerned with justice, rather than finality. In a small percentage of cases, DNA testing can scientifically prove the perpetrator’s identity because physical evidence such as sweat, blood, saliva, or another bodily secretion may be tested. It is of vital importance that this evidence be preserved and made available for testing. While the law is Once the innocent defendant is convicted, exoneration becomes much more difficult. The United States Supreme Court has held that, unlike some trial errors, a claim of innocence does not even raise a constitutional question that can be litigated in the federal courts. generally improving, many states still have no provisions guaranteeing the preservation of this evidence for testing or re-testing as scientific methods advance. Even where the evidence is available, many states have statutes of limitations on testing requests, or difficult evidentiary hurdles. Post-conviction DNA testing, it is hoped, will become increasingly available. Both commentators and the courts have recognized DNA as nothing less than a truth machine that ensures justice. With the increasing availability of large DNA data banks, DNA testing has the potential not only to exonerate an innocent, incarcerated defendant, but also to lead to the arrest and conviction of the real perpetrator who has been free to commit more crimes. Additionally, if test results confirm the defendant’s guilt, society is served because any question of innocence has been put to rest. Our imperfect system needs to be more responsive to the possibility of error both before and after conviction. “Justice” system is a misnomer when there is not enough attention paid to fairly disclosing and analyzing all evidence in an effort to determine the truth. ■ Professor Sosnov teaches and writes in the areas of Criminal Law, Criminal Procedure, and Evidence. He has extensive litigation experience, including briefing and arguing two cases before the United States Supreme Court. WIDENER LAW 5 CRIMINAL LAW/ADVOCACY On the steps of the courthouse in New Orleans: From left, Rachel Ramsay, Danielle Graham, Dave Iannucci, Jessica Sanchez, Everett Gillison, Lisa Vetro, Nazim Karaca, and Eric Lubin. Also participating but not shown were students Brett Bendistis and Julie Serfess. Gillison is a public defender in Philadelphia who joined the students on the trip. 6 WIDENER LAW Widener Law Criminal Defense Clinic: Twelve Years of Making a Difference By Judith L. Ritter, Associate Professor BORN IN THE 1960S, CLINICAL LEGAL EDUCATION IS ROOTED IN THE NOTION THAT LAW STUDENTS BELONG ON THE FRONT LINES IN THE BATTLE FOR SOCIAL JUSTICE. Over time, law school clinics have served the dual functions of preparing students for the practice of law and providing legal assistance to underrepresented populations. When I joined the Widener Law faculty in 1994, I was offered the opportunity to start a new, live-client, in-house, criminal defense clinical program. “Live client” distinguishes the program from simulation courses. It means that students represent real defendants in real criminal prosecutions. “In-house” means that the clinic functions as a small firm housed at the law school. Looking back at our first twelve years, I am gratified that the Widener Criminal Defense Clinic has exemplified the founding principles of clinical education. My partner, Staff Attorney Romie Griesmer, and I follow a simple clinical education design: Through hands-on experience, a small caseload, close mentoring, and a “leave no stone unturned” philosophy, we teach students how to be thorough, prepared, and effective defense lawyers. Our goal is for clinic students to graduate having seen and practiced client-centered, zealous, and conscientious defense lawyering. Another significant feature of the clinic model is “team litigation.” Students work in teams that include a faculty member. Team meetings with routine brainstorming sessions provide valuable opportunities for students to appreciate the intellectual joy of creatively and carefully analyzing legal issues. Moreover, the legal product created by the team is superior to one provided by individual litigators. Certification from the Pennsylvania Supreme Court authorizes our students to function as first-chair lawyers. We start each semester with an intense training and orientation program that covers substantive law and legal skills. Once the students get their cases, they conduct client interviews, fact investigation, legal research, preliminary hearings, pre-trial discovery and motions, plea negotiations, guilty plea colloquies, pre-trial hearings, and trials. The cases run the gamut of those that a new public defender would encounter: assault, drug possession, DUI, terroristic threats, and thefts. What the students lack in experience, they make up for in preparation and passion enabling them to gain good results for our clients. In addition to the more routine cases, over the years the clinic has also taken on more unusual cases, as the following highlights demonstrate. R E P R E S E N TAT I O N O F C A P I TA L DEFENDANTS IN CERTIORARI PROCEEDINGS IN THE UNITED S T A T E S S U P R E M E C O U R T: The clinic has represented a number of Pennsylvania capital prisoners in petitions for certiorari to the United States Supreme Court. These projects provided the students with a breadth of experience as they mastered the formalities, rigors, and requirements of Supreme Court practice. The work strengthened their research skills, as they were required to immerse themselves in complicated, challenging, and often unfamiliar legal issues. Petitions prepared by the clinic presented issues involving the Confrontation Clause, jury instructions, and the Eighth Amendment’s prohibition WIDENER LAW 7 of cruel and unusual punishment. These petitions also allowed students to hone their writing skills as they drafted and re-drafted arguments and received much feedback and editing suggestions. Of course, the final product had to be perfect, and students learned how to exert maximum effort to accomplish that goal. Perhaps most important were the relationships students established with condemned prisoners. Students were expected to correspond with their clients and thereby learned something of the personalities behind the names. Indeed, when one cert client with sickle cell anemia suffered from hypersensitivity to the cold, a clinic student went through prison channels to get him extra blankets. When he died from his illness while his case was pending, the student attended the funeral, interacted with his family, and felt a loss. Clinic enrollment is always full because students recognize the enormous rewards the program offers— providing quality defense for the indigent, gaining first-chair lawyering experience, acquiring confidence, enhancing a resume, and employability— to name a few. Defense clinic students took on a couple of cases in which, years after a conviction and the imposition of life sentences, our clients sought the opportunity to be exonerated through DNA testing. In both cases the technology was not available at the time of their trials. While exoneration was not to be, students gained knowledge of the science of DNA, post-conviction law and practice, the challenges of investigating old cases, and techniques for counseling clients with diminishing options. DNA AND INNOCENCE: On New Year’s Day, nine third-year Widener Law students headed to New Orleans for a week of volunteering, to assist the Office GOING TO THE GULF: 8 WIDENER LAW Students Dave Iannucci, Danielle Graham, and Lisa Vetro see first-hand the devastation left by Katrina. of the Public Defender for Orleans Parish. As part of this project, a joint endeavor of the Criminal Defense Clinic and the Public Interest Resource Center, students assisted with and conducted bail motions on behalf of pre-trial detainees who have suffered, and whose cases have been neglected due to the chaotic state of the court system after Hurricane Katrina. Because a number of these students have just completed their Criminal Clinic experience, they have real-world experience doing this work and can provide truly valuable assistance. Much of the financial backing for this trip came from generous contributions from Widener Law alumni who rallied to our call for help. T W E LV E Y E A R S O F R E W A R D S : Clinic enrollment is always full because students recognize the enormous rewards the program offers—providing quality defense for the indigent, gaining firstchair lawyering experience, acquiring confidence, enhancing a resume, and employability—to name a few. Then there is the reward for the teachers. That comes when we accompany students to court appearances and see the pronounced growth and professional maturity demonstrated by our students. We feel enormous pride when we see our students conducting court proceedings as professional and persuasive advocates. ■ Judith L. Ritter is professor of law and director of the Criminal Defense Clinic at the Delaware campus. She teaches and writes in the areas of Criminal Law, Criminal Procedure and Post Conviction Remedies. CRIMINAL LAW/ADVOCACY The Spirit of Service While most law students across the country enjoyed a well-deserved break from their studies during the holiday season, a group of Widener Law students traveled to the Gulf Coast to assist the Public “We are excited to be taking Widener’s spirit of service on the road and putting it to work in Orleans Parish.” –Arlene Rivera Finkelstein, Director, Public Interest Resource Center Defender’s Office there, which is still trying to recover from the case backlog created by Hurricane Katrina. Top: Marks on boarded-up buildings around New Orleans show the flood level during Katrina. Middle: Nazim Karaca helps with paperwork to process Louisiana prisoners. During the first week of January, nine student volunteers conducted in-depth interviews with jailed defendants awaiting trial in Orleans Parish—one of the hardest-hit areas of the storm. Students drafted memos on their interviews and gave them to the Orleans Public Defender’s Office in an attempt to help move inmates’ cases forward. Widener Law professors Judy Ritter and Arlene Rivera Finkelstein accompanied and supervised the students. Ritter directs the Pennsylvania Criminal Defense Clinic on Widener’s Delaware Campus. Finkelstein directs the Public Interest Resource Center on the Delaware Campus. Both helped train the students in preparation for their trip, in conjunction with the “KatrinaGideon Interviewing Project.” The project is named after the landmark case, Gideon v. Wainwright, in which the U.S. Supreme Court recognized that states are constitutionally obligated to provide counsel to indigent criminal defendants. “We are excited to be taking Widener’s spirit of public service on the road and putting it to work in Orleans Parish,” said Finkelstein. “Our students will get a taste of the good they can do as attorneys, and while they are making this meaningful contribution, they will get real, practical experience for the future.” The Widener Law students joined students from Fordham and Brooklyn law schools in visiting Orleans Parish. However, Widener is the only Philadelphia-region law school to send students to the Gulf Coast for the Katrina-Gideon Interviewing Project during the winter break. Bottom: Students Lisa Vetro and Dave Iannucci work on a case. WIDENER LAW 9 CRIMINAL LAW/ADVOCACY …USA Today created quite a stir when, on May 11, 2006, it reported that the National Security Agency had obtained and was parsing the records identifying millions, if not billions, of telephone calls placed by Americans. The Technology of Surveillance: Will the Supreme Court’s Expectations Ever Resemble Society’s? By Stephen E. Henderson, Associate Professor. Consider some recent events and technologies, and the problem becomes clear. Why might Patricia Dunn soon be able to empathize with Martha Stewart? Because she has been indicted for her role in the Hewlett-Packard board-leak fiasco. In a nutshell, Dunn allegedly authorized and assisted in an investigation that relied upon an “information broker” to determine which board member or members were leaking confidential information to the press. How does an “information broker” obtain that information? She lies. But it isn’t pleasant to have to tell new acquaintances at a cocktail party that one lies for a living, so instead “information brokers” engage in “pretexting,” which means contacting phone companies, posing as customers, and thereby obtaining call records. HP also appears to have engaged in dumpster diving, shadowing, and other favorites in the snoop’s arsenal, but for our purposes we want to focus on pretexting. Naming aside, pretexting must be a pretty nasty business. Not only did it cost Dunn her chairman job, but the Attorney General of California charged her in a felony indictment and settled civil charges against the company for $14.5 million. The FBI investigated, the SEC instigated a review (admittedly for a tangentially related Sarbanes-Oxley issue), the House Committee on Energy and Commerce held hearings, the “governator” (Schwarzenegger) signed legislation explicitly criminalizing pretexting of telecommunications records, and the United States Congress considered — and might enact — the same. Whatever it takes to constitute a “reasonable expectation of privacy,” it must be satisfied with respect to dialing WIDENER LAW 11. records. After all, USA Today created quite a stir when, on May 11, 2006, it reported that the National Security Agency had obtained and was parsing the records identifying millions, if not billions, of telephone calls placed by Americans. And there is, in fact, a federal statute, the Stored Communications Act, which forbids such access absent legal process. Apparently, a reasonable American both should and would expect dialing records to be confidential. But according to the Supreme Court, there is no Fourth Amendment restriction on police accessing such records. They can be obtained for any reason, or for no reason. Mere curiosity will do. Why? Because to the Court, one who discloses information to a third party retains no reasonable expectation of privacy in that information (the “third party doctrine”). And we know we give those numbers to our phone company—how else are its switches to connect the call? So how about your bank records? As far as you are concerned, there is no constitutional constraint on government access.. What if the government wants to fly over your backyard to see what you do within that fence of yours? There is no constitutional constraint. And if the government wants to comb through your garbage, going so far as reconstructing shredded documents or testing a tampon for seminal fluid? There is no constitutional constraint. These are the cases law students learn. But it gets worse. The human body is constantly radiating energy. This in itself sounds worrisome, but unless you are at a temperature of absolute zero (so chilly, atoms stop vibrating), you are going to emit energy. We don’t see this energy because it isn’t in the visible spectrum, but it turns out the body is much more 12 WIDENER LAW emissive in the millimeter wave spectrum than most other objects, such as guns, knives, and particulates. And just as visible light transmits through glass, millimeter waves transmit through clothing. This allows police to carry what is in essence a video camera attuned to this spectrum and view what a person is carrying on his or her person from a distance. Does the Fourth Amendment restrict use of such a device? Not under the third party doctrine, because you knowingly (at least now you know) convey this information to others. And there are more banal examples. Consider to whom you disclose your e-mail messages. And how about your physical location? If you carry a modern cellular phone, you typically convey a very accurate location to your service provider not only when you are placing or receiving a call, but anytime the phone is turned on. And what of querying the mammoth databases amalgamating different types of information that we tend to hear about when they suffer security breaches? This is the magnum opus of the Court’s third party doctrine—the Court has removed all constitutional (legal) constraint, and technology has now removed any significant cost constraint. So what should the Court do? Obviously the third party doctrine must go, but it is admittedly difficult to replace this wonderfully bright-line rule with anything administrable. I have crafted a proposal, and interested readers can peruse it via my page at Widener’s Web site. But in the space I have here let me just say this: Last term the Supreme Court declared that “[t]he constant element in assessing Fourth Amendment reasonableness…is the great significance given to widely shared social expectations.” As the HP debacle demonstrates, the Court’s jurisprudence deviates sharply from actual expectations. Unless the Court changes course our Constitution will read like AT&T’s recently modified privacy policy, which explains that “[w]hile your account information may be personal to you, these records constitute business records that are owned by AT&T.” That might suffice for corporate America, but it shouldn’t do for our Constitution. ■ So what should the Court do? Obviously the third party doctrine must go, but it is admittedly difficult to replace this wonderfully bright-line rule with anything administrable. Stephen E. Henderson is associate professor on the Delaware campus, where he concentrates on intellectual property and criminal law. He received his JD from Yale Law School, where he co-founded the Yale Law and Technology Society. Giving Back to Widener Law Selma Hayman ’86 Selma Hayman ’86 feels that she has been able to effect meaningful change through the law. She is also generous and grateful to her alma mater, Widener Law, for the opportunities it gave her to do important and valuable work for the underserved. And so, in keeping with her style and her values, Hayman recently donated appreciated securities to fund a charitable gift annuity to benefit the law school. A charitable gift annuity is a planned gift which affords the donor substantial tax benefits, including a tax deduction at the time of the gift, while allowing the person to receive annuity payments. Hayman likes the charitable gift annuity vehicle because the annuity payments will supplement her income when she is retired, and she is pleased with the tax savings. In her words, “It should be noted that a portion of the annuity payment, because it is a return of principal, is tax free, and there are some tax savings if you donate appreciated property.” The charitable gift annuity vehicle is beneficial for the charitable institution as well, because the institution receives the remaining funds after the donor dies. In Hayman’s case, the gift will benefit a scholarship for minority students. “When I was in law school, there were only about two black students in my class, and I thought it was a major gap,” she explains. Although she realizes that minority enrollment at Widener Law has increased since that time, Hayman wants to encourage a more diverse population in the legal profession. A desire to “give back” to the law school as thanks for scholarship help she personally received as a student is also a motivating factor, she says. Hayman, who has a bachelor’s degree in biology from Antioch College and a PhD in biochemistry from the University of Wisconsin at Madison, entered Widener Law in 1983 at the age of 52 after a career in research. Of her decision to attend law school later in life, Hayman says, “I was on the board of the American Civil Liberties Union and saw that law school could be interesting and socially relevant.” She parlayed her law degree and her commitment to social justice into a career that has been devoted to representing clients who desperately need her help. Focusing primarily on elder law, Hayman has handled guardianships, cases involving nursing home rights, Medicare and Medicaid issues, and Social Security Disability claims and appeals. Elder law, according to Hayman, “turned out to be a good fit.” She recently noted, “An awful lot of what I do is helping people who have serious problems. I don’t deal with people with a lot of money. I send them to someone else. I didn’t go into the law to get rich.” Hayman’s philosophy on life and the law has indeed aided many clients during her career and—in light of her recent planned gift—it will also benefit many future law students. WIDENER LAW 13 The simple problem is that the mind is not a video or digital recorder; it neither perceives all details nor retains those it did see in a pristine, unalterable state. 14 WIDENER LAW CRIMINAL LAW/ADVOCACY I’ll never forget that face . . . (But I might not remember it accurately.) Mistaken eyewitness identification testimony is a leading cause of most wrongful convictions. By Professor Jules Epstein Mistaken identification cases are “high profile” in the media. This has resulted in significant part from DNA exonerations. The scientific conclusiveness of DNA as proof of innocence has permitted a retrospective assessment of “what went wrong” in those cases. And “what went wrong” in a substantial proportion of those cases was a reliance on eyewitness identification testimony. According to the Innocence Project, as of November 2006 “over 75% of the 183 post-conviction DNA exonerations in the U.S. involve mistaken eyewitness identification testimony, making it the leading cause of these wrongful convictions.” But it is not just in 183 cases. For more than a century, every study of wrongful convictions has shown that mistaken identification is the major culprit, usually at a rate of roughly two-thirds of the cases studied. And FBI statistics of DNA examinations in sex offenses show a startlingly high rate of mistaken identifications: in over 10,000 cases where crime scene DNA was tested against suspects’ DNA, the exclusion rate [the rate at which the DNA showed the suspect could not have contributed the evidence] was 20 percent. The numbers show the prevalence of this problem, but not its cause. And the cause has five demonstrable components — problems with perception and memory, improper police evidence gathering, juror over-valuing of eyewitness testimony, bad lawyering, and judicial decisions and practices that are contrary to the known science. Perception and Memory: The simple problem is that the mind is not a video or digital recorder; it neither perceives all details nor retains those it did see in a pristine, unalterable state. How do we know this? In the past three decades, over 2,000 peer-reviewed studies have shown that several factors impede accurate perception and recall: Weapons Focus: When a firearm or knife is present, crime witnesses look at the weapon, not at the perpetrator’s face. Own-Race Bias: It remains a sad but true fact that witnesses are better at identifying persons of their own race than of other races. Stress: Very high levels of stress impair the accuracy of eyewitness testimony. This is true even where the witness is a trained police officer or member of the military. The Memory Drop-Off: Accurate recall of an event drops sharply after a several hour period. Confidence/Accuracy: Although many witnesses maintain that they, personally, are “100% sure” of their identification, the correlation between their confidence and the accuracy of their identifications is low. Police Evidence Gathering: An abundance of studies has shown that when police conduct interviews or lineups, their words or gestures can contribute to mistaken identifications. Asking “did the man have a big mustache” may implant that feature onto the witness’ memory; and telling witnesses to look for “the perpetrator in the lineup” can suggest that the perpetrator is in the group being looked at, and thus cause the witness to pick someone, not necessarily the right person. Juror Over-valuing of Eyewitness Testimony: Jurors believe eyewitnesses (usually crime victims), as they seem sincere and have no apparent motive to lie or pick the wrong person. In one mock jury study, 72% of the jurors found the subject WIDENER LAW 15 The standard for suppressing eyewitness testimony as unreliable has no regard for how memory works or how police conduct may influence witnesses . . . guilty when there was one eyewitness. A separate set of jurors were given the same one-witness evidence and were told that the eyewitness was legally blind; the percentage of jurors voting guilty dropped only to 68%. Bad Lawyering: Too many cases involve lawyers who have not studied the psychology of eyewitness evidence and who use cross-examination techniques designed to expose the dishonest witness when what they are confronting is an honest but mistaken witness. Judicial Decisions: The law’s development is currently running twenty to thirty years behind the clear science. The standard for suppressing eyewitness testimony as unreliable has no regard for how memory works or how police conduct may influence witnesses. Jury instructions also fail to keep pace with the science, and many jurisdictions prohibit or limit the use of expert witnesses to explain why an eyewitness’ claim might be unreliable. So, where and how are remedies being sought? Many legislatures, police departments, and state agencies have adopted guidelines for police investigation in identification cases, particularly in how to interview witnesses and conduct lineups. New Jersey has adopted these on a statewide basis, including the requirement that lineups be conducted “blind,” i.e., by a detective who does not know which person is the suspect. Advanced training in how to litigate a case of mistaken identification is being provided nationally, and across Pennsylvania, for defense lawyers. In the courts, resources are being amassed to press for better jury instructions, greater acceptance and use of experts, and a more science-based standard for assessing whether eyewitness testimony should be admissible. Much is occurring, but much more is needed, particularly in terms of education for judges, juries, and police. And until these changes are implemented, we will continue to see headlines like those from Delaware in September 2006 when two people were wrongly identified from bank robbery surveillance photos: “Police, again, accuse the wrong man.” ■ Jules Epstein is associate professor at Widener’s Delaware campus. He joined Widener from the Philadelphia criminal defense firm of Kairys, Rudovsky, Epstein & Messing, and teaches criminal law and evidence courses. He serves as faculty for a National Judicial College program training judges in capital case representation. 16 WIDENER LAW Mistaken Identity Resources Articles: Epstein, Tri-State Vagaries: The Varying Responses of Delaware, New Jersey, and Pennsylvania to the Phenomenon of Mistaken Identifications, 12 WIDENER L. REV. 327 (2006) Psychology Eyewitness Expert Resources: Web site of Professor Gary Wells, gwells/homepage.htm Web site of Dr. Solomon Fulero, Web site of Professor Stephen Penrod, Government Publications: New Jersey State Attorney General Guidelines on Eyewitness Identification,. iastate.edu/FACULTY/gwells/njguidelines.pdf National Institute of Justice, Eyewitness Evidence: A Guide for Law Enforcement, files1/nij/178240.pdf Organizations: The Innocence Project, Media: FRONTLINE, “What Jennifer Saw,” Ever wish you could do more for Widener Law? Many of Widener University School of Law’s most faithful contributors give an annual donation each year but wish they could support the law school in a more significant way. A planned gift is the answer for many people, and it may be the answer for you. The following options allow you to make gifts to the law school while also benefiting yourself and your heirs: ■ testamentary bequest ■ life insurance policy ■ individual retirement account ■ charitable remainder trust ■ charitable lead trust ■ charitable gift annuity Please discuss these and other planned giving vehicles with your financial advisor or contact the Alumni and Development Office at 302-477-2172 for further information. Alumni and Development Office 4601 Concord Pike Wilmington, DE 19803-0474 302-477-2172 Faculty Publications 2005-2007 B A R N E T T, L A R R Y D . , Social Productivity, Law, and the Regulation of Conflicts of Interest in the Investment Industry, 3 CARDOZO PUB. L. POL’Y & ETHICS J. 793 (2006). When Is a Mutual Fund Director Independent? The Unexplored Role of Professional Relationships Under Section 2(A)(19) of the Investment Company Act, 4 DEPAUL BUS. & COM. L.J. 155 (2006). The Regulation of Mutual Fund Names and the Societal Role of Trust: An Exploration of Section 35(d) of the Investment Company Act, 3 DEPAUL BUS. & COM. L.J. 345 (2005). Home as a Legal Concept, 46 SANTA CLARA L. REV. 255 (2006). BARROS, BENJAMIN, At Last, Some Clarity: The Potential Long-Term Impact of Lingle v. Chevron and the Separation of Takings and Substantive Due Process, 69 ALB. L. REV. 343 (2005). When a Certified Mail Notice of Tax Delinquency Is Returned as Undelivered, Must Governments Take Additional Steps Before Seizing Property?, 200506 PREVIEW U.S. SUP. CT. CAS. 207. B R I T T O N , A N N H . , Bones of Contention: Custody of Family Pets, 20 J. AM. ACAD. MATRIMONIAL LAW. 1 (2006). et al., SPORTS LAW: CASES AND MATERIALS (Carolina Academic Press 2nd ed. 2007). COZZILLIO, MICHAEL J., & Robert L. Hayman Jr., SPORTS AND INEQUALITY (Carolina Academic Press 2005). C U L H A N E , J O H N G . , Lawrence-ium: The Densest Known Substance, 11 WIDENER L. REV. 259 (2005). Even More Wrongful Death: Statutes Divorced from Reality, 32 FORDHAM URB. L.J. 171 (2005). Writing On, Around, and Through Lawrence v. Texas, Symposium on the Implications of Lawrence and Goodridge for the Recognition of Same-Sex Marriages and the Validity of DOMA, 38 CREIGHTON L. REV. 493 (2005). Bad Science, Worse Policy: The Exclusion of Gay Males from Donor Pools, 24 ST. LOUIS U. PUB. L. REV. 129 (2005). & Stacey L. Sobel, The Gay Marriage Backlash and Its Spillover Effects: Lessons From a (Slightly) “Blue State,” Symposium: The Legislative Backlash to Advances in Rights for SameSex Couples, 40 TULSA L. REV. 443 (2005). & Jeremy Sarkin, RECONCILIATION IN DIVIDED SOCIETIES: FINDING COMMON GROUND, (Penn Press 2007). D A LY, E R I N , The Small Group Progress Conference, 20 THE SECOND DRAFT (Bull. of the Legal Writing Inst.), August, 2005, at 11. CHESLER, SUSAN, & Robert R. Keatinge, KEATINGE AND CONAWAY ON CHOICE OF BUSINESS ENTITY, (Thomson West 2006). C O N A W AY, A N N E . , et al., Internal Disputes and Break-Ups: Colorado, California, and Delaware, Limited Liability Entities – 2005 New Developments in Limited Liability Companies and Limited Liability Partnerships Live via Satellite TV/Webcast on the American Law Network, ALI-ABA Video Law Review, March 17, 2005 (VMF0317 ALI-ABA 81). Transferee and Assignee Rights; Charging Orders; Piercing and Reverse Piercing; Duty to Creditors; and Other Creditor Remedies in Uniform Unincorporated Acts, Selecting Legal Form and Structure for Closely-Held Businesses and Ventures Live Via Satellite TV/Webcast on the American Law Network, ALI-ABA Video Law Review, February 10, 2005 (VMF0210 ALI-ABA 165). The New Liberty, 11 WIDENER L. REV. 221 (2005). WRITING ESSAY EXAMS TO SUCCEED (NOT JUST TO SURVIVE) (2d ed. Aspen 2007). DERNBACH, JOHN, et al., Stabilizing and Then Reducing U.S. Energy Consumption: Legal and Policy Tools for Efficiency and Conservation, 37 ENVTL. L. REP. (Envtl. L. Inst.) 10,003 (2007) Targets, Timetables, and Effective Implementing Mechanisms: Necessary Building Blocks for Sustainable Development, SUSTAINABLE DEVELOPMENT LAW & POLICY (Fall 2005). & Dan Tarlock, 2005, Sustainable Development and Natural Governance: The Challenges Ahead in SOCIAL SCIENCES AND HUMANITIES, IN ENCYCLOPEDIA OF LIFE SUPPORT SYSTEMS (EOLSS), Developed Under the Auspices of UNESCO, EOLSS Publishers, Oxford, UK,. et al., Committee on Climate Change and Sustainable Development: 2005 Annual Report, in ENV’T ENERGY AND RESOURCES L.: THE YEAR IN REVIEW 115 (2006). et al., Committee on Climate Change and Sustainable Development: 2004 Annual Report, in ENV’T ENERGY AND RESOURCES L.: THE YEAR IN REVIEW 120 (2005). et al, SPORTS LAW: CASES AND MATERIALS (Carolina Academic Press 2nd ed. 2007). Sex Offender Registration and Community Notification, in THE PROSECUTION AND DEFENSE OF SEX CRIMES, Chapter 43 (Bender 1976, 2004; chap. 2005, chap. 2006). Federal Sentencing in Drug Cases, in DEFENSE OF NARCOTICS CASES, Chapter 5B (Bender 1972, 2005; chap. 1999, 2001, 2002, 2004, 2005). DIMINO, MICHAEL D. SR., Counter-Majoritarian Power and Judges’ Political Speech, 58 FLA. L. REV. 53 (2006). The Non-Political Branch, 10 TEX. REV. L. & POL. 449 (2005-2006) (reviewing LEE EPSTEIN & JEFFREY A. SEGAL, ADVICE AND CONSENT: THE POLITICS OF JUDICIAL APPOINTMENTS (2005)). The Worst Way of Selecting Judges—Except all the Others That Have Been Tried, 32 N. KY. L. REV. 267 (2005). Jury Selection in Drug Offense Cases, in DEFENSE OF NARCOTICS CASES, Chapter 5A (Bender 1972; chap. 2004; releases through 2005). True Lies: The Constitutional and Evidentiary Bases for Admitting Prior False Accusation Evidence in Sexual Assault Prosecutions, 24 QUINNIPIAC L. REV. 609 (2006). F A M I LY, J I L L E . , Another Limit on Federal Court Jurisdiction? Immigrant Access to Class-Wide Injunctive Relief, 53 CLEV. ST. L. REV. 11 (2005-06). The Normalization of Product Preemption Doctrine, 57 ALA. L. REV. 725 (2006). The Rush to Limit Judicial Review, PERSPECTIVES ON IMMIGRATION (American Immigration Law Foundation, Washington, D.C.), Sept.2006, available at september_perspective.shtml. Daubert and Its Progeny: Expert Scientific Evidence in Massachusetts Personal Injury Cases, J. OF THE MASS. ACAD. OF TRIAL ATT’YS 22 (Fall 2005/Winter 2006). Owning a Piece of the Doc: State Law Restraints on Lay Ownership of Healthcare Enterprises, 39 J. HEALTH L. 1 (2006). Toxic Exposures at Ground Zero: Is There a Role for the Tort System?, 1 WIDENER HEALTH L. TODAY 1 (Fall 2005). Health Care at Lake Wobegon, 1 WIDENER HEALTH L. TODAY 1 (Spring 2005). TOXIC TORTS IN A NUTSHELL (Nutshell Series, Thomson West 3rd ed. 2005). EGGEN, JEAN M. FICHTER, ANDREW, Improving the Rolling Contract, 56 AM. U. L. REV. 1 (2006) FRIEDMAN, STEPHEN E., The Impact of the Class Action Fairness Act on Plaintiffs in Mass-Tort Actions, 12 No. 3 ANDREWS CLASS ACTION LITIG. REP. 17, Apr. 21, 2005. JULES EPSTEIN Tri-State Vagaries: The Varying Responses of Delaware, New Jersey, and Pennsylvania to the Phenomenon of Mistaken Identifications, 12 WIDENER L. REV. 327 (2006). Expert Witnesses, in THE PROSECUTION AND DEFENSE OF SEX CRIMES, Chapter 30 (Bender 1976; chap. 2006). Jury Selection in Sex Offense Cases, in THE PROSECUTION AND DEFENSE OF SEX CRIMES, Chapter 33 (Bender 1976; chap. 2006). Protecting Children From Speech, 57 FLA. L. REV. 565 (2005), reprinted in FIRST AMENDMENT LAW HANDBOOK (Rodney Smolla ed., 2005/2006). GARFIELD, ALAN E., Editorial, Hate the Vile Campaign Ads? Blame the Supreme Court, PHIL. INQ, NOV. 2, 2006, at A19. Editorial, A More Perfect Union, THE NEWS J. (Wilmington, DE), Sept. 17, 2006, at A21. Editorial, Independence Day Honors Lofty Concept, Hard-Won Reality, THE NEWS J. (Wilmington, DE), July 4, 2006, at E3. Editorial, Science-Belief Tension Is Natural, THE NEWS J. (Wilmington, DE), Apr. 8, 2006, at A7. Editorial, ….and on ‘Constitution Day’, What to Celebrate? PHIL. INQ., Sep. 16, 2005, at A21. Editorial, Judge Judges on How They Use Their Power, THE NEWS J. (Wilmington, DE), Nov. 18, 2005, at A14. G E D I D , J O H N L . , Editorial, Upholding Separation of Power Was Proper, PATRIOT NEWS (Harrisburg, PA), Sept. 15, 2006, at A11. Rights of Union Members Within Their Unions, in EMPLOYEE AND UNION MEMBER GUIDE TO LABOR LAW: A MANUAL FOR ATTORNEYS REPRESENTING THE LABOR MOVEMENT, Chapter 12 (Thomson-West 2005 revisions). GOLDBERG, MICHAEL J., Teamster Reformers: Their Union, Their Jobs, Their Movement, 72 J. TRANSP. L. LOGISTICS & POL’Y 13 (2005). et al., 2005 Uniform Commercial Code Survey: Introduction, 61 BUS. LAW. 1541 (2006). HAKES. RUSSELL A., et al., The Uniform Commercial Code Survey: Introduction, 60 BUS. LAW. 1635 (2005). H A M E R M E S H , L AW R E N C E A . , Symposium: Litigation reform Since the PSLRA: A Ten-Year Retrospective: Panel Three: SarbanesOxley Governance Issues: The Policy Foundations of Delaware Corporate Law, 106 COLUM. L. REV. 1749 (2006). Ruby R. Vale and a Definition of Legal Scholarship, 31 DEL. J. CORP. L. 253 (2006). Twenty Years After Smith v. Gorkom: An Essay on the Limits of Civil Liability of Corporate Directors and the Role of Shareholder Inspection Rights, The Smith v. Gorkom Symposium, 45 WASHBURN L.J. 283 (2006). & Michael L. Wachter, The Fair Value of Cornfields in Delaware Appraisal Law, 31 J. CORP. L. 119 (2005). Corporate Officers and The Business Judgment Rule: A Reply to Professor Johnson, 60 BUS. LAW. 865 (2005). HARRINGTON CONNER, DANA, Editorial, - Rachel Kipp, Law School Has Human Side, THE NEWS J. (Wilmington, DE), January 14, 2007, at B2. WIDENER LAW 19 Faculty Publications 2005-2007 Societal Views and Survivors of Domestic Violence: Asking the Right Questions, 13 WIDENER SCHOOL OF LAW MAGAZINE, 2006, at 10. & Michael J. Cozzillio, SPORTS & INEQUALITY (Carolina Academic Press 2005). H AY M A N , R O B E R T L . J R . , H E M I N G W AY, A N N A P. , Keeping Students Interested While Teaching Citation, 20 THE SECOND DRAFT (Bull. of the Legal Writing Inst.), August, 2005, at 14. Learning From All Fifty States – How to Apply the Fourth Amendment and Its State Analogs to Protect Third Party Information from Unreasonable Search, 55 CATH. U. L. REV. 101 (2006). HENDERSON, STEPHEN E., Nothing New Under the Sun? A Technologically Rational Doctrine of Fourth Amendment Search, 56 MERCER L. REV. 507 (2005). Services as Objects of International Trade: Bartering the Legal Profession, 39 VAND. J. TRANSNAT’L L. 347 (2006). HILL, LOUISE L., Sustainable Development and the Marrakech Accords, in THE LAW OF ENERGY FOR SUSTAINABLE DEVELOPMENT, Chapter 4, at 56 (Adrian J. Bradbrook et al. eds. 2005). H O D A S , D AV I D R . , State Law Responses to Global Warming: Is It Constitutional to Think Globally and Act Locally?, Symposium on Environmental Law and The Constitution, 21 PACE ENVTL. L. REV. 53 (2003), reprinted in Daniel A. Farber and Jim Chen, DISASTERS AND THE LAW: KATRINA AND BEYOND 312 (Aspen 2006). A Boundary Dispute’s Effect on Siting an LNG Terminal, 21 NAT. RESOURCES & ENV’T. 34, Summ. 2006. Clinic Provides Environmental Defense, Legal Training, WIDENER U. SCH. L. MAG., Fall 2005, at 9. Nineteenth Century Visions of a Twenty-First Century Bar: Were Dickens’s Expectations for Lawyers Too Great?, 15 WIDENER L. J. 283 (2006). LEE, G. RANDALL, Dorothy Day and Innovative Social Justice: A View from Inside the Box, 12 WM. & MARY J. WOMEN & L. 187 (2005). Bruce Springsteen’s Hope and the Lawyer as Poet Advocate, 14 WIDENER L.J. 867 (2005). Lessons to be Learned, Lessons to Live Out: Catholicism at the Crossroads of Judaism and American Legalism, 49 ST. LOUIS U. L.J 367 (2005). Introduction, The Lawyer as Poet Advocate: Bruce Springsteen and the American Lawyer, 14 WIDENER L.J. 719 (2005). A Law Professor on Being Fashioned, 14 WIDENER L.J. 469 (2005). The Challenge of High Priced Oil, 20 NAT. RESOURCES & ENV’T 59, Fall 2005. The Continuing Moral Fashioning of a Law Professor, ORANGE COUNTY LAW., June 2005, at 18. Executive Privilege and Energy Policy, 19 NAT. RESOURCES & ENV’T 3 (2005). LIPKIN, ROBERT JUSTIN, Vantage Point & Issue Editor, Transboundary Conflicts Issue, 21 NAT. RESOURCES & ENV’T, Summ. 2006. Which Constitution: Who Decides? The Problem of Judicial Supremacy and the Interbranch Solution, 28 CARDOZO L. REV. 1055 (2006). K E A R N E Y, M A R Y K A T E , The Harm of Same-Sex Marriage: Real or Imagined? 11 WIDENER L. REV. 277 (2005). The Seduction of the Appellate Body: Shrimp/Sea Turtle I and II and the Proper Role of States in WTO Governance, 38 CORNELL INT’L L.J. 459 (2005). Book Review, 15 L. & POL. BK. REV. 539 (2005) (reviewing FRANK I. MICHELMAN, BRENNAN AND DEMOCRACY (2005)), at subpages/reviews/michelman605.htm PRODUCTS LIABILITY AND BASIC TORT LAW (Carolina Academic Press 2005). Book Review, 16 L. & POL. BK. REV. 133 (2006) (reviewing ARGUING MARBURY V. MADISON (Mark Tushnet, ed.) (2005)). Recognizing That They Watch, 14 WIDENER L.J. 437 (2005). K E L L E Y, J . P A T R I C K , KOTLER, MARTIN, A., 20 WIDENER LAW Allocating Responsibilities for Environmental Cleanup Liabilities through Purchase Price Discounts, ENVTL. COUNS., Oct. 15, 2005, at 2, reprinted in CORPORATE COUNSEL’S GUIDE TO ACQUISITIONS AND DIVESTITURES Chapter 20.1 (Thomson West 2005, 2006 revisions). K R I S T L , K E N N E T H T. , Going Courting: How Same-Sex Marriage Opponents Came to Love the Courts (Sept. 9, 2005), at M E A D OW S, RO BY N L . , et al., Sales (Uniform Commercial Code Annual Survey), 61 BUS. LAW. 1545 (2006). R AY, L A U R A K . , Laughter at the Court: The Supreme Court as a Source of Humor, 79 S. CAL. L. REV. 1397 (2006). M A N N , R O B E R T A F. , & Mona L. Hymel, Getting Into the Act: Enticing the Consumer to Become “Green” Through Tax Incentives, 26 ENVTL. L. REP. (Envtl. L. Inst.) 10419 (2006). et al., Sales (Uniform Commercial Code Annual Survey), 60 BUS. LAW. 1639 (2005). America Meets the Justices: Explaining the Supreme Court to the General Reader, 72 TENN. L. REV. 573 (2005). On The Road Again: How Tax Policy Drives Transportation Choice, 24 VA. TAX REV. 587 (2005). Top Ten Strategies for Encouraging Tax Compliance, 111 TAX NOTES 919 (2006). & Jasper L. Cummings, Jr., Point & Counterpoint: The No-Net-Value Proposed Regulations: Invalid Exercise of Authority or WellReasoned Interpretation, 25 NEWSQUARTERLY (ABA Tax Sec.) 14 (Fall 2005). et al., The American Jobs Creation Act’s Impact on Individual Investors: A Monster of Complexity? 22 J. TAX’N OF INVEST. 187 (2005). & Robert L. Glicksman, Justice Rehnquist and the Dismantling of Environmental Law, 36 ENVTL. L. REP. (Envtl. L. Inst.) 10585 (2006). M AY, J A M E S R . , The North American Symposium on the Judiciary and Environmental Law: Constituting Fundamental Environmental Rights Worldwide, 23 PACE ENVTL. L. REV. 113 (2005/2006). Trends in Constitutional Environmental Law, 37 No. 4 ABATRENDS 8 (March/ April 2006). The Aftermath of TMDL Litigation: Consent Decrees and Settlement Agreements, in CLEAN WATER ACT: LAW AND REGULATION 157 (ALI-ABA Course of Study, Oct. 26-28, 2005). Where Constitutional Law and Environmental Law Intersect, WIDENER U. SCH. L. MAG., Fall 2005, at 12. Now More Than Ever: Trends in Environmental Citizen Suits at 30, Environmental Citizen Suits at Thirtysomething: A Celebration & Summit Symposium, Part I, 10 WIDENER L. REV. 1 (2003), reprinted in ENVIRONMENTAL LAW 385 (ALI-ABA Course of Study, Feb. 16-18, 2005). The Availability of State Environmental Citizen Suits, 18 NAT. RESOURCES & ENV’T 53 (2004), reprinted in ENVIRONMENTAL LAW 439 (ALI-ABA Course of Study, Feb. 16-18, 2005). Unconscionability as a Contract Policing Device for the Elder Client: How Useful Is It? 38 AKRON L. REV. 741 (2005). et al., 2005 Uniform Commercial Code Survey: Introduction, 61 BUS. LAW. 1541 (2006). et al., The Uniform Commercial Code Survey: Introduction, 60 BUS. LAW. 1635 (2005). Relinquish Control! Why the IRS Should Change Its Stance on Exempt Organizations in Ancillary Joint Ventures, 6 NEV. L.J. 21 (2005). M I R K AY, N I C H O L A S A . , Has Congress Slimmed Down The Hogs?: A Look at the BAPCA Approach to Pre-Bankruptcy Planning, 15 WIDENER L.J. 615 (2006). MORINGIELLO, JULIET M., & William L. Reynolds, Internet Contracting Cases 2004-2005, 61 BUS. LAW. 433 (2005). Signals, Assent and Internet Contracting, 57 RUTGERS L. REV. 1307 (2005). Cyberspace Law Survey: Introduction, 61 BUS. LAW. 431 (2005). Holder v. Hall, in THE OXFORD COMPANION TO THE SUPREME COURT OF THE UNITED STATES 466 (Kermit L. Hall et al., eds., 2nd ed. 2005). & Hon. Eunice L. Ross, WILL CONTESTS (West Group 2d ed. 1999, & Cum. Supp. 2006). REED, THOMAS J., Admitting the Accused’s Criminal History: The Trouble With Rule 404(b), 78 TEMP. L. REV. 201 (2005). ROBINETTE, CHRISTOPHER J., Torts Rationales, Pluralism, and Isaiah Berlin, 14 GEORGE MASON L. REV. 329 (2007) Can There be a Unified Theory of Torts? A Pluralist Suggestion from History and Doctrine, 43 BRANDEIS L.J. 369 (2005). S O S N O V, L E O N A R D N . , & David Rudovsky, PENNSYLVANIA CRIMINAL PROCEDURE: LAW, COMMENTARY AND FORMS (West’s Pennsylvania Practice Series, West Group 2d ed. 2001 (pocket parts through 2006)). M O U LT O N , H . G E O F F R E Y J R . , Consistency, Proportionality, and Substantive Judicial Review in Capital Sentencing, 80 IND. L.J. 98 (2005). N I C H O L S , N A T H A N I E L C . , When Harry Met Sally: Client Counseling Under BAPCPA, 15 WIDENER L.J. 641 (2006). Modeling Professionalism: The Process From a Clinical Perspective, 14 WIDENER L.J. 441 (2005). The Landscape Art of Daniel Urban Kiley, 29 WM. & MARY ENVTL. L. & POL’Y REV. 267 (2005). N I V A L A , J O H N F. , The Rise and Fall of Material Witness Detention in Nineteenth Century New York, 1 N.Y.U. J. L. & LIBERTY 726 (2005). O L I V E R , W E S L E Y, Toward a Better Categorical Balance of the Costs and Benefits of the Exclusionary Rule, 9 BUFF. CRIM. L. REV. 201 (2005). Parsing Personal Predilections: A Fresh Look at the Supreme Court’s Cruel and Unusual Death Penalty Jurisprudence, 58 ME. L. REV. 99 (2006). S T R A U S S , A N D R E W L . , ET AL., INTERNATIONAL LAW AND WORLD ORDER: A PROBLEM-ORIENTED COURSEBOOK (Thomson West 4th ed. 2006). ET AL., SUPPLEMENT OF BASIC DOCUMENTS TO INTERNATIONAL LAW AND WORLD ORDER (Thomson West 4th ed. 2006). Is International Law a Threat to Democracy: Framing the Question, 12 ILSA J. INT’L & COMP. L. 555 (2006). TAKING DEMOCRACY GLOBAL: ASSESSING BENEFITS AND CHALLENGES OF A GLOBAL PARLIAMENTARY ASSEMBLY (One World Trust 2005). THE Exploring the Complexities of Environmental Justice, WIDENER U. SCH. L. MAG., Fall 2005, at 8. WILLIAMS, SERENA M., RAEKER-JORDAN, SUSAN M., WIDENER LAW 21 Faculty News JOHN DERNBACH MICHAEL DIMINIO J I L L F A M I LY LOREN PRESCOTT JR. CHRISTOPHER ROBINETTE made a presentation in Vail, CO, in August 2006 at the Annual Colorado Business Law Institute. The presentation was entitled, “Overtaxing the Concept of ’Good Faith’: The Distinction in Contractual Good Faith and Good Faith in the Law of Fiduciaries and Trust Law.” P R O F E S S O R A N N E . C O N A W AY Ann was the organizer and a panelist on the October 4 roundtable at Widener Law School on “Statutory and Case Law Developments in Good Faith.” On October 5, Ann organized the Third Annual Symposium on the Law of Delaware Business Entities, entitled “Good Faith After Disney: The Role of Good Faith in Organizational Relations in Delaware Business Entities,” sponsored by the Delaware State Bar Association. She introduced the program by presenting “Context of Good Faith in Delaware Corporate Law.” During the afternoon session, Ann presented “Overtaxing Good Faith: The Distinction Among Contractual Good Faith and Good Faith in the Law of Fiduciaries and Trust Law.” P R O F E S S O R J O H N G . C U L H A N E has been the Acting Director of the Health Law Institute since January 2006. In addition to the responsibilities that position entails, he continues to work on the Public Health Law Information Project, expected to be completed Spring 2007. On May 5, 2006, Professor Culhane made two appearances in connection with Philadelphia’s Equality Forum: The Global GLBT Event. That morning, he was a featured guest on Radio Times (WHYY-FM, Philadelphia); the topic was “Family Matters: Gays and the Law” (available at). In the afternoon, he participated in a panel entitled “Family Matters.” On April 24, 2006, John participated on the ethics panel for “Custody Reform on the Horizon,” a CLE sponsored by the Pennsylvania Bar Institute. In January, 2007, Professor Culhane spoke at a conference on same-sex marriages and civil unions held at Tulane Law School. The results of the conference are to be published in LAW AND SEXUALITY: A REVIEW OF LESBIAN, GAY, BISEXUAL AND TRANSGENDER LEGAL ISSUES, published at Tulane. P R O F E S S O R J O H N D E R N B A C H has been reappointed Chair of the ABA Committee on Sustainable Development, Ecosystems and Climate Change for 2006-2007. John, with seven Widener students (Robert Altenburg, Thomas Corcoran, Norman Marden, Allison Rafferty, Christopher Reibsome, Edward Ruud, and David Sunday) in his Seminar on Energy Efficiency, published “Stabilizing and Then Reducing U.S. Energy Consumption: Legal and Policy Tools for Efficiency and Conservation,” 37 ENVTL. L. REP. (Envtl. L. Inst.) 10,003 (2007). John was a panelist for “Federal and State Environmental Issues” at the Environmental Law Forum at the Pennsylvania Bar Institute in Harrisburg on April 5, 2006. On April 6, John made a presentation on “Sustainability, Climate Change, and Energy Efficiency” at the “Climate Change Challenges: Legal Responses to Environmental Disasters” symposium at the New England School of Law in Boston. On April 18, John made a presentation on “Taking Energy Efficiency Seriously” at the “Catastrophic Climate Change: The Science, the Costs, and the Race for Remedies” symposium at the Albany School of Law in New York. In May he was a presenter on “Effective Approaches for Motivating People and Institutions to Adopt More Sustainable Practices and Technologies” at the Roundtable on Science and Technology for Sustainability at the National Academy of Sciences in Washington, D.C. In August he provided testimony before the House Democratic Policy Committee in Harrisburg on H.B. 2744, which would authorize grants for local climate change action plans. In September John was a moderator at the Charlestown Citizens Forum on “Energy 101: How Energy Policy Hits Home” in Berwyn, Pennsylvania. On October 5, John made a presentation on “Energy Policy and Climate Change” to the Opening Session on Energy Policy Act of 2005, at the 14th Section Fall Meeting of the ABA Section of Environment, Energy and Resources in San Diego. In November and December, John presented: “Sustainable Development and State and Local Governance,” at the Florida Coastal School of Law, Environmental Summit, Jacksonville, Florida, Nov. 3, 2006; “Sustainable Development, Climate Change, and State Action, Responses to Global Warming: The Law, Economics, and Science of Climate Change,” at the University of Pennsylvania Law Review 2006-2007 Symposium, Philadelphia, Nov. 16, 2006; and “Facing Climate Change in a Sea of Legal Uncertainty,” at the Sixth Annual Great Lakes Water Conference: Climate Change, the Courts, and a Common Water Policy, sponsored by the University of Toledo College of Law, Toledo, Ohio, Dec. 1, 2006. of four faculty for a four day course on “advanced evidence” for judges from around the nation, sponsored by the National Judicial College at the end of May in Philadelphia. Professor Epstein has been named one of three Pennsylvania representatives to a national network for the reform of eyewitness identification law, sponsored by the Innocence Project and other organizations. He remains an active member of the committee that drafts jury instructions (criminal) for the Pennsylvania courts. made a presentation on May 5 in Las Vegas at the Immigration Law Teachers’ Workshop on the subject of “The Role of Injunctive and Declaratory Relief in Immigration Cases.” P R O F E S S O R J I L L E . F A M I LY On October 14, Jill spoke on the subject of “Stripping Judicial Review: Congress in Action” at the Temple Political and Civil Rights Law Review Symposium at Temple University Beasley School of Law in Philadelphia. PROFESSOR MICHAEL DIMINO PROFESSOR ARLENE RIVERA presented “The Community Caretaking Doctrine and Fourth Amendment Reasonableness” to the Federalist Society Faculty Conference. FINKELSTEIN, was a CLE planner and presenter at the 2006 Pennsylvania Bar Institute, Criminal Law Symposium, “Capital Case” training; the Pennsylvania Association of Criminal Defense Lawyers (PACDL), “Mistaken Identification” training (Fall 2006); Bucks County Bar Association Bench Bar CLE, “Hollywood and Cross-Examination” (Fall, 2006); and Philadelphia Bar Association Bench Bar CLE, “And Justice for All Cross-Examination in the Movies.” PROFESSOR JULES EPSTEIN Jules helped plan, and was a presenter on, death penalty issues at the February 9 daylong capital case training sponsored by the Pennsylvania Association of Criminal Defense Lawyers. Jules is one of four faculty for a National Judicial College capital case training for Pennsylvania judges, to be held in late March in Harrisburg. He will also be one working with Professor Judith Ritter, helped organize and lead a group of Widener Law students in their travel to New Orleans in January 2007 to provide assistance to inmates that had been unavailable as a result of the Katrina disaster. Arlene was also a panelist for “Cross-Ex: The Crossroads of your Case!” at the ABA National Conference for the Minority Lawyer on Thursday, June 22, 2006, in Philadelphia. Panelists included attorneys and a judge who commented on the effectiveness of cross-examination strategies demonstrated in contemporary television shows. In June and July of 2006, Arlene taught two writing seminars entitled “Real Writing for Real Lawyers” at the Philadelphia Law Department. The seminars were designed to improve the writing skills of new attorneys and summer interns, and to help them shift gears from writing in the academic setting to writing in the high-pressure, high-volume City Solicitor’s Office. P R O F E S S O R A L A N E . G A R F I E L D , as the outgoing Chair of the AALS Section on Mass Communication Law, moderated “Secrecy in the Name of Security” and “A Conversation with Daniel Ellsberg” at the AALS Annual Conference in January 2007. Alan organized “The First State Celebrates Constitution Day” program, a Web site collection of essays on the Constitution written by Delaware government and community leaders. On January 19, 2007, Alan presented “Evaluating Copyrights Impact on Speech” at a symposium at the Brennan Center for Justice, sponsored by Hofstra University School of Law and New York University School of Law. On February 7, 2007, Alan spoke on “Preemption, Local Municipalities and Federal Immigration Law: A Struggle for Balance” at Widener. Alan made presentations to the Council on American-Islamic Relations on February 11th, entitled, “Civil Liberties During Wartime”; and on “Separation of Church and State” at Easttown Library on February 20. Alan was the organizer and moderator for “A Conversation with Delaware Valley Muslim Leaders” at Widener University School of Law on March 18, 2007. was the keynote speaker at the annual meeting of the National Association of Administrative Law Judges held in Des Moines, Iowa, in early July. He spoke on the revision of the Model State Administrative Procedure Act, of which John is the reporter for the National Conference of Commissioners on Uniform State Laws. PROFESSOR JOHN L. GEDID John served as an organizer of the 9th Annual Administrative Law Symposium held by the Pennsylvania Bar Institute on August 23. He also conducted a CLE session on developments in administrative law at the symposium. WIDENER LAW 23 Faculty News In July John served as Reporter and presented a draft Model State Administrative Procedure Act to the National Conference of Commissioners on Uniform State Laws at the annual meeting of the National Conference in North Carolina. John was appointed by PBA vice chair of the PBA statutory law committee, and chair-elect for the following year. PROFESSOR MICHAEL GOLDBERG won a significant victory in August in the 3d Circuit in a union democracy case brought on behalf of four reformers in the International Longshoremen’s Association. The case strengthened the statutory free speech and due process rights union members have within their unions, and also enforced a statutory requirement that unions inform their members about the “union members bill of rights” contained in the Labor-Management Reporting & Disclosure Act. The case, now on remand to determine remedies, is reported at 457 F.3d 331. P R O F E S S O R L AW R E N C E A . H A M E R M E S H presented a paper, “The Policy Foundations of Delaware Corporate Law,” to the 12th Annual Institute for Law and Economic Policy (Nassau, symposium on federal and state regulation of corporate governance) on May 5, 2006; the paper was accepted for the symposium issue of the COLUMBIA LAW REVIEW. On May 10, he made a presentation in Chicago to the ABA Section of Business Law’s Committee on State and Local Business Bar Leaders, on majority voting issues and their treatment in pending amendments to the Model Business Corporation Act. Larry presented his working paper (coauthored with Prof. Michael Wachter at Penn), “The Short and Puzzling History of the Implicit Minority Discount in Delaware Appraisal Law,” to the advanced business law seminar at Fordham Law School on January 16, 2007. On December 8, he presented this paper at a roundtable at the Institute for Law and Economics at the University of Pennsylvania. In December Larry was appointed by the Delaware Insurance Commissioner as the hearing officer on the application for approval of the proposed acquisition of Royal Indemnity Company and other Delaware affiliates. The acquisition is challenged by General Motors, DaimlerChrysler, the owners of the World Trade Center, and many other significant policyholders. The hearing on the application was held on January 19, 2007. Larry spoke in Washington, DC, on December 1, on a panel at the fall meeting of the ABA Section of Business Law’s Committee on Federal Securities Litigation, on the subject of institutional investor activism. Co-panelists included the Executive Director of the Council of Federal Securities Litigation, the Executive Director of the Council of Institutional Investors, the head of corporate governance affairs for TIAA-CREF, and the Chief Justice of the Delaware Supreme Court. The Securities and Exchange Commission approved for public notice and comment Larry’s proposed plan of distribution in the Columbia Funds mutual fund market timing settlement. The plan is available at 34-54175-pdp.pdf. Larry was a panelist on the October 4 roundtable at Widener Law School on “Statutory and Case Law Developments in Good Faith.” On October 5, he was moderator of a panel on “Good Faith in Delaware Corporate Law,” as part of Delaware State Bar Association’s program on good faith in business entity law. On October 13, Professor Hamermesh presented the paper previously described, “The Policy Foundations of Delaware Corporate Law,” as part of a panel on evolving rules of corporate governance, during a day-long symposium at the University of Maryland Law School on the interplay of state and federal law in corporate governance. PROFESSOR DANA HARRINGTON C O N N E R presented a work in progress on “Child Visitation Determinations for Incarcerated Perpetrators of Extreme Acts of Violence Against Women” at the Update for Feminist Law Professors at Temple Law School on February 3, 2007. PROFESSOR STEPHEN HENDERSON PROFESSOR ROBERT JUSTIN P R O F E S S O R N I C H O L A S M I R K AY has been named Reporter for a new set of ABA Criminal Justice Standards regarding government access to third-party information. accepted an invitation to be a guest blogger at Ratio Juris: Empirical and Mathematical Analysis of Legal Decisionmaking: A Member of the Jurisdynamics Network. Bobby has subsequently established his own blog entitled Essentially Contested America. presented “Tax Exempt Challenges” at the Lorman Seminar on “Tax Exempt Organizations in Delaware” in Newark, Delaware on May 31, 2006. Stephen made the following presentations: “Criminal Defense in the Age of Terrorism,” to the Criminal Justice Section of the Pennsylvania Bar in Philadelphia on September 26; “Personal Technology & Free Speech: The Truth About FaceBook, MySpace, and File Sharing,” at Widener University on September 21; “Where Science, Technology and the Law Intersect,” at a Planning Engagement Workshop for the Florida Judiciary’s LongRange Strategic Plan, Orlando, FL on May 18; and “New Technologies and Their Impact on the Fourth Amendment,” at the Pennsylvania Association of Criminal Defense Lawyers (PACDL) Search & Seizure Seminar, in King of Prussia, PA, on February 3. chaired the Planning Committee and moderated the overall conference and final session of the 34th American Bar Association National Spring Conference on the Environment, “Ecosystems, Infrastructure and the Environment: Reconciling Law, Policy and Nature,” held at the University of Maryland Law School, June 9, 2006. P R O F E S S O R D AV I D H O D A S David presented a paper, “Ecosystems and Energy,” at The Law and Policy of Ecosystem Services: A Symposium, Florida State University College of Law, April 7 - 8, 2006 (to be published Spring 2007). David was the Issue Editor, Transboundary Conflicts Issue, 21 NATURAL RESOURCES & ENVIRONMENT (Summ. 2006). P R O F E S S O R P A T R I C K K E L LY presented a paper, “Democratic Constitutionalism and the Reception of International Law into Domestic Legal Orders,” at the Temple University School of Law’s Annual Delaware Valley International Day symposium on October 21, 2006. LIPKIN Bobby was interviewed by the SUNDAY NEWS JOURNAL on August 20, 2006, on “Middletown Grants, Loans: Unethical or Just Generous?”; by WILM RADIO August 18, 2006, on “District Court Decision to Overturn the Administration’s Warrantless Wiretaps”; by the DELAWARE STATE NEWS on May 30, 2006, on “DSU Meeting Questioned; Board Session Behind Doors Called Violation”; and by the WILMINGTON NEWS JOURNAL on May 12, 2006, on “Bush Faces Outrage on Phone Records”. was also appointed to the Council of the ABA Section on Environment, Energy and Resources, to its Education Service Group, and as Vice Chair of its Strategic Response Committee, and served as Chair of its Committee on Constitutional Law. P R O F E S S O R J A M E S R . M AY Jim spent his sabbatical as Visiting Scholar at the Environmental Law Institute in Washington, D.C. During his sabbatical, Jim spoke at conferences sponsored by the American Law Institute (citizen enforcement), the American Bar Association (water quality policy and the environmental effects of Hurricane Katrina and other national disasters), and Oregon Law School (constitutional law). He also spoke about Justice Rehnquist’s legacy at three law schools and the Environmental Law Institute. Jim also traveled to South Africa, where he lectured about the intersection of Constitutional and Environmental Law. Nick presented “2006 Developments in Federal Income Taxation” at the annual Delaware Tax Institute on November 14, 2006. Professor Nicholas Mirkay was elected Vice Chair of the Food Bank of Delaware Board of Directors in May 2006. Professor Mirkay was also elected Secretary of the Delaware HIV Consortium Board of Trustees in May 2006. PROFESSOR JULIET MORINGIE LLO was named Co-Chair of the International Coordinating Committee of the ABA Business Law Section. Juliet was appointed to the Editorial Board of “Business Law Today,” the magazine of the ABA Business Law Section. Juliet made a presentation on September 12 at the Pennsylvania Bar Institute’s Eleventh Annual Bankruptcy Institute entitled “Ethics for the Consumer Practitioner.” As Chair of the Uniform Commercial Code Committee of the Pennsylvania Bar Association Business Law Section, she and her committee prepared the “Report on the Uniform Commercial Code Modernization Act of 2007” which recommends the enactment of Revised Articles 1 and 7 of the UCC in Pennsylvania. The Report was approved by the Business Law Section Council on January 10, 2007 and will be considered by the PBA House of Delegates at its next meeting. made a presentation in September at NYU Law School on material witness detention in the nineteenth century. PROFESSOR WESLEY OLIVER P R O F E S S O R D O R E T TA M C G I N N I S has been selected to serve on the Upper Level Writing Committee of the Legal Writing Institute. Among other things, the Committee will compile nationwide information regarding law school upper level writing curricula and develop Web site content for interested faculty. Wes made a presentation entitled Magistrates’ Examinations, Police Interrogations and Miranda-Like Warnings in Nineteenth Century New York at Harvard Law School as part of the Harvard Legal History Colloquium, on October 16. WIDENER LAW 25 Faculty News In October, Wes made a presentation on the origins of Miranda-like warnings at the University of Colorado School of Law symposium entitled “Cautions and Confessions: Miranda vs. Arizona After 40 Years,” commemorating the fortieth anniversary of the Miranda decision. The symposium included keynote speaker Yale Kamisar and presentations by Albert Alschuler, Margareth Etienne, Mark Godsey, Judge Morris Hoffman, Richard Leo, John Parry, Jacqueline Ross, Bruce Smith, George Thomas III, and Melissa Waters. Wes’ presentation will be forthcoming as an article in the TULANE LAW REVIEW. Wes has recently appeared twice (and will likely appear at least a third time) on the CBS affiliate offering commentary on the local Kevin Eckenrode murder trial. P R O F E S S O R R O B E R T P O W E R made a presentation in May at the Middle District of Pennsylvania’s Bankruptcy Law Conference on the subject of “Constitutional Issues in the 2005 Bankruptcy Act.” Bob also made a presentation on the “Pinochet Case” and the increasing globalization of criminal law at the Law and Society Association Conference in July. PROFESSOR CHRISTOPHER R O B I N E T T E is currently working with Jeffrey O’Connell as co-author on a book on tort reform to be published by Carolina Academic Press. PROFESSOR JUDITH RITTER, working with Professor Arlene Rivera Finkelstein, helped organize and lead a group of Widener Law students in their travel to New Orleans in January 2007 to provide assistance to inmates that had been unavailable as a result of the Katrina disaster. P R O F E S S O R A N D R E W S T R A U S S gave the Henry Usborne Memorial Lecture on March 8, 2006, to members of the British Parliament in the Houses of Parliament in London. The title of his speech was, “Taking Democracy Global.” Then, on March 10, Andy presented “Are Present Global Institutions Still Relevant?”in Athens, Greece, at the New School of Athens Conference entitled “Beyond the Millennium Declaration: Embracing Democracy and Good Governance.” On March 31, Andy presented “Pursuing International Trade Remedies for the Problem of Global Warming” on a panel at the Annual Meeting of the American Society of International Law. On April 6-8 Andy chaired the Widener University School of Law Symposium, “Envisioning a More Democratic Global System.” On December 5, 2006, Andy lectured at Yale Law School on “Toward a More Democratic Global System.” P R O F E S S O R C AT H E R I N E W A S S O N was elected to serve a second four-year term on the editorial board of the JOURNAL OF THE LEGAL WRITING INSTITUTE. She was also appointed chair of the newly-created Teaching Resources Committee of the Legal Writing Institute and has been invited to serve on the Membership Task Force for the Association of Legal Writing Directors. Legal Briefs Law School Launches Online Constitution Day Project Widener University School of Law—the only law school in America’s “First State” of Delaware—launched a new online project this fall in observance of Constitution Day. The result was a resource for all Americans looking to better understand what the nation celebrates every Sept. 17. The project, the brainchild of Professor Alan E. Garfield, the H. Albert Young Fellow in Constitutional Law, capitalized on Delaware’s role as the first state to ratify the Constitution. It was intended to bring everyone a deeper meaning of this national observance, by drawing out the thoughts of Delaware’s political, legal, and civic leaders. at the law school Web site, where readers from all walks of life can peruse the statements, comment on them, or submit their own thoughts about what we should celebrate on Constitution Day. Garfield asked Delaware leaders to reflect upon the meaning of the Constitution and to explain their roles in ensuring our democracy’s ongoing vitality. He amassed a collection of pieces from an array of people including the Delaware governor, both U.S. senators and Delaware’s lone representative in the U.S. House, state legislative leaders, state and federal judges, local clergy, and legal and business leaders. “The Constitution is not perfect,” Garfield says, “and neither are Supreme Court decisions interpreting it. But it has come to symbolize our societal commitment to respecting the dignity and humanity of every individual.” Garfield began the project with an eye for enlightening the public about what it is we should celebrate on Constitution Day. The result was an essay package beautifully displayed “As Delaware’s only law school, Widener Law is poised to be a central figure in future observances, and I look forward to enhancing the celebration we have started this year,” he says. Garfield said he hopes the project will kick-start a process that continues to make Constitution Day a meaningful event as Americans grow accustomed to the observance. Honoring King The faculty, students and staff on both campuses of Widener University School of Law took time in January to honor the memory of Dr. Martin Luther King Jr. In Harrisburg, about 100 people gathered on Jan. 25 in the administration building to hear Ann Lyon retired educator Ann Lyon speak about history and her family’s connections to the civil rights movement. Lyon, 79, grew up the daughter of privileged, white Southern parents who stood out for their support of equal treatment for African Americans. Her parents, along with activist E.D. Nixon, helped bail Rosa Parks—a family friend who helped Lyon’s mother with her sewing—out of jail and convinced her to become a test case. Lyon, the niece of the late U.S. Supreme Court Justice Hugo Black, recalled the days of the Montgomery bus boycott, in which she talked of King’s commitment to peaceful demonstrations. “He said if there’s any kind of violence, the cause is lost and we are doomed.” In Delaware, about 100 people gathered Thursday, Jan. 18 in the Ruby R. Vale Moot Courtroom for a panel discussion of civil rights issues and a keynote address by Dr. Mary Frances Berry, the Geraldine R. Segal Professor of American Social Thought and Professor of History at the University of Pennsylvania. Other panelists included Widener Law faculty members Serena M. Williams, Arlene Rivera Finkelstein, Andrew L. Strauss, Robert J. Lipkin, Robert L. Hayman Jr., and Drewry Nash Fennell, executive director of the ACLU-Delaware. King, Berry said, was going to be a preacher and could have had a long, potentially lucrative career in the church. “He wasn’t born a leader. It wasn’t on his belly button when he was born. He became a leader. So can you,” she told the students. After the talk, Berry signed copies of her book, My Face is Black Is True: Callie House and the Struggle for Ex-Slave Reparations. WIDENER LAW 27 Legal Briefs Delaware Supreme Court Hears Arguments at Widener Law visited the law school to hear oral arguments and the crowd was so large that some watched from an overflow classroom equipped with a live video feed. The Delaware Supreme Court came to Widener University School of Law to hear oral arguments in two cases on March 14. The court sat en banc before a packed Ruby R. Vale Moot Courtroom. It was the first time since 2002 that the court “We were thrilled to welcome the Delaware Supreme Court back to campus. Their presence enriches the legal education experience for our students and the cases of the day were especially meaningful for everyone affiliated with our Institute of Delaware Corporate & Business Law,” Law Dean Linda L. Ammons said. “We are grateful to the Court for giving us this rare and valuable access, and we are especially proud of our alumni who attended in their professional capacity. They are an example for our students.” The court heard two cases: Trenwick v. Billett, which originated in the Court of Chancery; and AT&T v. Clarendon, which came from the Superior Court. Delaware counsel in the second case included three Widener Law alumni, David A. Denham ’02, Mary B. Matterer ’88, and Kevin F. Brady ’82. The Law School hosted members of the court, their staff and court administrators for lunch after the hearings. Ammons said the day was so successful she hopes to welcome the justices back again next year. Widener Law Takes a Seat When asked who helped him win his hard-won victory over incumbent Mike Fitzpatrick in Pennsylvania’s Eighth Congressional District, Patrick Murphy ’99 credits his family. His Widener family, that is. Widener connections also helped as Murphy’s campaign to unseat Fitzpatrick progressed. “I approached President Harris and David Hoskins, a trustee— both were very helpful with strategic advice—and my fellow classmates from as far away as West Virginia,” he says. Two classmates, Keith Gamble ’99 and Melissa Foley ’99, joined his campaign as field operatives, as did undergraduate student Ryan Riley ’07. He also received advice from former Vice President for Government and Community Relations Marcus Lingenfelter. “The individual attention I received at Widener Law and the family atmosphere that Widener provided helped me a great deal in launching my career and my campaign,” says Murphy. “I was lucky that I served on the Trial Advocacy Honor Society, where you learn to develop the skills to be a litigator. I applied that skill set first as a prosecutor, then as a professor, then in running for Congress. The first Iraq War veteran to be elected to Congress, Murphy joined the Army in 1993 and is currently a captain in the Army Reserve. After graduating from Widener Law, he became first a prosecuting attorney, then a professor of constitutional law at the United States Military Academy. He was teaching at West Point when the Americanled coalition invaded Iraq in 2002 to topple Saddam By Sandy Smith Hussein, and he soon found himself called to serve with his unit, the 86th Airborne Division. The conditions Murphy found on his tour of duty with the 86th—including the 19 men in his unit who lost their lives there—led him to conclude that a change of direction was needed both there and back home. That conclusion, in turn, launched his congressional campaign. Even though Murphy has never held elective office before, he will head to Washington knowing something about how the legislative process, thanks in part to another Widener connection. “I was lucky that I worked in constituent services for State Representative Tom Tangretti [D-Westmoreland]. I got a good exposure to public service that way.” Murphy got that job through another Widener Law alumna, Mary Peters ’00. Legal Briefs Widener Law and Thomas Jefferson University announce partnership The Health Law Institute at Widener University School of Law and Thomas Jefferson University in Philadelphia are now offering two joint-program degrees. The schools finalized the exciting agreement in March. This fall, students can begin working toward either juris doctor/master of science in public health degrees or master of jurisprudence in health law/master of science in public health. John G. Culhane, acting director of the Health Law Institute, said the typical track will have students beginning their studies at Widener for a year, moving on to Jefferson for another year or two and then finishing at Widener. “These joint programs will be an exciting step forward for our law school and our Health Law Institute. With the ever-increasing interest in, and awareness of, public health issues, the time has never been better for collaboration with one of the nation’s leading medical and health education universities. Thomas Jefferson and Widener Law will both benefit immensely from these programs and from the cross-pollination of the disciplines of law and public health.” At Thomas Jefferson, the program will be run through the university’s College of Graduate Studies, which was established in 1969. The university’s master of science program in public health is designed for part-time students with the option of fulltime study. Graduates are able to pursue careers in public health administration, health insurance organization leadership, health consulting, international programs and the pharmaceutical industry. Law Dean Linda L. Ammons said combining the new joint degree programs will add to the already nationally and internationally recognized Health Law Institute and the other quality degrees the school offers in this area, including the LLM, MJ, SJD, and DL. Combining a Widener law degree with a Jefferson public health degree will surely make the program’s graduates sought-after in the marketplace, Ammons said. “A joint degree from two such reputable schools is going to bring fantastic opportunities for students who choose this path,” she said. Sports and Entertainment Law Association Hosts Second Symposium About 75 people spent a day on the Widener Law Delaware campus learning about the latest developments in sports and entertainment law, including new legal issues that have accompanied the worldwide explosion in personalized ring tones. More ring tones than fulllength songs are now downloaded online, 21 percent of U.S. wireless subscribers have a downloaded ring tone, and Billboard has begun carrying a Top-40 ring tone chart, speaker Terence W. Camp, Esq., told the crowd gathered in the Ruby R. Vale Moot Courtroom on Nov. 17. For a recording industry looking to preserve revenue in an increasingly digital music age, an October decision by the U.S. Copyright Office may have provided the answer, Camp said. The office decided that most ring tones are subject to royalties under the copyright law licensing system, but the decision could be appealed. “It may be a battle won in a war yet to be resolved,” Camp said. Eleven speakers made presentations during the second annual conference, put on by the law school and its Sports and Entertainment Law Association. Other topics of the day included the release of films and television shows on the Internet, NFL contract negotiations, and the ethical concerns that face entertainment attorneys. “We are excited to bring you such a unique learning opportunity through these timely topics that affect all of our lives, in some way, each day,” Dean Linda L. Ammons said during her welcome remarks. Sports and Entertainment Law Association students who helped plan the event stand with their faculty advisor. From left, Isha Mehta, Ivan Lee, adjunct faculty member Alexander Murphy, Benjamin Cline, David Iannucci, Kevin Pulley, Chris Egoville. WIDENER LAW 29 Legal Briefs Dean’s Leadership Forum Kicks Off With Alumnus George Miller ’81 Great leaders are not the people who come into a room and start telling people how wonderful they are, a standing-room-only crowd in the Ruby R. Vale Moot Courtroom was told in November. “At the end of the day what you want to try to do is lead through accomplishment, lead through service, lead through example,” said George K. Miller, a successful attorney, businessman, and community leader. Miller was interviewed for the inaugural Dean's Leadership Forum that took place Nov. 15 on the Delaware campus. The forum is an engaging new program for the entire law school community that focuses primarily on the experiences of Widener Law alumni and provides an opportunity for Widener Hosts Discussion on Hot-Button Immigration Issues Widener Law hosted a timely discussion of immigration issues Oct. 23 on the Harrisburg campus. The school’s student chapter of the ACLU sponsored the event with support from the Black and Minority Law Students Association. The program stemmed from headline-making legislation passed in Hazleton, PA. The controversial Hazleton Illegal Immigration Act ordinance would have obligated Hazleton landlords and business owners to confirm that tenants students, in a conversational setting, to learn about what it takes to be a leader in legal and other communities. Dean Linda L. Ammons chatted with Miller in front of about 250 students and faculty, who were then allowed to pose their own questions. coffee and driving his superiors when needed—and how today he balances time for his family and himself. “In every business, if you don't put your foot down, the business will eat you up,” Miller says. “You really have to think about what you want from life.” Miller is a 1981 Widener Law grad who has his own practice in Atlantic City and serves on the law school Board of Overseers. He has been involved in business ventures with Harrah's Entertainment, the Philadelphia Stars franchise, United States Football League, and the Shore Cable Company of New Jersey. He mixed anecdotes about representing the infamous Donald Trump in Atlantic City with sage advice on the importance of networking and building a name in the legal field. Miller talked about his work ethic—and how he secured his first job as an attorney by starting the day early, working late, and willingly making and customers were legal residents before providing them with any services. The city agreed not to enforce the ordinance, but is working on a new version. The afternoon featured two important speakers. Elena Park, an immigration attorney who practices with Cozen O’Connor in West Conshohocken, PA, where she heads the firm’s immigration practice, discussed federal agency policy and legislative trends involving immigration. She focused on how these trends have impacted Pennsylvania residents, including a brief explanation of what has been happening with the Hazleton Illegal Immigration Act ordinance. Park concentrates on business and employment immigration “George was a fantastic interview for our inaugural forum,” Ammons says. “He struck a wonderful balance between words of wisdom and entertaining stories. He got our program off to a great start, and I look forward to bringing it back with another outstanding speaker.” matters. As part of her practice she also trains employers on I-9 (employment verification) compliance and defends employers in the event of Department of Labor or Department of Homeland Security investigations. Park is co-counsel to the coalition that is opposing the Hazleton Illegal Immigration Act in federal court. She is a member of the American Immigration Lawyers Association. Park holds an undergraduate degree from the University of Toronto and a law degree from Temple University James E. Beasley School of Law. Dr. Agapito Lopez, a retired ophthalmologist who represents Luzerne and Lackawanna Counties on the Pennsylvania Governor’s Advisory Commission on Latino Affairs, spoke about the history of migration Widener Welcomes U.S. District Judge John Jones Presided over “Intelligent Design” case Widener Law’s Harrisburg campus welcomed U.S. District Judge John Jones to a packed moot courtroom for a talk on judicial independence on October 24. The inaugural Dean's Leadership Forum on the Delaware campus was a huge success. George K. Miller '81 took questions from Law Dean Linda L. Ammons. into the United States, including the changing immigration laws, Hazleton’s immigration history, the new immigrant wave there, and possible political and social motives affecting immigration policies. Lopez has been a regular spokesman for the Latino community in Hazleton and has been in the forefront of opposition to the Hazleton Illegal Immigration Act. Lopez holds an undergraduate degree from the University of Puerto Rico in Rio Piedras. He earned a medical degree from the University of Puerto Rico School of Medicine in 1971. His wife, Sandra L. Medina-Lopez, is a social worker who has directed the Migrant Education Program in Hazleton for the last 14 years. “We must never forget that the rule of law is not a conservative or liberal value. It is an American value.” More than 200 people, predominately students, filled the room for the hour-long discussion. Jones explained how the high-profile “intelligent design” case of Kitzmiller v. Dover, over which he presided last year, ignited a passion in him for matters pertaining to judicial independence. He said that case and the firestorm of attention it got from special interest groups, the media, and pundits, taught him the public has no real grasp on how judges operate. Jones decided the Kitzmiller case in December 2005, ruling in favor of the 11 Dover, PA, parents who sued their local school district claiming intelligent design is a form of creationism—something that cannot legally be taught in public schools. Jones agreed, finding intelligent design to be a religious belief, not a scientific theory. He ruled that teaching it in public classrooms violated the U.S. Constitution. He and his family lived under the protection of the United States Marshals Service during the trial, held in Harrisburg. Jones suggested to the Widener students that judges and attorneys must do more to speak to the process of law and how the courts operate, with an eye for educating the public. “We must never forget that the rule of law is not a conservative or liberal value. It is an American value,” he said. President George W. Bush appointed Jones to the U.S. District Court in 2002. The U.S. Senate unanimously confirmed him in July of that year. His talk aired on C-SPAN’s “America & the Courts” program Saturday, Oct. 28, and is available through, by clicking on “America & the Courts.” “When necessary, the public believes judges will, or should, throw one for the home team,” Jones said. “It does exist and it is quite real.” WIDENER LAW 31 Alumni Impact Tierney offers the Daily News as an example of what he meant when he spoke of the paper finding its voice. “The Daily News has a clear sense of what it is and what it wants to be,” he says. “The Inquirer, which is still a terrific paper, sometimes has some uncertainty about what it is and what it wants to do. “It’s like seeing an old friend who keeps changing the part in her hair every so often.” Running Philadelphia Newspapers, Inc., which is still the region’s dominant news organization, has required Tierney to use both the right-brain creativity he acquired in the course of a career in advertising and public relations and the left-brain rigor he developed while studying law at Widener. Hometown Boy Makes News Running The Philadelphia Inquirer and Daily News is a labor of love for Brian Tierney ’87. By Sandy Smith Brian Tierney ’87, the man who made James Earl Jones the voice of Verizon, would like The Philadelphia Inquirer to get its voice back too. And as chairman and chief executive officer of Philadelphia Media Holdings and publisher of the Inquirer, he is now in a position to make that happen. Tierney is the public face of the group of local investors that purchased the Inquirer and its sister paper, the Philadelphia Daily News, from the McClatchy Company for $515 million last summer. “My family has read the Inquirer for three generations,” he says, so he is familiar with both its past and its present. And if the Inquirer can regain a sense of purpose, he said in an interview not long after the sale, it could have a future as great as its past. He embarked on both his PR career and his legal studies at about the same time, not long after graduating from the University of Pennsylvania in 1979. “When I first got out of college, I was going to go right into law school, but I decided I wanted to try other things first,” he says. So he started a public relations firm, in his words, “as a day job to pay my bills while I worked my way through law school. Here I was, going to be a lawyer, and I had a family and bills to pay, so I started the PR firm thinking it was something I would do while I studied to become a lawyer. But the firm took off; it became really successful.” The success didn’t stop him from enrolling in law school anyway. Brian followed his older brother Kevin ’82 to Widener Law, where he would be followed by his younger brother, Michael ’93. Running Philadelphia Newspapers, Inc., which is still the region’s dominant news organization, has required Tierney to use both the rightbrain creativity he acquired in the course of a career in advertising and public relations and the left-brain rigor he developed while studying law at Widener. The law school years were a hectic time for Tierney. “I was dealing with major clients during the day, then jumping in my car and driving down to law school at night,” he recalls. “I knew that I wasn’t going to practice law, but I found it stimulating still. It was something that I thought would be useful, and it has proved a useful tool as a businessperson to have the law degree.” When asked to provide examples, he continues, “I think it’s helpful in negotiations, obviously. But what is particularly helpful for me is on the creative side. “Many times, creativity is about connecting things in illogical ways, seeing patterns that aren’t necessarily obvious at first glance. Law tends to build on logic—this leads to this leads to that—and that has made me successful as a businessperson. “I also enjoyed the intellectual stimulation that law school offered. I found sitting in constitutional law classes almost a tonic after the end of a long day.” Tierney offered praise for his Widener Law instructors. “I had some terrific professors when I was at Widener,” he says. “I can honestly say that the professors I had at Widener were on a caliber with those I had at Penn. “There was Ruth Gansky, who taught me contracts and procurement issues; Chuck Peruto on criminal law; Fairfax Leary, a constitutional law professor who had taught at Penn; and the real-world folks who were members of the Delaware Supreme Court—a real strong group of professors.” The recent drama surrounding negotiations with the newspapers’ unions and the layoffs of some 70 Inquirer reporters have not dampened his enthusiasm nor deflected him from his goal of restoring the paper to prominence. “We want to be in a position where a year or two from now, if you ask someone, ‘What is the best media company in serving its community?’ they will say, ‘You ought to go to Philadelphia and check out what they’re doing there.’” He is also well aware of the role the Inquirer and Daily News play in setting the region’s news agenda. Tierney and his partners have received tons of e-mail from readers and media professionals and have conducted focus groups and informal discussions to learn how Philadelphians view the papers. “One thing that comes through is that this is the most important media site for the region,” he says of the papers and their joint Web site, philly.com. “There’s great affection for the product,” he notes. “And there’s a lot of pride in the fact that in Philadelphia, we’ve been able to do something that no one else has been able to do, and that’s have local control of the papers again. “The New York Times and the national media are talking about the Philadelphia experiment. We’ve had people calling in from other cities— L.A., Baltimore—asking about what we’re up to.” It’s definitely a high-wire act, and so far, Tierney has managed to keep his balance on the tightrope as he works to get the papers back on a growth trajectory. In a recent interview, he described some of the underbrush he had to clear. “We bought a company with a lot of challenges that was owned by one of the worst run media companies in the country,” he said. “Part of the problem was the labor contracts. Fortune magazine described them in December as the most archaic contracts of any in the United States. “But with a lot of conversation, working in partnership with our unions, we were able to change just about every work rule we wanted to change. . . . What I learned in law school was that negotiating is not about splitting the difference,” he said. “The end result has to be something that works for both sides.” While Tierney was not directly involved in the labor negotiations, the negotiating team he assembled kept this in mind. “I also enjoyed the intellectual stimulation that law school offered. I found sitting in constitutional law classes almost a tonic after the end of a long day.” Tierney has also put muscle back into the papers’ marketing efforts, which have already begun to produce results. “In November and December, Inquirer circulation was up for the first time in two years. Daily News circulation was up for the first time in four years” —a stark contrast to the papers’ recent performance under Knight Ridder management. It’s all in keeping with the ultimate goal of becoming the region’s preeminent news source. “The goal of an enterprise is to grow, to serve the community, and to hire the right people to do the job,” he said. WIDENER LAW 33 Alumni Impact The Philadelphia Story: The term “Philadelphia lawyer” has always held a certain cachet—and still does. Traditionally, Philadelphia attorneys have been viewed as aggressive and top-notch in the profession, with courtroom skills that complement their razor-sharp legal minds. Recently, we asked some Widener Law alumni, who rank among the city's most respected civil litigators, to comment on the highlights and challenges of courtroom practice in one of America’s toughest legal towns. S H A R O N L . C A F F R E Y ’ 8 7, a partner with Duane Morris, concentrates her practice in the areas of mass tort, product liability, and toxic tort litigation, from the defense side. She has also handled numerous asbestos cases and medical malpractice cases. Her extensive litigation experience includes more than 75 cases brought to trial, with 25 of those being tried to verdict. Caffrey is a frequent speaker on the topic of eDiscovery issues and serves as vice chair of the Toxic Tort and Environmental Law Committee of the American Bar Association. What do you love about being in the courtroom? It takes a great deal of effort and preparation to be an effective trial attorney. Once you are prepared for trial and begin opening arguments, a trial is like a chess match—you have to think three steps ahead of your adversary and the witnesses you are cross-examining. It is exciting and mentally stimulating to try to outwit your adversary and their witnesses. What is the most difficult part of trying a case? There are many challenges to trying a case, including the seemingly mundane, like scheduling experts who have conflicting schedules, getting all of your exhibits ready, and so on. However, the most difficult aspect of trial for me is waiting for a jury to return a verdict and sitting while the verdict is read. At that point there is nothing else the trial lawyer can do to affect the outcome of the case: You simply have to wait and have confidence that you did the best you could for your client. What advice do you give to beginning attorneys with regard to courtroom skills, demeanor, or tactics? One of my partners recently told me that 80 percent of a person’s impression of you is based upon your appearance. While I am not certain of the accuracy of the statistic, a trial lawyer needs to look and act like a trial lawyer at all times while in the courtroom. If you exude confidence and professionalism, the jury will pick up on that. Similarly, if you exude arrogance or indifference, the jury will pick up on that as well. The jury needs to see that you believe in your clients and their cause and that you have confidence and conviction in what you say. They also need to believe you and, to some extent, like you as a person. I also advise against greed and deception before a jury, such as overreaching, stretching the truth, trying to bury bad evidence, or otherwise appearing that you have something to hide. Juries almost always pick up on these tactics and will punish a client for the lawyer’s behavior. Are there any particular courtroom moments that stand out in your mind as career highlights or are simply unforgettable? There is one moment I will not forget from my first few years of practice. A senior member of the bar, who was sitting next to me in the courtroom while we were each waiting to argue motions, leaned over and said, “Honey, don’t worry. Someone will come along and marry you, and you won’t have to do this anymore.” On a more positive note, a number of years ago I defended a large corporation in a tough wrongful death case. I surprised even myself by winning the case outright. The case was tried very cleanly, so the plaintiff did not have much of an appeal and, in fact, did not appeal after the post-trial motions were denied. I recently had a deposition with the same attorney, but he did not recognize me, as I had married in the interim and practice under my married name. He mentioned that I looked a lot like another lawyer who had out-maneuvered him in trial, and I had to hide a smile. What do you think it means these days to be a “Philadelphia attorney?” I have tried cases all over the country and still believe Philadelphia attorneys earn their reputations as being among the most skilled and clever adversaries. Philadelphia attorneys have the rare combination of intellect, courtroom skills, and street smarts, which makes them formidable adversaries. Because Philadelphia attorneys are often pitted against other Philadelphia attorneys in the courtroom, they learn from the best on a daily basis. Litigators in the City EUGENE D. MCGURK JR. ’ 7 8 , a partner at Raynes SHARON L. CAFFREY ’87 EUGENE D. MCGURK JR. ‘78 McCarty, has represented claimants in medical malpractice cases, products liability, and all forms of personal injury involving catastrophic injuries and deaths, from the trial courts to the highest courts of Pennsylvania and New Jersey, since joining the firm in 1981. An experienced lecturer, he has spoken at the University of Pennsylvania, Temple University School of Law, Thomas Jefferson University Medical School, the Medical College of Pennsylvania, and at numerous continuing legal education seminars. He is also an adjunct faculty member in Widener’s Health Law program. McGurk is Chair of the Board of Overseers of Widener Law, and he is a member of the Board of Trustees of Widener University. In addition, McGurk serves on the Philadelphia State Civil Procedures Committee of the Philadelphia Bar Association. He has also been a board member of the Center City Proprietors’ Association and at present serves as an advisory board member. What do you love about being in the courtroom? It is such a dynamic process that is all about relating to and engaging people. It demands your best and can be exhilarating. What is the most difficult part of trying a case for you? Getting everyone in the right place at the right time. Without a doubt scheduling witnesses is every lawyer’s biggest dilemma. What advice do you give beginning attorneys with regard to courtroom skills, demeanor, or tactics? Be yourself and be prepared. There are no substitutes for preparation. If you try to be something you’re not, or put a spin on things, the jury will see right through it. Are there any particular courtroom moments that stand out in your mind as career highlights or as something that you will just never forget? I especially enjoy trying cases with a colleague. You know the old adage, a load divided between two is more than halved in weight. As to moments I will never forget, I vividly remember a case in which there was some difficult economic analysis which the jury was asked to make. I told them in closing that they may need a calculator when they come to the question of damages. Needless to say, I was thrilled when a note was sent to the judge stating that they “were ready for that calculator.” What do you think it means these days to be a "Philadelphia attorney?" Pretty much what the dictionary says: “A knowledgeable lawyer who pays attention to detail.” I think that rings true for my fellow lawyers and judges in the Philadelphia Bar. JAMES D. GOLKOW ’86, a partner at Cozen O’Connor, where he serves as chair of the firm’s Workers’ Compensation Subrogation & Recovery Group, has tried numerous cases in both state and federal courts, including those in Delaware, Pennsylvania, New York, New Jersey, Virginia, Vermont, and Puerto Rico. His litigation experience includes prosecuting and defending cases involving complex product liability, particularly those involving medical devices and medical equipment, construction accidents, and premises liability, and he has been a frequent speaker and author on tort-related topics. What do you love about being in the courtroom? The spontaneity. No matter how much preparation is performed, there will be moments when the unexpected or unanticipated happens and a trial lawyer needs to react instantly in front of the judge and jury. It may be that a witness says something unexpected, or a new document surfaces. That challenge is what I enjoy the most. What, for you, is the most difficult part of trying a case? Jury selection. It is very difficult to determine if a potential juror will be sympathetic to your side of the case based upon some brief questions. In some jurisdictions the Court conducts the juror’s examination (voir dire), making it even more difficult. JA M E S D. G O L KOW ‘ 8 6 WIDENER LAW 35 Alumni Impact The Philadelphia Story: What advice do you give to beginning attorneys with regard to courtroom skills, demeanor, or tactics? Be yourself, first and foremost. Many young attorneys try to copy another lawyer’s style, demeanor, etc., which usually doesn’t work. The most effective attorneys act in a manner most natural to them, which usually comes across with sincerity. Are there any particular courtroom moments that stand out in your mind as career highlights or as something that you will just never forget? I was trying a high profile case in Puerto Rico. I got my adversary’s expert on crossexamination to admit his theory “didn’t fit,” but lost the case anyway. A valuable lesson. It taught me that juries look at the big picture—not just an isolated courtroom battle. What do you think it means these days to be a “Philadelphia attorney?” Tough, aggressive, but at all times, scrupulous and trustworthy. BERNARD W. SMALLEY ’80 concentrates his practice in the areas of medical negligence, pharmaceutical liability, defamation, class actions, products liability, and other personal injury matters from the plaintiff’s side. A shareholder at Anapol Schwartz, Smalley’s extensive trial experience has earned him a place as a fellow in both the International Academy of Trial Lawyers and the American College of Trial Lawyers. He sits on the Board of Governors of the Association of Trial Lawyers of America and has served as president of the Philadelphia Trial Lawyers Association. What do you love about being in the courtroom? The absolute electricity that comes with cross-examination of the target defendant. It brings together the knowledge that you’ve acquired based upon your preparation as well as the agility of mind that is required when the answers to your questions come back not quite as you anticipated. What, for you, is the most difficult part of trying a case? Before the trial, it is the arduous and painstaking preparation to try the case by making sure that you have left no stone unturned. Afterwards, it is waiting for the verdict. What advice do you give to beginning attorneys with regard to courtroom skills, demeanor, or tactics? Observe the good, the bad, and the “downright” ugly from everyone but develop your own style, one that you are comfortable with. Your style, however, must include both inwardly and outwardly the fact that you are in control. Are there any particular courtroom moments that stand out in your mind as career highlights or as something that you will just never forget? The first case that I tried to verdict as a new associate in my current firm was one in which I was up against one of my mentors. After a two-week trial, I got a chance to respond on rebuttal to my mentor’s closing argument that his medical expert simply could not have misrepresented the truth given his extensive training, experience, and his eighty-five page resume; I reminded the jury on rebuttal that our former President, Richard M. Nixon, probably had a resume just as extensive, and we all know what he did. It was a spontaneous remark, but it helped to carry the day. The jury returned a substantial verdict for my client. Litigators in the City B E R N A R D W. S M A L L E Y ‘ 8 0 What do you think it means these days to be a “Philadelphia attorney?” A while back, I had the privilege of meeting and being asked a series of questions by acclaimed actor Denzel Washington in preparation for his role as the attorney who represented Tom Hanks in the movie Philadelphia. Mr. Washington said it best: he wanted to make sure he acted in the finest tradition of a Philadelphia attorney, one who effectively and with passion and zeal, represents those whose rights would be snuffed out by the powerful for their own gain. LARRY BENDESKY ’87 L A R RY B E N D E S K Y ‘ 8 7 is a shareholder at Saltz, Mongeluzzi, Barrett & Bendesky. Bendesky represents plaintiffs in product liability cases, including those involving the operation of motor vehicles, elevators, power tools, and industrial and manufacturing equipment, and he also handles claims involving catastrophic construction accidents. He has served as lead counsel or co-counsel in numerous complex cases involving verdicts or settlements exceeding $1 million. Bendesky serves on the Board of Governors of the Pennsylvania Trial Lawyers Association and is a frequent lecturer on the topics of product liability and construction litigation. What do you love about being in the courtroom? So much of life involves uncertainty and shades of grey. Trying a case is one of the few areas where this is a winner and a loser. It is all or nothing. Everything is magnified. It is fun to be on your feet, questioning the witness, giving an opening or giving a closing, knowing that a decision will be made, one way or the other, with certainty. What is the most difficult part of trying a case? Preparing for it.The vast majority of cases settle. Because you don’t know which cases will settle or will try, it is necessary to prepare every case as if you are going to trial. After you have completed discovery, turned over expert reports and (in some jurisdictions) completed expert depositions, you must prepare the case for trial. It is a tedious, timeconsuming process. You have to painstakingly review the file, mark the portions of the depositions that you want read in, prepare and respond to motions in limine, notice and schedule witnesses, and prepare a proposed jury charge and voir dire questions. It is painstaking, but must be done if you are going to be an effective advocate in the courtroom. What advice do you give to beginning attorneys with regard to courtroom skills, demeanor, or tactics? We try to tell all of our young attorneys to soak up as much information as they can from all available sources. We encourage our attorneys to come to court with us, even if they are not involved in the case, to watch trials. We also pay for any continuing education course on evidence or courtroom skills that they would like to attend. I am particularly proud of the fact that eight of our attorneys have received an LLM in Trial Advocacy. With all of the training that we encourage our young lawyers to obtain, we emphasize that they must be themselves in and out of the courtroom. Jurors recognize an act. We also encourage our lawyers to be courteous and respectful to the court, the court staff, and opposing counsel. Trial lawyers have, I believe, a poor and unfair reputation in the public at large. Poor public perception of trial lawyers should not be compounded by discourtesy to those in the courtroom. Are there any particular courtroom moments that stand out in your mind as career highlights or as something that you wil just never forget? The most exciting moment in the courtroom is when a jury comes back and you are waiting for the verdict. The career highlight of mine is receiving an $8.3 million verdict for a client who was rendered deaf when a utility pole struck him on the head. What do you think it means these days to be a “Philadelphia attorney?” The term “Philadelphia attorney” means someone who is sharp, savvy, industrious, and hard-working. To be a “Philadelphia lawyer” means knowing your way around the courtroom and how to get things done in a practical and efficient manner. WIDENER LAW 37 Success Stories the job to concentrate on his law studies full time and expects to graduate in May. Sunday had always had an interest in the UN after witnessing its peacekeeping missions during his time in the Navy. While researching summer work options, he went online and learned about the UN program. His acceptance came after Sunday had already secured other summer work. He turned to Dean of Students Elizabeth G. Simcox for advice. “I told him that I did not ever want him to look back and wish that he had taken this opportunity if he had turned it down,” Simcox recalled. David Sunday with Elizabeth Simcox at the United Nations in New York City. International Intrigue Intern acts as Widener ambassador in the peacekeeping world. Harrisburg third-year law student David Sunday knows how to follow a dream. The 31-year-old spent the summer of 2006 interning at the United Nations in New York City, under an ultra-competitive program where he was one of only 10 American law students selected to participate. Sunday was one of 3,000 applicants from around the world. A total of 170 interns from 70 countries took part in the 10-week program. “The entire summer I was an ambassador for Widener,” Sunday said, explaining he told his colleagues about the school and its programs and always strived to do his best, so it would reflect positively on the school. Sunday was assigned to a very high-profile department: peacekeeping operations. He performed legal research, wrote memos, and did some finance work. His main project was drafting a manual on the handling of third-party claims filed against the UN as a result of peacekeeping missions. The unpaid summer experience took an unexpected turn in the final weeks, when Israeli forces attacked Lebanon, and Sunday was tapped—because of prior military experience—to help staff a New York-based UN crisis center, which was the nerve center for peacekeeping operations. The Harrisburg native enlisted in the Navy after high school, where he spent six years traveling the world, working in counter-narcotics in the Caribbean and South America and enforcing UN sanctions in the Persian Gulf. He earned a degree in finance from Penn State University after leaving the Navy and then became a finance analyst for UPS. He left The two stayed in close touch over the summer, and Simcox visited him in New York. “The work he did, friends he met, interactions he made were incredible for him. He volunteered for special assignments as often as possible, and his enthusiasm was boundless. On the site visit I made, his supervisors and co-workers all spoke highly of him,” she says. “In my mind, this was a pivotal experience for David. He went into it with little preparation, made the arrangements, traveled a distance every day to get to work, learned the city, participated in substantive projects in his department, and learned how to deal with a very diverse group of people from all over the world. I can safely say this was something he will remember for a lifetime.” Sunday said he enjoyed getting a global perspective on things, which he missed from his time in the military. “I think it’s important for people to see there are a lot of opportunities out there. You have to be creative and build on your own personal experiences that are unique to your background,” he says. Mission Accomplished Santino Ceccotti gets by with hard work— and a little help from his friends. If you congratulate Santino Ceccotti about his December 2006 law school graduation, he will modestly deflect praise for his accomplishments, and instead, talk about his gratitude to the law school community for assisting him in achieving his goal of a law school education. “It’s been very easy here with all the assistance the university has provided,” he says. “I wouldn’t have been able to do this without the law school being so willing and accommodating.” The Widener Law graduate, although confined to a wheelchair, has an enthusiasm for his law school experience that transcends physical barriers. After earning a degree in finance from the University of Delaware, Ceccotti enrolled as an evening division student and fully embraced life at Widener Law, participating in moot court programs, in the bankruptcy clinic, in the probono partnership program, and in the environmental law clinic and serving in leadership roles with the Moot Court Honor Society, the Business Law Society, and the Association of Latin American and Hispanic Students. He recently commented, though, that a judicial externship with Vice Chancellor Donald F. Parsons Jr. at the Delaware Court of Chancery “has been the highlight of my law school career.” Ceccotti worked on judicial opinions in the areas of corporations, estates, and guardianships over the course of a year. “It’s been very easy here with all the assistance the university has provided,” he says. “I wouldn’t have been able to do this without the law school being so willing and accommodating.” Ceccotti emphasizes that the welcoming and accommodating nature of the faculty, administration, staff, and students helped him to achieve his goals. Unable to take notes himself, Ceccotti recorded his law school classes and also utilized a fellow student “note-taker.” His mother, Liliana, drove Ceccotti to campus for all of his classes and has been a familiar face at the law school. She says that her son’s time at Widener Law “has been a wonderful experience because I can see the making of ‘my esquire.’ We’ve always been proud of him; he always accomplished everything he set out to do.” Associate Dean for Student Affairs Susan Goldberg also has great regard for Ceccotti’s determination and accomplishments, saying, “Santino has been a delight to work with during his time at Widener. I really enjoyed getting to know him. Despite his physical limitations, he has excelled in his classes and has taken advantage of many learning opportunities available at the law school. He is bright, capable, articulate, and dedicated and enthusiastic about embarking on his legal career.” Ceccotti is not resting on his laurels, though. As this publication goes to press, he is awaiting the results of the February Pennsylvania bar examination, with plans to take the Delaware bar exam in July. His job search has also begun. Ceccotti hopes to land a position in the corporate or bankruptcy fields and has embarked upon his quest for a legal job with the enthusiasm and determination typical of his Widener Law career. Santino and Liliana Ceccotti WIDENER LAW 39 Success Stories Third-year Harrisburg students gain courtroom experience on both sides of the aisle. ATale of M I C H A E L G I B S O N , 29, grew up in southern New Jersey, just outside of Ocean City, in Linwood. He is a 2000 graduate of Villanova University and hopes to graduate in May from Widener Law. He is an extern with the Dauphin County District Attorney’s office. Why did you choose Widener Harrisburg? I chose Widener Harrisburg for several reasons. First, because of its recognition in the South Jersey area—where I grew up and planned to practice—and because my brother is a graduate of the Delaware campus, and my father was an adjunct professor at the Delaware campus. Both my brother and father were able to give me first-hand feedback of their experiences in the Widener community, which made me feel extremely comfortable with the law school. Second, I chose Harrisburg because I thought it would provide me with some new experiences. I had already gone to undergraduate school and worked for several years in the Philadelphia area and thought Harrisburg would be a nice change. Describe your experience at the DA’s office. I have always been interested in criminal law but never seriously considered it as a field of practice. However, after my experience at the DA’s office, it is definitely something that I am considering. It has not only exposed me to the nuances of the life of an attorney in a criminal law office, but also allowed me to get some courtroom experience. At the DA’s offices I was involved in everything from case research to conducting detention hearings in both adult and juvenile court. Did any specific case open your eyes to an issue or injustice in the criminal justice system? I cannot recall a case where I felt that I witnessed any injustice, but I do recall one discouraging issue that seemed to underline all the cases: drugs. I was somewhat surprised and discouraged to see just how big the drug problem is in the criminal world. No matter what issue was being tried or what charge had been filed, it seemed almost every case involved drugs. My experiences in these cases really opened my eyes to the significance of this problem and how it leads to additional criminal activity. How will your extern experience help you as an attorney? One of the areas that I am really interested in is litigation, and my experience at the DA’s office provided me with several opportunities to participate in the courtroom and interact with the judges and other attorneys. By taking advantage of this opportunity, I now have foundation, my “sea legs” if you will, to enter a courtroom and have a general comfort level that other students may not have. Two Externs D A M I A N D E S T E F A N O , 26, hails from Easton, PA, and attended Northeastern University in Boston. He is externing at the Dauphin County Public Defender’s office and will graduate from Widener Law in May. Why did you choose Widener Harrisburg? I got into the Trial Admission Program and it was close to home. Describe your experience at the PD ’s office. I obtained my interest in criminal law by working through Widener’s Law Clinic. During the summer ‘06 semester, I was able to work on actual criminal files under an amazing staff attorney, Monica D. Cliatt ‘99. The clinic introduced me to criminal procedure. I would interview defendants and take them through the preliminary hearing, formal arraignment, and disposition stages. What are your career goals? My exact career goals are somewhat undefined at this point. However, I plan to return to the South Jersey area and begin my practice. It is most likely that I will enter into a civil litigation practice or get involved with some casino work, but I am always looking for new interests. Furthermore, since both my brother (a partner in a firm in Cape May County) and father (a retired Superior Court Judge, now in mediation and arbitration) are practicing in the area, it would be nice if one day the three of us could join forces. I have secured a clerkship following graduation with Judge Perskie in Atlantic City, which I am very excited about. Working alongside Judge Perskie will provide me with a tremendous learning experience and lay a strong foundation for my legal career. How has Widener played a part? Not only has the school taught me how to think and analyze situations like a lawyer, but it has also provided me with external opportunities, such as my externship and trial workshops, that have helped put my learning to the test. In addition, I feel that I matured during my years at Widener. Despite being one of the older first-year students when I began school, I still feel that my experience at Widener helped me grow as a person. This gave me valuable experience for the PD’s office; I was able to jump right in. On the first day of working at the PD’s office, I was asked to sit in on an interview with two attorneys from the PD’s office and a defendant for a firstdegree murder charge before the preliminary hearing held at Central Court at DCP. I attended preliminary hearings with supervising attorneys and was handed “simple assault” files. I was to interview the client and present the matter to the District Justice so that the charges could be dismissed instead of the client being bound over for trial. I was asked to lift capias and reinstate bails under unique circumstances. I drafted a lot of continuances and guilty plea colloquies. I was also able to sit on initial interviews for individuals that were applying for PD’s assistance. The approach to criminal practice within Dauphin County is hands on. I was able to meet judges and attorneys from all over the county. I was asked to wear a suit every day, and my office was basically the Dauphin County Courthouse. Lastly, I was able to attend Juvenile Detention Hearings. Did any specific cases affect you or open your eyes to an issue or injustice in the criminal justice system? All the cases affect me and they all opened my eyes. Every defendant has a story and every one has a version of the facts. We barter with people’s liberty. That is the nature of the criminal system, but it leaves no room to miss details. How will your extern experience help you when you become an attorney? I would not be a complete attorney without this externship. This is courtroom experience every day. I learned how to interact in a courtroom between fellow attorneys and judges. Plus, I learned where to file any kind of document at the courthouse. What are your career goals? I plan to stay in criminal practice for a while at the public defender’s office and get practical experience in the courtroom. The number of attorneys that have actual courtroom experience is low, and I already have that before graduating law school. Later, I hope to join a private criminal practice. WIDENER LAW 41 Success Stories Dual Threat For stand-out student Meghan Adams, success is par for the course. Whether she is on the golf course or in the classroom, third-year law student Meghan Adams strives for perfection. Adams won the Delaware Women’s Golf Association state amateur tournament in June 2006. She also was the tourney champ in ’03 and ’05, and came in second in 2004. An exemplary student, Adams is in the top 10 percent of her class. She serves as articles editor for the Delaware Journal of Corporate Law, and is active in the Student Bar Association. Additionally, Adams clerks for the Delaware Supreme Court’s Chief Justice Myron T. Steele through Widener Law’s judicial externship program. From the age of 11, when she took lessons offered to members of her swim team, Adams has been hooked on golf. Her parents saw her enthusiasm for the sport and bought her first set of clubs that Christmas. By the time she was 13, she was involved in a junior golf program through Hartefeld National Golf Club in Avondale, PA, and was participating in tournaments. At Dover High School, she played on the men’s team and shot from the men’s tees. out with my male law school classmates who think they can beat me!” According to Adams, “Dover High was a golf powerhouse,” and the competition with her male teammates improved her game. During her senior year, she finished ninth in the boys’ state tournament, playing from the men’s tees. After two years at James Madison University, sports scholarships were cut and Adams transferred to the University of North Carolina. There, she was captain of the golf team, leading the squad to qualify for the NCAA nationals— its best finish in five years. Adams’ success this year in golf and in law school is bittersweet, however. Her father, Michael, her inspiration and biggest cheerleader, died suddenly of heart disease in September. Not only was he a golfer who attended her tournaments and shuttled her to golf activities as a youngster, but according to Adams, “My dad always wanted to go to law school and didn’t have the opportunity. He encouraged me to go. I didn’t know if I would really like it, but I love it.” Despite her talent for the sport, Adams made the decision not to turn pro. Instead, she golfs for pleasure and works with a golf teacher who is based in Florida. “I just want to go out there and have fun,” she explains. “I really like to go The week before he died, Adams shared with her father the news that she had been offered a position after graduation with the Wilmington firm of Chimicles & Tikellis, where she will focus on corporate law from the plaintiff’s side. In Search of . . . Widener University School of Law Alumni Attention Alumni: We want your Class Notes! IN AN EFFORT TO BRING TOGETHER Name Class Notes invites alumni to write to the Development/Alumni Office with news of interest. If your name has not appeared recently in Class Notes, take a moment to share some news about yourself for an upcoming issue. If you wish, include a photograph with your information (digital 300 dpi or hard copy). ALUMNI FROM AROUND THE GLOBE, Widener University School of Law is proud to announce the publication of an all-new Alumni Directory. Class Year Scheduled for release in late 2007, our Alumni Directory will be an up-to-date and complete reference of more than 10,800 Widener University School of Law grads. This comprehensive volume will include current name, previous name as a student (if different), as well as class year. Each biographical listing will also include home address, phone number, names of spouse and children, plus detailed professional information. Home Address The new 2007–2008 edition will list alumni alphabetically, by class year, by geographic location, and by occupation in our special “career networking” section. The Alumni Office has chosen Harris Connect to produce this special edition. Harris Connect will begin researching and compiling data for inclusion in the directory by mailing a questionnaire to each alumnus/a soon. Please be sure to complete the questions and return the form immediately. If we don’t have your current address on file, please contact the Alumni Office at 302-477-2172 as soon as possible, so we can make sure you receive a directory questionnaire. City / State / Zip Telephone (Home) E-mail (Home) Business Address City / State / Zip Telephone (Business) E-mail (Business) Your news: With your participation, the 2007-2008 edition of the Widener University School of Law Alumni Directory is sure to be a great success! Visit the Widener University School of Law Web site at. Send your Class Note to: Alumni Office, Widener University School of Law P.O. Box 7474, Wilmington, DE 19803-0474 43 Congratulations! Widener University School of Law and Dean Linda L. Ammons congratulate these Widener Law alumni who have recently passed state bar examinations.* CALIFORNIA Michael Brandon Smith D E L AWA R E Theodore W. Annos Katie W. Arrington Sara E. Auerbach Gary D. Berg Allyson M. Britton Justin P. Callaway Kevin M. Carroll Jimmy C. Chong Sandra F. Clark Matthew P. D’Emilio Timothy W. Davenport David W. DeBruin John J. Ellis Keith J. Feigenbaum Samuel C. Fiechter Erin K. Fitzgerald Matthew B. Frawley Kristi N. Frazer Michael B. Galbraith Vicki L. Goodman Robin M. Grogan Rochelle L. Gumapac Tara E. Hafer Peter K. Janczyk Leonard Kingsley Nicholas M. Krayer Vince R. Melone Marcus E. Montejo Jennifer A. Murphy Penelope B. O’Connell Andrea C. Panico Mona A. Parikh Kristen W. Poff Pamela D. Politis Ciro C. Poppiti Chandra J. Rudloff Michael G. Rushe Heather A. Schwenzer Raymond N. Scott Anne K. Seelaus Chakaravarthi R. Srivatsan William R. Stewart III Jennifer L. Story Dana L. Vinograd Raeann C. Warner Matthew M. Warren Steven G. Weiler Rachelle R. Wells MARYLAND *This list reflects only those alumni for whom the law school has received notification of bar passage. 44 WIDENER LAW Elizabeth Eremita Kathleen Mari Feely Bryan Stephen Flood Adam Paul Frank Thomas J. Harrison Bret Keisling Frank Joseph Mazurek III Vivek Sawhney Andrew Schwartz Benjamin Chapman Stevens Frank D. Thompson II Shannon Marie Weaver NEW JERSEY Michael R. Abbott Erik R. Anderson Richard C. Andrien Chandra M. Arkema Sara E. Auerbach James Anthony Augustine Mitchell R. Ayes Rahat N. Babar Sandra L. Battista Zlata Berman Justin M. Bieber Seth T. Black Robert Bondar Rita J. Bonner Kevin T. Bright Beth A. Brockson Megan E. Brown Lauren E. Bucksner Andrea J. Bullock Michael L. Burns Jill A. Cantor Kevin M. Carroll Michael S. Chuven Lisa B. Cohen Michael S. Cohen Kyle F. Colin Brian P. Corcoran Jennifer E. Cranston Anna M. Darpino Daphne A. Demourtzidis Christine A. DePetris Kathleen B. Duffy Mazin I. Elias John J. Ellis Christina A. Eunson Keith J. Feigenbaum Matthew M. Fisher Erin K. Fitzgerald Corinne M. Foley Samuel E. Friedman Gretchen E. Fry Melissa A. Fry Joseph Galea Nicola F. Gammon Jacquelyn S. Goffney Kevin M. Gogots David E. Goldberg Justin L. Groen Andrea E. Hammel Matthew G. Hauber Christian G. Heesters Andrew J. Hennessy David J. Jablonski Richard Jahn Joshua A. Janis Melissa D. Karabulut Christina M. Keating Michele L. Kluk Justin L. Krik Daniel M. Kurkowski Ian F. Landman Brandon J. Lauria Richard Lee Melissa A. Lentz Anthony J. Leonard Michael D. Leva Joshua A. Levin Jeffrey R. Lindsay Evan Y. Liu John R. Logan Anne M. Lombardo Edward M. Louka Dan A. Lovin Megan L. Malavolta Nicholas W. Mattiacci Mary McClellan Alyson J. McDonald Kevin G. McDonald Brian P. McEntee Jeffrey H. McGovern Daniel B. McMeen Ryan F. Michaleski Jarrod M. Miller Suzanne D. Montgomery Justin S. Moriconi Michael P. Murphy John Mylan Shawn C. Newman Marybeth O’Connor Christopher D. Olszyk Luciano N. Patruno Andrew M. Peoples Colette M. Perri Arthur W. Petersen Bryan M. Remington Sarah K. Resch Milena Rodionov Trisha L. Romano Summer Rose-Rich Matthew A. Ross Lawrence B. Rowe Terence P. Ruf Michael G. Rushe Mark A. Rushnak Joseph J. Russo Matthew I. Sack Mathew L. Sampson Neil Sarker Geoffrey F. Sasso Dawn M. Schwartz Justin J. Serianni Steven Shakhnevich James R. Shamy Catharine E. Sibel Michael J. Sileski Holly E. Smith Kristen K. Stoker Jennifer L. Story Franklin R. Strokoff Joshua E. Tebay Christopher J. Tellner Marguerite L. Thomas Jennifer J. Thompson Marc R. Tilney Brett N. Tishler Matthew T. Tranter Robert A. Turco James Turner Jeremiah J. Underhill Amy A. Underwood Lauren A. VanEmbden Nicholas J. Wachinski Vanessa L. Walters Raeann C. Warner Patrick J. Wesner Laura R. Westfall Lisa A. Wood N E W YO R K Travis Arrindell Mitchell Ayes Tanya Bridges Ryan Briskin Michael Chuven Sean Newell John Eric Olsson Corey Adam Ruggiero Mark Rushnak Carolina Salvia Steven Shakhnevich P E N N S Y LV A N I A Michael R. Abbott Robert J. Albanese Laurie A. Anderson Theodore W. Annos Chandra M. Arkema Rahat N. Babar Amy L. Bennecoff Zlata Berman Justin M. Bieber Thomas D. Bielli Seth N. Boer John M. Bollinger Patrick J. Bradley Sarah S. Brase-Davis Kevin T. Bright Beth A. Brockson Eric M. Brown Megan E. Brown Lauren E. Bucksner Zeljka U. Budisavljevic Andrea J. Bullock Michael L. Burns Jill Alison Cantor Marie C. Cespuglio Ayesha S. Chacko Jimmy C. Chong Douglas S. Cinoman Michael J. Clark Sandra F. Clark Brian P. Cleghorn Christopher A. Clouse Lisa B. Cohen Kyle F. Colin Patrick J. Collins Edwin J. Colon Brian P. Corcoran Jennifer E. Cranston Stewart C. Crawford Bryan D. Cutler Anna M. Darpino Daphne Ahtaridis Demourtzidis Leanne M. Deptula David C. DiDonna Carmen C. DiMario Tina M. DiNicola Daniel M. Dixon Stuart B. Doctorovitz Erin F. Downing Linda B. Drejza Kathleen B. Duffy Miles P. Dumack Leonid Eberman Christina A. Eunson Islanda L. Finamore Matthew M. Fisher Robert C. Fisher III Corinne Michele Foley Kristi A. Fredericks Samuel E. Friedman Melissa A. Fry Nicola F. Gammon Jacquelyn S. Goffney David E. Goldberg Michael B. Goldberg Justin L. Groen Tara E. Hafer Andrea E. Hammel Matthew G. Hauber Roger C. Hay Melissa L. Heckman Melissa M. Heesters Katherine L. Heintzelman Andrew J. Hennessy Justin Highlands Sara M. Hudock Christopher A. Iacono Sasha C. IntriagoRodrigues David J. Jablonski Richard Jahn Miranda G. James Joshua A. Janis Ingrid Jean-Baptiste Amanda Joachim Christopher R. Johnson Jennifer A. Juchniewicz Melissa D. Karabulut Richard A. Kates Christina M. Keating Matthew B. Keener Lucas A. Kelleher K. Scott Kennedy Scott H. Kerr Lisa M. Klein Justin L. Krik Christin C. Kubacke Susan K. Kubinsky Andrew C. Laird Jeanna L. Lam Ian F. Landman Mary Ruth Lasota Elizabeth G. Latorre Brandon J. Lauria Mrs. Julie A. Lavan Melissa A. Lentz Anthony J. Leonard Joshua A. Levin Evan Y. Liu Edward M. Louka Dan A. Lovin Megan L. Malavolta Kathleen A. Maloles Jason I. Manus Jason P. Marmon Andre Martino Nicholas W. Mattiacci Frank T. McCabe Mary McClellan Kevin G. McDonald Lindsay A. McDonald Brian P. McEntee Jeffrey H. McGovern Daniel B. McMeen Joshua T. McNamara William A. McNeal Stephen M. McVey Kimberly A. Meany Elizabeth A. Meredith Ryan F. Michaleski Antonio D. Michetti Shannon P. Miller Jarrod M. Miller Christa M. Miller Steven R. Mills Suzanne D. Montgomery Thomas W. Moore Justin S. Moriconi Andrew R. Morris Karen P. Muroski Mark J. Mustin David A. Ney Margaret D. Nikolis Marybeth O’Connor Christopher D. Olszyk Michael P. Opacki Ryan A. Palmer Luciano N. Patruno Lisa M. Pectol Gustine J. Pelagatti Andrew M. Peoples Colette M. Perri Arthur W. Petersen Joseph W. Petka Christine M. Pierangeli Mark T. Pilon Pamela D. Politis Tracey L. Potere Kimberly V. Pranton Brett P. Preston Kenneth R. Pyle Nina G. Qureshi Krista M. Reale Bryan M. Remington Matthew J. Rifino John M. Roberts Milena Ilevska Rodionov Andrew S. Rosenbloom Mrs. Summer Rose-Rich Lawrence B. Rowe Chandra J. Rudloff Terence P. Ruf Joseph J. Russo John E. Sabo Matthew I. Sack Mathew L. Sampson Neiladree Sarker Geoffrey F. Sasso Shannon T. Schlott Heather A. Schwenzer Michael J. Sechrist Korab R. Sejdiu Justin J. Serianni Jacqueline J. Shafer Catharine E. Sibel Michael J. Sileski Sarah B. Silver Jesse S. Silverman Holly E. Smith Virginia J. Speicher Aimee Spelman Casey O. Srogoncik Amir Stark Angela M. Stehle Charles W. Stinson Kristen K. Stoker Franklin R. Strokoff Melissa A. Szydlik Christopher M. Tallarico Charlene A. Taylor Damon T. Taylor Joshua E. Tebay Christopher J. Tellner Marguerite L. Thomas Toby C. Thomas Brett N. Tishler Matthew T. Tranter Elizabeth A. Tucker Robert A. Turco Amy A. Underwood J’aime L. Walker Rachelle R. Wells Patrick Joseph Wesner Suzanne West Laura R. Westfall Pamela Whitney Charles T. Williams Kristen N. Winsko Lisa A. Wood James D. Wood Tawnya M. Yetter WIDENER LAW 45 Events OUTSTANDING ALUMNUS OF THE YEAR AWARD The Outstanding Alumnus of the Year Award was given on December 11 on the Wilmington campus to Brian J. Preski ’92 of Philadelphia. The award is presented to an alumnus or alumna who, through service to his or her community or profession, or other accomplishments, has brought honor, recognition, and distinction to the Widener University School of Law. Preski, a graduate of the Wilmington campus, is the former chief of staff to the Speaker of the Pennsylvania House of Representatives. In that role, he served as the key advisor to members of the Pennsylvania House leadership team and helped to develop and author legislation aimed at improving the quality of life and well-being of Pennsylvania residents. Preski recently joined the Philadelphia law firm of Wolf, Block, Schorr and Solis-Cohen, LLP. NEW LAWYERS JOIN THE PENNSYLVANIA BAR On Nov. 13, 2006, 14 Widener Law graduates were admitted to the practice of law at a ceremony that was held in Harrisburg at the Dauphin County Court House (below). Pennsylvania Supreme Court Justice Thomas G. Saylor presided with The Honorable Richard A. Lewis, President Judge, Dauphin County Court of Common Pleas. In addition, a ceremony was held on Nov. 14 at Philadelphia City Hall. Twenty-eight new lawyers were admitted during this event. Pennsylvania Supreme Court Justice Sandra Schultz Newman presided over the ceremony with The Honorable Robert S. Blasi ‘75, Supervising Judge, Philadelphia County, Municipal Court, Civil Division, The Honorable C. Darnell Jones II, President Judge, Philadelphia County, Court of Common Pleas, The Honorable Charles P. Mirarchi Jr., Administrative Judge Emeritus, Philadelphia County, Court of Common Pleas, Trial Division, and The Honorable Margaret T. Murphy ‘77, Supervising Judge, Philadelphia County, Court of Common Pleas, Domestic Relations. 46 WIDENER LAW The Widener-unique events allow the Law School graduates to be admitted to the bar alongside their friends and classmates. A reception that welcomed the family and friends to celebrate with the graduates followed each ceremony. 2006 ALUMNI AWARD WINNERS Seated from left, William J. Higgins Jr. ’99 (Outstanding Recent Alumni Award), Yvonne Takvorian Saville ’95 (Outstanding Service Award). Standing from left, Law Dean Linda L. Ammons, Robert J. Sander ’98 (Outstanding Recent Alumni Award), Brian J. Preski ’92 (Alumnus of the Year Award), Scott E. Blissman ’97 (Outstanding Service Award), Steven P. Barsamian ’75, president of the Widener Law Alumni Association. JUDGES’ RECEPTION The Honorable Paul Panepinto ’76 hosted a reception in Philadelphia City Hall on Dec. 18 for Judges of the First Judicial District to welcome Dean Linda Ammons to the region. Shown, left to right, are Hon. James J. Fitzgerald III, A.J., Hon. Paul P. Panepinto ’76, Dean Linda Ammons, Hon. George Overton ’86, and Hon. C. Darnell Jones, P.J. HARRISBURG CLASS OF ’96 REUNION Alumni from Harrisburg Campus Class of ’96 and guests reconnected at Scott’s Grille in downtown Harrisburg on Nov. 4. Pictured are: Front row, left to right: Jack Marino ’96, Stephanie Hoover ’96; Back row, left to right: Dan Clough ’96, Emily Clough, Paul Zimmerman ’96, John Coyle ’96, Amy Wolfberg, Doug Wolfberg ’96, Carrie Carroll ’96, John Zimmerman ’96, Robin Hensinger Grenoble ’96, Angie Ioannou ’96, Julie Coyle ’96, Cheryl Brown ’96, and Mike Wanagiris ’96. Also attending but not shown were Erin Hennessey ’96, Caryn Green ’96, Chris Preate ’96, Jim Carroll ’95, and Scott Grenoble ’94. WASHINGTON, D.C. AALS RECEPTION On Jan. 3, Dean Linda Ammons greeted Washington, D.C. area alumni and students, as well as faculty members, at a special reception held in conjunction with the annual meeting of the Association of American Law Schools. WIDENER LAW 47 Events WIDENER LAW HOSTS DISTINGUISHED LECTURE IN HEALTH LAW More than 100 people heard New Jersey attorney George W. Conk, Esq., deliver the second annual Raynes McCarty Distinguished Lecture in Health Law on Widener’s Delaware campus and at the Union League in Philadelphia. The lecture, delivered at both of the locations on Oct. 11, was titled “Will the post 9/11 world be a post-tort world?” Conk is managing partner at Tulipan and Conk, PC, in South Orange, NJ, and is an adjunct faculty member at Fordham Law School. At the McCarty lecture, seated from left, Regina M. Foley ‘92, David F. Binder, and Martina W. McLaughlin, all of Raynes McCarty. Standing from left, Eugene D. McGurk Jr. ’78, chairman of the Widener Law Board of Overseers and an attorney with Raynes McCarty; Martin K. Brigham of Raynes McCarty; George W. Conk, of Tulipan and Conk, PC, the attorney who delivered the 2006 Raynes McCarty Distinguished Lecture in Health Law; Gerald A. McHugh Jr. of Raynes McCarty; Widener Law Dean Linda L. Ammons; Dr. Andrew Newman, associate director of Widener’s Health Law Institute; and Raynes McCarty attorneys Timothy R. Lawn ’89, Stephen E. Raynes, Dr. Daniel M. Finelli, Lois DeAntonio, and Daniel Bencivenga. The event was made possible through the generosity of the Raynes McCarty law firm, based in Philadelphia. Raynes McCarty attorneys represent the catastrophically injured. It is one of the country’s most philanthropic and civicminded firms. DEAN’S WELCOME RECEPTION, WILMINGTON Well-wishers, including, front row from left, Mrs. Mary Wagner (Dean Ammons’ mother), Widener University President James T. Harris III, Delaware Supreme Court Chief Justice Myron T. Steele, New Castle County Executive Chris Coons, Wilmington Mayor James M. Baker, Dean of Ohio State University-Moritz College of Law Nancy H. Rogers, and Widener University Provost Jo Allen, congratulated Dean Linda L. Ammons at a welcome reception on the Wilmington campus on Sept. 26. 48 WIDENER LAW Dean Linda L. Ammons shows off a key to the city of Wilmington given to her by Mayor James M. Baker at the Dean’s Welcome Reception, Wilmington. IOWA PROFESSOR DELIVERS WIDENER LAW’S 2006 FRANCIS G. PILEGGI DISTINGUISHED LECTURE IN LAW Hillary A. Sale, the F. Arnold Daum professor of corporate finance and law at the University of Iowa College of Law, delivered the 2006 annual Francis G. Pileggi Distinguished Lecture in Law to a packed du Barry Room at the Hotel du Pont in Wilmington on Oct. 20. Sale’s presentation “Caremark: A Tale of Two Fiduciaries” came on the 10th anniversary of the Court of Chancery decision by retired Chancellor William T. Allen titled In re Caremark International Inc. Derivative Litigation. The famous decision dramatically focused attention on directors’ roles in implementing corporate compliance programs. Sale said she teaches the opinion every year in her classes and she reached out to students, too, during her trip to Widener. After giving her lecture to an audience of more than 100 members of the legal community in downtown Wilmington, including four members of the Delaware Supreme Court, she traveled to the School of Law campus and addressed about 100 students. The event was made possible by the generosity of Francis G. Pileggi, a founding attorney of Pileggi & Pileggi and father of Widener Law alumnus Francis G.X. Pileggi ’86, who conceived of the idea to create a corporate law forum for practitioners, judges, and academics. Sale’s lecture was presented by the law school and the Delaware Journal of Corporate Law, the school’s prestigious law review. The lecture series has attracted many renowned speakers in the area of corporate law since the first Pileggi lecture in 1986. At the Pileggi lecture: Delaware Chief Justice Myron T. Steele; Hillary A. Sale, the F. Arnold Daum professor of corporate finance and law at University of Iowa College of Law; Francis G. Pileggi, whose generosity made the lecture possible; and Widener Law Dean Linda L. Ammons. DEAN’S WELCOME RECEPTION, HARRISBURG Law Dean Linda L. Ammons mingles with students at the welcome ceremony in her honor on Sept. 20. From left: Vaneskha Hyacinthe, Melissa Vega, Dean Linda L. Ammons, Elizabeth Schwartz, and Prince Holloway. WIDENER LAW 49 Class Notes 1975 The Honorable Howard Sherman, in December 2006, was sworn in as a Justice of the New York State Supreme Court, Bronx County. 1976 The Honorable Paul P. Panepinto is a candidate for the GOP nomination as Justice of the State of Pennsylvania Supreme Court. He has been a member of the Court of Common Pleas of Philadelphia since 1990. Charles W. Proctor III recently received the designation of Certified Land Title Professional from the Pennsylvania Land Title Association. Only 49 individuals in Pennsylvania have received this prestigious award. He practices law in Broomall, Delaware County, and is the owner of Industrial Valley Abstract Company. 1977 1979 Roy Alan Cohen is a senior litigation principal of Porzio, Bromberg & Newman, PC, in Morristown, NJ, and has been named chair of the Toxic and Hazardous Substances Litigation Committee of the International Association of Defense Counsel. 1980 Greg Jacobs has retired from his position as Senior Chief Intelligence Specialist, United States Navy Reserve, Reserve Intelligence Area Six, Naval Air Station— Joint Reserve Base, Fort Worth, TX. James M. Matour of Hangley Aronchick Segal & Pudlin has been selected for inclusion in the 2007 edition of Best Lawyers in America. 1981 Coming Soon! Looking for a better way to connect and reconnect with fellow alumni from Widener Law? Widener University School of Law will soon introduce an online community. Here are some of the exciting, interactive features you’ll be able to enjoy: ■ Find classmates and colleagues in the online alumni directory ■ Update your personal information ■ Check out upcoming events and register online ■ Post class notes and photos or read about news from other Widener Law graduates ■ Make a credit card gift or a pledge to the Widener Law Fund Look for more information to follow in the mail and online. 50 WIDENER LAW Robert A. Honecker received the Outstanding Career Advocacy Award on September 15, 2006, from the County Prosecutors Association of New Jersey at the annual State of New Jersey County Prosecutors Association meeting, held in Atlantic City. The award is given to a career assistant prosecutor who has demonstrated leadership that has inspired his colleagues to become better prosecutors, conveyed professionalism and integrity throughout his career, and made a difference in the lives of victims. Honecker, who joined the Monmouth County Prosecutor’s Office in 1981, is the first assistant prosecutor from Monmouth County to receive this award. Honecker has served for over 25 years with the Monmouth County Prosecutor’s Office, having held the positions of Director of the Child Abuse Unit, Director of the Environmental Crimes Unit, Second Assistant Prosecutor, First Assistant Prosecutor, and Acting Prosecutor for Monmouth County. One of eight assistant district attorneys and prosecutors in the United States who sit on the National District Attorneys Association’s Board of Directors, Honecker is a certified criminal trial attorney in the state of New Jersey and resides in Shrewsbury with his wife and three children. David Moneymaker reports that he is alive and well despite a report in the Fall 2006 issue of the Widener Law magazine to the contrary. Commenting on the error, Moneymaker noted, “As strange as it may sound, a lot of good has come from that mistake. I was able to catch up with individuals with whom I haven’t spoken in a very long time.” 1982 Kevin F. Brady, partner in the corporate and commercial litigation group of the Wilmington office of Connolly Bove Lodge & Hutz, received the Andrew D. Christie Pro Bono Publico Award. 1983 Joseph W. Oxley was sworn in as president of the American Jail Association in May 2006 and received the Sheriff of the Year Award from the National Sheriffs Association in June 2006. Mary E. Sherlock of Mary E. Sherlock, PA in Dover, DE, is the new vice president/president of the Community Legal Aid Society Inc. in Dover. 1985 Kevin J. Barnes married Nadine E. Rotondo, who holds a bachelor’s and a master’s degree in social work from Widener University, on June 2, 2006, in Ocean City, NJ. They are expecting their first child in April 2007. The family resides in Ocean City, NJ, where Kevin owns a law practice, The Law Offices of Kevin J. Barnes, LLC. 1989 Scott E. Diamond has joined Stark & Stark as a shareholder and practices from the Princeton and Marlton, NJ, offices. 1987 Jill Fisher has joined the Philadelphia-based law firm of Zarwin Baum DeVito Kaplan Schaer Toddy PC. She will head the firm’s Employment Department. Prior to joining Zarwin Baum DeVito Kaplan Schaer Toddy PC, Fisher headed her own practice specializing in the full spectrum of employment law and human resources management. She has drafted numerous employment handbooks and personnel policies, conducted custom inhouse seminars, and developed training programs for managers and supervisors. A well-known lecturer in her field, her speaking engagements have included the World Affairs Council, the Council on Education in Management and Lorman Education Services. Fisher is a member of the Employment Law and Human Resource sections of the Philadelphia Bar Association and is a member of the Society for Human Resource Management. Alexander Bowie II has joined the Commercial Litigation Practice of Day, Berry & Howard in their New York City office. Derek R. Layser was recently named a Pennsylvania “Super Lawyer” by the publishers of Law & Politics and Philadelphia magazine for plaintiff’s personal injurymedical malpractice. A “Super Lawyer” designation represents the top five percent of practicing attorneys in Pennsylvania, and selection is by an extensive peer nomination and polling process. This is the third consecutive year Layser has received this honor. Layser is a founding shareholder of Layser & Freiwald, PC, with offices in Philadelphia, PA, and Westmont, NJ, and has an active trial practice in both states. Donald L. Logan is the principal at the newly formed Logan & Associates, LLC, in Wilmington, DE. Joseph J. McGovern announced the release of his first book The Kyoto Protocol. He has also authored a second book, The Lazarus Witness. 1990 Emmanuel J. Argentieri has been elected to the Executive Committee of New Jersey business law firm Parker McCay. Mary Ann Plankinton has joined MacElree Harvey in West Chester, PA, as a partner and will practice out of the firm’s Kennett Square, PA, office. Plankinton, of Landenberg, PA, is a graduate of St. Joseph’s University. She concentrates her practice in family law. She has served on several civic boards of directors, including the Kennett Square YMCA and Chester County Futures, the Kennett Township Planning Commission, and the Chester County Bar Association. She is appointed as a guardian ad litem through the Delaware family courts. Licensed by both the Pennsylvania and Delaware bars, she is a member of the American, Pennsylvania, Delaware, and Chester County Bar Associations. WIDENER LAW 51 Class Notes Kevin D. Sheehan has been promoted to shareholder at the New Jersey law firm of Parker McCay where he focuses on real estate development, local government law, affordable housing, redevelopment, and environmental issues. 1991 Michael A. Brown, a managing partner with the Washington, D.C., office of consulting firm Alcalde & Fay, is a candidate for the District of Columbia City Council. A special election will be held in May to fill the Ward 4 seat. Brown also serves as a Democratic commentator for Fox News. Claire M. DeMatteis, director of Stradley Ronon’s Wilmington, DE, office, was recently presented with the Women’s Leadership Award from the Delaware State Bar Association. DeMatteis was selected for this award as a member of the Delaware Bar whose 52 WIDENER LAW character, strength, personality, achievement, and activities in matters affecting women lawyers have served as an inspiration for women lawyers in their professional careers. In addition to overseeing the firm’s Delaware office, DeMatteis serves as counsel in the firm’s government and public affairs practice, focusing her practice in legislative and regulatory lobbying and business development in Delaware and Washington, D.C. She also chairs the firm’s gaming practice group. DeMatteis currently serves as chair of the Delaware Commission for Women. 1992 A. Kyle Berman has joined Fox Rothschild LLP in Lansdale, PA. Donald J. Detweiler has joined Greenberg Traurig, LLP, Wilmington, DE, office as a shareholder in the reorganization and bankruptcy department. Lisa Goldstein, President of Rainmaker Trainers, recently spoke at the Hadassah Attorneys’ Council. The lunch and learn seminar entitled “Business Development for Women Lawyers: Addressing the Gender Factor” focused on how acknowledging communication differences between men and women can help women lawyers to succeed in business development. Goldstein was also appointed advisor to the ABA Women Rainmakers this year. Her company, Rainmaker Trainers, coaches lawyers to help them increase law firm revenues. 1993 Lisa Hunn Barber, associate general counsel for Brandywine Realty Trust, has been named leadership executive by the Delaware Valley chapter of the National MS Society. Barber was presented with the Multiple Sclerosis Leadership Award at a reception held recently at the Pyramid Club in Philadelphia. She was elected for the award for her outstanding contribution to the civic, business, and cultural betterment of the Greater Delaware Valley. She has raised over $4,000 in a special gifts campaign for the chapter. William O. Krekstein, partner at Nelson Levine de Luca & Horst, LLC, was a featured speaker at the International Association of Special Investigative Units 21st Annual Seminar and Expo on Insurance Fraud. Richard L. Morris Jr. was awarded, by NameProtect, the Trademark Insider Award for #1 Miami Law Firm for U.S. Trademark Filings and #4 Top U.S. Trademark Filer for 2005. 1994 Jill (Moyer) Mayer was promoted to Supervising Deputy Attorney General of the Organized Crime & Racketeering Bureau of the New Jersey Division of Criminal Justice. She resides in Cherry Hill, NJ, with her husband, Joel Mayer, Esq., and their two children. David J. Shannon of Marshall, Dennehey, Warner, Coleman & Goggin, Philadelphia, PA, spoke on understanding copyright, trademark, and trade secret law as a faculty member of the continuing legal education seminar entitled “Technology Law in Pennsylvania: The Fundamentals and More.” Gina Rubel, president of Furia Rubel Communications, Inc., along with Jeffrey B. Albert, Esquire, of McKissock & Hoffman, PC, presented a two-hour CLE at the Bucks County Bar Association sponsored by the Women Lawyers’ Committee on November 21, 2006. The program addressed “Tips and Tricks for Marketing Your Law Firm Ethically and Effectively.” Rubel is an attorney with 15 years of integrated communications experience. After practicing law for several years, she now focuses on her passion for proactive, integrated communication for law firms and legal organizations. Rubel has developed and executed integrated communications plans for large and small law firms and supervised crisis communications, risk management, and media relations for internationally publicized death penalty trials. She served on a Supreme Court of Pennsylvania Disciplinary Board Hearing Committee for six years, acting as the chairperson for three years. 1995 Suzanne Spencer Abel has opened a private practice. She handles workers’ compensation and family law cases in Cumberland and Dauphin counties, PA. Joseph M. Ariyan and his wife, Susan, welcomed their son, Joseph Leon, on October 12, 2006. In August, Ariyan was named to a five-year term on the Northeast Bergen County Utilities Authority for Bergen County, NJ. Charles F. Gfeller has been promoted to partner at Edwards Angell Palmer & Dodge where he is a member of the Insurance & Reinsurance Department. He practices primarily in the areas of complex civil and commercial litigation, products liability, and risk management. Gfeller has tried cases in both state and federal courts, has argued before the Connecticut Appellate Court, and has participated in arbitrations, mediations, and other alternative dispute resolution proceedings. Robert C. Trichilo has joined the law firm of Post & Post in its Berwyn, PA, office and concentrates in the area of medical malpractice defense litigation. Prior to joining Post & Post, Robert served as an assistant district attorney in both Luzerne and Monroe counties. 1996 Eric R. Augustine has been named an associate at Keefer Wood Allen & Rahal where he will concentrate his practice in civil litigation. George T. Lees III has joined Rawle & Henderson, LLP, as counsel in the firm’s Wilmington, DE, office. Jack Marino has joined Rhoads & Sinon, LLP, in Harrisburg, PA. Ronald J. Reybitz was named a “Rising Star” in the 2005 edition of Pennsylvania Super Lawyers. Reybitz is currently in-house counsel with PPL Corporation in Allentown, PA. Patrick J. Sweeney was named to the Board of Directors of DRI—The Voice of the Defense Bar, a national organization of more than 22,000 defense trial lawyers and corporate counsel. At the recent annual meeting of DRI, Sweeney, partner at the Philadelphia office of the law firm Sweeney & Sheehan, was named Atlantic Regional Director of the nation’s largest civil defense bar organization. As a member of DRI’s Board of Directors, Sweeney will oversee programs for Delaware, New Jersey, New York, and Pennsylvania. Sweeney practices in the areas of transportation, premises liability, consumer protection, and matters of general liability. His service record with DRI includes three years as state representative for Pennsylvania and vice chair of DRI’s Technology Committee. 1997 Joan M. Bergman has joined the Greensboro, NC, office of Nexsen Pruet Adams Kleemeier and will work as an associate in the firm’s real estate practice group. Previously, Bergman practiced at a Greensboro law firm, representing clients— primarily developers—in land acquisitions and construction of commercial and multi-family residential sites. She has experience in numerous areas of real estate, including conducting due diligence and title searches, reviewing loan documents, and drafting easements and closing statements. Prior to that, Bergman worked at a large regional law firm in the products liability practice group. She also has experience in employment, corporate, and construction law. Leslie K. Gross has been named Director of Communications in Saul Ewing’s marketing department. In WIDENER LAW 53 Class Notes this newly created position, Gross will oversee the firm’s communications initiatives, including public relations, all marketing materials, and the advertising campaign. Prior to joining Saul Ewing, Gross was an attorney with the Philadelphia firm of Fell & Spalding, where she concentrated her litigation practice in the areas of defamation, healthcare, professional malpractice, and employment. She also oversaw the firm’s marketing efforts. Prior to her legal career, Gross worked in Comcast Corporation’s marketing department, where she focused on marketing collateral, advertising, and public relations. Maureen Mackay Nacey has joined the Chester County, PA, law firm Gawthrop Greenwood. Nacey’s practice concentrates in domestic law, including divorce, equitable distribution, property settlement agreements, spousal support, child custody, child support, and adoptions. Prior to joining Gawthrop Greenwood, Nacey was an associate in a private practice. She previously served as law clerk to the Hon. Judge James P. MacElree II of the 54 WIDENER LAW Court of Common Pleas of Chester County. She has been an active member of the Chester County Bar Association, where she chaired the Law Related Education Committee, and served as a member of the Board of Directors of the Chester County Bar Foundation. Jack Rosenbloom of the Jenkintown firm Semanoff Ormsby Greenberg & Torchia, LLC, has been named a Pennsylvania “Rising Star” by Pennsylvania Super Lawyers for the second year in a row. Only 2.5 percent of Pennsylvania attorneys receive this honor every year. “Rising Stars” are chosen by their peers as being among the top up-and-coming lawyers in the state. John Sabatina was elected to the Pennsylvania House of Representatives, 174th Legislative District, in March of 2006 and sworn in during April 2006. Ellen B. Wilber, an associate in the firm Dickie, McCamey & Chilcote, PC, Philadelphia, PA, has been named a “Rising Star” for 2006 by Philadelphia Magazine and Pennsylvania Super Lawyers Magazine— Rising Stars 2006. 1998 John J. Flynn III recently accepted a new position as the Executive Director of the Maryland Republican Party in Annapolis, MD. Kara A. Kaczynski has joined WolfBlock in the firm’s Roseland office. Kaczynski, an associate in the firm’s Real Estate practice group, most recently served as in-house counsel to the Kushner Companies in Florham Park, NJ. She previously was the lead Land-Use associate at Scarinci and Hollenbeck, LLC, in Lyndhurst, NJ. Amy Parsons and Robert C. Fisher III ’06 were married on July 28, 2006. Claudia Guglielmo ‘98 and Cari Weitzman Raymond ‘98 were bridesmaids. Amy recently accepted a new position with Schering-Plough in Kenilworth, NJ, and the couple purchased a new home in Burlington, NJ. Ari D. Weitzman joined Abom & Kutulakis in Carlisle, PA. 1999 Ninette Byelich-Jackson and husband, Marc, announce the birth of their third child, a girl, Keturah Elizabeth, on Sept. 29, 2006. She joins sister Abigail, who was born on Sept. 11, 2005, and 4-year-old brother Benjamin. Randall Hurst has been appointed to the Sewage Management and Treatment Task Force advisory committee. The task force, which serves as a legislative advisory committee to the Joint Legislative Air and Water Pollution Control and Conservation Committee of the General Assembly, advises Pennsylvania lawmakers on pollution, conservation, and infrastructure issues. Hurst is an associate with Mette, Evans & Woodside and focuses his practice in the environmental law and land use area. Hurst holds a Master of Science in Environmental Pollution Control from Penn State, is certified by the Institute of Professional Environmental Practice as a senior Qualified Environment Professional (QEP), and holds a Class A-1 State Wastewater Treatment Plant operator’s license. Christopher A. Ward has joined Klehr, Harrison, Harvey, Branzburg & Ellers as an associate. Ward concentrates his practice in bankruptcy, reorganizations, workouts and restructurings, and debtorin-possession financing. 2000 Elysa Bergenfeld has joined the law firm of Stark & Stark as an associate. Lorraine Bohanske Possanza has joined Sunstein Murphy & Associates, PC, in West Chester, PA, as an associate in the practice of health law. Michael T. Hollister has joined Logan & Associates, LLC, in Wilmington, DE, as an associate. Christina (Maycen) Thompson and her husband Jim welcomed their daughter, Avery Juliana, on May 23, 2006. Christina is an associate in the Business Law Group at Connolly Bove Lodge & Hutz LLP in Wilmington, DE. 2001 2002 Bryan McQuillan has become a partner in the firm of Mancke Wagner Spreha & McQuillan. McQuillan is the former Dauphin County chief deputy public defender. Teri Calloway, husband Matt, and daughter Keely Clarke welcomed Kaitlin Janet Calloway on Aug. 1, 2006. Victoria K. Petrone has joined Logan & Associates, LLC, in Wilmington, DE, as an associate. Mark Schiavo, an attorney with Dilworth Paxson, LLP, concentrates his practice in the area of complex commercial litigation in the state and federal courts of Pennsylvania and New Jersey. He routinely represents clients in a wide variety of litigation matters, including commercial breach of contract claims and tort defense litigation. Schiavo resides in Mt. Laurel, NJ. Daniel W. Scialpi has entered the Army Judge Advocate General Corps as a First Lieutenant. After completion of training, he, his wife Mervi, and their two sons, Erik and William, will be stationed at Fort Hood, TX. Chad Toms has joined the firm of Bifferato Gentilotti Biden & Balick, Wilmington, DE. Tabatha L. Castro of Rapposelli Castro and Gonzales in Wilmington, DE, was honored on Oct. 4, 2006, receiving the Wilmington Award. The program acknowledges Wilmington citizens who exemplify excellence in areas such as the arts, athletics, business, community service, education, faith, government, health and science, heroism, seniors, volunteerism, and human and/or civil rights. Geoffrey Christ and wife Debra welcomed their first child, son Owen, in November of 2004, and their second child, a son, Spencer, in January. Christ was just appointed by Governor Minner to a three-year term on the Delaware State Board of Pharmacy as a professional member. In addition to his real estate practice, he is a practicing pharmacist in the Bethany Beach area. Shaun H. Day has accepted the position of assistant general counsel with RL Corporation, an accounting and financial consulting company in West Chester, PA. Eugene DePasquale, a Democrat, took office in January as a member of the Pennsylvania State House of Representatives. He represents the York Citybased 95th State House District. Before running for office, he worked for a variety of government institutions and elected officials, including a stint as legislative director for State Senator (now Congresswoman) Allyson Schwartz. He also served as director of economic development for the City of York, as well as a deputy secretary for the State Department of Environmental Protection under Governor Ed Rendell. He is a former chair of the York County Democratic Party. Tanya Pino Jefferis has returned to Prickett Jones & Elliott, PA, to practice with their litigation group. She focuses her practice on product & premises liability and toxic tort. Jefferis also practices corporate and business litigation. Leda Pojman is working as an assistant attorney general in the Wyoming Attorney General’s office, Criminal Division, Appellate Section. WIDENER LAW 55 Class Notes Scott W. Reid, an associate with Cozen O’Connor, was recently elected presidentelect of the Barristers’ Association of Philadelphia. 2003 Robert J. Foley Jr., an associate with the Foley Law Firm since December 2003, appeared in the Pennsylvania Super Lawyers 2006, Rising Stars Edition. In addition to his legal work, Foley sits on the Board of Directors of the Lackawanna County Bar Association, Young Lawyers Division. Robert King received the Master of Laws (LLM) degree in Trial Advocacy, with Honors, from Temple University in Philadelphia in recent ceremonies. King was selected by his fellow trial attorneys in the program as “Best Oral Advocate.” King focuses his practice on litigation and healthcare law. He also serves as a national mediator and arbitrator for the American Health Lawyers Association in Washington, D.C. 56 WIDENER LAW Nancy Lewis, who serves as an attorney with the Judge Advocate General Corps, was the official Army escort for the United States Supreme Court at the funeral of former President Gerald R. Ford. 2004 Suzanne N. Canning has joined the Associate Chief Counsel’s Office of U.S. Customs and Border Protection as in-house counsel under the Department of Homeland Security. Harrison E. Cherney was named chief operating officer of 1st Republic Mortgage Bankers, a national mortgage bank with its headquarters located in Floral Park, NY. Cherney is also a member of the board of directors of Global Group Holdings, Inc., and the Little Peoples Children’s Theater of New York. Tim Daly, appointed by the Borough Council of Phoenixville, PA, to the planning commission last July, recently accepted a new appointment as member of Council from the middle ward. He and his wife have two children and have resided in the borough for four years. Lori E. Hood has joined Drinker Biddle & Reath, LLP in Philadelphia, PA, as an associate in the environmental law practice group of the firm’s litigation department. Abbegael M. Pacuska has joined the Pennsylvania Office of Attorney General recently in the litigation section based in Harrisburg. Pacuska practiced in a private law firm until her recent appointment. She is a member of the Pennsylvania Trial Lawyers Association and serves on the Board of Directors for Heinz-Menaker Senior Center. Michelle L. Sommer has joined Abom & Kutulakis in Carlisle, PA. Jennifer Stonerod has joined Parker McCay’s Medical Malpractice Group in Marlton, NJ, as an associate. Stonerod will concentrate her practice in the area of medical malpractice. Prior to joining Parker McCay, Ms. Stonerod served as law clerk to the Honorable Michael J. Hogan, in the Superior Court of New Jersey, Burlington County, Civil Division. She also served as a clerk in the Division of Law in the Office of the New Jersey Attorney General. Jay C. Whittle has opened a solo practice and specializes in immigration and naturalization. 2006 2005 Thomas D. Bielli has joined Harvey, Pennington Ltd. in Philadelphia, PA. Seth N. Boer has joined the District Attorney’s Office in Berks County, PA, as an assistant district attorney. Tricia Cruz is an assistant public defender for Adams County, PA, and she resides in Gettysburg. James G. Lare has joined Cozen O’Connor’s Philadelphia office as an associate, practicing in the firm’s general litigation department. Robert C. Fisher III married Amy Parsons ‘98 on July 28, 2006. John R. Logan of Susquehanna Township, PA, has been appointed vice president and general counsel of The Vartan Group Inc. Deceased 1981 Nancy E. Davitt 1985 Patricia Jeanne Chalfont Widener Law Fund Join your fellow alumni—give today! 100 percent of Widener Law Alumni at Young Conaway Stargatt & Taylor give to the Widener Law Fund! Dean Linda L. Ammons congratulated Widener Law Alumni at a reception to celebrate five consecutive years of 100-percent participation by Widener Law graduates and to thank Administrative Partner Richard Levine for the firm’s sponsorship of a Public Interest Student Fellowship. Twenty-two Widener Law alumni are employed at Young Conaway. Gifts to the Widener Law Fund expand financial aid, enhance student programs, support clinics, improve library resources and services, and allow the school to attract and retain a world class faculty. (Left to right, first row) Linda Ammons, Richard Levine, Lisa Goodman ’94, Patricia A. Widdoss ’98, Jennifer Noel ’00, student fellow Megan Kneisel, Michael W. McDermott ’03; (left to right, second row) Edwin Harron ’95, Richard DiLiberto ’86, Eugene DiPrinzio ’80, Timothy Snyder ’81, Scott Holt ’95, and Monte Squire ’05. G I V E T O D AY T O T H E W I D E N E R L AW F U N D By phone: 301-477-2172 By mail: Widener University School of Law Office of Development/Alumni Relations P.O. Box 7474 Wilmington, DE 19803-0474 Online: Calendar JUNE 2007 4 Widener Law Alumni Reception, Scranton 14 Widener Women’s Network Event, Philadelphia AUGUST 2007 4 Widener Alumni Night at Harrisburg Senators Baseball Game NONPROFIT ORG US POSTAGE PA I D PITTSBURGH PA PERMIT NO. 5605 4601 Concord Pike P.O. Box 7474 Wilmington, DE 19803-0474 Address Service Requested
http://issuu.com/ciking/docs/sol_spring_2007_mag?mode=embed&documentId=080923155420-cdf901a640f4441bb380059931e82301&layout=grey
CC-MAIN-2015-35
refinedweb
30,752
56.05
Collections Posted on March 1st, 2001 To summarize what we’ve seen so far, your first, most efficient choice to hold a group of objects should be an array, and you’re forced into this choice if you want to hold a group of primitives. In the remainder of the chapter we’ll look at the more general case, when you don’t know at the time you’re writing the program how many objects you’re going to need, or if you need a more sophisticated way to store your objects. Java provides four types of collection classes to solve this problem: Vector, BitSet, Stack, and Hashtable. Although compared to other languages that provide collections this is a fairly meager supply, you can nonetheless solve a surprising number of problems using these tools. Among their other characteristics – Stack, for example, implements a LIFO (last-in, first-out) sequence, and Hashtable is an associative array that lets you associate any object with any other object – the Java collection classes will automatically resize themselves. Thus, you can put in any number of objects and you don’t need to worry about how big to make the collection while you’re writing the program. Disadvantage: unknown type The “disadvantage” to using the Java collections is that you lose type information when you put an object into a collection. This happens because, when the collection was written, the programmer of that collection had no idea what specific type you wanted to put in the collection, and making the collection hold only your type would prevent it from being a general-purpose tool. So instead, the collection holds handles to objects of type Object, which is of course every object in Java, since it’s the root of all the classes. (Of course, this doesn’t include primitive types, since they aren’t inherited from anything.) This is a great solution, except for these reasons: - Since the type information is thrown away when you put an object handle into a collection, any type of object can be put into your collection, even if you mean it to hold only, say, cats. Someone could just as easily put a dog into the collection. - Since the type information is lost, the only thing the collection knows it holds is a handle to an Object. You must perform a cast to the correct type before you use it. On the up side, Java won’t let you misuse the objects that you put into a collection. If you throw a dog into a collection of cats, then go through and try to treat everything in the collection as a cat, you’ll get an exception when you get to the dog. In the same vein, if you try to cast the dog handle that you pull out of the cat collection into a cat, you’ll get an exception at run-time. Here’s an example: //: CatsAndDogs.java // Simple collection example (Vector) import java.util.*; class Cat { private int catNumber; Cat(int i) { catNumber = i; } void print() { System.out.println("Cat #" + catNumber); } } class Dog { private int dogNumber; Dog(int i) { dogNumber = i; } void print() { System.out.println("Dog #" + dogNumber); } } public class CatsAndDogs { public static void main(String[] args) { Vector cats = new Vector(); for(int i = 0; i < 7; i++) cats.addElement(new Cat(i)); // Not a problem to add a dog to cats: cats.addElement(new Dog(7)); for(int i = 0; i < cats.size(); i++) ((Cat)cats.elementAt(i)).print(); // Dog is detected only at run-time } } ///:~ You can see that using a Vector is straightforward: create one, put objects in using addElement( ), and later get them out with elementAt( ). (Note that Vector has a method size( ) to let you know how many elements have been added so you don’t inadvertently run off the end and cause an exception.) The classes Cat and Dog are distinct – they have nothing in common except that they are Objects. (If you don’t explicitly say what class you’re inheriting from, you automatically inherit from Object.) The Vector class, which comes from java.util, holds Objects, so not only can you put Cat objects into this collection using the Vector method addElement( ), but you can also add Dog objects without complaint at either compile-time or run-time. When you go to fetch out what you think are Cat objects using the Vector method elementAt( ), you get back a handle to an Object that you must cast to a Cat. Then you need to surround the entire expression with parentheses to force the evaluation of the cast before calling the print( ) method for Cat, otherwise you’ll get a syntax error. Then, at run-time, when you try to cast the Dog object to a Cat, you’ll get an exception. This is more than just an annoyance. It’s something that can create some difficult-to-find bugs. If one part (or several parts) of a program inserts objects into a collection, and you discover only in a separate part of the program through an exception that a bad object was placed in the collection, then you must find out where the bad insert occurred. You do this by code inspection, which is about the worst debugging tool you have. On the upside, it’s convenient to start with some standardized collection classes for programming, despite the scarcity and awkwardness.Sometimes it works right anyway It turns out that in some cases things seem to work correctly without casting back to your original type. The first case is quite special: the String class has some extra help from the compiler to make it work smoothly. Whenever the compiler expects a String object and it hasn’t got one, it will automatically call the toString( ) method that’s defined in Object and can be overridden by any Java class. This method produces the desired String object, which is then used wherever it was wanted. Thus, all you need to do to make objects of your class print out is to override the toString( ) method, as shown in the following example: //: WorksAnyway.java // In special cases, things just seem // to work correctly. import java.util.*; class Mouse { private int mouseNumber; Mouse(int i) { mouseNumber = i; } // Magic method: public String toString() { return "This is Mouse #" + mouseNumber; } void print(String msg) { if(msg != null) System.out.println(msg); System.out.println( "Mouse number " + mouseNumber); } } class MouseTrap { static void caughtYa(Object m) { Mouse mouse = (Mouse)m; // Cast from Object mouse.print("Caught one!"); } } public class WorksAnyway { public static void main(String[] args) { Vector mice = new Vector(); for(int i = 0; i < 3; i++) mice.addElement(new Mouse(i)); for(int i = 0; i < mice.size(); i++) { // No cast necessary, automatic call // to Object.toString(): System.out.println( "Free mouse: " + mice.elementAt(i)); MouseTrap.caughtYa(mice.elementAt(i)); } } } ///:~ You can see the redefinition of toString( ) in Mouse. In the second for loop in main( ) you find the statement: System.out.println("Free mouse: " + mice.elementAt(i)); After the ‘ +’ sign the compiler expects to see a String object. elementAt( ) produces an Object, so to get the desired String the compiler implicitly calls toString( ). Unfortunately, you can work this kind of magic only with String; it isn – if you passed the wrong type – you’ll get an exception at run-time. This is not as good as compile-time checking but it’s still robust. Note that in the use of this method: MouseTrap.caughtYa(mice.elementAt(i)); no cast is necessary.Making a type-conscious Vector You might not want to give up on this issue just yet. A more ironclad solution is to create a new class using the Vector, such that it will accept only your type and produce only your type: //: GopherVector.java // A type-conscious Vector import java.util.*; class Gopher { private int gopherNumber; Gopher(int i) { gopherNumber = i; } void print(String msg) { if(msg != null) System.out.println(msg); System.out.println( "Gopher number " + gopherNumber); } } class GopherTrap { static void caughtYa(Gopher g) { g.print("Caught one!"); } } class GopherVector { private Vector v = new Vector(); public void addElement(Gopher m) { v.addElement(m); } public Gopher elementAt(int index) { return (Gopher)v.elementAt(index); } public int size() { return v.size(); } public static void main(String[] args) { GopherVector gophers = new GopherVector(); for(int i = 0; i < 3; i++) gophers.addElement(new Gopher(i)); for(int i = 0; i < gophers.size(); i++) GopherTrap.caughtYa(gophers.elementAt(i)); } } ///:~ This is similar to the previous example, except that the new GopherVector class has a private member of type Vector (inheriting from Vector tends to be frustrating, for reasons you’ll see later), and methods just like Vector. However, it doesn’t accept and produce generic Objects, only Gopher objects. Because a GopherVector will accept only a Gopher, if you were to say: gophers.addElement(new Pigeon()); you would get an error message at compile time . This approach, while more tedious from a coding standpoint, will tell you immediately if you’re using a type improperly. Note that no cast is necessary when using elementAt( ) – it’s always a Gopher.Parameterized types This kind of problem isn’t isolated – there are numerous cases in which you need to create new types based on other types, and in which it is useful to have specific type information at compile-time. This is the concept of a parameterized type . In C++, this is directly supported by the language in templates. At one point, Java had reserved the keyword generic to someday support parameterized types, but it’s uncertain if this will ever occur. There are no comments yet. Be the first to comment!
http://www.codeguru.com/java/tij/tij0088.shtml
CC-MAIN-2015-11
refinedweb
1,614
61.97
Enter add-on properties When submitting an add-on, the options on the Properties page help determine the behavior of your add-on when offered to customers. Product type Your product type is selected when you first create the add-on. The product type you selected is displayed here, but you can't change it. Tip If you haven't published the add-on, you can delete the submission and start again if you want to choose a different product type. The fields you see on this page will vary, depending on the product type of your add-on. Product lifetime If you selected Durable for your product type, Product lifetime is shown here. The default Product lifetime for a durable add-on is Forever, which means the add-on never expires. If you prefer, you can set the Product lifetime so that the add-on expires after a set duration (with options from 1-365 days). Quantity If you selected Store-managed consumable for your product type, Quantity is shown here. You'll need to enter a number between 1 and 1000000. This quantity will be granted to the customer when they acquire your add-on, and the Store will track the balance as the app reports the customer’s consumption of the add-on. Subscription period If you selected Subscription for your product type, Subscription period is shown here. Choose an option to specify how frequently a customer will be charged for the subscription. The default option is Monthly, but you can also select 3 months, 6 months, Annually, or 24 months. Important After your add-on is published, you can't change your Subscription period selection. Free trial If you selected Subscription for your product type, Free trial is also shown here. The default option is No free trial. If you prefer, you can let customers use the add-on for free for a set period of time (either 1 week or 1 month). Important After your add-on is published, you can't change your Free trial selection. Content type Regardless of your add-on's product type, you'll need to indicate the type of content you're offering. For most add-ons, the content type should be Electronic software download. If another option from the list describes your add-on better (for example, if you are offering a music download or an e-book), select that option instead. These are the possible options for an add-on's content type: - Electronic software download - Electronic books - Electronic magazine single issue - Electronic newspaper single issue - Music download - Music streaming - Online data storage/services - Software as a service - Video download - Video streaming Additional properties These fields are optional for all types of add-ons. Keywords You have the option to provide up to ten keywords of up to 30 characters each for each add-on you submit. Your app can then query for add-ons that match these words. This feature lets you build screens in your app that can load add-ons without you having to directly specify the product ID in your app's code. You can then change the add-on's keywords anytime, without having to make code changes in your app or submit the app again. To query this field, use the StoreProduct.Keywords property in the Windows.Services.Store namespace. (Or, if you're using the Windows.ApplicationModel.Store namespace, use the ProductListing.Keywords property.) Note Keywords are not available for use in packages targeting Windows 8 and Windows 8.1. Custom developer data You can enter up to 3000 characters into the Custom developer data field (formerly called Tag) to provide extra context for your in-app product. Most often, this is in the form of an XML string, but you can enter anything you'd like in this field. Your app can then query this field to read its content (although the app can't edit the data and pass the changes back.) For example, let’s say you have a game, and you’re selling an add-on which allows the customer to access additional levels. Using the Custom developer data field, the app can query to see which levels are available when a customer owns this add-on. You could adjust the value at any time (in this case, the levels which are included), without having to make code changes in your app or submit the app again, by updating the info in the add-on's Custom developer data field and then publishing an updated submission for the add-on. To query this field, use the StoreSku.CustomDeveloperData property in the Windows.Services.Store namespace. (Or, if you're using the Windows.ApplicationModel.Store namespace, use the ProductListing.Tag property.) Note The Custom developer data field is not available for use in packages targeting Windows 8 and Windows 8.1.
https://docs.microsoft.com/en-us/windows/uwp/publish/enter-add-on-properties
CC-MAIN-2018-34
refinedweb
814
62.27
Why Python is Slow: Looking Under the Hood We've all heard it before: Python is slow. When I teach courses on Python for scientific computing, I make this point very early in the course, and tell the students why: it boils down to Python being a dynamically typed, interpreted language, where values are stored not in dense buffers but in scattered objects. And then I talk about how to get around this by using NumPy, SciPy, and related tools for vectorization of operations and calling into compiled code, and go on from there. But I realized something recently: despite the relative accuracy of the above statements, the words "dynamically-typed-interpreted-buffers-vectorization-compiled" probably mean very little to somebody attending an intro programming seminar. The jargon does little to enlighten people about what's actually going on "under the hood", so to speak. So I decided I would write this post, and dive into the details that I usually gloss over. Along the way, we'll take a look at using Python's standard library to introspect the goings-on of CPython itself. So whether you're a novice or experienced programmer, I hope you'll learn something from the following exploration. Python is slower than Fortran and C for a variety of reasons: 1. Python is Dynamically Typed rather than Statically Typed.¶ What this means is that at the time the program executes, the interpreter doesn't know the type of the variables that are defined. The difference between a C variable (I'm using C as a stand-in for compiled languages) and a Python variable is summarized by this diagram: For a variable in C, the compiler knows the type by its very definition. For a variable in Python, all you know at the time the program executes is that it's some sort of Python object. So if you write the following in C: /* C code */ int a = 1; int b = 2; int c = a + b; the C compiler knows from the start that a and b are integers: they simply can't be anything else! With this knowledge, it can call the routine which adds two integers, returning another integer which is just a simple value in memory. As a rough schematic, the sequence of events looks like this: C Addition¶ - Assign <int> 1to a - Assign <int> 2to b - call binary_add<int, int>(a, b) - Assign the result to c The equivalent code in Python looks like this: # python code a = 1 b = 2 c = a + b here the interpreter knows only that 1 and 2 are objects, but not what type of object they are. So the interpreter must inspect PyObject_HEAD for each variable to find the type information, and then call the appropriate summation routine for the two types. Finally it must create and initialize a new Python object to hold the return value. The sequence of events looks roughly like this: Python Addition¶ Assign 1to a - 1a. Set a->PyObject_HEAD->typecodeto integer - 1b. Set a->val = 1 Assign 2to b - 2a. Set b->PyObject_HEAD->typecodeto integer - 2b. Set b->val = 2 call binary_add(a, b) - 3a. find typecode in a->PyObject_HEAD - 3b. ais an integer; value is a->val - 3c. find typecode in b->PyObject_HEAD - 3d. bis an integer; value is b->val - 3e. call binary_add<int, int>(a->val, b->val) - 3f. result of this is result, and is an integer. Create a Python object c - 4a. set c->PyObject_HEAD->typecodeto integer - 4b. set c->valto result The dynamic typing means that there are a lot more steps involved with any operation. This is a primary reason that Python is slow compared to C for operations on numerical data. 2. Python is interpreted rather than compiled.¶ We saw above one difference between interpreted and compiled code. A smart compiler can look ahead and optimize for repeated or unneeded operations, which can result in speed-ups. Compiler optimization is its own beast, and I'm personally not qualified to say much about it, so I'll stop there. For some examples of this in action, you can take a look at my previous post on Numba and Cython. 3. Python's object model can lead to inefficient memory access¶ We saw above the extra type info layer when moving from a C integer to a Python integer. Now imagine you have many such integers and want to do some sort of batch operation on them. In Python you might use the standard List object, while in C you would likely use some sort of buffer-based array. A NumPy array in its simplest form is a Python object build around a C array. That is, it has a pointer to a contiguous data buffer of values. A Python list, on the other hand, has a pointer to a contiguous buffer of pointers, each of which points to a Python object which in turn has references to its data (in this case, integers). This is a schematic of what the two might look like: It's easy to see that if you're doing some operation which steps through data in sequence, the numpy layout will be much more efficient than the Python layout, both in the cost of storage and the cost of access. So Why Use Python?¶ Given this inherent inefficiency, why would we even think about using Python? Well, it comes down to this: Dynamic typing makes Python easier to use than C. It's extremely flexible and forgiving, this flexibility leads to efficient use of development time, and on those occasions that you really need the optimization of C or Fortran, Python offers easy hooks into compiled libraries. It's why Python use within many scientific communities has been continually growing. With all that put together, Python ends up being an extremely efficient language for the overall task of doing science with code. Above I've talked about some of the internal structures that make Python tick, but I don't want to stop there. As I was putting together the above summary, I started hacking around on the internals of the Python language, and found that the process itself is pretty enlightening. In the following sections, I'm going to prove to you that the above information is correct, by doing some hacking to expose Python objects using Python itself. Please note that everything below is written using Python 3.4. Earlier versions of Python have a slightly different internal object structure, and later versions may tweak this further. Please make sure to use the correct version! Also, most of the code below assumes a 64-bit CPU. If you're on a 32-bit platform, some of the C types below will have to be adjusted to account for this difference. import sys print("Python version =", sys.version[:5]) Python version = 3.4.0 Integers in Python are easy to create and use: x = 42 print(x) 42 But the simplicity of this interface belies the complexity of what is happening under the hood. We briefly discussed the memory layout of Python integers above. Here we'll use Python's built-in ctypes module to introspect Python's integer type from the Python interpreter itself. But first we need to know exactly what a Python integer looks like at the level of the C API. The actual x variable in CPython is stored in a structure which is defined in the CPython source code, in Include/longintrepr.h struct _longobject { PyObject_VAR_HEAD digit ob_digit[1]; }; The PyObject_VAR_HEAD is a macro which starts the object off with the following struct, defined in Include/object.h: typedef struct { PyObject ob_base; Py_ssize_t ob_size; /* Number of items in variable part */ } PyVarObject; ... and includes a PyObject element, which is also defined in Include/object.h: typedef struct _object { _PyObject_HEAD_EXTRA Py_ssize_t ob_refcnt; struct _typeobject *ob_type; } PyObject; here _PyObject_HEAD_EXTRA is a macro which is not normally used in the Python build. With all this put together and typedefs/macros unobfuscated, our integer object works out to something like the following structure: struct _longobject { long ob_refcnt; PyTypeObject *ob_type; size_t ob_size; long ob_digit[1]; }; The ob_refcnt variable is the reference count for the object, the ob_type variable is a pointer to the structure containing all the type information and method definitions for the object, and the ob_digit holds the actual numerical value. Armed with this knowledge, we'll use the ctypes module to start looking into the actual object structure and extract some of the above information. We start with defining a Python representation of the C structure: import ctypes class IntStruct(ctypes.Structure): _fields_ = [("ob_refcnt", ctypes.c_long), ("ob_type", ctypes.c_void_p), ("ob_size", ctypes.c_ulong), ("ob_digit", ctypes.c_long)] def __repr__(self): return ("IntStruct(ob_digit={self.ob_digit}, " "refcount={self.ob_refcnt})").format(self=self) Now let's look at the internal representation for some number, say 42. We'll use the fact that in CPython, the id function gives the memory location of the object: num = 42 IntStruct.from_address(id(42)) IntStruct(ob_digit=42, refcount=35) The ob_digit attribute points to the correct location in memory! But what about refcount? We've only created a single value: why is the reference count so much greater than one? Well it turns out that Python uses small integers a lot. If a new PyObject were created for each of these integers, it would take a lot of memory. Because of this, Python implements common integer values as singletons: that is, only one copy of these numbers exist in memory. In other words, every time you create a new Python integer in this range, you're simply creating a reference to the singleton with that value: x = 42 y = 42 id(x) == id(y) True Both variables are simply pointers to the same memory address. When you get to much bigger integers (larger than 255 in Python 3.4), this is no longer true: x = 1234 y = 1234 id(x) == id(y) False Just starting up the Python interpreter will create a lot of integer objects; it can be interesting to take a look at how many references there are to each: %matplotlib inline import matplotlib.pyplot as plt import sys plt.loglog(range(1000), [sys.getrefcount(i) for i in range(1000)]) plt.xlabel('integer value') plt.ylabel('reference count') <matplotlib.text.Text at 0x106866ac8> We see that zero is referenced several thousand times, and as you may expect, the frequency of references generally decreases as the value of the integer increases. Just to further make sure that this is behaving as we'd expect, let's make sure the ob_digit field holds the correct value: all(i == IntStruct.from_address(id(i)).ob_digit for i in range(256)) True If you go a bit deeper into this, you might notice that this does not hold for numbers larger than 256: it turns out that some bit-shift gymnastics are performed in Objects/longobject.c, and these change the way large integers are represented in memory. I can't say that I fully understand why exactly that is happening, but I imagine it has something to do with Python's ability to efficiently handle integers past the overflow limit of the long int data type, as we can see here: 2 ** 100 1267650600228229401496703205376 That number is much too long to be a long, which can only hold 64 bits worth of values (that is, up to $\sim2^{64}$) Let's apply the above ideas to a more complicated type: Python lists. Analogously to integers, we find the definition of the list object itself in Include/listobject.h: typedef struct { PyObject_VAR_HEAD PyObject **ob_item; Py_ssize_t allocated; } PyListObject; Again, we can expand the macros and de-obfuscate the types to see that the structure is effectively the following: typedef struct { long ob_refcnt; PyTypeObject *ob_type; Py_ssize_t ob_size; PyObject **ob_item; long allocated; } PyListObject; Here the PyObject **ob_item is what points to the contents of the list, and the ob_size value tells us how many items are in the list. class ListStruct(ctypes.Structure): _fields_ = [("ob_refcnt", ctypes.c_long), ("ob_type", ctypes.c_void_p), ("ob_size", ctypes.c_ulong), ("ob_item", ctypes.c_long), # PyObject** pointer cast to long ("allocated", ctypes.c_ulong)] def __repr__(self): return ("ListStruct(len={self.ob_size}, " "refcount={self.ob_refcnt})").format(self=self) Let's try it out: L = [1,2,3,4,5] ListStruct.from_address(id(L)) ListStruct(len=5, refcount=1) Just to make sure we've done things correctly, let's create a few extra references to the list, and see how it affects the reference count: tup = [L, L] # two more references to L ListStruct.from_address(id(L)) ListStruct(len=5, refcount=3) Now let's see about finding the actual elements within the list. As we saw above, the elements are stored via a contiguous array of PyObject pointers. Using ctypes, we can actually create a compound structure consisting of our IntStruct objects from before: # get a raw pointer to our list Lstruct = ListStruct.from_address(id(L)) # create a type which is an array of integer pointers the same length as L PtrArray = Lstruct.ob_size * ctypes.POINTER(IntStruct) # instantiate this type using the ob_item pointer L_values = PtrArray.from_address(Lstruct.ob_item) Now let's take a look at the values in each of the items: [ptr[0] for ptr in L_values] # ptr[0] dereferences the pointer [IntStruct(ob_digit=1, refcount=5296), IntStruct(ob_digit=2, refcount=2887), IntStruct(ob_digit=3, refcount=932), IntStruct(ob_digit=4, refcount=1049), IntStruct(ob_digit=5, refcount=808)] We've recovered the PyObject integers within our list! You might wish to take a moment to look back up to the schematic of the List memory layout above, and make sure you understand how these ctypes operations map onto that diagram. Now, for comparison, let's do the same introspection on a numpy array. I'll skip the detailed walk-through of the NumPy C-API array definition; if you want to take a look at it, you can find it in numpy/core/include/numpy/ndarraytypes.h Note that I'm using NumPy version 1.8 here; these internals may have changed between versions, though I'm not sure whether this is the case. import numpy as np np.__version__ '1.8.1' Let's start by creating a structure that represents the numpy array itself. This should be starting to look familiar... We'll also add some custom properties to access Python versions of the shape and strides: class NumpyStruct(ctypes.Structure): _fields_ = [("ob_refcnt", ctypes.c_long), ("ob_type", ctypes.c_void_p), ("ob_data", ctypes.c_long), # char* pointer cast to long ("ob_ndim", ctypes.c_int), ("ob_shape", ctypes.c_voidp), ("ob_strides", ctypes.c_voidp)] @property def shape(self): return tuple((self.ob_ndim * ctypes.c_int64).from_address(self.ob_shape)) @property def strides(self): return tuple((self.ob_ndim * ctypes.c_int64).from_address(self.ob_strides)) def __repr__(self): return ("NumpyStruct(shape={self.shape}, " "refcount={self.ob_refcnt})").format(self=self) Now let's try it out: x = np.random.random((10, 20)) xstruct = NumpyStruct.from_address(id(x)) xstruct NumpyStruct(shape=(10, 20), refcount=1) We see that we've pulled out the correct shape information. Let's make sure the reference count is correct: L = [x,x,x] # add three more references to x xstruct NumpyStruct(shape=(10, 20), refcount=4) Now we can do the tricky part of pulling out the data buffer. For simplicity we'll ignore the strides and assume it's a C-contiguous array; this could be generalized with a bit of work. x = np.arange(10) xstruct = NumpyStruct.from_address(id(x)) size = np.prod(xstruct.shape) # assume an array of integers arraytype = size * ctypes.c_long data = arraytype.from_address(xstruct.ob_data) [d for d in data] [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] The data variable is now a view of the contiguous block of memory defined in the NumPy array! To show this, we'll change a value in the array... x[4] = 555 [d for d in data] [0, 1, 2, 3, 555, 5, 6, 7, 8, 9] ... and observe that the data view changes as well. Both x and data are pointing to the same contiguous block of memory. Comparing the internals of the Python list and the NumPy ndarray, it is clear that NumPy's arrays are much, much simpler for representing a list of identically-typed data. That fact is related to what makes it more efficient for the compiler to handle as well. Using ctypes to wrap the C-level data behind Python objects allows you to do some pretty interesting things. With proper attribution to my friend James Powell, I'll say it here: seriously, don't use this code. While nothing below should actually be used (ever), I still find it all pretty interesting! Inspired by this Reddit post, we can actually modify the numerical value of integer objects! If we use a common number like 0 or 1, we're very likely to crash our Python kernel. But if we do it with less important numbers, we can get away with it, at least briefly. Note that this is a really, really bad idea. In particular, if you're running this in an IPython notebook, you might corrupt the IPython kernel's very ability to run (because you're screwing with the variables in its runtime). Nevertheless, we'll cross our fingers and give it a shot: # WARNNG: never do this! id113 = id(113) iptr = IntStruct.from_address(id113) iptr.ob_digit = 4 # now Python's 113 contains a 4! 113 == 4 True But note now that we can't set the value back in a simple manner, because the true value 113 no longer exists in Python! 113 4 112 + 1 4 One way to recover is to manipulate the bytes directly. We know that $113 = 7 \times 16^1 + 1 * 16^0$, so on a little-endian 64-bit system running Python 3.4, the following should work: ctypes.cast(id113, ctypes.POINTER(ctypes.c_char))[3 * 8] = b'\x71' 112 + 1 113 and we're back! Just in case I didn't stress it enough before: never do this. Above we did an in-place modification of a value in a numpy array. This is easy, because a numpy array is simply a data buffer. But might we be able to do the same thing for a list? This gets a bit more tricky, because lists store references to values rather than the values themselves. And to not crash Python itself, you need to be very careful to keep track of these reference counts as you muck around. Here's how it can be done: # WARNING: never do this! L = [42] Lwrapper = ListStruct.from_address(id(L)) item_address = ctypes.c_long.from_address(Lwrapper.ob_item) print("before:", L) # change the c-pointer of the list item item_address.value = id(6) # we need to update reference counts by hand IntStruct.from_address(id(42)).ob_refcnt -= 1 IntStruct.from_address(id(6)).ob_refcnt += 1 print("after: ", L) before: [42] after: [6] Like I said, you should never use this, and I honestly can't think of any reason why you would want to. But it gives you an idea of the types of operations the interpreter has to do when modifying the contents of a list. Compare this to the NumPy example above, and you'll see one reason why Python lists have more overhead than Python arrays. Using the above methods, we can start to get even stranger. The Structure class in ctypes is itself a Python object, which can be seen in Modules/_ctypes/ctypes.h. Just as we wrapped ints and lists, we can wrap structures themselves as follows: class CStructStruct(ctypes.Structure): _fields_ = [("ob_refcnt", ctypes.c_long), ("ob_type", ctypes.c_void_p), ("ob_ptr", ctypes.c_long), # char* pointer cast to long ] def __repr__(self): return ("CStructStruct(ptr=0x{self.ob_ptr:x}, " "refcnt={self.ob_refcnt})").format(self=self) Now we'll attempt to make a structure that wraps itself. We can't do this directly, because we don't know at what address in memory the new structure will be created. But what we can do is create a second structure wrapping the first, and use this to modify its contents in-place! We'll start by making a temporary meta-structure and wrapping it: tmp = IntStruct.from_address(id(0)) meta = CStructStruct.from_address(id(tmp)) print(repr(meta)) CStructStruct(ptr=0x10023ef00, refcnt=1) Now we add a third structure, and use it to adjust the memory value of the second in-place: meta_wrapper = CStructStruct.from_address(id(meta)) meta_wrapper.ob_ptr = id(meta) print(meta.ob_ptr == id(meta)) print(repr(meta)) True CStructStruct(ptr=0x106d828c8, refcnt=7) We now have a self-wrapping Python structure! Again, I can't think of any reason you'd ever want to do this. And keep in mind there is noting groundbreaking about this type of self-reference in Python – due to its dynamic typing, it is realatively straightforward to do things like this without directly hacking the memory: L = [] L.append(L) print(L) [[...]] Python is slow. And one big reason for that, as we've seen, is the type indirection under the hood which makes Python quick, easy, and fun for the developer. And as we've seen, Python itself offers tools that can be used to hack into the Python objects themselves. I hope that this was made more clear through this exploration of the differences between various objects, and some liberal mucking around in the internals of CPython itself. This exercise was extremely enlightening for me, and I hope it was for you as well... Happy hacking!
https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/
CC-MAIN-2022-21
refinedweb
3,587
63.8
. At then end of section: "An example of a naming collision" you pose the question: "So how do we keep function names from conflicting with each other?" I'm not sure if I'm a little tired but is the answer to that question meant to be all the follow text on this page? Best, Yes, namespaces are one mechanism by which we can keep function names from having naming collisions. Ooooh thank god it's not short for "Sexually Transmitted Disease" :'-D Does overriding a function contradict the ODR because you are redefining a unction with the same name and number of parameters? No, there's no conflict with the One Definition Rule (ODR). The ODR as applied to functions says there can only be one definition for a given function. Overloaded functions share a name, but are still considered unique functions. And because they're unique, each can have a different definition without violating ODR. Many books that using "using namespace std" instead of "std::" for cout and cin. Is it best practice for "using namespace std" for cout and cin? Hi, Many books use it because it's easier to show code "snippets". Typing "using namespace std" is just referring (use namespaces from the standard library, hence std::). It's not best practice to use "using namespace std", why? Example if you are using two libraries: using namespace example; using namespace example1; and in both libraries, you have "cout", there will be conflict. Because compiler can't decide which to use! If you wrote it like this: example::cout or example1::cout, this will work because you have used "cout" from separate libraries. ----------------------------------------------------------------------------------------------------------- Still if you want to use "using namespace std", you can do it this way: using std::cout; using std::cin; and you can write a program like this: Kio nails it. This topic is discussed in more detail in lesson [link id="4979"]. Thank you so much for your contribution!! Alex when you die, you'll go directly to the heaven and sit with pope. XD how come I don't have to use 'namespace' in turbo C++ ? Because Turbo C++ is very old, and was created before they moved the standard library into the namespace. You really should use a more modern compiler if you can. Alex ,,, you are my best friend. where u living bro ? Thank you so much for your great contribution ! According to what I have learnt, the modern compilers with 'non-.h' facility should work even without using std::cout, shouldn't it? I don't understand what you are asking. My dear c++ Teacher, Please accept my many thanks for your instructive answer (below, by July 7, 2017 at 12:19 pm). With regards and friendship. My dear c++ Teacher, Please let me explain you that section 4.1a deals with naming confliction between variables. My question is about naming confliction between function and variable. My view is following: 1. From 1st program follows that: 1a. Compiler permits same name for function and variable inside its body. 1b. When in function's body (in this case main()) there is function call before variable's definition, latter's name can be same as that of called function. My explanation is that called function, after its execution, is destroyed, so its name is available. 2. From 2nd program it follows that when in function's body there is variable's definition before function's call, called function's name can NOT be same as variable's, apparently for variable is not yet destroyed so its name is not available inside hosting function. I my view correct? With regards and friendship. No. A variable name inside a block shadows an identical name from outside the block. So in the second program, the variable x shadows the function x. Therefore, when you try to call x(), the compiler notes that x is a variable, not a function, so calling it like a function doesn't make sense. In the top program, when the x function is called, the x variable hasn't shadowed it yet, so this is legal. My dear c++ Teacher, Please let me say you following: This program works fine and this produces error line 13 error: 'x' cannot be used as a function I used Code::Blocks for both. Then I'm confused on when a function and a variable can have same name. With regards and friendship. The x declared inside the block shadows the x declared outside the block. See for more information. cout is reserved word right ! but we should explicitly tell to the compiler that it is from std namespace does that mean we should create other identifiers namely cout cin but from other user-defined namespaces ? No, cout is the name of a predefined object. You should explicitly tell the compiler that it is from the std namespace. You can create your own namespaces if you wish, but I find this generally isn't necessary unless you're making a library that will be distributed to others.
https://www.learncpp.com/cpp-tutorial/naming-conflicts-and-the-std-namespace/comment-page-1/
CC-MAIN-2019-13
refinedweb
845
73.78
Stefano: The download link you indicated below requires a user name and password. How do you obtain it without becoming a part of the expert group? Carlos ----- Original Message ----- From: "Stefano Mazzocchi" <stefano@apache.org> To: "Apache Cocoon" <dev@cocoon.apache.org> Cc: "David Nuescheler" <david.nuescheler@day.com> Sent: Saturday, September 13, 2003 03:16 Subject: [headsup] JSR 170 v0.8 Community Review > I'm happy to announce that after years of work, the JSR 170 expert > group has been released the JSR 170 v0.8 for community review. After a > period of inactivity, I'm back in that expert group (and pushed a > little for this community review to happen sooner rather than later, > but didn't find any obstacles since they are eager to get feedback from > a wider community). > > There are currently weaknesses in XML support (expecially whitespace > and namespaces) but the EG is fully committed to fix those before 1.0 > > You should be able to find the spec here: > > >- > > draft.zip?id=170&fileId=1121 > > Feel free to send (costructive) comments on the API either publicly on > this list or privately to me. I'll make sure they arrive to the expert > group. > > Thanks. > > -- > Stefano. > >
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200309.mbox/%3C005201c37a92$76ad7940$6a01a8c0@rivendell%3E
CC-MAIN-2015-11
refinedweb
201
57.47
Created on 2009-08-28 08:32 by m.sucajtys, last changed 2010-12-18 18:21 by pitrou. This issue is now closed. During writing some code I discovered some behaviour of httplib. When we connect to host, which doesn’t respond with status line, but it just sending data, httplib may consume more and more memory, becouce when we execute h = httplib.HTTPConnection(‘host’) h.conect() h.request(‘GET’, ‘/’) r = h.getresponse() httplib tries to read one line from host. If host doesn’t send new line character (‘\n’), httplib reads more and more data. On my tests httplib could consume all of 4GB of memory and the python process was killed by oom_killer. The resolution is to limit maximum amount of data read on getting response. I have performed some test: I received 3438293 from hosts located in the network. The longest valid response line is HTTP/1.1 500 ( The specified Secure Sockets Layer (SSL) port is not allowed. ISA Server is not configured to allow SSL requests from this port. Most Web browsers use port 443 for SSL requests. )\r\n and it has 197 characters. In RFC2616 in section 6.1 we Reason-Phrase is intended to give a short textual description of the Status-Code.” So limiting maximum status line length to 256 characters is a solution of this problem. It doesn’t break compatibility withc RFC 2616. My patch was written originally on python2.4, but I’ve tested it on python2.6: [ms@host python2.6]$ patch --dry-run -i /home/ms/httplib.patch patching file httplib.py Hunk #1 succeeded at 209 (offset 54 lines). I've also check patch against code in svn tree: wget patch -p0 -i httplib.patch --dry-run patching file httplib.py Hunk #1 succeeded at 209 (offset 54 lines). Hunk #2 succeeded at 303 (offset 10 lines). Sumar, to get this moved forward could you please provide a unit test. Attached is a unit test which tests the issue. Unfortunately, since it uses the resource module to limit memory to a workable size, it will only work on Unix. The given patch appears to fix the issue well. I think this should be taken as a security issue (even if a rather odd one) since a malicious http server could be set up in place of the normal one and crash any http python clients that connect to it. Eg: Run: dd if=/dev/zero bs=10M count=1000 | nc -l 8888 And then: import httplib h = httplib.HTTPConnection('localhost', 8888) h.connect() h.request('GET', '/') r = h.getresponse() This should cause python to use up all the memory available. A py3k patch against revision 87228. First, I don't think the resource module needs to be used here. Second, I don't see why getcode() would return 200. If no valid response was received then some kind of error should certainly be raised, shouldn't it? By the way, looking at the code, readline() without any parameter is used all over http.client, so fixing only this one use case doesn't really make sense.? Well, the HTTP 1.0 RFC was filed in 1996 and HTTP 1.1 is most commonly used today. I don't think we need to support 0.9 anymore. I'll open a separate issue for ripping off 0.9 support, though. I just read the whole discussion and it seems that code was in place so that client can tolerant of a BAD HTTP 0.9 Server response. Given that issue10711 talks about removing HTTP/0.9 support (+1 to that), this issue will become obsolete. I too support removing HTTP/0.9. There are hardly any advantages in keeping it around. Well, removing 0.9 support doesn't make this obsolete, does it? On Thu, Dec 16, 2010 at 01:18:30PM +0000, Antoine Pitrou wrote: > Well, removing 0.9 support doesn't make this obsolete, does it? It does. Doesn't it? Because I saw in your patch that you fall back on HTTP 1.0 behaviour when the server does not return a status line and in which case a Exception will be raise and this issue won't be observed. > It does. Doesn't it? Because I saw in your patch that you fall back on > HTTP 1.0 behaviour when the server does not return a status line and > in which case a Exception will be raise and this issue won't be > observed. I don't think you understood the issue here. Calling readline() without a maximum length means the process memory potentially explodes, if the server sends gigabytes of data without a single "\n". On Thu, Dec 16, 2010 at 02:02:10PM +0000, Antoine Pitrou wrote: > I don't think you understood the issue here. Calling readline() without > a maximum length means the process memory potentially explodes, if the > server sends gigabytes of data without a single "\n". Yeah, I seem to have misunderstood the issue. Even if the response wa s an *invalid* one but it was huge data without \n, the readline call would just explode. - reading chunked response is doing a readline call too. Both this need to be addressed by having a limit on reading. I thought readline() is being called only when parsing headers which should almost always have CRLF (or at least LF) and thought valid responses always start with headers. Now that 0.9 client support has been removed, this can proceed (at least for 3.2). Here is a patch limiting line length everywhere in http.client, + tests (it also affects http.server since the header parsing routine is shared). In the morning, I had a comment on the patch wondering why read _MAXLENGH + 1 and then check for len of header > _MAXLENGH. Instead of just reading _MAXLENGH (and if the length matched rejecting). ( Looks like it did not go through). I think that either way is okay. I am taking the privilege of committing the patch. Fixed for py3k in 87373. So it is be available in the next beta. Shall merge the changes to other codelines. Partially backported in r87382 (3.1) and r87383 (2.7). Not everything could be merged in because of HTTP 0.9 support and (in 2.7) a slightly different architecture. Thank you.
https://bugs.python.org/issue6791
CC-MAIN-2021-39
refinedweb
1,060
77.64
Talk:Modernised Pascal 4) - that might make sense, if also applied to for, with, if, making 'begin' implicit everywhere, like it is done in Component Pascal. However it is major leap, and can be considered only in terms of implementin one more specific non-Borland mode. But personall i'd prefer implicit 'try' there :) 6) - as far as i got it, it is runtime variable initialisation, instead of compile-time one. Syntactic sugar of doubtful quality. I would say that may make sense together with 5), modifiyng with to make shortcuts to some subclasses, record members, whatever. with Addr = DataSet1.FieldByName['Address'], Phone=DataSet1.Field['Telephone'].AsInteger do begin Addr.Value := Addr.Value + #13#10 + IntToStr(Phone) end; Hence, turning with into bi-modal operator, like FROM clause in SQL SELECT, ehich can settle default table without alias and other tables with aliases. However this a major change in compiler, and cons and pros are not clear. - '=' looks absoletely not Pascal-style here, replaccing it with 'is' is even worse, brand new keyword like "beeing" ? Pascal is to be small language, even is not as small as Component Pascal :) - Compiler would have hard time to decide what to stor in aliases: cached value read-only, pointer to original variable, pointer to some object returned by some function, make inline substitution or whatever. + Programmer is no more needed to settle teporary variables, and would never change or Dispose it by mistake. + Programmer have easy way to avoid multiple expensive functions calls, like .FieldByName is. However it would be not making new initialized variables, but rather an aliases to exisiting ones. ......... i tried to giggle here, but later decided to go main page :) Contents - 1 "for" variables as "break" and "continue" labels - 2 prefix for binary and hexadecimal numbers - 3 braindump - 4 whats the point of elseif - 5 Proposal: Optional trailing separators and empty lists - 6 Proposal: Enhanced numbers - 7 Proposal: Enhanced case of string - 8 Proposal: Half open array definitions - 9 Proposal: Enhanced case for floats and open ranges - 10 Proposal: Explicitly discarding results - 11 Proposal: if 1 > a < 3 then ... {!} "for" variables as "break" and "continue" labels As fas as I know, "break" and "continue" can only influence their own "for" loop. When using nested loops (e.g. graphics programming, multi-dimensional array parsing) it can be useful to break out of the entire "for" structure. Currently this requires declaring a label or putting the code into a subroutine and using "exit". A shorter (and imo easier) way would be allowing parameters for "break" and "continue": for y := 0 to 2047 do for x := 0 to 1023 do if (scr[y, x] = 0) then break(y); // with parentheses for y := 0 to 2047 do for x := 0 to 1023 do if (scr[y, x] < 10) then continue y; // without parentheses prefix for binary and hexadecimal numbers 0b0010110 // binary via letter 2_0010110 // binary via base 0x01230ABC // hexadecimal via letter 16_01230ABC // hexadecimal via base braindump - It should be disallowed to call the constructor from an instace! Or it should have no result when calling from instance. - "Abstract class" declaration like Java. - "final" AKA "readonly" variables, a la Java and .Net. - "final" methods and classes which cannot be overriden or extended, respectively, like in Java. - "Removal of begin"? Nice :) The only interesting thing in VB (of course done badly :-/ ) is having a different block terminator for each block "type". I guess it would be nice to have these and implicit blocks in all conditionals and loops. - Non-reusable variables. Most of the time you don't need and you don't want to assign unrelated values to a variable repeatedly (i.e. you only need to use ':=' once on it), just Inc, Dec, use in for loop, etc. To be allowed to reassign a variable it should have to be declared with a special directive. whats the point of elseif in a block based language like pascal? Plugwash 21:59, 4 June 2006 (CEST) Proposal: Optional trailing separators and empty lists In many cases it makes sense to allow an additional trailing separator as well as allowing for empty lists, especially when dealing with conditional compilation. As an additional benefit it makes it easier to move around lines of code. Here are some (more or less stupid) examples: // no hazzle with the last comma, empty uses (if neither x1 nor x2 defined) uses (*$ifdef a *) unit_a, (*$endif *) (*$ifdef b *) unit_b, (*$endif *) ; // empty case statement (if neither x1 nor x2 defined) case expr of (*$ifdef x1 *) 1: write('x1'); (*$endif *) (*$ifdef x2 *) 2: write('x2'); (*$endif *) end; // no hazzle with the last semicolon: extensable parameters procedure f( const s: string; x,y: integer; ); // no hazzle with the last comma: extensable sets / arrays function_call([ 1, 2, ]); // no hazzle with the last comma: extensable parameters writeln( var_1, var_2, ); // empty type list (if neither x1 nor x2 defined); in some cases types must be in one block type (*$ifdef x1 *) t1=integer; (*$endif *) (*$ifdef x2 *) t2=integer; (*$endif *) There should be no problems concerning compatability and implementation. --Jasper 11:45, 5 August 2010 (CEST) Proposal: Enhanced numbers What about better readable numbers such as 100_000 or $_1234_5678 (note the "_" signs)? A number still should be started with "$" or a digit. All occurring "_" shall simply be accepted (and ignored). A base could be specified as well, e.g. - 4$123 (=27) - 2$1100 (=12) - 36$az (=395) These ideas are stolen from e.g. Ada. There should be no problems concerning compatability and implementation. --Jasper 11:45, 5 August 2010 (CEST) Proposal: Enhanced case of string The currently realized string-case allows for ranges which is nice but IMHO not really needed. What I need sometimes is matching string beginnings. A more or less stupid idea for the syntax could be: case str_expr of 'match'.. :; // match all strings beginning with 'match' end; However probably the parser needs to be changed to accept the missing second argument of "..". --Jasper 11:45, 5 August 2010 (CEST) Proposal: Half open array definitions What about arrays with an unspecified upper bound such as this?: type ta_byte = array [0..] of byte; No bounds checking should occur on such array accesses. The size of such type should be 0. Example usages: - The Windows BMP header type, i.e. the palette - When simulating dynamic arrays - Proper replacement of the often used "array [0..0]", see e.g. examples in Delphi's RTL --Jasper 11:45, 5 August 2010 (CEST) Proposal: Enhanced case for floats and open ranges I would like to propose to enhance the case statement for floats. What is needed then is the ability to test for open ranges and half open ranges. case float_expr of <2 : ; // match all below 2 >=2 .. <4: ; // match 2<=x<4 4 : ; // match 4 >4 .. <=6: ; // match 4<x<=6 8 .. 9 : ; // match 8<=x<=9 >9 .. 10 : ; // match 9<x<=10 end; In ranges the left expression should be less than the right one and therefore the first relational operator should be ">" or ">=" and the second one "<" or "<="; if the operators are absent ">=" resp. "<=" should be assumed. In Visual Basic .NET an "is" is added before the relational operator. An alternative syntax idea could be ranges like 4<..<=6. Proposal: Explicitly discarding results I often would like to explicitly discard a function result such as void my_func(x); The usual workaround is my_func(x); // using "extended syntax" or dummy := my_func(x); // dummy is not used, this gives a hint Both workarounds are not the best way to express the intention. There should be a way to allow for the new syntax and at the same time provide a hint/warning for the workaround with the extended syntax. There should be no problems concerning compatibility (extra switch) and implementation. Proposal: if 1 > a < 3 then ... {!} Shouldn't this be this: if 1 < a < 3 then ... {!} If not, i do not understand the proposal at all. --Mischi 13:50, 11 October 2010 (CEST)
http://wiki.freepascal.org/Talk:Modernised_Pascal
CC-MAIN-2018-47
refinedweb
1,327
61.36
future value schedule Evaluate the future value of an investment using a schedule of interest rates.Controller: CodeCogs Interface C++ Excel Future Value Schedule This function evaluates the future value of an investment using a schedule of compound interest rates. References: Example 1 #include <stdio.h> #include <codecogs/finance/banking/future_value_schedule.h> int main() { double principal = 10000.00; double schedule[] = {0.050, 0.051, 0.052, 0.053, 0.054}; int n = 5; double d = Finance::Banking::future_value_schedule(principal, schedule, n); printf("After 5 years, the 10000.00 investment will have grown to %5.2f\n", d); }Output: After 5 years, the 10000.00 investment will have grown to 12884.77 Parameters Returns - a double, the future value. Authors - James Warren (May 2005) Source Code Source code is available when you agree to a GP Licence or buy a Commercial Licence. Not a member, then Register with CodeCogs. Already a Member, then Login.
https://www.codecogs.com/library/finance/banking/future_value_schedule.php
CC-MAIN-2019-39
refinedweb
153
53.47
tmpfile From cppreference.com Opens a temporary file. The file is opened as binary file for update ( "wb+ mode). The filename of the file is guaranteed to be unique within the filesystem. The file will be closed when the program exits. [edit] Parameters (none) [edit] Return value The associated file stream or NULL if an error has occurred [edit] Example tmpfile with error checking. Code opens a temporary file with mode "wb+". Run this code #include <stdio.h> #include <stdlib.h> int main(void) { FILE* tmpf = tmpfile(); /* mode: "wb+" */ if (tmpf == NULL) { perror("tmpfile()"); fprintf(stderr,"tmpfile() failed in file %s at line # %d", __FILE__,__LINE__-4); exit(EXIT_FAILURE); } fputs("Hello, world", tmpf); rewind(tmpf); char buf[6]; fgets(buf, sizeof buf, tmpf); printf("%s\n", buf); return EXIT_SUCCESS; } Output: Hello
http://en.cppreference.com/w/c/io/tmpfile
CC-MAIN-2014-52
refinedweb
131
59.8
Important: Please read the Qt Code of Conduct - [shiboken2] Shared pointer support I want to wrap a library which is based on QT and which uses shared pointers. I stumbled over a few issues, one of them seems to be a showstopper: It seems that wrapping classes which return shared pointers in a virtual method is not possible at all. Non-virtual functions are fine. The error from shiboken2 is "#error: CppGenerator::writeVirtualMethodNative: B::genA(): Could not find a minimal constructor for type 'QSharedPointer<A >'. This will result in a compilation error.". I'm not sure about what can be expected from the shared pointer bridging. While I'm able to pass shared pointers which are generated from the c++ side around in python, it seems to be impossible to create a shared pointer from an object generated from the python side. The documentation in shiboken regarding smart pointers seems to be very limited. I couldn't figure out if setting "ref-count-method" is required and why? For QSharedPointer, there doesn't seem to exist a method for querying the reference count. The first point looks like a shiboken2 bug to me and I'm wondering if there are any workarounds here? Here are my test files: test.h #ifndef TEST_H #define TEST_H #include <QtCore/QSharedPointer> // if uncommented, you will get the following error when calling shiboken // #error: CppGenerator::writeVirtualMethodNative: B::genA(): Could not find a minimal constructor for type 'QSharedPointer<A >'. This will result in a compilation error. #define SHIBOKEN_ERROR class A { public: int a; A(int _a); virtual ~A(); }; class B { public: int b; B(int _b); virtual ~B(); void doSomething(QSharedPointer<A> a); #ifdef SHIBOKEN_ERROR virtual #endif QSharedPointer<A> genA(); }; #endif ptest.xml <!--?xml <load-typesystem <smart-pointer-type <object-type <object-type </typesystem> ptest .py import PySide2 import ptest a=ptest.A(1) b=ptest.B(2) a2=b.genA() print(a2.a) b.doSomething(a2) # following is not working, need to create a shared pointer from the python object a b.doSomething(a) - SGaist Lifetime Qt Champion last edited by Hi and welcome to devnet, Since it's on the shiboken level, I would recommend bringing your question to the PySide mailing list. You'll find there PySide2 developers/maintainers. This forum is more user oriented. Thanks for answering - I did so :)
https://forum.qt.io/topic/110539/shiboken2-shared-pointer-support
CC-MAIN-2021-21
refinedweb
388
55.64
On 5/31/06, Kuppe <kuppe@360t.com> wrote: > Thanks James for the quick reply. > > I understand that they are two different topics. I was only trying to > demonstrate the scenario that is interesting to me. Everytime that the > client receives messages it is only receiving the most recent and none of > the previously updated messages - somehow receive latest image. > > So after reading the MessageEvictionStrategy, it seems that i need to > implement this interface and therefore implement the > evictMessage(LinkedList) method. My implementation would receive a linked > list of messages which are ordered by time, and i should return a > MessageReference referencing a message that can be evicted. Is this correct? Yes. See how it goes - it could be we could refactor the code a little to make it easier to do exactly what you want. > I assume then that this should be accompanied by a > PendingMessageLimitStrategy which will force the MessageEvictionStrategy to > be used. Is this correct? Yes. The PendingMessageLimitStrategy is there to enable message limits (i.e. to calculate the maximum message limit for a given subscription; once that limit is > 0 then eviction will start to occur when a particular subscription has more than this number of messages available). To see more detail of how this works, see TopicSubscription > I also assume then that i could implement my own PendingMessageLimitStrategy > that would determine just how many messages are valid for all messages in > the queue. Yes - note that we are talking about non-durable topics here right. Your use of the word "queue" was just a reference to the buffer of messages to be sent to a single slow consumer on the topic right? > But the javadoc describes that the number returned from the > getMaximumPendingMessageLimit is based on the messages currently in the > message queue and is in excess of the prefetch size for the subscription. So PendingMessageLimitStrategy just purely returns the high watermark for a given subscription. e.g. if you use the ConstantPendingMessageLimitStrategy (which is the most common) - its just a constant you define in the XML config file - such as all consumers have a maximum message limit of 1000 > Are both of these interfaces pluggable in the configuration? Yes > If so, how do i > plug in my own specific implementation. It seems the examples given show > only an alias for the implementation class. So all of the activemq.xml file is pluggable. Whenever there is a custom tag that defines a bean, you can replace it with some regular Spring configuration. So in this example - the <pendingMessageLimit> defines the property "pendingMessageLimit" on the PolicyEntry and the <constantPendingMessageLimitStrategy> tag defines a bean (of type ConstantPendingMessageLimitStrategy) <pendingMessageLimitStrategy> <constantPendingMessageLimitStrategy limit="10"/> </pendingMessageLimitStrategy> so you could do this instead to use your own implementation class... <pendingMessageLimitStrategy> <bean class="com.acme.Foo" xmlns=""> <property name="foo" value="bar"/> </bean> </pendingMessageLimitStrategy> Note the use of xmlns="" in there as the <broker> element is usually in the activemq namespace and <bean> must be in no-namespace in XML. > Still, it seems that it is not possible to configure a way of constantly > overwriting the latest price with a new price in the message queue. You just need to evict the old price first. If there is no old price for the topic on the last price added, just evict the oldest one. e.g. if you set the message limit to 1000 then you will have a LinkedList of 1000 messages. When message 1001 is added, your MessageEvictionStrategy will be called to find the message to evict - so iterate through the list looking for a message on the same topic - and evict that one first - or failing that evict the oldest one (the first in the list). The source code of OldestMessageWithLowestPriorityEvictionStrategy might help -- James -------
http://mail-archives.apache.org/mod_mbox/activemq-users/200605.mbox/%3Cec6e67fd0605310838j787bd882yee9bfdbaae666931@mail.gmail.com%3E
CC-MAIN-2014-49
refinedweb
624
52.8
Implementing equals() or equals() and hashcode() Hi, I'm implementing a solution in Java 1.5.0 and have some performance questions. Background: The aim is to store rule sets with one or more rules. The rule sets will seldomly be changed in the database. Proposed solution: My idea is to have a ArrayList with RulesSets. Each RuleSet in the ArrayList is a RuleSet class which has a field of type ArrayList that contains the rules related to the rule set. A number of strings on the RuleSet class makes it unique so not more than one rule set can have the same combination. The same goes for the Rule clss. Whenever I need to get information I have all variables making the key for the RuleSet. Code example: **************************** RuleProvider.java ArrayList rulesets = new ArrayList(); public ArrayList getRules(String keyPart1, String keyPart2) { RuleSet rulesetGet = new RuleSet(keyPart1, keyPart2) return rulesets.get(rulesets.indexOf(rulesetGet)).rules; } RuleSet.java public class RuleSet { public String keyPart1; public String keyPart2; public ArrayList rules = new ArrayList(); public RuleSet(String keyPart1, String keyPart2) { this.keyPart1 = keyPart1; this.keyPart2 = keyPart2; } // Should implement equals() and/or hashCode() here? } Rule.java public class Rule { public String ruleType = "somevalue"; public String ruleValue = "anothervalue"; // Should implement equals() and/or hashCode() here? } ******************** Questions: 1) My main focus is to write fast code. Is there any neglibale performance difference if I implement equals() or hashCode() in both/neither/one of the classes? 2) Is it best practice to override the objects equals(Object o ) and hashCode(Object o) using the same signatures, or is it okey to have for instance equals(RuleSet rs) and hashCode(RuleSet rs)? 3) If I were to use a HashMap instead, would the performance be better? (The key of the HashMap would be a class with two fields, keyPart1 + keyPart2, and the value a ArrayList/HashSet of Rules) 3) Do you see any other points on how to improve performance? Data: There will be approx 1000 rule sets, each of them containing about 10 rules. Best Regards, Streamside 1) You'll have to measure it, but probably not. 2) You _must_ use the same signature or it won't override those methods (and won't get used). Use the @Override annotation introduced in Java 1.5. 3) Yes, the look up of the rule sets will be faster. Set the initial capacity to you expected number of rule sets. 4) Not off hand, but build it simply first then measure if it is good enough. If not, look where the problems are.
https://www.java.net/node/685769
CC-MAIN-2014-10
refinedweb
424
65.62
Any ideas? Thanks. -- WHA > doubt you'll find one that will really cover both subjects adequately, since they're totally different languages. The only thing I can think of that comes close is the "little language" books, that give a brief summary of a wide variety of "little," high-level languages. The one I'm familiar with (though I don't own it) is _HPL: Little Languages and Tools_ by Salus (editor). -- Erik Max Francis / m...@alcyone.com / __ San Jose, CA, USA / 37 20 N 121 53 W / &tSftDotIotE / \ Things are as they are because they were as they were. \__/ Thomas Gold Bosskey.net: Return to Wolfenstein / A personal guide to Return to Castle Wolfenstein. There isn't such a book as Jalab wants. It'd be great fun to write one, but there isn't a market ... well, I'll just say such a book doesn't exist now. What's your goal for the students? To be ready to go out in industry and solve problems? To understand the theory and practice of industrial-strength "scripting languages"? To pad their résumés? To safety-proof them so they don't hurt themselves the first time they're asked to write a dynamic Web page? To supplement an academic course on DSLs? I know what *I*'d do in each case--and they're not all the same answer. I'm all for people buying *Little Languages and Tools*, incidentally. -- Cameron Laird <Cam...@Lairds.com> Business: Personal: Why not force your students to download and study some of the high quality, free on-line tutorials that are available for both languages? Raymond Hettinger. There is also a project devoted to make Python _the_ language for educational purposes. You will find the related documentation at.. (Here, I suppose you need a book devoted to programming with Perl or Python, not a book on creating language interpreters). --------------------- Alessandro Bottoni > Alle 23:53, venerdì 1 novembre 2002, Jalab ha scritto: >> Greetings all, >>. One that does include exercises and IS slanted towards university students, particularly ones just learning to program with Python as a first language, is Deitel and Deitel's "Python How to Program". I'm ambivalent about it -- I was a tech reviewer so cannot be unbiased, I guess. On the minus side (from my POV), it's HUGE (I like smaller books better than larger ones) and some of the advice and framing it gives is more appropriate to other languages (more "static" than Python) than to Python itself. On the plus side, it's quite thorough and, well, didactic. I'm helping a friend, bright but not a programmer, learn programming with Python, and she tells me that, while she liked Gauld's book, when it was done she felt still far from secure with Python and with programming -- good on the overall view, scarce, from her point of view, on the nitty-gritty. She's now proceeding with "How to think like a computer scientist" (Python edition) from the net, and so far is pretty happy about how it complements Gauld's text, with more details &c. But if one wants just one book, this seems to count as a defect of Gauld's text; Deitel and Deitel, I think, would not suffer from it -- if and when a student manages to work all through it, I think the student WILL feel pretty confident in their mastery of the subject. Another book worth mentioning in this context is Magnus Lie Hetland's "Practical Python". Again I can't be unbiased about it (because I was a technical reviewer for it, too, AND because I think of Magnus as a friend though we've never met face to face). It does lack chapter exercises, and it IS a pretty big book, but I like the informal, communication-rich style, and I think it's didactically very valid. Half the book (the half I didn't tech-review) is made up of completely worked-out *significant* examples: if one appreciates this didactical device, it's hard to find it better used than in Magnus' book (personally, I'm of two minds about it; overall I think I prefer many small toy examples rather than fewer larger significant ones, but I surely know many people whose preferences are for examples that are actually useful working programs). But this still doesn't answer the OP's request for a book that teaches BOTH Perl and Python, particularly one with all the typical devices used in university textbooks, such as per-chapter exercises. I don't think such a book exists, and for a good reason: tiny demand. Not for the 'exercises' part, which would be a good thing to consider for anybody writing textbooks, of course. No, the problem is with having a single book teach two disparate languages. I have a few such books and invariably they gather dust somewhere in the back rows of my shelves... by trying to do two things at once, they generally can't do either as well as a book that focuses on just one of them. >. And where would you find the volunteers, experienced AND happy with both Perl and Python, to do the writing? Quite a few Pythonistas do have Perl experience, but few have any _liking_ for it (there are surely exceptions, such as our homonym who's now working for our previous employer, but I have my doubts that there are enough of them...). > ;-). Alex Right! I was thinking to a book completely devoted to Python, actually. > > > ;-). I agree. I bought a few different books regarding compiler and interpreters building and I never found a really _usable_ one. It would be the kind of work I greatly appreciate. I met (on the net) someone who was writing a book on interpreter building based on Java (both as the implementation language and the target one). I asked him about the expected publishing date two years ago and he told me that it was late. Maybe now it is available... --------------------- Alessandro Bottoni Honestly, I don't know if there is such a thing. Most books about Python are listed at: Both Perl and Python book as usually practical guides or reference books for practicioners, rather than school books. Neither language has typically been on the curriculum. (Although I think this will change.) There is Perl to Python Migration by Martin C. Brown, but that's hardly what you are looking for... Otherwise I think your best bet might be some Linux programming book. But that won't be school books exactly... Python: Visual QuickStart Guide by Chris Fehily is a fairly cheap Python book, if you are worried about costs. It's not a typical school book though. A book commonly used for teaching Python is "Learning Python", by Lutz / Archer, but it's slightly dated. How to Think Like a Computer Scientist: Learning with Python by Allen Downey, Jeffrey Elkner, Chris Meyers teaches basic programming skills using Python, but that's not really what you are looking for I guess. It's probably the only book about Python written with students as the prime target. It's rather a high school book though. It's also on the web You could also have a look at web sites like I hope you find something suitable... >). It's also expensive, if you were one of the three or four regular posters to this list who didn't get a review copy :) Cheers, M. -- [3] Modem speeds being what they are, large .avi files were generally downloaded to the shell server instead[4]. [4] Where they were usually found by the technical staff, and burned to CD. -- Carlfish, asr My concerns were primarily teaching of poor coding techniques (e.g. use of magic numbers instead of well-named constants) and an approach that I can best describe as Python written by a Java programmer (rather than teaching a Pythonic approach). -- Aahz (aa...@pythoncraft.com) <*> Project Vote Smart: > In article <m27kfsm...@python.net>, Michael Hudson <m...@python.net> wrote: > >Alex Martelli <al...@aleax.it> writes: > >> > >>). > > My concerns were primarily teaching of poor coding techniques (e.g. use > of magic numbers instead of well-named constants) and an approach that I > can best describe as Python written by a Java programmer (rather than > teaching a Pythonic approach). I was one of the technical reviewers for this book, and frankly I wasn't impressed. They only asked me for review of factual information, not presentation or style; and I feel it is the latter two that torpedo this book. My favorite (so far) is Steve Holden's "Python Web Programming." It covers a lot of territory and includes a lot of neat stuff. Also enjoyable to read. I realize that several other bright people on this list are or have written books recently; I just haven't read them yet... Chris Gonnerman -- chris.g...@newcenturycomputers.net That's all they asked me for, too, but I gave 'em my opinions, anyway. (Big surprise, right? ;-) As it happens, one thing I did just yesterday was get a birthday present for a friend, bright but a newbie to programming in general, who's studying Python -- and upon mature deliberation what I ordered was indeed Steve's book. Said friend has done some HTML authoring, and I think Steve's coverage of many technologies relevant to web programming as well as Python will be quite helpful to her, and she'll also appreciate (I think) Steve's knack for friendly explanations and his readable style.). Alex >). Textbooks aren't designed to be entertaining, true; but frankly I found it painful to read in the final form. The content isn't the first thing one notices about a book; rather it's the presentation (cover, art, paper, etc.) The pages are nasty slick stuff, and the fonts are IMHO poorly chosen for readability. I haven't gone back and looked at the review copy to see if it's the same or not, but I don't remember it being so hard to look at. It's just a dang ugly book, in my opinion; and then on top of that it's a hard read. Sure, it may be suitable for a Python class at college, but my advice to the prospective victim (I mean student) is to get Steve's book also and read it first. Then you'll hardly have to look at the Dietel book most of the course. Nolo contendere -- except that some students may prefer Magnus Hetland's "Practical Python" (aPress) instead of Steve's book. (I'm equally biased towards both, since I tech-reviewed both and both Magnus and Steve are my friends;-). Steve introduces many other technologies that an accomplished web programmer needs to understand something about, including networks, HTTP, HTML, relational databases, etc; and develops one large, powerful framework for programming asynchronous webpages that rely on a relational DB. Magnus develops ten not-quite-as-large rich, complete examples, in a wide variety of different fields. If you like fully-worked-out significant examples, either book should gladden your heart; if the web is what you most care about, then particularly if you don't feel quite secure about its various technologies Steve's book may be preferable -- if you care about a wider range of things (with some web programming, but not just that), Magnus' variety may be preferable. Steve has the advantage of being of English mother-tongue (he's British but has been living and working in the US for years, so no "jarring" Britishisms should give any problems even to native-US'ers), while Magnus, like me, has English has a second language (I don't notice that as producing any significant difference -- but then, I guess I wouldn't, not being of English mother tongue myself). Both books are BIG, in good part due to the fully worked out significant examples. As a personal taste, I prefer SMALL books, and I prefer toy examples rather than "fully worked out significant ones" (e.g., the kind of examples found in Eckel's or Lippmann's books are much more to my didactical tastes than those found in Stroustrup's...). For readers that share my tastes, it may be worth waiting for the next edition of Lutz and Ascher's "Learning Python", O'Reilly -- the current edition is very good but alas limited to Python 1.5.2, the next one will no doubt cover 2.2. You pays your money and you makes your choice...! Alex At the University of Oslo there is currently a course which teaches Python, Perl and some Bash. The professor, H. P. Langtangen, has written a textbook called "Scripting Tools for Scientific Computations". The book hasn't been published yet, but there exists a preliminary version which the students are using. I like it. You might want to check out the course homepage (in Norwegian): or take a look at the course notes (in English), which cover alot and will give you a good idea of what the book is like: If this seems interesting I suggest you get in touch with the professor/author. You'll find his email address at the course homepage. Good luck. Martin > Textbooks aren't designed to be entertaining, true; but frankly > I found it painful to read in the final form. The content isn't > the first thing one notices about a book; rather it's the > presentation (cover, art, paper, etc.) I was one of the authors of the Deitel book; but just to be clear: I am speaking as a Python evangelist. I am not speaking for any of my coauthors or for my employer. That being said... One of the major goals for the book was to try to push Python down into CS1 courses, where I believe it deserves far more consideration than it currently receives. Our hope was that the book would demonstrate to professors how well suited Python is as a first programming language *and* how easy it is for novice programmers to create more complex applications (e.g., GUI, multimedia, etc.). However, I've noticed that most of the post-publication criticism of the book has focused on issues like presentation, art, etc. It's not my job to dispute these claims. I personally believe that an author should stay out of subjective discussions of his book, once that book has been published. But I still very much believe in the promise of Python in the university. And because the Deitel book is the only college textbook of which I am aware, I wanted to get some feedback on what would make a more successful Python textbook. How high does presentation rank when considering a book? Are there examples of other college textbooks that people think do an excellent job in the presentation department? What kinds of materials can we (as a community) produce to get schools to consider and adopt Python as a first programming language? Any other thoughts on Python in the university? At the February 2002 SIGCSE conference there was a panel discussion on using Python in the traditional CS 1 course. There were a few people there already using it and obviously most of the other people attending the discussion were intrigued enough to come. Everyone using Python alread spoke very positively about it. :-) I also suspect this issue may prevent other universities from seriously considering Python for introductory courses. Every publisher rep that stopped by my office this fall had never heard of Python and didn't think they had any Python textbooks but most of them ended up sending me some Python book that they published in hopes I might use it even though none of them were really textbooks. We're using a prepublication copy of a Python textbook written by someone else at the SIGCSE panel discussion. I won't mention his name or school until I check with him, but I really like his textbook. It is not a "teach every concept of Python book", but rather a traditional problem solving/design CS 1 textbook that uses Python. I did take a quick look at the Deitel & Deitel book, but have to admit I never liked any of their books for a CS 1 course and the Python one didn't change my mind. I haven't looked at it since last spring so I can't be very specific but the presentation style didn't do anything for me and I think it's way too big for a CS 1 textbook - there's no way I'm going to cover all that in CS 1 and I think the students find it intimidating. We were previously using C++ for our CS 1 course and have a Java course that students could take after CS 1 and CS 2 in C++. As a side note, I'm also teaching the Java course this semester and the more I use Python the more I dislike Java. While I have not done any formal studies, I can certainly say that Python is working much better than C++ did. Fewer students have dropped and a larger percentage of students registered for CS 2 than in past years. We did start with fewer CS 1 students this year - I suspect mainly because the tech downturn has discouraged those used to see CS as a quick way to get rich and weren't really interested/excited by it. In the past I would be swamped during office hours with students wanting help deciphering C++ compiler errors. This semester almost nobody has stopped by for syntax issues. The only reason they stop by now is for problem solving/algorithm help. The indentation certainly does not bother students who don't know any other way. The students appear to enjoy it more and we can solve more interesting/complex problems in Python than C++ at the CS 1 level. Also, because of the simpler syntax, we've been able to spend more time in class working on problem solving and design issues. I'm also going to use Python in CS 2. My current plans are to use a C++ CS 2 textbook and rewrite many of the examples in Python and then supplement other topics with my own notes and the Python version of "How to Think Like a Computer Scientist". I will also teach them a little C++ as we go along. I'd love to hear from anyone else using Python in CS 1 and/or CS 2 and I'd love to see a Python CS 2 textbook so I don't have to write one :-) Generating a list of the colleges/universities using Python would be a good start to possibly convincing others to give it a try. Dave Why can u teach this in CS2 when they learn their 2nd programming language? I think Python as a first language is a good idea. They can always build on this knowledge. >presentation department? What kinds of materials can we (as a >community) produce to get schools to consider and adopt Python as a >first programming language? Any other thoughts on Python in the >university? Just to repond to the "other thoughts" part: "Combinatorial Algorithms" by Kreher and Stinson is a fascinating book, and I regularly use it as a resource for programming algorithms. The algorithms are notated in a kind of pseudo computer language that is very easy to convert to python. There is a webpage for the book where one can download some C sourcecode implementing the algorithms. Seeing these beautiful algorithms converted to C sends shudders down my spine. Starting from these C-sources sometimes but mostly using the pseudo algorithms in the book I am slowly converting each algorithm in the book to python. Since this process is driven by the occasions that I need a specific algorithm it's pace is determined by my progress on the path of leaning combinatorial algorithms. Sometimes I imagine however how incredibly good an argument for python the book could become if the pseudo code would be replaced by python, or, failing that, if the webpage would also provide python versions of the algorithms. AV. The word is "you", not "u". You may not know this yet, but the rest of the Internet is *nothing* like AOL. We prefer complete English words here, and "cute" abbreviations like "2" for "to" (or "too"), "4" for "for", and "u" for "you" just make the writer appear juvenile and immature. I'm not saying that you *are* juvenile or immature, but that's how using those words will make you appear. The only shorthand that is commonly accepted and used on the broader Internet is appreviations of entire words, like "FYI" for "for your information" or "BTW" for "by the way". HTH. ("Hope this helps"). HAND. ("Have a nice day"). -- Robin Munn <rm...@pobox.com> PGP key ID: 0x6AFB6838 50FF 2478 CFFB 081A 8338 54F7 845D ACFD 6AFB 6838 >! :) > I also suspect this issue may prevent other universities from seriously > considering Python for introductory courses. Yeah, but in reality it shouldn't. Up-and-coming programmers routinely ignore what we consider to be even the most basic best practices. I mean, what first year (or any year, for that matter!) languages force people to use good variable names, limit their use of global variables, avoid magic numbers, and stay away from creating 1000-line functions? Anyway, it was exciting for me to read your observations so far from using Python at the university level, especially the idea that more problem-solving time is being spent in "algorithm space" than "syntax space" (which shouldn't surprise me since that's the experience Python users tend to have). I'd love to hear more of what you learn as time goes on - please keep posting! -Dave > I mean, what first year (or any year, for that matter!) languages > force people to [snip] stay away from creating 1000-line functions? If any language does this, it would be Python... Cheers, M. -- MGM will not get your whites whiter or your colors brighter. It will, however, sit there and look spiffy while sucking down a major honking wad of RAM. -- I realize the problem of course. In my class we do cover all the introductory material, and object oriented programming, but we also use VPython for graphics and write some cgi scripts to process html forms. Selecting exactly the topics to satisfy very diverse topics without writing an enormous text is not easy. ben...@cs.bu.edu (Ben Wiedermann) wrote in message news:<d6dae77c.02110...@posting.google.com>... Interesting. I doubled in both Math and CS. Half of the CS courses had an ethics component, while none of the maths did. <irony> Which only makes sense. I've never heard of a fudged mathmatical model used to support the desired conclusion of a sponsor with deep pockets. Have you? </irony> > On Wed, 6 Nov 2002, Dave Reed wrote: > > >. Simon [much deleted] Just concentrate and follow the thread, Rob. I have to say, I'm with Rob. How much work would two letters cost you? Especially when the two letters we're talking about are the two letters on the end of my *name*. My name is Robin, not Rob. Which brings me to a second point of Internet etiquette: don't trim attributions unless you are also trimming *all* quoted text written by that person. Trimming attributions had two effects in this case: first, it makes this thread look like ACalcium is responding to him/herself (and who's this Rob?). Second, it resulted in Chris also using my name incorrectly, since there was no more "Robin Munn wrote:" to point out what my name *really* is. Finally, did you ever think about the fact that Rob is a man's name exclusively, while Robin can be a man's name or a woman's name? No, you didn't think about that. You lucked out this time -- I am a man, so you didn't accidentally call me by a wrong-gender name. But be *careful* about that next time! Calling someone by a name that isn't theirs and isn't even the same gender as theirs is even more impolite than your first offense. O.K., end of netiquette flame. Nothing to see here folks, move along, preferably to a thread that's actually ON-topic... P.S. Chris, no need to apologize: you had no way of knowing that Rob isn't actually my name. Me, too--that is, I have occasional opportunities to advocate use of Python in school courses, and I'd welcome more information about what has and can work. -- Cameron Laird <Cam...@Lairds.com> Business: Personal:? I know of abundant ethical questions spe- cific to mathematicians. I can supply plenty of examples of "fudged mathematical model[s]", if that'll help. sounds more hostile than I mean to be. Alex, maybe you can help me articulate this. Just as it's taken a while for the discipline to catch on that static type checking (as in C and Java) is an impo- verished proxy for real validation, I have hopes that we can eventually raise programmers' vision of what constitutes good support for teamwork and re-use from its present regard on misapplied encapsulation. >? When I was studying for Computer Science degree, in each of the 4 years we had a term (semester) of ethics classes, 1 hour per week. These were usually given by the same lecturers who gave courses in more technical Comp Sci matters. The one that really sticks out in my mind is the following question that was posed to us in 2nd year (age ~20). We were asked to imagine ourselves as being asked to perform a particular task, and to state what would be our course of action in that situation. We were to select a single answer from a choice of four:- 1. Carry out the task without complaint. 2. Carry out the task under protest. 3. Resign 4. Resign and form a protest group. We were posed the following scenario: "You are asked to develop a software simulation of the spread of a chemical warfare agent. The purpose of the simulation is to maximise the death toll from the use of that agent". Not surprisingly, there were very few people who were willing to carry out the task, and I think those that said they would were saying so more to play Devil's Advocate than anything else. But there was one surprising outcome: When asked about developing the simulation in relation a Russian City (this was before the Berlin Wall came down), the majority in the class chose option 3: Resign. When asked about developing the situation in relation to a French City, the majority of the class selected option 4: Resign and form a protest group. I am very glad that I had the opportunity to confront and consider such important questions in a situation where there were no real ethical or financial considerations. The debate much clarified my thoughts in relation to not just the ethics of computer science, but also the unthinking racism that is often present in otherwise well-balanced individuals. > I know of abundant ethical questions spe- > cific to mathematicians. I can supply > plenty of examples of "fudged mathematical > model[s]", if that'll help. I think the above question about chemical warfare simulations applies equally to mathematics: obviously there would be a large mathematical and statistical input into such simulations. These days, is there such a thing as a mathematics course that doesn't involve extensive exposure to Computer Science? regards, -- alan kennedy ----------------------------------------------------- check http headers here: > These days, is there such a thing as a mathematics course that doesn't > involve extensive exposure to Computer Science? Yes, the one I did at Cambridge (UK) had very little CS content (there were computer projects in the second and third years but you could ignore them if you chose to -- I didn't, but I know people who did). I think I could have attended three hours of lectures on how to program in Pascal (I *did* ignore those -- my projects were a bizarre melange of Python, Haskell and Common Lisp tied together by bash scripts). I graduated in 2000; I don't think much has changed since ('cept it might be C and not Pascal now). Cheers, M. -- You can lead an idiot to knowledge but you cannot make him think. You can, however, rectally insert the information, printed on stone tablets, using a sharpened poker. -- Nicolai -- (snip) > We were posed the following scenario: "You are asked to develop a > software simulation of the spread of a chemical warfare agent. The > purpose of the simulation is to maximise the death toll from the use of > that agent". > That's far more interesting than our ethics unit, which dealt exclusively with the Usual Suspects of copy protection and file access. Were you in the real world outside of the Chemwar industry, the same question would be coached in terms of fertilizer dispersal. Your employer would get the same information, but would avoid cluttering up the question with folks emotions. The same graph theory that solves networking problems also solves indistrial logistics problems, and military transport problems. >. I've seen courses that used this tactic to (with some success) teach the value of obeying interfaces and abstractions. -Justin In my experience, the "encapsulation" problem is a significant hurdle. And it seems to me that folks who came to object-oriented programming from procedural programming seem to have a harder time overcoming the hurdle than those who have known only object-oriented programming. Strange. So, let's help folks get over this problem. If we could come up with some sound bites to offer potential converts, I think it would be of good use.. Any other thoughts? For that matter, we could tackle any other issues (e.g., typing) that decision-makers might have problems with. I am soliciting evangelists, here, not debaters! There are enough of those threads.... When I was studying Physics at the University of Padua (in the nineties) we had an official booklet of the faculty with advices about the choice of non-mandatory courses. I still remember the sentence about computer science courses. Translating quite literally from Italian it reads something like "In no case plans of studies involving CS courses from other faculties will be accepted by this Physics commission". I would not be surprised if this politics was still enforced. In more ten years of Physics (including the Ph.D.) I never had a single course about programming (excluding maybe two or three seminars in one of the laboratory course I had, but they were unofficial, not part of the course; we also had computer hours in the afternoon, but unofficial and not mandatory). It could seems strange to somebody, but personally I second that politics. Programming is a nice thing you can do for fun by yourself, but the University has to teach you the real thing, not to spend your time on other topics. Whereas it is true that lots of physicist (and mathematicians) spend a lot of time on computers for their research, it is also true that there still many researchers which can completely avoid the computer (I am one of them). I think that, at least in Europe, most of Physics and Mathematics course still do not involve extensive exposure to Computer Science. And this is a good thing. -- Michele Simionato - Dept. of Physics and Astronomy 210 Allen Hall Pittsburgh PA 15260 U.S.A. Phone: 001-412-624-9041 Fax: 001-412-624-9163 But very true; I was proficient in Pascal and C by the time I started using any OO language (C++ was my first, then Java, then [urgh] Perl, and finally, Python). While learning C++, the toughest problem was changing my "world-view" of simply- structured programming, into an effective "OO-world-view", where objects interact among themselves without much "procedural glue". > > So, let's help folks get over this problem. If we could come up with > some sound bites to offer potential converts, I think it would be of > good use. > "Give your data personality" "Allow your information to think for itself" "Let your data do the walking" (changed from the yellow pages bite) >. Maybe from a language design viewpoint it was easier to implemnt, but also from a phylosophical and from a software-design viewpoint, it's much saner than the "you and you, but not all yous" implementation of Java and C++, and others. > > Any other thoughts? For that matter, we could tackle any other issues > (e.g., typing) that decision-makers might have problems with. I am > soliciting evangelists, here, not debaters! There are enough of those > threads.... > :-) -gustavo OK, let's see what I had to say about this subject in our correspondence of about a year ago, then... the quoted questions in the following are also by you, Ben. [ start of edited quotes from our correspondence ] > Once we show __private, why would we suggest using the single > underscore convention, as this would be inconsistent w/ the "principle > of least privilege" we espouse in our other books? Hmmm, because that principle is incorrect? Once upon a time, there was the dream of the Waterfall model of software development. First, one would do all Analysis (advanced texts split that into Domain and Requirements phases, a mini-waterfall of its own), thus reaching perfect understanding of the domain being modeled and the needs to be met by one's programs in that domain. Then, one would proceed with Design, the outline of every solution aspect not depended upon details of implementation. Then, Coding. And so on. Maybe that could work for Martians; never having met one, it's hard for me to say. 30+ years of experience have easily shown it's a disaster for Earthlings, anyway (what a waste: anybody familiar with the works of some key thinkers of our grandfathers' generation, such as Ludwig von Wittgenstein, Alfred Korzibski, and George Santayana, would have known that from the start -- apparently, however, culture is fragmented enough today that few Methodologists had ever heard of these guys). Human beings are evolved to work in chaotic environments, superimposing MODEST, LOCALIZED amounts of control and supervision upon the sheer but undirected energy of Chaos to yield an overall process flowing more or less in the desired direction. Since a little supervision and control makes things so much better when compared to total disorder and anarchy, it's a natural fallacy to think that MUCH MORE control and supervision will make everything just perfect. Wrong. When some sort of threshold is exceeded, the attempts towards tighter control take on a dynamics of their own and start absorbing unbounded amount of energy and effort in a self-perpetuating system. I don't want to get into politics, but this IS very much what happened East of the Iron Curtain -- far too much control, indeed so much that the first serious attempt to release it crumbled the whole system to not much above feudal/anarchy. The excesses of control in the West were fortunately moderate enough that they could to some extent be (partially) unwound less explosively. I'm not arguing for anarchy, mind you -- Somalia is a living example of what anarchy means; just that, quite apart from ethical issues, from a strictly engineering, effectiveness standpoint there is a reasonably flat optimal region, well short of totalitarian states but much more controlled than 'everyone for himself'. Back to software development, we've witnessed similar "secular trends". Here, the "anarchy" side, in a small scale, is played back again and again in every student's initial approach to programming, and most small software houses' beginnings. But we also have plenty of huge projects run under "the stricter, the better" principles to remind us constantly of what an utter disaster those principles are. To be specific. "Principle of least privilege" assumes you somehow know what the 'right' privilege for a certain operation SHOULD be. This in turn takes it for granted that you completed a detailed and workable design before starting to assign privileges (presumably, before you started coding). And things just don't work that way. "Least privilege" is indispensable for SECURITY issues. It's anything but unusual for security issues to have diametrically-opposed needs to any other kind: if you have to assume an adversary who is actively trying to break things to his advantage, your correct mindset is very different than it would be otherwise. Assuming adversarial situations where none exist is, clinically speaking, symptom number one of paranoia. I'm as keen on security as anybody else I know (I run OpenBSD, I refuse to run any protocol on the net except SSH and other demonstrably-secure ones, etc, etc), but it's best to keep such mindsets WELL SEPARATED from ordinary software development needs, to keep one's sanity and productivity. Didactically speaking, you have to show a beginner that utter chaos will not work optimally, of course. But proposing over-tight control as the viable alternative is not feasible. The only really workable way to develop large software projects, just as the only really workable way to run a large business, is a state of controlled chaos. There is a broad range of mixes between chaos and control that can work, depending on the circumstances (as I said above, the optimal region is reasonably flat -- a good thing too, or we'd NEVER get it right), but the range does NOT extend (by a LONG shot!) all the way to the "Waterfall Model" on one side (just as it doesn't extend all the way to "Do what thou wilt" on the other:-). There's a didactical school that claims, to counteract the students' natural disposition to anarchy, they should be taught programming discipline as strict as possible. That's like saying that all schooling should be in a Victorian College atmosphere, with starched collars and canings from the Prefects, for similar reasons. The Victorians did achieve marvels of engineering that way, albeit perhaps at excessive social and psychological costs. But we don't do things that way any more in most fields. We surely don't in the kind of software development that makes a difference in a competitive field (if you develop non-competitively, e.g. for the government or in a university setting, your incentives are clearly different; personally, after a decade working for university, IBM Research, etc, I chose the rough and tumble playing field of real life software development -- more money, AND is more fun too:-). > What is the benefit of "public by default"? Minimizing boilerplate. 90% of exposed properties in languages and object models which don't support that lead to a lot of boilerplate: public WhateverType getPliffo() { return m_Pliffo; } or (in COM/C++): HRESULT get_Pliffo(WhateverType* pVal) { if(!pVal) return E_POINTER; *pVal = m_Pliffo; } or (in VB): Public Property Get Pliffo() As WhateverType Let Pliffo = mvarPliffo End Property and so on, and so forth. There is no benefit whatsoever in all that boilerplate. Maybe, if one has been educated to such strict discipline, there is a "feel-good factor" -- "See, Mum, I'm being a good boy and only accessing Pliffo via an accessor method!". But quite apart from program efficiency (that's not a major issue AT ALL), all that extra, useless code is a dead-weight dragging your productivity down -- sure, some silly "Wizard" can originally generate it, but _people_ are going to have to maintain and test and read and document it forevermore. Sheer dead weight. Some languages don't give you alternatives: if you originally expose a "public WhateverType Pliffo" you're sunk -- client code will come to depend on access idioms such as WhateverType myPliffoReference = theNiceObject.Pliffo; and then you can't turn that publicly exposed data into a call to an accessor forevermore (without breaking client code, NOT a viable option in most large software projects). But that's a language defect! You don't have to get all the way to Python to find solutions: e.g., in Eiffel, you can have client code use that sort of construct AND Pliffo will be just accessed if it's a data attribute, called if it's a method. Client code will need to _recompile_, of course, but then, in Eiffel, you need to 'rebuild world' every time you sneeze, so that's not considered an issue. Point is, with this approach you don't HAVE to do "Big Design Up Front" ( for much more). You have an attribute that looks like client code might perhaps be interested in looking at, you just use it. If while refining the design it turns out that a call to a getter-method is needed, fine, you add the getter-method *WITHOUT* breaking any client-code. When you do it IF AND WHEN NEEDED, you find out that (depending on various factors) it's something like 10% of properties that need to be "synthesized" or otherwise controlled by accessor-methods -- the others are just fine without any boilerplate. So, OF COURSE, the default in Python is oriented to supporting that 90% or so of cases where nothing special is needed, NOT the remaining 'special' 10%. This minimizes programmers' effort over the whole lifecycle -- which is typically interactive, of course, NEVER a "Waterfall" (you start with some analysis, then begin design and need to go back to the analysis because design has given you a better understanding, then back to design, start coding and you find the design needs to be tweaked, etc, etc -- AND YET you'd better *release early, release often* if your software artifact is to have some relevance to your customers' problems in this frantically changing world of business today...!!!). See for example. I find it particularly interesting that, while many of the "conspirators" were early enthusiasts of (what is now called) Agile development in various guises, others (such as Fowler, Martin, Mellor) were upper-M Methodologists and gradually saw the light over the last decade or so of experience. Of course, these things take forever to percolate down to universities. But, the pace is picking up, particularly as movement back and forth between universities and "real life" software development is not as glacial as it used to be. Saturday I was a scheduled speaker at Linuxday, presenting the changes in Python over the last year-plus, and I noticed with interest that well over half the attendees were involved with both university/research endeavors AND commercial projects. I was also struck once again by how many were using functional programming (Haskell foremost, but O'Caml seems to be gaining) for the "academic respectability" in their publication-oriented endeavors, AND pragmatical programming (Python foremost, but Ruby seems to be gaining) for the "real word productivity" in those projects where they actually have to deliver AND maintain working software. I don't necessarily LIKE all of these trends (I'll take Haskell over any ML, and Python over Ruby, any day:-) but I do observe them and think them significant. Some of these people teach SW development to freshmen as part of their duties, and in that case it appears that they almost invariably have to teach more traditional languages (Pascal foremost, but Java seems to be gaining). I don't know how much that depends on these being Italian institutions in particular, of course (I gather Java has already overtaken Pascal as an introductory language in the US universities, for example). But that's nothing new -- back when _I_ taught in universities, I invariably had to teach Fortran (since I was teaching at the Engineering school -- would have been Pascal if I taught at CS!-) even though I was using C for real world programming and Lisp (actually Scheme) for publications (needing "academic respectability", you see:-). [ I'm rephrasing, clarifying, and correcting some errors in the following example of signature-based polymorphism as enabled by Python's approach to encapsulation, compared to our original correspondence ] def hypot(somepoint): import math return math.hypot(somepoint.x, somepoint.y) this client-code only depends on the following subsignature of 'somepoint': exposing .x and .y attributes transparently compliant to 'float'. In practice this means interchangeability of: class NaivePoint: def __init__(self, x=0.0, y=0.0): self.x = x self.y = y and class UnchangeablePoint: def __init__(self, x=0.0, y=0.0): self.__dict__['x'] = x self.__dict__['y'] = y def __setattr__(self, *junk): raise TypeError def __hash__(self): return hash(self.x)+hash(self.y) def __eq__(self, other): return self.x==other.x and self.y==other.y and many other kinds of 'point'. The somepoint.x access may be mapped down to very different mechanisms (it's self.__dict__['x'] in both of these, but not in many others) but stays *signature-polymorphic* to the need of client-code 'hypot'. So the proper procedure is to choose the kinds of 'point' implementations that meet the design needs *you currently perceive at this stage in your project*: you don't need to be Nostradamus and foresee how your project will look in a year, nor to "wear braces AND a belt" and overdesign everything *just in case* currently-unforeseen needs MIGHT emerge one day. Rather, you can privilege SIMPLICITY -- that often-underrated, yet most crucial of all design virtues. "A designer know that perfection is reached, not when there is nothing left to add, but when there is nothing left to take away" (St. Exupery). Meet simple design-needs with simple mechanics (including NONE AT ALL, from the point of view of the Python coder -- rather, the simple needed mechanics are inside the Python interpreter, of course), rarer and more complex needs with more advanced mechanics. No "impedence mismatch" between the constraints that arise during a design's development, and the remedies needed to meet them. Bliss! The single-underline convention (actually enforced where it NEEDS to be, such as, the Bastion class, "from somemodule import *" where the module doesn't define an __all__ attribute, etc) is a good example of "human beings knowing their limit". If we drop the pretense that a designer is omniscient, we can see that any determination of a need or constraint has COSTS -- actual design costs (design time), costs in terms of missed oportunities, etc. The parallel is spookily close to that of asymmetric-information economic exchanges, and I note that the Nobel Prize in Economics [last] year were granted exactly for work in the asymmetric-information and signal-exchanging fields, so it would seem SOME academics are well aware of the issues. Sometimes (pretty often) you can determine at reasonable 'cost' that client code may well need to access attributes X, Y, and Z. Then, you expose them (without any underlines) -- and later may choose to wrap them up into accessors, etc, as above. Sometimes (not often) you can determine at reasonable cost that client code has no conceivable need to peek at attributes foo, bar and baz. Then, you name them __foo, __bar, and __baz, and generally don't even provide accessors -- those are the strictly-implementation-related attributes, not part of the abstract client-visible state of your class at all. There remains a considerable gray area of attributes that you do not THINK client-code will need to access, but can't make SURE about without unreasonable cost. If you expose an accessor method getEenie that does some irreversible computation on an internal attribute 'eenie' and returns the result, for example, it may require unreasonable delays and cost to ascertain whether it's ever conceivably possible that some client code may need to access the raw unprocessed value for 'eenie' rather than always being content with getEenie()'s results. These are very appropriate cases to handle with the single-underline convention, meaning: I don't THINK client-code ever needs to see this, but I can't be SURE -- *at your own risk* (very strong coupling -- the single-underline DOES mean "internal"!) you may choose to access this if there's no other way to do your job. If you've ever programmed to a framework designed by other criteria you know what I mean... there's this class X in the framework that does ALL you need BUT doesn't expose one key attribute or method 'meenie' -- it has it, but as "private". This recurring horror leads to "copy and paste programming", forking of framework code and the resulting nightmares in maintenance (it's a recognized "Antipattern" in the book by that title by Brown, Malveau, McCormick and Mowbray) -- or even weirder tricks such as "#define private public" before some "#include" in C++...!!! The problem was: the framework designer thought he was omniscient. The language URGED him to believe in his omniscience, rather than strongly discouraging such hubris. Well, he WASN'T omniscient -- surprise, surprise, he was a human being (maybe we should subcontract such design work to martians...?). So there's cognitive dissonance between the language-induced mindset, and human biological reality. Reality wins out each and every time, but not without much anguish in-between. (At the risk of skirting politics again: similarly, the Soviet system urged central planners to believe their own ominiscience, since everything had to be planned right from the start -- no built-in flexibility in the system; I recommend Perrow's "Normal Accidents", a masterpiece of sociology in my view, about the ills of tight-coupling and assumed omniscience in accident-prone systems, such as nuclear plants and ships). Note that I'm only arguing one side of the issue because you're not advocating the OTHER wrong extreme -- total lack of control, weak typing, haphazard jumbled-together slapdash 'systems' that don't deserve that name. You should see me argue FOR control and against chaos versus the typical Javascripters of this world:-). (As all people long used to occupying a reasonable middle position, I'm also quite used at being shot at from extremists of both sides, of course:-). [ end of edited quotes from our correspondence ] Alex <snip> >'s just my impression/what I think is one of the biggest issues that would discourage some academic types from switching to Python. I don't have any hard evidence to support it. Obviously this didn't stop me, but I'm relatively young for a college prof (early 30s), find Python programming extremely fun, and definitely lean toward the practical side of things. The other person in my dept. who teaches CS1 is about 20 years older than me and it didn't take much effort to convince him to switch to Python even though I pointed out the lack of private data to him. Of course, he has learned to hate C++ so that made persuading him much easier :-) Note: I am NOT advocating adding private members/methods to Python. Dave I agree with accessor methods being boilerplate in Python. But in C++, under some circumstances, it's a real necessity. The one place that I find it necessary to use accesor methods is for base classes to access attributes of derived classes. An object often need to access attributes from four places: (a) attributes from its own-level class, (b) attributes from its base class(es), (c) attributes from another object belonging to another class, (d) attributes from a subclass, this last point because a method is implemented at the base class level. Python can do all of (a), (b), (c) and (d). C++ cannot do (d), at least not in a simple manner. (And you don't want to have circular "#include"s.) In Python, I often would check the existence of an attribute before proceeding: if hasattr(self, 'x'): .... do something with self.x This is great for methods implemented at the base class level that need to access attributes of subclasses. C++ can't do this, in a straightforward manner. To fix idea, the base class could be "Person", the derived class could be "USTaxablePerson", and "x" be U.S. tax-id or social security number. One way out is for C++ to implement the variable 'x' at the base class level. But this means that objects that don't have the 'x' attributes still need to carry that data field. (e.g: infants don't have tax-id). This is not a problem when the number of data fields is small, or when the number of person is small. It becomes a problem when you have many data fields and many persons. What I have found out is, for some problems in C++, you often want to make powerful subclasses with minimum data attributes. That is, base classes will have a lot of methods, but only a few data members. On the other hand, subclasses add more data attributes, but not more methods. It is in these types of problems that accessor methods are necessary. It is much more economical and natural for objects in the class hierarchy to share methods than to share data attributes. The getX() method at the base class would simply raise an error, because 'x' does not exist at the base class level. The getX() in the subclass would return the appropriate value of x. Not just because U.S. tax-id is needed for one particular purpose, would we implement that data field for all persons (or martians), and put value NULL there. If there are 10 billion persons/martians, you'd be wasting many billion bytes, for nothing. Again, the problem is small if that's the only data field we are talking about, but in a complex class hierarchy tree with many subclasses and many instances, you'll have to start to pay attention as to where to put the data members. C++ has virtual methods, but no virtual data members. And, that to me, is the main reason why C++ needs accessor methods. Hung Jung > Alex Martelli <al...@aleax.it> wrote in message > news:<owWy9.11842$XO.4...@news2.tin.it>... >> > What is the benefit of "public by default"? >> >> Minimizing boilerplate. 90% of exposed properties in languages and >> object models which don't support that lead to a lot of boilerplate: ... > I agree with accessor methods being boilerplate in Python. But in C++, > under some circumstances, it's a real necessity. That's exactly the same thing I'm saying: C++ is an example of a language that doesn't support 'properties', which is one reason (out of many) which causes C++ code to need a lot of boilerplate. Note that the fact that the language makes it necessary doesn't mean such repetitious, performs-no-real-function code ISN'T boilerplate; "boilerplate" and "real necessity" aren't antonyms. > C++ has virtual methods, but no virtual data members. > > And, that to me, is the main reason why C++ needs accessor methods. It's ONE reason, yes -- subclasses cannot "override data" the way they can in Python. However, there are other reasons: in C++, if you expose an instance attribute X to clients, then client code written to access blahblah.X needs X to be and remain a data attribute forevermore -- the fact that X is data has become part of your class's interface. Therefore, to keep the flexibility to change your class in the future, so that X can perhaps be computed on the fly rather than stored, you DARE NOT expose X as an attribute on a public interface. Python's ability to "override data" is amazingly powerful (and plays incredibly well with the "Template Method" Design Pattern, too). But you don't need to go that far to remove the need for accessors: languages such as Delphi (Object Pascal), Eiffel, and the C++ dialects/extensions peddled by Borland and Microsoft, all manage to allow blahblah.X to compile to EITHER a data access OR a call on some accessor method of blahblah as appropriate. Alex Maybe. Speaking as someone who learned BASIC, Pascal, Ada, C, and FORTRAN long before learning a real object-oriented language, my opinion is that most OO languages try to *force* OOP in a way that makes it much harder to grasp. Python makes it easy. -- Aahz (aa...@pythoncraft.com) <*> A: No. Q: Is top-posting okay? Compared to C++? :) What's the problem with "self.__name = 'Guido van Rossum'" ? After all, people hardly write coder._Person__name = 'Danny de Vito' by accident. Code reviews hardly fail to see it either. You can bypass protection and do very nasty things in C++ in a way that makes Python feel like Fort Knox. C++ has arbitrary memory pointers! How can that ever be safe? What Python does here is to lock the door and put the key on a hook by the door. C++ just puts it under the door mat. In both cases this means that you can get in if there is an emergency, but gives a clear hint that you can't just walk in as you like. It's just more error prone in C++.
https://groups.google.com/g/comp.lang.python/c/LIXWQS2emaQ/m/IQvhEhPtd7kJ
CC-MAIN-2022-40
refinedweb
9,562
60.04
Current Version: Linux Kernel - 3.80 Description Linux provides the following namespaces: This page describes the various namespaces and the associated /proc files, and summarizes the APIs for working with namespaces. The namespaces API - $ ls -l /proc/$$/ns total 0 lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 ipc -> ipc:[4026531839] lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 mnt -> mnt:[4026531840] lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 net -> net:[4026531956] lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 pid -> pid:[4026531836] lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 user -> user:[4026531837] lrwxrwxrwx. 1 mtk mtk 0 Jan 14 01:20 files in this subdirectory are as follows: - . IPC namespaces (CLONE_NEWIPC)) When a network namespace is freed (i.e., when the last process in the namespace terminates), its physical network devices are moved back to the initial network namespace (not to the parent of the process). Use of network namespaces requires a kernel that is configured with the CONFIG_NET_NS option. Mount namespaces (CLONE_NEWNS) The /proc/[pid]/mounts file (present since Linux 2.4.19) lists all the filesystems currently mounted in the process's mount namespace. The format of this file is documented. The /proc/[pid]/mountstats file (present since Linux 2.6.17) exports information (statistics, configuration information) about the mount points in the process's mount namespace. This file is readable only by the owner of the process.. Currently (as at Linux 2.6.26), only NFS filesystems export information via this field. PID namespaces (CLONE_NEWPID) User namespaces (CLONE_NEWUSER) UTS namespaces (CLONE_NEWUTS) Use of UTS namespaces requires a kernel that is configured with the CONFIG_UTS_NS option. Conforming To Example See Also Colophon License & Copyright and Copyright (c) 2012 by Eric W. Biederman %%
https://community.spiceworks.com/linux/man/7/namespaces
CC-MAIN-2018-34
refinedweb
292
67.65
LGAMMA(3P) POSIX Programmer's Manual LGAMMA(3P) This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux. delim $$ lgamma, lgammaf, lgammal, signgam — log gamma function #include <math.h> double lgamma(double x); float lgammaf(float x); long double lgammal(long double x); extern int signgam; The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1‐2008 defers to the ISO C standard. These functions shall compute $log_ e" " │ Γ ( ^ x ) │$ where $Γ ( ^ x )$ is defined as $int from 0 to inf e"^" " "{ - t } t"^" " "{ x - 1 } dt$. The argument x need not be a non-positive integer ($Γ( ^ x )$ is defined over the reals, except the non-positive integers). If x is NaN, −Inf, or a negative integer, the value of signgam is unspecified. These functions need not be thread-safe. logarithmic gamma of x. If x is a non-positive integer, a pole error shall occur and lgamma(), lgammaf(), and lgammal() shall return +HUGE_VAL, +HUGE_VALF, and +HUGE_VALL, respectively. If the correct value would cause overflow, a range error shall occur and lgamma(), lgammaf(), and lgammal() shall return ±HUGE_VAL, ±HUGE_VALF, and ±HUGE_VALL (having the same sign as the correct value), respectively. If x is NaN, a NaN shall be returned. If x is 1 or 2, +0 shall be returned. If x is ±Inf, +Inf shall be returned. These functions shall fail if: Pole Error The x argument is a negative integerGAMMA(3P) Pages that refer to this page: math.h(0p), signgam(3p), tgamma(3p)
http://man7.org/linux/man-pages/man3/lgamma.3p.html
CC-MAIN-2017-47
refinedweb
299
56.55
servlets in servlets There are four ways of authentication:- HTTP basic... authentication In FORM-based the web container invokes a login page. The invoked login page is usedto collect username and password ; | Java Servlets Tutorial | Jsp Tutorials | Java Swing Tutorials index Sessions in servlets Sessions in servlets What is the use of sessions in servlets? The servlet HttpSession interface is used to simulate the concept that a person's visit to a Web site is one continuous series of interactions Shopping Cart Index Page the, page created dynamically(on real time) appears. This page is generated jsp and servlets jsp and servlets what is difference between jsp and servlet?? what should a person use to develop website ?? JSP is used for view in MVC - I architecture. It used to generate dynamic contents in the form of HTML jsp and servlets / The above link will provide you full JSF login and register application.There Real-time GPS fleet Management and store data regularly and in real time so that the person gets updated...Fleet business depends on real-time GPS fleet management. Now to make it clear... destination and also helps him in avoiding traffic and accidents. Real-Time GPS login form - JSP-Servlet login form Q no.1:- Creat a login form in servlets? Hi...*; import javax.servlet.http.*; public class Login extends HttpServlet...(); pw.println(""); pw.println("Login"); pw.println(""); pw.println Login & Registration - JSP-Servlet Login & Registration Pls tell how can create login and registration step by step in servlet. how can show user data in servlet and how can add... the following links: What is Index? What is Index? What is Index servlets servlets what is the duties of response object in servlets servlets servlets why we are using servlets servlets - Servlet Interview Questions application.For example,star with how the login process would work and how you would make sure...; } login page User Name...; } login page servlets what are advantages of servlets what are advantages of servlets Please visit the following link: Advantages Of Servlets - JSP-Servlet =connection.createStatement(); rst=stmt.executeQuery("select * from login where username...); System.out.println("login sucessful"); } else { connection.close(); request.setAttribute("msg", "login failed java servlets with database interaction java servlets with database interaction hai friends i am doing a web application in that i have a registration page, after successfully registered... + " is already existing.Please Choose another login ID login i want to now how i can write code for form login incolude user and password in Jcreator 4.50 Hello Friend, Visit Here Thanks referring to question Person.. - Java Beginners referring to question Person.. Hi!firstly i want to thank to someone who always answer my questions, i really appreciate it..emm may i know what...=gender; this is base on previous question (Person)..can u explain to.awt.event.*; class Login { JButton SUBMIT; JLabel label1,label2; final JTextField text1,text2; { final JFrame f=new JFrame("Login Form...(); ResultSet rs=st.executeQuery("select * from login where username='"+value1...:// login how to create login page in jsp Here is a jsp code that creates the login page and check whether the user is valid or not. 1...;tr><td></td><td><input type="submit" value="Login"> servlets servlets hi i am using servlets i have a problem in doing an application. in my application i have html form, in which i have to insert on date value, this date value is retrieved as a request parameter in my servlet login i am doing the project by using netbeens.. it is easy to use the java swing for design i can drag and drop the buttons and labels etc.. now i want the code for login.. i created design it contains the field of user name Servlets - JSP-Servlet ", "root"); String sql = "insert into login(id,firstname,lastname,email servlets what are filters in java servlets what are filters in java Filters are powerful tools in servlet environment. Filters add certain functionality to the servlets apart from processing request and response paradigm Login Form in Java - Java Beginners Login Form in Java Hi, In my Project, I am using Java-JDBC-Mysql... Authenticated person can access these module. I create table in my-sql where fields...:"); text2 = new JPasswordField(15); SUBMIT=new JButton("Login"); panel Struts 2.1.8 Login Form Struts 2.1.8 Login Form  ... to validate the login form using Struts 2 validator framework. About the example: This example will display the login form to the user. If user enters Login Search index real+page - Java Beginners real+page   real+finance - Java Beginners real+finance   real+contact - Java Beginners real+contact   How to implement this superclass Person - Java Beginners a superclass Person which it need to Make two classes, Student and Instructor, that inherit from Person. A person has a name and a year of birth. A student has a major...: /** This class tests the Person, Student, and Instructor classes. */ public Person have name field of Name type..?? Person have name field of Name type..?? Person have name field of Name type means i understood to create new Name class with fields firstname, middlename, and lastname but i dont know how to use this class in person to give him static keyword with real example static keyword with real example static keyword with real examplestrong text HTML Login Page Code HTML Login Page Code Here is an example of html login page code. In this example, we have displayed one text field, Password, Reset button and Login button... JavaScript validation in Login page. We have set username and password value How to implement a superclass Person? - Java Beginners How to implement a superclass Person? How to implement a superclass Person. Make two classes, Student and Lecturer, that inherit from Person. A person has a name and year of birth. A student has a degree program and a lecturer Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://www.roseindia.net/tutorialhelp/comment/83118
CC-MAIN-2015-18
refinedweb
999
57.27
If and interacting with external applications. Despite the many uses of Intents, I never ran across a great explanation of their purpose until I dug into the Android documentation. This may be because Intents tend to work quite smoothly when used in the right circumstances. It is all too easy to copy-paste a few lines from a StackOverflow answer and never come back to it because things seem to just work. However, after doing this a few times myself, my lack of understanding kept nagging me to dig deeper. My goal with this post is to share what I have learned about these powerful abstractions and to provide a useful and interesting example in the process. Intents Let’s start out by getting a baseline definition. Android’s official documentation defines an Intent as “an abstract description of an operation to be performed”…helpful. Further on in the documentation, it gives the more approachable description that “[an Intent’s] most significant use is in the launching of activities, where it can be thought of as the glue between activities.” Okay, that makes more sense and explains why they seem to be involved in so many things. Intents come in two types: explicit and implicit. Explicit If you’re developing a multi-activity application, you will likely use a lot of explicit Intents. These are used to start an application component within your control–a class or activity to which you have access. In their most basic form, explicit Intents are straightforward. Here is a simple example of starting a new activity: public class MainActivity extends AppCompatActivity { ... public void startOtherActivity() { Intent intent = new Intent(this, OtherActivity.class); startActivity(intent); } } Note that when we create the Intent object, we explicitly state which component to start. This component can be an Activity or another class like a Service or a Broadcast Receiver for which you can give the fully qualified class name. Implicit Implicit Intents are meant for when you want a user to be able to take action without necessarily implementing the functionality yourself. Think about when you are using an app and want to share content–an image or article, for example. With Android Marshmallow, the Android operating system presents the Direct Share menu that displays all of your apps capable of sharing that content. When you select which app to share with, an activity of that app will be launched to allow you to complete your action. Implicit Intents are a little trickier to grasp, so let’s work through an example. We will walk through both sides of the sharing interaction–sender and receiver–to get a better understanding. Sending Example Let’s start out by building a simple app that can share some text to other apps. I created a fresh project in Android Studio with an empty main activity. In the main activity’s layout XML, I added an EditText element where text will be entered and a Button that will trigger the sharing action. Then in the main activity’s onCreate() method, I wired up the button’s onClickListener() method to pass the EditText’s input to a method that we will define later on. public class MainActivity extends AppCompatActivity { private EditText textInput; private Button shareButton; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); textInput = findViewById(R.id.text_input); shareButton = findViewById(R.id.share_button); shareButton.setOnClickListener(v -> startShareIntent()); } private void startShareIntent() { ... } } In this startShareIntent() method, we will build up and start an implicit Intent for the “Send” action. Note that we won’t explicitly state which component will resolve this Intent because the user gets to decide.); } } The first line of this method creates a new Intent object. The empty constructor is what tells us that we are using an implicit Intent. The next three lines describe the payload of the Intent–the content itself and some meta-data about the content. Every application declares which, if any, kinds of Intents they are willing to accept by filtering on the Intent’s meta-data (more on this later). The Action parameter describes the purpose of the Intent. In our case, we are using the “Send” action, which is how Android describes a “Share” Intent. Other types of actions include opening a URL in a browser, starting a web search, media playback, and more; see the full list of predefined actions here. The Type parameter describes the type of content being sent. For example, an audio app might accept .mp3 format files in a media Intent, but not .mov files. The Extra parameter is where you pass along any actual content and other information that the receiving application may need. In the remainder of the method, we submit our Intent. To do this, we call the startActivity() method with our Intent. When using implicit Intents, it is important to first check that the user’s device has any apps that would be willing to resolve this Intent. If you submit an Intent that cannot be resolved, your app will crash. We check for this by testing that the Intent’s resolveActivity() method does not return null. If the result is non-null, it is safe to submit the Intent. That’s all there is to a basic approach to sharing content between applications. You can test out this implementation by running the example and typing some text in the field. When you tap the button, it should present you with the Direct Share menu. In a typical simulator, this menu will include options like, “Copy to clipboard,” “Messages,” and “Gmail.” Receiving Example Now, let’s build an app that’s capable of receiving the Intent sent from our first example. Again, I created a fresh Android Studio project with an empty main activity. This time, the main activity’s layout XML will need only a TextView element capable of displaying our received text. The logic for getting the received Intent and wiring it up to the TextView is easy enough. In the activity’s onCreate() method, we declare an Intent object and then assign it to the value of the getIntent() method, which is included in the Activity class. The getIntent() method returns whatever Intent that launched that activity–so if you open the app through the Direct Share menu, you will get a “Send” Intent. If you open it from the phone’s application menu, you will get a more generic Intent. Because of this, we can check that we indeed have a Send Intent. Then we can access the Intent’s content as you see on line 15 below and wire it up to the TextView’s text property. public class MainActivity extends AppCompatActivity { private Intent intent; private TextView textView; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); intent = getIntent(); textView = findViewById(R.id.intent_input); if (intent.getAction() != null && intent.getAction().equals(Intent.ACTION_SEND)) { String intentContent = intent.getStringExtra(Intent.EXTRA_TEXT); textView.setText(intentContent); } } } The trickier part of receiving an implicit Intent is declaring what kinds of Intents your application will accept. We do this by adding an Intent filter inside the activity in the application’s manifest file as shown below. <activity android: ... <intent-filter> <action android: <category android: <data android: </intent-filter> </activity> You will recognize the action tag within the Intent filter. This is how we specify exactly which kinds of actions the app is prepared to handle. The mimeType property in the data tag allows us to specify that we only accept Send Intents if they send plain text. That way, if another app creates a Send Intent that attempts to share an image, our app won’t advertise itself since our simple text view setup would not know how to handle an image. In order to receive implicit Intents, an Intent filter must specify the default category. This category is set by the activity’s startActivity() method. Omitting the default category will lead to never receiving any implicit Intents. When an implicit Intent makes it through one of your app’s Intent filters, it will launch the activity that it is declared within. With this in mind, an Intent filter like this should be in an activity that is specifically designed to handle the expected Intent, not your main activity. But since this sample app’s only purpose is to handle these incoming Intents, the main activity will do. Finally, we can test everything together. If you run our first sample app, enter some text, and tap the button, you should see our receiver sample app within the Direct Share menu. If you select the receiver app in this menu, it will launch the app, and the text view should display the text that was output from the sender app.
https://spin.atomicobject.com/2019/11/26/android-intents-sharing/
CC-MAIN-2020-34
refinedweb
1,460
54.52
nvme-id-ctrl - Man Page Send NVMe Identify Controller, return result and structure Synopsis nvme id-ctrl <device> [-v | --vendor-specific] [-b | --raw-binary] [-o <fmt> | --output-format=<fmt>] Description For the NVMe device given, sends an identify controller command and provides the result and returned structure. The <device> parameter is mandatory and may be either the NVMe character device (ex: /dev/nvme0), or a namespace block device (ex: /dev/nvme0n1). On success, the structure may be returned in one of several ways depending on the option flags; the structure may be parsed by the program or the raw buffer may be printed to stdout. Options - -b, --raw-binary Print the raw buffer to stdout. Structure is not parsed by program. This overrides the vendor specific and human readable options. - -v, --vendor-specific In addition to parsing known fields, this option will dump the vendor specific region of the structure in hex with ascii interpretation. - -H, --human-readable This option will parse and format many of the bit fields into human-readable formats. - -o <format>, --output-format=<format> Set the reporting format to normal, json, or binary. Only one output format can be used at a time. Examples Has the program interpret the returned buffer and display the known fields in a human readable format: # nvme id-ctrl /dev/nvme0 In addition to showing the known fields, has the program to display the vendor unique field: # nvme id-ctrl /dev/nvme0 --vendor-specific # nvme id-ctrl /dev/nvme0 -v The above will dump the vs buffer in hex since it doesn’t know how to interpret it. Have the program return the raw structure in binary: # nvme id-ctrl /dev/nvme0 --raw-binary > id_ctrl.raw # nvme id-ctrl /dev/nvme0 -b > id_ctrl.raw It is probably a bad idea to not redirect stdout when using this mode. Alternatively you may want to send the data to another program that can parse the raw buffer. # nvme id-ctrl /dev/nvme0 --raw-binary | nvme_parse_id_ctrl The parse program in the above example can be a program that shows the structure in a way you like. The following program is such an example that will parse it and can accept the output through a pipe, '|', as shown in the above example, or you can 'cat' a saved output buffer to it. /* File: nvme_parse_id_ctrl.c */ #include <linux/nvme.h> #include <stdio.h> #include <unistd.h> int main(int argc, char **argv) { unsigned char buf[sizeof(struct nvme_id_ctrl)]; struct nvme_id_ctrl *ctrl = (struct nvme_id_ctrl *)buf; if (read(STDIN_FILENO, buf, sizeof(buf))) return 1; printf("vid : %#x\n", ctrl->vid); printf("ssvid : %#x\n", ctrl->ssvid); return 0; } Nvme Part of the nvme-user suite Referenced By nvme(1).
https://www.mankier.com/1/nvme-id-ctrl
CC-MAIN-2021-10
refinedweb
451
52.19
! What are Higher-Order components? Higher-Order Components (HOCs) are JavaScript functions which add functionality to existing component classes. Just as React components let you add functionality to an application, Higher-Order Components let you add functionality to components. You could say they’re components for components. Another way of thinking about HOCs is that they let you generate code automatically. You might be familiar with this concept from other languages. For example, Ruby programmers often call it metaprogramming. Of course, the best way to understand HOCs is to see one in action! A simple example Let’s have a look at our first tiny HOC: // See import except from 'except'; function passthrough(component) { const passthroughProps = Object.keys(component.propTypes); return class extends component { render() { super.render(except(this.props, passthroughProps)) } } } As with every HOC, passthrough is just a function; it takes a component as its argument, and returns a new component. To use it, just pass your component in: class MyComponent extends React.component { render(passthrough) { //... } } MyComponent = passthrough(MyComponent); You can also use HOCs as ES7 decorators: @passthrough class MyComponent extends React.component { render(passthrough) { //... } } Do you understand how MyComponent will differ after applying the passthrough HOC? Think about it for a bit, then touch or hover your mouse over the box below for an answer. passthroughreturns a new component which is an extension of your existing one. The new component defines a new rendermethod – which calls the existing rendermethod with an object called passthrough. This passthroughobject is mostly identical to this.props, except that it doesn’t include any properties specified in the class’s propTypesobject. This HOC gives you a shortcut for passing props which your component doesn’t expect through to the underlying DOM components, without passing through props which you actually use. In short, HOCs let you perform repetitive tasks programatically instead of by hand. Great, but how do Higher-Order Components help me Structure My Application? Where patterns involve writing out code with a single purpose multiple times across many components, HOCs allow you to confine code with a single aim to a single place. Or in fancy talk, HOCs can help your application achieve separation of concerns. The flip side is that not all separation is good separation. It is also possible to use HOCs to spread related code over multiple arbitrary modules, setting up dependencies which make code harder to reason about and maintain. Which begs the question: How do I design useful Higher-Order Components? The core rule is that quality HOCs do one thing, and do it well. A HOC should perform a task which is clearly defined, and unrelated to other tasks. While designing HOCs, it may help to ask yourself the following questions: - Will component code be clearer when using the HOC? Well-designed HOCs indicate what they actually do; patterns spend a long time explaining how they do it. - Will the application be easier to maintain with the HOC? Confining code with a single purpose to a single location means updates to the code only need to happen in one place – not across the entire codebase. - Does the HOC have dependencies? Dependencies are not the end of the world, but consider whether they indicate that the HOC is doing too little or too much. - Can the HOC be re-used in other applications? Reusable components indicate good separation from the application’s internals. You’ll also likely find that good HOCs reduce the amount of code in your application, but this is not always the case. In fact, factoring out tiny but common patterns may increase line count while still improving readability and maintainability. The converse is also true; abstracting uncommon patterns may just be moving the mess somewhere else. The thing to remember here is that good design and less keystrokes are not the same thing. But all the theory in the world isn’t going to help you design great HOCs. So let’s look at: A real-world example Memamug uses the Pacomo system for styling. Pacomo makes stylesheets much easier to reason about, but involves adding an application-wide prefix and the component’s JavaScript class name to every single CSS class. Given this pattern can be consistently used across multiple applications and involves a lot of repetition, it is a great candidate for hocification. To start, let’s have a look at the render function in Memamug’s Paper class, without using any HOCs: function render() { const className = `app-Paper app-Paper-${this.props.shape} ${this.props.className || ''}`; return ( <div className={className}> <div className="app-Paper-inner"> ... </div> </div> ); } How would a HOC help us here? Well for a start, the app in each CSS class shouldn’t need to be repeated across the entire application. And manually writing out Paper for each class makes copying/pasting between components seriously error-prone. A HOC should also leave us with something a little more readable, showing us only the non-standard classes we’re adding instead of the ones which must be repeated over every component. In short, it should allow us to write something like this: @c('app') function render() { return ( <div className={this.cRoot(this.props.shape)}> <div className={this.cPrefix('inner')}> ... </div> </div> ); } Exercise Let’s test what you’ve learned by building the c higher order component which is used above. The following hints may help: - You can find the name of a component’s JavaScript class using it’s nameproperty. For example, component.name. - You can add new methods to a component by adding them to it’s prototypeobject. For example, component.propotype.cPrefix = function() {};. Have a go at implementing c, then touch or hover your mouse over the box below to see my answer. function c(appPrefix) { return function(component) { const prefix = `${appPrefix}-${component.name}`; component.prototype.cPrefix = function(name) { return `${prefix}-${name}`; }; component.prototype.cRoot = function(name) { return `${prefix} ${name ? prefix+'-'+name : ''} ${this.props.className || ''}` ; }; } } Congratulations! You’ve built your first HOC. Learn more by reading (and using) existing code. Lots of existing code. In the process of building Memamug and Numbat UI, I’ve found a number of patterns which work well as HOCs. And lucky for you, they’re now on GitHub! Read and use them to get an intuitive grasp of HOCs, then make your own HOCs and send me links at @james_k_nelson! - react-c – A tool to help implement Pacomo, a system to add structure to your React stylesheets. - react-passthrough – Helps pass through unknown props to a child component - react-callback-register – Helps merge callbacks from props, your component, and decorators - react-base – Include react-c, react-passthrough and react-callback-register with a single decorator - react-base-control – Manages control-related events and state And there will be more where these come from. Find out about new Higher-Order Components so you don’t have to write them yourself. I’m constantly looking for ways to improve my own projects, and when I find them, I like to share them as articles and code. So if you want to make top-quality apps without doing all the hard work yourself, make sure to sign up for my newsletter! And in return for your e-mail, you’ll also immediately receive 3 bonus print-optimised PDF cheatsheets – on React, ES6 and JavaScript promises. I will send you useful articles, cheatsheets and code. Get in touch Do you have something else you’d like to read more about, or other questions/comments? Send me an e-mail, or send @james_k_nelson a tweet. I seriously love hearing from readers! Related reading - Learn Raw React – no JSX, no Flux, no ES6, no Webpack - Building a property passthrough Higher-Order Component for React - Pacomo – A System For Structuring Stylesheets In React Applications - Unlocking decorators and other ES7 features with Webpack and Babel - ES6 – The Bits You’ll Actually Use Hey James, It’s a great article. Thank you! And did you think about component inheritance? For example, we have class Ticket extends React.Component … with basic markup. And, what if I want Ticket that could be edited but I do not want to create bunch of if/else in one component. I want to inherit from it. I could create class TicketEditable extends Ticket … And invoke super.render() inside render method. But! How could I modify this react element tree that was rendered after super.render had invoked? It seems that I found the answer. We could define ticketReactElement = super.render() and after that we could modify all his children just as: ; questionBodyDivElement.props.children.push(Bla-bla-bla) It’s awesome because we could inherit one from another and modify. sorry, ticketReactElement instead of questionBodyDivElement, of course render() { ticketReactElement.props.children.push(Bla-bla-bla); return ticketReactElement; } Hey, that’s cool. I’m looking for a method to implement this, and your answer gives me the solution. Thanks a lot. This is soo much different from any other other examples of using HO func. All examples I’ve seen rendered decorated component in render method and used React.Component as base class. Your decorator extends directly decorating class and calling super.render instead. I like it. Ctrl + f ‘propotype’ Replace with ‘prototype’ Thanks for writing this. You mangled a ‘prototype’: component.propotype.cPrefix = function() {};
http://jamesknelson.com/structuring-react-applications-higher-order-components/
CC-MAIN-2017-34
refinedweb
1,550
57.57
>>> Ravikiran Rajagopal <ravi.rajagopal@...> seems to think that: >[Removed cedet-devel as the following is semantic-specific.] > >On Monday 24 March 2008 07:03:08 pm Eric M. Ludlam wrote: >> I've updated the scope calculator to find and use the >> merged A namespace instead of only the namespace block that bar::xx >> shows up in. > >Thank you; the simple test case works. However, I have a more complex case >which does not work yet. Once I isolate the problem further, I will send you >a new case. Should I regenerate all the cached databases after your last >change? The database files only contain the raw tags. Part of the analysis step is to create a typecache that merges all known types into a single searchable table. That table gets rebuilt whenever semantic detects a change somewhere. When in doubt, do this: C-u M-x bovinate RET to get all such caches flushed out. >I have also been playing with gccxml which handles virtually every C++ parsing >case I have thrown at it (better than Eclipse CDT, KDevelop, Visual C++ 2008 >Express) and produces very nice XML output which can be parsed very easily. >The main drawbacks are that only instantiated templates are output and that >incremental parsing is impossible. How difficult would it be to add a new >back end (a la ebrowse) into semantic? Again, I am a novice at elisp but >could whip up a python script fairly easily to massage the XML into virtually >any sort of output. [ ... ] I've been reading about that, and hoping to try it out sometime. The only restriction on a Semantic parser is that it can be done with a single function that returns tags. The texi parser is a good example. It's setup is like this: (semantic-install-function-overrides '((parse-region . semantic-texi-parse-region) (parse-changes . semantic-texi-parse-changes))) If the gcc->xml->python->elisp cycle is fast, then an incremental parser is likely still possible, even mixing the existing semantic parser with the gcc one for full buffer parsing. The first step would be an external script that creates something that Emacs can call `read' on. There is a chapter in the semantic appdev manual for Tag basics. It describes the output format. The second step is to read in that stream, then 'cook' the tags into something Semantic can create overlays for. If that all works out, a script that builds database files for Emacs to read in would be the bit that would really speed things up. Eric -- Eric Ludlam: eric@... Siege: Emacs: I could not replicate the error with your sample. Is wx/ wxwindows or something? I don't have that. To debug a thrown error, do this: M-x toggle-debug-on-error RET then repro the problem. The stack trace is often quite helpful. I'm guessing somewhere in the headers is a varargs type function like printf, so it would be good to find it so the parser can be fixed. Eric >>> "koala sc" <ssc21st@...> seems to think that: ; >} > -- Eric Ludlam: eric@... Siege: Emacs:; } Regards, Edmund I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/cedet/mailman/cedet-semantic/?viewmonth=200803&viewday=25
CC-MAIN-2017-09
refinedweb
562
74.59
import "github.com/spf13/hugo/hugofs" Package hugofs provides the file systems used by Hugo. Os points to an Os Afero file system. type Fs struct { // Source is Hugo's source file system. Source afero.Fs // Destination is Hugo's destionation file system. Destination afero.Fs // Os is an OS file system. Os afero.Fs // WorkingDir is a read-only file system // restricted to the project working dir. WorkingDir *afero.BasePathFs } NewDefault creates a new Fs with the OS file system as source and destination file systems. NewFrom creates a new Fs based on the provided Afero Fs as source and destination file systems. Useful for testing. NewMem creates a new Fs with the MemMapFs as source and destination file systems. Useful for testing. Package hugofs imports 2 packages (graph) and is imported by 398 packages. Updated 2017-03-09. Refresh now. Tools for package owners.
https://godoc.org/github.com/spf13/hugo/hugofs
CC-MAIN-2017-22
refinedweb
146
69.58
Description Features * minimal size: weighing 4kb, Choo is a tiny little framework * event based: our performant event system makes writing apps easy * small api: with only 6 methods there's not much to learn * minimal tooling: built for the cutting edge browserify compiler * isomorphic: renders seamlessly in both Node and browsers * very cute: choo choo! Choo alternatives and similar libraries Based on the "MVC Frameworks and Libraries" category. Alternatively, view Choo alternatives based on common mentions on social networks and blogs. react-native10.0 9.9 L3 Choo VS react-nativeA framework for building native applications using React Vue.js10.0 8.0 L3 Choo VS Vue.js🖖 Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web. react10.0 9.8 L2 Choo VS reactA declarative, efficient, and flexible JavaScript library for building user interfaces. Element UI9.9 8.1 Choo VS Element UIA Vue.js 2.0 UI Toolkit for Web angular.js9.9 5.3 L3 Choo VS angular.jsAngularJS - HTML enhanced for web apps! angular9.9 10.0 Choo VS angularThe modern web developer’s platform svelte9.8 9.7 Choo VS svelteCybernetically enhanced web apps meteor9.8 9.9 L2 Choo VS meteorMeteor, the JavaScript App Platform backbone9.6 0.0 L2 Choo VS backboneGive your JS App some Backbone with Models, Views, Collections, and Events preact9.4 8.4 L3 Choo VS preact⚛️ Fast 3kB React alternative with the same modern API. Components & Virtual DOM. ember.js9.4 9.5 L5 Choo VS ember.jsEmber.js - A JavaScript framework for creating ambitious web applications Polymer9.2 3.4 Choo VS PolymerOur original Web Component library. nativescript9.1 9.4 Choo VS nativescriptNativeScript empowers you to access native platform APIs from JavaScript directly. Angular, Capacitor, Ionic, React, Svelte, Vue and you name it compatible. GrapesJS8.9 9.7 Choo VS GrapesJSFree and Open source Web Builder Framework. Next generation tool for building templates without coding Alpine.js8.8 9.6 Choo VS Alpine.jsA rugged, minimal framework for composing JavaScript behavior in your markup. hyperapp8.8 7.0 L4 Choo VS hyperappThe tiny framework for building hypertext applications. Blockly8.6 9.9 Choo VS BlocklyThe web-based visual programming editor. riot8.6 7.1 L2 Choo VS riotSimple and elegant component-based UI library inferno8.5 8.1 L2 Choo VS inferno:fire: An extremely fast, React-like JavaScript library for building modern user interfaces feathers8.3 8.3 L5 Choo VS feathersA framework for real-time applications and REST APIs with JavaScript and TypeScript mithril.js8.3 3.7 Choo VS mithril.jsA JavaScript Framework for Building Brilliant Applications knockout8.2 1.8 L3 Choo VS knockoutKnockout makes it easier to create rich, responsive UIs with JavaScript Adonis8.1 8.3 L4 Choo VS Adonis🚀 The Node.js Framework highly focused on developer ergonomics, stability and confidence aurelia8.1 5.6 L5 Choo VS aureliaThe Aurelia 1 framework entry point, bringing together all the required sub-modules of Aurelia. Stimulus7.7 6.0 Choo VS StimulusA modest JavaScript framework for the HTML you already have [Moved to:] marionette7.4 2.5 L4 Choo VS marionetteThe Backbone Framework Rete.js7.0 2.3 Choo VS Rete.jsJavaScript framework for visual programming and creating node editor ractive6.5 6.1 L2 Choo VS ractiveNext-generation DOM manipulation derby6.0 2.9 L5 Choo VS derbyMVC framework making it easy to write realtime, collaborative applications that run in both Node.js and browsers spine5.6 0.0 L4 Choo VS spineLightweight MVC library for building JavaScript applications rivets5.4 0.0 L5 Choo VS rivetsLightweight and powerful data binding. litegraph.js5.1 8.0 Choo. way.js5.1 0.0 L4 Choo VS way.jsSimple, lightweight, persistent two-way databinding chaplin5.1 0.0 Choo VS chaplinHTML5 application architecture using Backbone.js jsblocks4.9 1.3 L3 Choo VS jsblocksjsblocks is better MV-ish framework. canjs4.7 1.6 L3 Choo VS canjsBuild CRUD apps in fewer lines of code. Drawflow4.1 7.7 Choo VS DrawflowSimple flow library 🖥️🖱️ thorax3.9 1.7 L4 Choo VS thoraxStrengthening your Backbone donejs3.8 0.0 L4 Choo VS donejsYour app. Done. batman.js3.8 0.0 Choo VS batman.jsThe best JavaScript framework for Rails developers. FFCreator3.7 9.2 Choo Choo VS FoalTSElegant and fully-featured Node.Js web framework based on TypeScript. :rocket:. Monkberry3.7 0.0 L2 Choo VS MonkberryMonkberry is a JavaScript library for building web user interfaces Million3.5 9.8 Choo VS Million🦁 <1kb compiler-augmented virtual DOM. It's fast! ripple3.4 0.0 L3 Choo VS rippleA tiny foundation for building reactive views Lucia2.6 9.5 Choo VS Lucia🙋♀️ 3kb library for tiny web apps Simulacra.js2.4 1.0 Choo VS Simulacra.jsA data-binding function for the DOM. espresso.js2.4 0.0 L4 Choo VS espresso.jsSuper minimal MVC library rxweb2.3 7.5 Choo VS rxwebTons of extensively featured packages for Angular, VUE and React Projects atvjs2.0 0.0 Choo Choo or a related project? Popular Comparisons README Choo :steam_locomotive::train::train::train::train::train: Fun functional programming A 4kb framework for creating sturdy frontend applications <!-- Stability --> <!-- NPM version --> <!-- Build Status --> <!-- Test Coverage --> <!-- Downloads --> <!-- Standard --> Website | Handbook | Ecosystem | <!-- --> <!-- CLI --> <!-- --> <!-- | --> Contributing | Reddit | Chat The little framework that could. Built with ❤︎ by Yoshua Wuyts and contributors Table of Contents - Features - Example - Philosophy - Events - State - Routing - Server Rendering - Components - Optimizations - API - Installation - See Also - Support Features - minimal size: weighing 4kb, Choo is a tiny little framework - event based: our performant event system makes writing apps easy - small api: with only 6 methods there's not much to learn - minimal tooling: built for the cutting edge browserifycompiler - isomorphic: renders seamlessly in both Node and browsers - very cute: choo choo! Example') }) } Want to see more examples? Check out the Choo handbook. Philosophy We believe programming should be fun and light, not stern and stressful. It's cool to be cute; using serious words without explaining them doesn't make for better results - if anything it scares people off. We don't want to be scary, we want to be nice and fun, and then casually be the best choice around. Real casually. We believe frameworks should be disposable, and components recyclable. We don't want a web where walled gardens jealously compete with one another. By making the DOM the lowest common denominator, switching from one framework to another becomes frictionless. Choo is modest in its design; we don't believe it will be top of the class forever, so we've made it as easy to toss out as it is to pick up. We don't believe that bigger is better. Big APIs, large complexities, long files - we see them as omens of impending userland complexity.. Events At the core of Choo is an event emitter, which is used for both application logic but also to interface with the framework itself. The package we use for this is nanobus. You can access the emitter through app.use(state, emitter, app), app.route(route, view(state, emit)) or app.emitter. Routes only have access to the emitter.emit method to encourage people to separate business logic from render logic. The purpose of the emitter is two-fold: it allows wiring up application code together, and splitting it off nicely - but it also allows communicating with the Choo framework itself. All events can be read as constants from state.events. Choo ships with the following events built in: 'DOMContentLoaded'| state.events.DOMCONTENTLOADED Choo emits this when the DOM is ready. Similar to the DOM's 'DOMContentLoaded' event, except it will be emitted even if the listener is added after the DOM became ready. Uses document-ready under the hood. 'render'| state.events.RENDER This event should be emitted to re-render the DOM. A common pattern is to update the state object, and then emit the 'render' event straight after. Note that 'render' will only have an effect once the DOMContentLoaded event has been fired. 'navigate'| state.events.NAVIGATE Choo emits this event whenever routes change. This is triggered by either 'pushState', 'replaceState' or 'popState'. 'pushState'| state.events.PUSHSTATE This event should be emitted to navigate to a new route. The new route is added to the browser's history stack, and will emit 'navigate' and 'render'. Similar to history.pushState. 'replaceState'| state.events.REPLACESTATE This event should be emitted to navigate to a new route. The new route replaces the current entry in the browser's history stack, and will emit 'navigate' and 'render'. Similar to history.replaceState. 'popState'| state.events.POPSTATE This event is emitted when the user hits the 'back' button in their browser. The new route will be a previous entry in the browser's history stack, and immediately afterward the 'navigate' and 'render'events will be emitted. Similar to history.popState. (Note that emit('popState') will not cause a popState action - use history.go(-1) for that - this is different from the behaviour of pushState and replaceState!) 'DOMTitleChange'| state.events.DOMTITLECHANGE This event should be emitted whenever the document.title needs to be updated. It will set both document.title and state.title. This value can be used when server rendering to accurately include a <title> tag in the header. This is derived from the DOMTitleChanged event. State Choo comes with a shared state object. This object can be mutated freely, and is passed into the view functions whenever 'render' is emitted. The state object comes with a few properties set. When initializing the application, window.initialState is used to provision the initial state. This is especially useful when combined with server rendering. See server rendering for more details. state.events A mapping of Choo's built in events. It's recommended to extend this object with your application's events. By defining your event names once and setting them on state.events, it reduces the chance of typos, generally autocompletes better, makes refactoring easier and compresses better. state.params The current params taken from the route. E.g. /foo/:bar becomes available as state.params.bar If a wildcard route is used ( /foo/*) it's available as state.params.wildcard. state.query An object containing the current queryString. /foo?bin=baz becomes { bin: 'baz' }. state.href An object containing the current href. /foo?bin=baz becomes /foo. state.route The current name of the route used in the router (e.g. /foo/:bar). state.title The current page title. Can be set using the DOMTitleChange event. state.components An object recommended to use for local component state. state.cache(Component, id, [...args]) Generic class cache. Will lookup Component instance by id and create one if not found. Useful for working with stateful components. Routing Choo is an application level framework. This means that it takes care of everything related to routing and pathnames for you. Params Params can be registered by prepending the route name with :routename, e.g. /foo/:bar/:baz. The value of the param will be saved on state.params (e.g. state.params.bar). Wildcard routes can be registered with *, e.g. /foo/*. The value of the wildcard will be saved under state.params.wildcard. Default routes Sometimes a route doesn't match, and you want to display a page to handle it. You can do this by declaring app.route('*', handler) to handle all routes that didn't match anything else. Querystrings Querystrings (e.g. ?foo=bar) are ignored when matching routes. An object containing the key-value mappings exists as state.query. Hash routing By default, hashes are ignored when routing. When enabling hash routing ( choo({ hash: true })) hashes will be treated as part of the url, converting /foo#bar to /foo/bar. This is useful if the application is not mounted at the website root. Unless hash routing is enabled, if a hash is found we check if there's an anchor on the same page, and will scroll the element into view. Using both hashes in URLs and anchor links on the page is generally not recommended. Following links By default all clicks on <a> tags are handled by the router through the nanohref module. This can be disabled application-wide by passing { href: false } to the application constructor. The event is not handled under the following conditions: - the click event had .preventDefault()called on it - the link has a target="_blank"attribute with rel="noopener noreferrer" - a modifier key is enabled (e.g. ctrl, alt, shiftor meta) - the link's href starts with protocol handler such as mailto:or dat: - the link points to a different host - the link has a downloadattribute :warn: Note that we only handle target=_blank if they also have rel="noopener noreferrer" on them. This is needed to properly sandbox web pages. Navigating programmatically To navigate routes you can emit 'pushState', 'popState' or 'replaceState'. See #events for more details about these events. Server Rendering Choo was built with Node in mind. To render on the server call .toString(route, [state]) on your choo instance. var html = require('choo/html') var choo = require('choo') var app = choo() app.route('/', function (state, emit) { return html`<div>Hello ${state.name}</div>` }) var state = { name: 'Node' } var string = app.toString('/', state) console.log(string) // => '<div>Hello Node</div>' When starting an application in the browser, it's recommended to provide the same state object available as window.initialState. When the application is started, it'll be used to initialize the application state. The process of server rendering, and providing an initial state on the client to create the exact same document is also known as "rehydration". For security purposes, after window.initialState is used it is deleted from the window object. <html> <head> <script>window.initialState = { initial: 'state' }</script> </head> <body> </body> </html> Components From time to time there will arise a need to have an element in an application hold a self-contained state or to not rerender when the application does. This is common when using 3rd party libraries to e.g. display an interactive map or a graph and you rely on this 3rd party library to handle modifications to the DOM. Components come baked in to Choo for these kinds of situations. See nanocomponent for documentation on the component class. // map.js var html = require('choo/html') var mapboxgl = require('mapbox-gl') var Component = require('choo/component') module.exports = class Map extends Component { constructor (id, state, emit) { super(id) this.local = state.components[id] = {} } load (element) { this.map = new mapboxgl.Map({ container: element, center: this.local.center }) } update (center) { if (center.join() !== this.local.center.join()) { this.map.setCenter(center) } return false } createElement (center) { this.local.center = center return html`<div></div>` } } // index.js var choo = require('choo') var html = require('choo/html') var Map = require('./map.js') var app = choo() app.route('/', mainView) app.mount('body') function mainView (state, emit) { return html` <body> <button onclick=${onclick}>Where am i?</button> ${state.cache(Map, 'my-map').render(state.center)} </body> ` function onclick () { emit('locate') } } app.use(function (state, emitter) { state.center = [18.0704503, 59.3244897] emitter.on('locate', function () { window.navigator.geolocation.getCurrentPosition(function (position) { state.center = [position.coords.longitude, position.coords.latitude] emitter.emit('render') }) }) }) Caching components When working with stateful components, one will need to keep track of component instances – state.cache does just that. The component cache is a function which takes a component class and a unique id ( string) as its first two arguments. Any following arguments will be forwarded to the component constructor together with state and emit. The default class cache is an LRU cache (using nanolru), meaning it will only hold on to a fixed amount of class instances ( 100 by default) before starting to evict the least-recently-used instances. This behavior can be overriden with options. Optimizations Choo is reasonably fast out of the box. But sometimes you might hit a scenario where a particular part of the UI slows down the application, and you want to speed it up. Here are some optimizations that are possible. Caching DOM elements. var el = html`<div>node</div>` // tell nanomorph to not compare the DOM tree if they're both divs el.isSameNode = function (target) { return (target && target.nodeName && target.nodeName === 'DIV') } Reordering lists. var el = html` <section> <div id="first">hello</div> <div id="second">world</div> </section> ` Pruning dependencies We use the require('assert') module from Node core to provide helpful error messages in development. In production you probably want to strip this using unassertify. To convert inlined HTML to valid DOM nodes we use require('nanohtml'). This has overhead during runtime, so for production environments we should unwrap this using the nanohtml transform. Setting up browserify transforms can sometimes be a bit of hassle; to make this more convenient we recommend using bankai build to build your assets for production. Why is it called Choo? Because I thought it sounded cute. All these programs talk about being "performant", "rigid", "robust" - I like programming to be light, fun and non-scary. Choo embraces that. Also imagine telling some business people you chose to rewrite something critical for serious bizcorp using a train themed framework. :steam_locomotive::train::train::train: Is it called Choo, Choo.js or...? It's called "Choo", though we're fine if you call it "Choo-choo" or "Chugga-chugga-choo-choo" too. The only time "choo.js" is tolerated is if / when you shimmy like you're a locomotive. Does Choo use a virtual-dom? Choo uses nanomorph, which diffs real DOM nodes instead of virtual nodes. It turns out that browsers are actually ridiculously good at dealing with DOM nodes, and it has the added benefit of working with any library that produces valid DOM nodes. So to put a long answer short: we're using something even better. How can I support older browsers? Template strings aren't supported in all browsers, and parsing them creates significant overhead. To optimize we recommend running browserify with nanohtml as a global transform or using bankai directly. $ browserify -g nanohtml Is choo production ready? Sure. API This section provides documentation on how each function in Choo works. It's intended to be a technical reference. If you're interested in learning choo for the first time, consider reading through the handbook first :sparkles: app = choo([opts]) Initialize a new choo instance. opts can also contain the following values: - opts.history: default: true. Listen for url changes through the history API. - opts.href: default: true. Handle all relative <a href="<location>"></a>clicks and call emit('render') - opts.cache: default: undefined. Override default class cache used by state.cache. Can be a a number(maximum number of instances in cache, default 100) or an objectwith a nanolru-compatible API. - opts.hash: default: false. Treat hashes in URLs as part of the pathname, transforming /foo#barto /foo/bar. This is useful if the application is not mounted at the website root. app.use(callback(state, emitter, app)) Call a function and pass it a state, emitter and app. emitter is an instance of nanobus. You can listen to messages by calling emitter.on() and emit messages by calling emitter.emit(). app is the same Choo instance. Callbacks passed to app.use() are commonly referred to as 'stores'. If the callback has a .storeName property on it, it will be used to identify the callback during tracing. See #events for an overview of all events. app.route(routeName, handler(state, emit)) Register a route on the router. The handler function is passed app.state and app.emitter.emit as arguments. Uses nanorouter under the hood. See #routing for an overview of how to use routing efficiently. app.mount(selector) Start the application and mount it on the given querySelector, the given selector can be a String or a DOM element. In the browser, this will replace the selector provided with the tree returned from app.start(). If you want to add the app as a child to an element, use app.start() to obtain the tree and manually append it. On the server, this will save the selector on the app instance. When doing server side rendering, you can then check the app.selector property to see where the render result should be inserted. Returns this, so you can easily export the application for server side rendering: module.exports = app.mount('body') tree = app.start() Start the application. Returns a tree of DOM nodes that can be mounted using document.body.appendChild(). app.toString(location, [state]) Render the application to a string. Useful for rendering on the server. choo/html Create DOM nodes from template string literals. Exposes nanohtml. Can be optimized using nanohtml. choo/html/raw Exposes nanohtml/raw helper for rendering raw HTML content. Installation $ npm install choo See Also - bankai - streaming asset compiler - stack.gl - open software ecosystem for WebGL - yo-yo - tiny library for modular UI - tachyons - functional CSS for humans - sheetify - modular CSS bundler for browserify Support Creating a quality framework takes a lot of time. Unlike others frameworks, Choo is completely independently funded. We fight for our users. This does mean however that we also have to spend time working contracts to pay the bills. This is where you can help: by chipping in you can ensure more time is spent improving Choo rather than dealing with distractions. Sponsors Become a sponsor and help ensure the development of independent quality software. You can help us keep the lights on, bellies full and work days sharp and focused on improving the state of the web. Become a sponsor Backers Become a backer, and buy us a coffee (or perhaps lunch?) every month or so. Become a backer License *Note that all licence references and agreements mentioned in the Choo README section above are relevant to that project's source code only.
https://js.libhunt.com/choo-alternatives
CC-MAIN-2022-05
refinedweb
3,649
60.31
Servlets running together in the same server have several ways to communicate with each other. There are three major reasons to use interservlet communication: A servlet can gain access to the other currently loaded servlets and perform some task on each. The servlet could, for example, periodically ask every servlet to write its state to disk to protect against server crashes. One servlet can use another's abilities to perform a task. Think back to the ChatServlet from the previous chapter. It was written as a server for chat applets, but it could be reused (unchanged) by another servlet that needed to support an HTML-based chat interface. The most common, situation involves two or more servlets sharing state information. For example, a set of servlets managing an online store could share the store's product inventory count. Session tracking (see Chapter 7, "Session Tracking" ) is a special case of servlet collaboration. This chapter discusses why interservlet communication is useful and how it can be accomplished. Direct servlet manipulation involves one servlet accessing the loaded servlets on its server and optionally performing some task on one or more of them. A servlet obtains information about other servlets through the ServletContext object. Use getServlet() to get a particular servlet: public Servlet ServletContext.getServlet(String name) throws ServletException This method returns the servlet of the given name, or null if the servlet is not found. The specified name can be the servlet's registered name (such as "file") or its class name (such as "com.sun.server.webserver.FileServlet"). The server maintains one servlet instance per name, so getServlet("file") returns a different servlet instance than getServlet("com.sun.server.webserver.FileServlet").[1] If the servlet implements SingleThreadModel, the server may return any instance of the servlet from the current pool. The server may--but isn't required to--load the named servlet and execute its init() method if it isn't already loaded. The Java Web Server does not perform this load. A ServletException is thrown if there is a problem during the load. [1]getServlet("file") returns the instance that handles /servlet/file, while getServlet("com.sun. server.webserver.FileServlet") returns the instance that handles /servlet/com.sun.server.webserver. FileServlet. You can also get all of the servlets using getServlets() : public Enumeration ServletContext.getServlets() This method returns an Enumeration of the Servlet objects loaded in the current ServletContext. Generally there's one servlet context per server, but for security or convenience, a server may decide to partition its servlets into separate contexts. The enumeration always includes the calling servlet itself. This method is deprecated in the Servlet API 2.0 in favor of getServletNames() : public Enumeration ServletContext.getServletNames() This method returns an Enumeration of the names of the servlet objects loaded in the current ServletContext. The enumeration always includes the calling servlet itself. When used with getServlet(), this method can perform the same function as the deprecated getServlets(). The name returned can be a registered name (such as "file") or a class name (such as "com.sun.server.webserver.FileServlet"). This method was introduced in Version 2.0 of the Servlet API. Casting the Servlet object returned by getServlet() or getServlets() to its specific subclass can, in some situations, throw a ClassCastException. For example, the following code sometimes works as expected and sometimes throws an exception: MyServlet servlet = (MyServlet)getServletContext().getServlet("MyServlet"); The reason has to do with how a servlet can be automatically reloaded when its class file changes. As we explained in Chapter 3, "The Servlet Life Cycle", a server uses a new ClassLoader each time it reloads a servlet. This has the interesting side effect that, when the MyServlet class is reloaded, it is actually a different version of MyServlet than the version used by other classes. Thus, although the returned class type is MyServlet and it's being cast to the type MyServlet, the cast is between different types (from two different class loaders) and the cast has to throw a ClassCastException. The same type mismatch can occur if the class performing the cast (that is, the servlet containing the above code) is reloaded. Why? Because its new ClassLoader won't find MyServlet using the primordial class loader and will load its own copy of MyServlet. There are three possible workarounds. First, avoid casting the returned Servlet object and invoke its methods using reflection (a technique whereby a Java class can inspect and manipulate itself at runtime). Second, make sure that the servlet being cast is never reloaded. You can do this by moving the servlet out of the default servlet directory (usually server_root/servlets) and into the server's standard classpath (usually server_root/classes). The servlet performing the cast can remain in the servlets directory because its ClassLoader can find MyServlet using the primordial class loader. Third, cast the returned servlet to an interface that declares the pertinent methods and place the interface in the server's standard classpath where it won't be reloaded. Every class but the interface can remain in the servlets directory. Of course, in this case, the servlet must be changed to declare that it implements the interface. Example 11-1 uses these methods to display information about the currently loaded servlets, as shown in Figure 11-1. import java.io.*; import java.util.*; import javax.servlet.*; import javax.servlet.http.*; public class Loaded("Servlet name: " + name); out.println("Servlet class: " + servlet.getClass().getName()); out.println("Servlet info: " + servlet.getServletInfo()); out.println(); } } } There's nothing too surprising in this servlet. It retrieves its ServletContext to access the other servlets loaded in the server. Then it calls the context's getServletNames() method. This returns an Enumeration of String objects that the servlet iterates over in a while loop. For each name, it retrieves the corresponding Servlet object with a call to the context's getServlet() method. Then it prints three items of information about the servlet: its name, its class name, and its getServletInfo() text. Notice that if the Loaded servlet used the deprecated getServlets() method instead of getServletNames(), it would not have had access to the servlets' names. The next example demonstrates another use for these methods. It works like Loaded, except that it attempts to call each servlets' saveState() method, if it exists. This servlet could be run periodically (or be modified to spawn a thread that runs periodically) to guard against data loss in the event of a server crash. The code is in Example 11-2; the output is in Figure 11-2. import java.io.*; import java.lang.reflect.*; import java.util.*; import javax.servlet.*; import javax.servlet.http.*; public class SaveState("Trying to save the state of " + name + "..."); out.flush(); try { Method save = servlet.getClass().getMethod("saveState", null); save.invoke(servlet, null); out.println("Saved!"); } catch (NoSuchMethodException e) { out.println("Not saved. This servlet has no saveState() method."); } catch (SecurityException e) { out.println("Not saved. SecurityException: " + e.getMessage()); } catch (InvocationTargetException e) { out.print("Not saved. The saveState() method threw an exception: "); Throwable t = e.getTargetException(); out.println(t.getClass().getName() + ": " + t.getMessage()); } catch (Exception e) { out.println("Not saved. " + e.getClass().getName() + ": " + e.getMessage()); } out.println(); } } public String getServletInfo() { return "Calls the saveState() method (if it exists) for all the " + "currently loaded servlets"; } } SaveState uses reflection to determine if a servlet has a public saveState() method and to invoke the method when it exists. If the invocation goes without a hitch, it prints "Saved!". If there's a problem, it reports the problem. Why does SaveState use reflection? Because otherwise it would have to cast each Servlet object to some class or interface that includes a saveState() method, and the code for each servlet would have to be modified to extend or implement that class or interface. Using reflection is an easier approach that doesn't require code modification. Reflection also avoids the ClassCastException problem noted earlier.
http://docstore.mik.ua/orelly/java-ent/servlet/ch11_01.htm
crawl-002
refinedweb
1,313
56.55
note jimt <p>Just so there isn't any confusion, I'll hit a few of these points...</p> <p><i>For code which should insert something into html I've used three different construct: @~...~@, ^~...~^ and #~...~#, so web designer is able to see: here will be inserted some text. Only difference between them is escaping: @~~@ add html-escaping, ^~~^ add uri-escaping and #~~#</i></p> ?</p> <p>For example, internally on my site, I have a blogger style syntax processor. So I can type ~some text~ and it'll translate to <i>some text</i>.</p> <p>Incidentally, I also made all of my template tags user configurable with no speed hit. So if you want to use something else, it's simply changing a value in a config file.</p> <p><i>That's all, only 4 special tags!</I></p> <p>[cpan:/. :-) </p> <p><i>I've used do{}, as Jenda proposed in previous comment.</i></p> <p>This was an awesome suggestion (thanks Jenda!) and I have added it to the internal builds. Admittedly, it does prevent the user from typing in <% return $value %>, but I figured that it was a worthwhile modification to get rid of the anonymous subs.</p> <p><i> 150 lines of code (parser itself is 30 lines), less than 6KB.</i></p> <p>Even if I rip out the documentation, examples, sugar, and unique features, i still only get down to 405 lines, so I'll give this one to you.</p> <ul> <li><i>namespace issue I've solved by using no global variables (as mod_perl recommend), this way everything works fine in 'main::'</i><br/><br/>The globals are used for importing into the package that the template appears in. Otherwise, I didn't have a way to send variables into the template. I wouldn't mind hearing a non-global approach. </li> <li><i>instead of using 'print' I've added text into some variable (accumulator), this solve STDOUT/OUT issue</i><br/><br/> As said in my original post, the filehandles are long gone, instead accumulating into a scalar. Though "print OUT" still exists for backwards compatability (not that I use it).</li> <li><i>no problem with non-scalar variables and references because I can use any my() scalars/arrays/hashes</i><br/><br/> I have no problem importing non-scalars or refs, you just need to hand in a reference to what you want to the processing engine. I'm willing to bet you need to do that, too.</li> <li><i>to comment my perl code I use usual #-comments inside <--& ... --> block, no need for special tag because that perl block will not be included in generated html anyway</i><br/><br/> Basset::Template could have comments as: <code> %% # this is a comment or <% # this is a comment %> or <# this is a comment #> </code> it's all pure sugar. </li> 572402 572618
http://www.perlmonks.org/?displaytype=xml;node_id=572631
CC-MAIN-2015-35
refinedweb
486
63.19
Top Down Map Demo A simple demonstration of a sprite moving around a map. Includes *very* simple graphical map editor. Valentine Blacker (valentineblacker) This is a small tutorial code for a top-down map in Pygame. It's not much, but it has a very simple map editor, and it demonstrates a way to use sprite-sheets for animation. Please email me with any questions or comments. Feel free to use any part of this code for any reason. I'd love to hear from you if you do, though! [email protected] Links Releases Pygame.org account Comments Ian1045 2013-06-16 02:56:11 Didn't work for me for some reason... I ran demo.py, and it crashed. :/ Seems awesome though. mindkeep 2013-08-04 22:13:47 I had to update map_generator.py, but it seems to work well enough with this change: def scroll_bumptiles(self): #this takes in the list of tiles you can't walk through, and #returns a new list of those tile rect translated with the camera. #collision_tiles_list = [] for b in self.bumplist: if self.scrolling_up == True: b.y += self.scroll_speed elif self.scrolling_down == True: b.y -= self.scroll_speed elif self.scrolling_right == True: b.x -= self.scroll_speed elif self.scrolling_left == True: b.x += self.scroll_speed # I'm really not sure about this line, total guess at the original intent return self.bumplist
http://www.pygame.org/project-Top+Down+Map+Demo-2816-.html
CC-MAIN-2017-22
refinedweb
229
80.07
The 1.0.43 version of the Python client was released on April 24, 2015 and can be downloaded here. This release boasts the following features: - Allow for a ‘None’ as the set value for index creation methods. Support scans and queries against a namespace alone, with the set given as None. (AER-3462) - Consistent Unicode Handling. This is a change in how string values are returned. Both ‘str’ and ‘unicode’ values are converted by the client into UTF-8 encoded strings for storage on the Aerospike server. Read methods such as get, Query, Scan and operate will return that data as UTF-8 encoded str values. To get a Unicode, you will need to manually decode the string. The release also fixes three issues: - Fixed Issue 49 - udf_put() hangs. (AER-3537) - Fixed a segfault which happened while scanning a namespace with ‘None’ given for the set. (AER-3464) - Fixed Issue 50 - test harness skips running security methods against community edition. The full release notes are available here.
https://discuss.aerospike.com/t/python-client-release-1-0-43-april-24-2015/1261
CC-MAIN-2018-30
refinedweb
168
74.79
Key Takeaways - Understanding and using Unit Tests correctly are important for your ASP.NET Core Web API solutions - Learning about and using Mock data for your unit testing will allow you to have stable testing scenarios - Creating Mock Data projects in .NET Core 2.1 for use in your ASP.NET Core Web API solutions - Understanding and setting up Integration Testing to test your APIs externally for a fully tested ASP.NET Core 2.1 Web API solution. When architecting and developing a rich set of APIs using ASP.NET Core 2.1 Web API, it is important to remember this is only the first step to a stable and productive solution. Having a stable environment for your solution is also very important. The key to a great solution includes not only building the APIs soundly, but also rigorously testing your APIs to ensure the consumers have a great experience. This article is a continuation of my previous article for InfoQ called “Advanced Architecture for ASP.NET Core Web API.” Rest assured, you do not have to read the other article to get the benefits from testing, but it may help give you more insight into how I architected the solution I discuss. Over the last few years, I spent a lot of time thinking about testing while building APIs for clients. Knowing the architecture for ASP.NET Core 2.1 Web API solutions may help broaden your understanding. The solution and all code from this article’s examples can be found in my GitHub repository. Primer for ASP.NET Core Web API Let's take a quick moment and look at .NET and ASP.NET Core. ASP.NET Core is a new web framework which Microsoft built to shed the legacy technology that has been around since ASP.NET 1.0. By shedding these legacy dependencies and developing the framework from scratch, ASP.NET Core 2.1 gives the developer much better performance and is architected for cross-platform execution. What is Unit Testing? Testing your software may be new to some people, but it is quite easy. We will start with Unit Testing. The rigid definition from Wikipedia is “a software testing method by which individual units of source code, sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures, are tested to determine whether they are fit for use.” A layman's definition I like to use is that Unit Testing is used to make sure that your code within your solution performs as expected after you add new functionality or fix defects. We test a small sample of code to ensure we match our expectations. Let's take a look at a sample unit test: [Fact] public async Task AlbumGetAllAsync() { // Arrange // Act var albums = await _repo.GetAllAsync(); // Assert Assert.Single(albums); } There are three parts of a good unit test. The first is the Arrange part which is used for setting up any resources that your test may need. In the example above, I do not have any setup, so the Arrange part is empty (but I still keep a comment for it). The next part called the Act is the part that performs the action of the test. In our example, I am calling the data repository for the Album entity type to return the entire set of albums from the data source the repository is using. The last part of the test is when we verify or Assert that the action of the test was correct. For this test, I am verifying that I returned a single album from the data repository. I will be using the xUnit tool for my unit testing throughout this article. xUnit is an open source package for .NET Framework and now .NET Core. We will be looking at the .NET Core version of xUnit that is included when you install the .NET Core 2.1 SDK. You can create a new Unit Test project either through the .NET Core cli command dotnet test or through the project template in your favorite IDE such as Visual Studio 2017, Visual Studio Code or JetBrain’s Rider. Figure 1: Creating a new Unit Test project in Visual Studio 2017 Now let’s dive into unit testing your ASP.NET Core Web API solution. What should be Unit Tested with Web APIs? I am a huge proponent of using unit testing to keep a stable and robust API for your consumers. But I keep a healthy prospectus on how I use my unit tests and what I test. My philosophy is that you unit test your solution just enough and not anymore than necessary. What do I mean by that? I may get a lot of comments from this view, but I am not overly concerned with having 100% coverage with your tests. Do I think that we need to have tests that cover the important parts of the API solution and isolate each area independently to ensure the contract of each segment of code is kept? Of course! I do that, and that is what I want to discuss. Since our demo Chinook.API project is very thin and able to be tested using Integration Testing (discussed later in the article), I find that I concentrate the most with unit tests in my Domain and Data projects. I am not going to go into detail about how you unit test (as that topic goes beyond this article). I do want you to test as much of your Domain and Data projects as you can using data that does not depend on your production database. That is the next topic we’re covering called Mock Data and Objects. Why use Mock Data/Objects with your Unit Tests? We have looked at why and what we need to unit test. It is important to know how to correctly unit test the code for your ASP.NET Core Web API solution. Data is key to testing your APIs. Having a predictable data set that you can test is vital. That is why I would not recommend using production data or any data that can change over time without your knowledge and expectations. We need a stable set of data to make sure all unit tests run and confirm that the contract between the code segment and the test is satisfied. As an example, when I test the Chinook.Domain project for getting an Album with an ID of 42 I want to make sure that it exists and has the expected details like the Album’s name and it is associated with an Artist. I also want to make sure that when I retrieve a set of Albums from the data source, I get the expected shape and size to meet the unit test I coded. Many in the industry use the term “mock data” to identify this type of data. There are many ways to generate mock data for the unit tests, and I hope you create as “real-world” set of data as possible. The better your data you create for your tests, the better your test will perform. I would suggest that you make sure your data is also clean of privacy issues and does not contain personal data or sensitive data for your company or your customer. To meet our need for clean, stable data, I create unique projects that encapsulate the mock data for my Unit Test projects. Let’s call my mock data project for the demo Chinook.MockData (as you can view in the demo source). My MockData project is almost identical to my normal Chinook.Data project. It has the same number of data repositories, and each one adheres to the same interfaces. I want the MockData project to be stored in the Dependency Injection (DI) Container so that the Chinook.Domain project can use it just like the Chinook.Data project that is connected to the production data source. That is why I love dependency injection. It allows me to switch Data projects through configuration and without any code changes. Integration Testing: What is this new testing for Web APIs? After we have performed and verified the Unit Tests for our ASP.NET Core Web API solution, we will look at a different type of testing. I look at Unit Testing to verify and confirm expectations on the internal components of the solution. When we are satisfied with the quality of the internal tests, we can move on to testing the APIs from the external interface, what we call Integration Testing. Integration Tests will be written and performed at the completion of all components, so your APIs can be consumed with the correct HTTP response to verify. I look at unit tests as testing independent and isolated segments of code while the integration tests are used to test the entire logic for each API on my HTTP endpoint. This testing will follow the entire workflow of the API from the API project’s Controllers to the Domain project Supervisor and finally the Data project’s Repositories (and back the entire way to respond). Creating the Integration Test Project To leverage your existing knowledge of testing, the Integration Testing functionality is based on current unit testing libraries. I will use xUnit for creating my integration tests. After we have created a new xUnit Test Project named Chinook.IntegrationTest, we need to add the appropriate NuGet package. Add the package Microsoft.AspNetCore.TestHost to Chinook.IntegrationTest project. This package contains the resources to perform the integration testing. Figure 2: Adding the Microsoft.AspNetCore.TestHost NuGet package We can now move on to creating our first integration test to verify our API externally. Creating your first Integration Test To start with the external testing of all of the APIs in our solution, I am going to create a new folder called API to contain our tests. I will also create a new test class for each of the Entity types in our API domain. Our first integration test will cover the Album entity type. Create a new class called AlbumAPITest.cs in the API folder. We will now add the following namespaces to our file. using Xunit; using Chinook.API; using Microsoft.AspNetCore.TestHost; using Microsoft.AspNetCore.Hosting; Figure 3: Integration Test Using Directives We now have to set up the class with our TestServer and HttpClient to perform the tests. We need a private variable called _client of type HttpClient that will be created based on the TestServer initialized in the constructor of the AlbumAPITest class. The TestServer is a wrapper around a small web server that is created based on the Chinook.API Startup class and the desired development environment. In this case, I am using the Development environment. We now have a web server that is running our API and a client that understand how to call the APIs in the TestServer. We can now write the code for the integration tests. Figure 4: Our first integration test to get all Albums In addition to the constructor code, Figure 4 also shows the code for our first integration test. The AlbumGetAllTestAsync method will test to verify that the call to get all Albums from the API works. Just like in the previous section where we discussed unit testing, the logic for our integration testing also using the Arrange/Act/Assert logic. We first create an HttpRequestMessage object with the HTTP verb supplied as a variable from the InlineData annotation and the URI segment that represents the call for all Albums (“/api/Album/”). We next will have the HttpClient _client send an HTTP Request, and finally, we will check to verify if the HTTP Response meets our expectations which in this case is a 200 OK. I have shown in Figure 4 two ways to verify our call to the API. You can use either, but I prefer the second way as it allows me to use the same pattern for checking responses to specific HTTP Response Codes. response.EnsureSuccessStatusCode(); Assert.Equal(HttpStatusCode.OK, response.StatusCode); We can also create integration tests that need to test for specific entity keys from our APIs. For this type of test, we will add additional value to the InlineData annotation that will be passed through the AlbumGetTestAsync method parameters. Our new test follows the same logic and uses the same resources as the previous test, but we will pass the entity key in the API URI segment for the HttpRequestMessage object. You can see the code in Figure 5 below. Figure 5: The second integration test for a single Album After you have created all of your integration tests to test your API, you will need to run them through a Test Runner and make sure they all pass. All of the tests you have created can also be performed during your DevOps Continuous Integration (CI) process to test your API during the entire development and deployment process. You should now have an execution path for keeping your API well tested and maintained through the development, quality assurance, and deployment phases for the consumers of your APIs to have a great experience without incidents. Figure 6: Running the Integration Tests in Visual Studio 2017 Conclusion Having a well thought-out test plan using both unit testing for internal testing and integration testing for verifying external API calls is just as important as the architecture you created for the development of your ASP.NET Core Web API solution. About the Author Suggestion by James Johnson / Re: Suggestion by Chris Woodruff / test philosophy by Bob Fischer / Re: test philosophy by Chris Woodruff / Suggestion by James Johnson / Your message is awaiting moderation. Thank you for participating in the discussion. Nice article, but I would recommend using Postman instead for integration testing APIs. It is built specifically for that purpose, and provides support for environments, environment variables, request chaining, mock servers, active monitoring, and much more. test philosophy by Bob Fischer / Your message is awaiting moderation. Thank you for participating in the discussion. Excellent article, thanks for some new ideas. One comment, however, one you expected already. The world is littered with software that is good but visibly flawed. My clients have never accepted anything but 100%. I can't always charge for all my testing but I sleep better when I overtest everything before release. It is a value judgment on when to stop testing but if you are running your own business, you had better overtest or you will not be in business for long. Re: Suggestion by Chris Woodruff / Your message is awaiting moderation. Thank you for participating in the discussion. I agree that Postman is a great tool for APIs and I use it all the time to look at the workflows of my and other's APIs. The great thing about the Integration Testing I write about is it can be included with DevOps workflows. Thanks for the comment and Happy New Year! Re: test philosophy by Chris Woodruff / Your message is awaiting moderation. Thank you for participating in the discussion. Nice comment and always a balancing act especially for consultants.
https://www.infoq.com/articles/testing-aspnet-core-web-api?utm_campaign=ASP.NET%20Weekly&amp;utm_medium=email&amp;utm_source=Revue%20newsletter
CC-MAIN-2019-18
refinedweb
2,531
64.2
#include <Subscription.h> Construction: A subscription is created by calling ConsoleSession::subscribe. Cancel subscriptions to all subscribed agents. After this is called, the subscription shall be inactive. Check to see if this subscription is active. It is active if it has a live subscription on at least one agent. If it is not active, there is nothing that can be done to make it active, it can only be deleted. lock and unlock should be used to bracket a traversal of the data set. After lock is called, the subscription will not change its set of available data objects. Between calls to getDataCount and getData, no data objects will be added or removed. After unlock is called, the set of data will catch up to any activity that occurred while the lock was in effect.
http://qpid.apache.org/releases/qpid-0.24/qmf/cpp/api/classqmf_1_1Subscription.html
CC-MAIN-2014-42
refinedweb
135
66.44
I want to create a toString test but i have alot of descriptive text inside my string and two strings will never be the same as shown below: public String toString(){ return ("Name: " + fname + ", " + lname + "Product: " + product); } aCustomer.toString("Andrew", "Bogart", "Coke"); It will not be the sane as name and product are not included. I am currently printing out the toString extracted from customer class and using println to show the expected result as shown below System.out.println("Expected " + "Name :" + "Andrew" + ", " + "Boggart" + " Product: " + Coke); System.out.println(aCustomer.toString("Andrew", "Bogart", "Coke")); Does anyone have a better way to test the toString method using Junit tests? Pattern matching is usually a viable alternative, if the version of java that you are using supports it. Check out the Regular Expression tutorial at: Hope this helps. ~ev.
https://forums.devx.com/showthread.php?145815-Junit-tests-for-toString()&s=fa3ab02b78ce5b815ee65b89c30390eb
CC-MAIN-2021-10
refinedweb
137
53.1
sl_cli Struct Reference Struct representing an instance of the CLI. #include <sl_cli_types.h> Struct representing an instance of the CLI. Field Documentation ◆ tick_in_progress True when a tick is in progress. ◆ prompt_string The current command prompt. ◆ req_prompt The next tick shall write a prompt. ◆ input_buffer The input buffer. ◆ input_size Length of input buffer. ◆ input_pos The position the user is currently typing to. ◆ input_len The total length of the current input. ◆ last_input_type Keeps track of last input. ◆ command_group Base for the command group list. ◆ command_function Function pointer to an alternate command function. ◆ aux_argument User defined command argument. ◆ iostream_handle The iostream used by the CLI. ◆ start_delay_tick A delay after the CLI task has started before any actions in ticks. ◆ loop_delay_tick A delay in the CLI task loop in ticks.
https://docs.silabs.com/gecko-platform/3.2/service/api/structsl-cli
CC-MAIN-2022-33
refinedweb
126
63.56
Win a copy of Barcodes with iOS this week in the iOS forum or Core Java for the Impatient in the Java 8 forum! Do i need a Mediator class to create Model, View and Contoller object and set up their relationship ? Or Contoller class can do this job ? Strict MVC does require all three participants. The problem that I am worried about, is that in one discussuion, somebody said that Contoller MUST have a reference to the view. I thought that good MVC should decouple View from Controller, what I have done. My View creates GUIContoller object. My GUIContoller creates table Model object. my view has reference then to the controller and get references to the Table model througg the GUIContoller.getTableModel() method. So, My contoller doesn't have any reference to the View. Are you sure it is Ok? I would prefer, let some other class create view and controller object. In real world also these are created by some one else public class ConnectionProps{ private java.util.Properties props = ... public ConnectionProps(){ ... } public void setIp(String port){ props_.setProperty("client.serverport", port); } } So this model(in Swing) allows to become observable. But in bean, typically we do not see this method. Am i right ?
http://www.coderanch.com/t/184088/java-developer-SCJD/certification/FBN-MVC-pattern-Model-Class
CC-MAIN-2015-11
refinedweb
206
68.16
Havoc Pennington has written about the future of desktop development, and the cross roads that some of the major projects are at (GNOME, Mozilla, OpenOffice.org, Evolution). He writes about the various choices moving forward, and the issues behind those choices. Background In the Linux desktop world, there's widespread sentiment that high-level language technologies such as garbage collection, sandboxed code, and so forth would be valuable to have and represent an improvement over C/C++. Several desktop projects are actively interested in this kind of technology: GNOME: many developers feel that this is the right direction Mozilla: to take full advantage of XUL, it has to support more than just JavaScript OpenOffice.org: has constantly flirted with Java, and is considering using Java throughout the codebase Evolution: has considered writing new code and features in Mono, though they are waiting for a GNOME-wide decision. No Rewrites Please None of the huge desktop projects are considering a one-pass rewrite to a new language; managed code can invoke C/C++ code, so there is an evolutionary path where newly-written code can be managed. This allows gradual refactoring. Read more at Commentary from the community James Strachan in Java, Mono, or C++? - time for open sourced Java? Dion Almaer in Open Source projects ponder move from C/C++ to Java/.NET/other Future of desktop development: Java, Mono, C++, ? (53 messages) - Posted by: Arnaldo Riquelme - Posted on: March 17 2004 14:09 EST Threaded Messages (53) - Gnome continues to make bad decisions by bruce mcdonald on March 18 2004 11:15 EST - RE: Gnome continues to make bad decisions by Tako Schotanus on March 18 2004 12:03 EST - Microsoft is untrustworthy by bruce mcdonald on March 18 2004 12:17 EST - You want to slap Sun? by Krasna Halopti on March 19 2004 09:58 EST - No mention of Guile/Scheme by Joel Crisp on March 18 2004 12:09 EST - functional languages are great but... by Leo Lipelis on March 18 2004 04:58 EST - Modern libraries exist by Joel Crisp on March 19 2004 04:00 EST - No mention of Qt?? by frustrated developer on March 18 2004 05:31 EST - Why do we need a viable open source JVM? by Paul-Michael Bauer on March 18 2004 13:04 EST - Remember who you are dealing with here... by bruce mcdonald on March 18 2004 14:53 EST - Re: viable open source JVM by Gerry G on March 18 2004 17:48 EST - Re: viable open source JVM by McCorney Severin on March 18 2004 08:52 EST - I do not need a open source JVM! I need corporate profit! by zhou en on March 18 2004 20:32 EST - lol: zealots abound on all sides by Paul-Michael Bauer on March 18 2004 08:45 EST - Why do we need a viable open source JVM? by wasedaxiao wasedaxiao on March 18 2004 23:08 EST - Why we need an Open Source JVM implementation by anon anon on March 19 2004 02:08 EST - it's not just ethical problems by gabor farkas on March 19 2004 04:48 EST - Helping MS by johnyzee on March 19 2004 05:36 EST - it's not just ethical problems by Ivan Markov on March 19 2004 07:18 EST - How do the GPL restrictions apply in this case? by Paul-Michael Bauer on March 19 2004 08:45 EST - JRE free for redistribution by Roshan Shrestha on March 19 2004 09:39 EST - Re: JRE free for redistribution by Gerry G on March 19 2004 12:44 EST - "What are the chances that Gnome will choose Java? by Rolf Tollerud on March 19 2004 12:45 EST - Re: Miguel on Java... Interesting by Gerry G on March 19 2004 18:57 EST - the Mono team is alert by Rolf Tollerud on March 20 2004 05:28 EST - the Mono team is alert by Star Trooper on March 22 2004 07:24 EST - poor guy by Star Trooper on March 20 2004 11:21 EST - classpath is a good thing by Roger Voss on March 20 2004 02:24 EST - Java VM version problem is far worse than DLL hell by Rolf Tollerud on March 20 2004 04:35 EST - Java VM version problem is far worse than DLL hell by dimiter dimitrov on March 20 2004 10:11 EST - everything is allowed in a "just" cause- is that smart? by Rolf Tollerud on March 21 2004 01:26 EST - everything is allowed in a "just" cause- is that smart? by Star Trooper on March 21 2004 04:57 EST - everything is allowed in a "just" cause- is that smart? by dimiter dimitrov on March 21 2004 07:23 EST - progress by Rolf Tollerud on March 21 2004 07:42 EST - progress? by dimiter dimitrov on March 21 2004 09:05 EST - most people are dragged screaming and kicking into the future by Rolf Tollerud on March 21 2004 11:07 EST - most people are dragged screaming and kicking into the future by gilles cadignan on March 22 2004 04:41 EST - no arguments, apparently by Rolf Tollerud on March 22 2004 06:41 EST - the only one who screams and kick is you... by Star Trooper on March 22 2004 07:14 EST - Classpaths, beginners, versioning by Gerry G on March 22 2004 11:01 EST - nothing for those with a faint heart by Rolf Tollerud on March 22 2004 12:09 EST - Open Source is not at road's end... by Gerry G on March 22 2004 07:26 EST - Linux will be promoted at the cost of Java by Rolf Tollerud on March 22 2004 10:21 EST - Workspaces point taken, SourceForge not enemy territory by Gerry G on March 23 2004 11:26 EST - Danger of Longhorn by Staffan Gustafsson on March 23 2004 12:59 EST - nothing for those with a willy by Cameron Purdy on March 22 2004 11:22 EST - the poor girl is lying on the track with the train coming by Rolf Tollerud on March 23 2004 02:40 EST - everything is allowed in a "just" cause- is that smart? by gerhard haak on March 22 2004 08:33 EST - sometimes I have scruples.. by Rolf Tollerud on March 22 2004 08:54 EST - hahaha, you just have leaces.... by Star Trooper on March 22 2004 09:32 EST - sometimes I have scruples.. by Henrique Steckelberg on March 22 2004 10:28 EST - as if the .NET Runtime were by Patrick Schriner on March 22 2004 07:34 EST - We need to move on by Staffan Gustafsson on March 22 2004 08:08 EST Gnome continues to make bad decisions[ Go to top ] Unfortunately the GNOME folks continues to make bad decisions. Their insistence on developing a desktop environment in C is the chief among them. This was largely dictated by their choice of toolkit, GTK+. This implements OO semantics in C - the result is *VERY* ugly and quite tedious to understand. Additionally their component solution is CORBA based - they even developed thier own ORB (ORBit) to handle the performance issues. Consequently the componentization of Gnome is spotty at best. This can largely be considered a failed effort as it remains very problematic to create and assemble component applications on Gnome. - Posted by: bruce mcdonald - Posted on: March 18 2004 11:15 EST - in response to Arnaldo Riquelme Now they want to start implementing parts GNOME in MONO... What the hell are these people thinking about - its not like they are not intelligent, they just keep making the *WRONG* technical decisions. C# without .Net is like Java without most of the class libraries and certainly without J2EE. Additionally there is no advantage beyond the ECMA standardization of the language can be derived from this decision. In my mind, C# does not add enough to programming semantics to be seriously considered as a Java replacement. It does go some way to address the issues in C/C++. RE: Gnome continues to make bad decisions[ Go to top ] Besides the fact, as mentioned in the article, why the heck should we help Microsoft by supporting one of their core technologies? This has happened before with OS/2 and look where it got them! Most developers never considered writing specifically for OS/2 because Windows software ran fine on OS/2 because of the Win32 API emulation. Microsoft just had to wait until 99% of the developers were accustomed to the Win32 API and start chaning it at such a pace that OS/2 could never keep up. The end result is in the history books. So why are some people even considering running the same risk? It just makes me want to slap Sun even more for not wanting to open up Java. (Not that I can't see their POV but it would be such a shame to let go to waste any opportunity of Java becoming a big part of Linux/GNOME/KDE or OSS in general) - Posted by: Tako Schotanus - Posted on: March 18 2004 12:03 EST - in response to bruce mcdonald Microsoft is untrustworthy[ Go to top ] I agree with you - there are just so many reasons to be very suspicious about hitching your cart with to the C# horse. As you rightfully point out, this will indirectly help Microsoft since it will be introducing more people to their technologies... There is no way this can challenge MS because of the control issues that you mention and the fact that they can throw resources at this indefinitely. As you rightly point out, if IBM was not able to keep up with MS with OS/2, then a bunch of part timers certainly can't. And I mean absolutely no disrepect to the Mono guys. - Posted by: bruce mcdonald - Posted on: March 18 2004 12:17 EST - in response to Tako Schotanus You want to slap Sun?[ Go to top ] You want to slap Sun? Why not Ximian, for causing what some see as such a colossal waste of effort? - Posted by: Krasna Halopti - Posted on: March 19 2004 09:58 EST - in response to Tako Schotanus No mention of Guile/Scheme[ Go to top ] Suprising that there is no mention of the functional language side of things in this. Guile is maturing nicely and Scheme in general has a lot of very interesting features. Surely it should at least be considered? - Posted by: Joel Crisp - Posted on: March 18 2004 12:09 EST - in response to bruce mcdonald After all, the other big desktop utility EMACS is written in lisp and has blazed a trail all of its own! functional languages are great but...[ Go to top ] I think Scheme, and all kinds of LISPs in general, while being some of the oldest languages in existence, are also way ahead of their time. All the popular languages are "stealing" features, one by one, from functional languages. The result is a mess. Why not just go straight to the source and use the real thing? - Posted by: Leo Lipelis - Posted on: March 18 2004 16:58 EST - in response to Joel Crisp There is one problem. Scheme in particular was meant as a "research" language. That's a shame. While Scheme standard is small and elegant, it has no teeth for the real world, like IO, etc. What ends up happening is all the Schemes and LISPS end up with incompatible parts and that slows the adoption of the language in the real world. In the real world you need to talk to databases. You need libraries to do graphics. You need GUI widgets. You need cross-platform compatibility (where is that in Guile?). You need legacy support. I love Scheme! I think it's the best language ever. The problem is, the custodians of the language have killed it for the rest of the world by designating Scheme as a toy^H^H^Hresearch language for eternity. I'd love to use Guile for everything, but is it crossplatform? Does it have libs to talk to PostgreSQL? Are they version 2.3.5 or 0.0.0.1 like in Ruby land? Modern libraries exist[ Go to top ] Most scheme implementations either come with libaries or bind directly to C/C++. Note that the article was suggesting that these projects would continue to use their own GUI libraries irrespective of choice of language. - Posted by: Joel Crisp - Posted on: March 19 2004 04:00 EST - in response to Leo Lipelis Personally I'd be more concerned about threading support. Guile is intended AFAIK to be cross platform but also is focussed on embedding like any control language so uses the host's GUI library. It is also focussed very strongly on integration with existing libraries via the native interface functionality. BTW, my experience of ruby libraries is that they are generally more mature than their version would suggest - whether this is down to the language or the type of person who adopts it I don't know ;-) No mention of Qt??[ Go to top ] And why not Qt ?? - Posted by: frustrated developer - Posted on: March 18 2004 17:31 EST - in response to Joel Crisp Why do we need a viable open source JVM?[ Go to top ] I do not understand why Pennington insists that an open source implementation of the JVM is a necessary condition for Java to flourish as a development standard. - Posted by: Paul-Michael Bauer - Posted on: March 18 2004 13:04 EST - in response to Arnaldo Riquelme Open source and open standards both solve similar problems. Unlike C#/.NET, Java (language & platform) is a complete open standard. Implementations of that standard need not necessarily be open source for open source projects to target this platform. I am satisfied with the free (as in puppy) offerings of IBM, Sun, and BEA. Remember who you are dealing with here...[ Go to top ] Not to put words in their mouths or to misrepresent them, these are the same people who started a new desktop environment project because the current project (KDE) was then based on a non-free (similiar to JDK licensing tho.) toolkit. These people are fervent in their opposition to any kind of proprietary licensing. As far as I am concerned, I took a look at the C infrastructure and it was *HORRIBLE*. I cannot stress this enough - If I am going to contribute my time freely, its not going to be in a medieval dungeon of a development environment. - Posted by: bruce mcdonald - Posted on: March 18 2004 14:53 EST - in response to Paul-Michael Bauer PS. And yes, I do contribute to an open sources project: babeldoc. Re: viable open source JVM[ Go to top ] I'm with Paul-Michael - I'm not sure an open-source JVM is truly needed, especially more than one. Sun should be giving much more support to Java on Linux with their new (and apparently successful so far) Java Desktop System and Java Enterprise System. - Posted by: Gerry G - Posted on: March 18 2004 17:48 EST - in response to Paul-Michael Bauer I still don't understand the need for Mono -- isn't the point of .NET to allow apps to communicate with web services (an open messaging protocol) regardless of the client or server??? Keep writing normal linux-native clients/servers or Java clients/servers and communicate with .NET stuff using those web services!! DUH! And, begging to differ with Havoc, I prefer KDE over Gnome, and I think a lot of other people do, too. Ignoring KDE when talking about a linux desktop is poor judgement, IMO. I don't use linux much, and have used both desktops and just prefer KDE, and I read about both from a programming perspective and it's a rare bird that says he/she truly likes programming for Gnome. Like some others posting already, I'm not wrapped around the axle with advocating a specific desktop or toolset or licensing scheme. I'll admit I'm no fan of Microsoft, though, and I would like to see Java go further on linux from both client and server perspectives, but know it's a tough road. Doing anything on linux is tough, IMO, but getting better. Getting off the whole licensing kick would probably do more good for open source than anything else. It's thoroughly annoying to see news stories every month about somebody who screamed bloody murder because to software packages have "incompatible" licenses. Get over it! Re: viable open source JVM[ Go to top ] Say what you want about the viability of Mono. However, Mono simply rocks! Have you guys tried it yet? If not, and since this is suppose to be a scientific community, then I reccommend that you download and intall it on your linux machine. Writting desktop application just rocks! Just try some of the the sample apps that come with the download so you can see what I am talking about. - Posted by: McCorney Severin - Posted on: March 18 2004 20:52 EST - in response to Gerry G -M I do not need a open source JVM! I need corporate profit![ Go to top ] I am a chinese java developer with 7+ year java experience and poor english. I do not think that I need a open source JVM! I just need a free and standard JVM such as SUNs JVM! I hate those who ask more from SUN such as IBM(it make profit from java but destory SUNs profit and also destory our java groups profit). I want to ask those despicable companies: why not aim at our corporate enemy such as M$ or make our corporate group bigger? Why just aim at ourself and destroy our groups profit? - Posted by: zhou en - Posted on: March 18 2004 20:32 EST - in response to Paul-Michael Bauer lol: zealots abound on all sides[ Go to top ] enough said - Posted by: Paul-Michael Bauer - Posted on: March 18 2004 20:45 EST - in response to zhou en Why do we need a viable open source JVM?[ Go to top ] To open JVM would demolish the compatibility and Java - Posted by: wasedaxiao wasedaxiao - Posted on: March 18 2004 23:08 EST - in response to Arnaldo Riquelme would be only dominated by IBM just for their own profits. Why we need an Open Source JVM implementation[ Go to top ] The biggest reason we need an Open Source JVM is so that the distros can ship a fully fledged and certified JVM without being in conflict with any licensing restrictions. - Posted by: anon anon - Posted on: March 19 2004 02:08 EST - in response to Arnaldo Riquelme This is not really a problem for the more commercial Distros, I think SuSe ships with a Sun JVM but the open source Distros such as Debian and Gentoo cannot ship any of the Certified JVMs. So basically you need to first need to download and install a JVM before being able to use it, so if Gnome used Java, you would have to do a download and post install of Java and only then try to get Gnome up and running. Not exactly a nice way to have an install and OS and desktop environment is it... it's not just ethical problems[ Go to top ] the biggest problem with java not being open source (and here i mean for example GPL, LGPL, X11,BSD or something like that) is that you CAN NOT link gpl stuff to them. for example if the gnome people wanted to use java to develop gnome apps, they would like to use their GTK toolkit with java. but it is not that simple, because some GNOME stuff can be under GPL, which is very specific about what you can link it to. look at sun: they support/contribute to the gnome platform, but they also have problems when they want to enhance it with gnome (because they just can't bind java to the most of it). so the problem is mostly with binding java to other non-java technologies. - Posted by: gabor farkas - Posted on: March 19 2004 04:48 EST - in response to anon anon Helping MS[ Go to top ] Like other posters I don't understand the drive to enable C# development on Linux. Clearly it would be a scoop for Microsoft to have additional commoditization of C# applications (and developers) through the support of the Linux community, increasing the value of the entire network of existing and future C# applications, platforms and developers. I am sure lending Microsoft a helping hand to fortify their monopoly is the last reason C#/Mono proponents would cite, but what is the reason then? Is C# that much better a language than Java? Does it have superior tools? Or is it something else I am missing? - Posted by: johnyzee - Posted on: March 19 2004 05:36 EST - in response to gabor farkas it's not just ethical problems[ Go to top ] Really? - Posted by: Ivan Markov - Posted on: March 19 2004 07:18 EST - in response to gabor farkas AFAIK, most of the library stuff in GNOME is LGPL, not pure GPL so there are no linkage problems with it. Besides, even if it was pure GPL, there would still be no problems calling into it from Java, as long as your application is GPL too, or am I mistaken? If you think otherwise, please warn: - Java-Gnome () - Richard Dale of QT/KDEJava () that they have based their work on a shaky ground (using Java to call into LGPL/GPL code). How do the GPL restrictions apply in this case?[ Go to top ] Must dependencies for a GPLd product also be released under the GPL? - Posted by: Paul-Michael Bauer - Posted on: March 19 2004 08:45 EST - in response to gabor farkas For instance, the fact that the Win32 API is not GPLd does not proclude me from writing GPLd software for Windows(TM). The way I understand the GPL, only works based on or linked with a GPLd product must themselves be released under the GPL. Therefore, why can't the GNOME folks write a GTK/JNI/Java layer that targets the Sun JDK? (or would this offend their ideological "purity") JRE free for redistribution[ Go to top ] Maybe I am wrong, but I thought Sun's JRE was freely redistributable. Of course, if you plan to run JSPs, then you would need a Java compiler which is not part of JRE, but then as a Linux desktop for the average consumer, I do not see much of a demand for JSPs. Java applications would run just fine. To run Eclipse IDE, for example, you only need a JRE. - Posted by: Roshan Shrestha - Posted on: March 19 2004 09:39 EST - in response to anon anon Also, developers would have no problem downloading the latest JVMs. I am not compalining that I have to download the JVM. Re: JRE free for redistribution[ Go to top ] I agree - I think the JRE is freely redistributable, which I believe it is bundled into many linux distros. The issue seems to be the SDK (compiler, tools, etc). But I agree, I have no problem downloading and using the SDKs. As long as linux makes it easy to download/install, that is. - Posted by: Gerry G - Posted on: March 19 2004 12:44 EST - in response to Roshan Shrestha Since the SDKs are free to download, free to use, and the "binary output" (.class, .jar, etc) is freely redistributable (and hey, it runs on different OSes!), what good would there be in having an open-source version of the same SDK? It would just introduce compatibility bugs, encourage forks (from well-meaning developers, I'm sure, but it's WRONG! Write a new language instead, for goodness sake!), and be a collosal waste of programming hours that could be better devoted elsewhere. Just because you *can* do it, doesn't mean you should. And just because there's "millions of open-source developers in the community across the world" doesn't mean those resources are infinite and should tackle every hare-brained project that comes along. If we're to "let the market decide" whether an open-source Java SDK is needed, my market vote is "NO". I'm already wasting time fighting this lame idea. I'll blame it on the people who think each and every piece of software in the world should be GPL, and if it's not GPL they won't touch it with a 10-foot pole. Get real, people. Open source and community development are good things, but GPL isn't the way to solve all software problems. "What are the chances that Gnome will choose Java?[ Go to top ] (it is a trick question.." - Posted by: Rolf Tollerud - Posted on: March 19 2004 12:45 EST - in response to Arnaldo Riquelme Miguel de Icaza: "Compiling our program. So I begun with a small Java program, I had to Google for Hello World, since I did not remember much of Java, and I have to say, I am still not comfortable with the classpath thing..." Regards Rolf Tollerud Re: Miguel on Java... Interesting[ Go to top ] Rolf, thanks for the link. Interesting reading. I would say that it doesn't seem any better than just using the Sun Java SDKs, since you still have to use Mono (an extra dependency) and it's not cross-platform (is it even cross-linux, or would the .exe need to be recompiled for each distro, version of Mono, verison of the kernel, etc.?). But, one thing I don't know about is how easy and efficient it is to build GUI apps using the mono libraries that are getting included. Are they any better than AWT, Swing, or SWT? (each has it's ups/downs) I'm intrigued, but I'm not sold on what it's benefit is (that is, to write programs using Miguel's method). - Posted by: Gerry G - Posted on: March 19 2004 18:57 EST - in response to Rolf Tollerud Thanks again for the info. Others should be visiting that link and commenting, too! the Mono team is alert[ Go to top ] It is just to show that Miguel de Icaza does not miss any trick. - Posted by: Rolf Tollerud - Posted on: March 20 2004 05:28 EST - in response to Gerry G If ever C# is to succed on Linus, there must be an easy migration path and gradual refactoring from Java. They are also in preperation for porting the XAML. BTW, here is another link you might find interesting, by Miguel. Regards Rolf Tollerud the Mono team is alert[ Go to top ] - "My particular decefit is that I can resist everything, except the latest fashions :(" - Posted by: Star Trooper - Posted on: March 22 2004 07:24 EST - in response to Rolf Tollerud - "OO is out, WS in" - "Java? this is so last week!!" - "You must follow the fashion" - "Platon says...." - "Carefully handcrafted applications... a jewell!" <List goes on> Just got back home and noticed that most "technical arguments" from our favourite troll sound very similar to the kind of talking you can find in "Queer eye for the straight guy" reality show... I was laughing my guts off!!! If you wanna mark me as noisy, it's fine with me... hahahahah :-P O realize it is not a techie post... but just could not help it. :-P BTW. I DO NO MEAN ANY DISRESPECT TO ANYONE. poor guy[ Go to top ] millions of developers over the world use java classpath efficiently and just because he does not like it... what a lame reference! - Posted by: Star Trooper - Posted on: March 20 2004 11:21 EST - in response to Rolf Tollerud classpath is a good thing[ Go to top ] - Posted by: Roger Voss - Posted on: March 20 2004 14:24 EST - in response to Star Trooper millions of developers over the world use java classpath efficiently and just because he does not like it... what a lame reference!Yeah, the alternative is something like the Win32 COM machine-global registry - which is the basis for a lot of the infamous "DLL hell" phenomena of that platform. The problem with global registries is either the last component (of a particular type) registered wins or else have to come up with versioning schemes. These versioning schemes never seem to pan out in practice, though, do they? (Of those doing .NET development I'd like to know if you really bother to use the versioning system of the GAC and specifically depend on it. IMHO it's much simpler to juse do .NET private assemblies and let a .NET application run in full isolation.) The thing about Java classpath is every Java program can have essentialy its own private classpath - which makes for wonderful application isolation and thus software application stability. Via the classpath it's more straightforward to insure that a Java program runs with versions of components that it's known to work well with. One thing that history has now well demonstrated is that the Win32 COM global registery was a very bad idea. Java VM version problem is far worse than DLL hell[ Go to top ] Well if you reason with yourself for a while you eventually will convince yourself that you have right. The classpath is one of the most stupid - almost comical idiotic things in Java. You just have to choose namespaces in .NET (as 99% do) to have the same application isolation than Java without any global cache. - Posted by: Rolf Tollerud - Posted on: March 20 2004 16:35 EST - in response to Roger Voss Neither do you have the problem with 30 or so different Java VM that clutters your hard disk (as every downloaded program comes with one :) Neither do you have the problem with every new "Open Source" release that breaks backward compatibility. Regards Rolf Tollerud Java VM version problem is far worse than DLL hell[ Go to top ] Rolf, - Posted by: dimiter dimitrov - Posted on: March 20 2004 22:11 EST - in response to Rolf Tollerud Are you trying to sell namespaces as versioning mechanism? LOL This reminds me on another Microsoft "solution" - to version explicitly the COM interfaces changing their names, so you have D3DDevice3, D3DDevice4, D3DDevice5 and D3DDevice7 (perhaps now they are up to 9 :-) In this regard I somebody said "there is an obvious, simple and wrong solution to each complex problem". cheers, dimiter everything is allowed in a "just" cause- is that smart?[ Go to top ] There are a number of problems with Microsoft becoming too big. If Java and Open Source zealots try to attack this problem with untruths (=lies) like "Microsoft makes inferior products" or Microsoft is criminal" or Microsoft does not have right to respect" then your effort will be in vain. The time has passed when it is possible to manipulate the IT community. The maxim "everything is allowed in a just cause", the just cause being the demise of Microsoft does not work- on the contrary it will backfire. - Posted by: Rolf Tollerud - Posted on: March 21 2004 01:26 EST - in response to dimiter dimitrov The truth is, as everybody capable of independent though can reason out in 5 minutes, is that "Microsoft makes far better products", Microsoft is less criminal than most people and companies" and Microsoft have earned a great deal of respect. If you could only admit that, then we maybe could try to find some solution to the real problem. Regards Rolf Tollerud everything is allowed in a "just" cause- is that smart?[ Go to top ] "There are a number of problems with Microsoft becoming too big" well, at least you show some honesty. Are you refering to all those lame bugs around in almost any version of any software that MS sells? I realize software may be buggy but not THAT BUGGY. - Posted by: Star Trooper - Posted on: March 21 2004 04:57 EST - in response to Rolf Tollerud Microsoft is less criminal than most people and companies :-P Yeah!!! release the kidnapper since he is lees criminal than a murderer!! The logic measure should be to lock both of them up... am i right? (since you love logic so muchhhhhhhhh!!!) Microsoft have earned a great deal of respect not mine, and it is worth my weight in gold!!! Annnyways... let's cut the crap!!! What does the above have to do with the Classpath thing?? Can you not provide enough technical arguments and just go for making noise (Look!! Look!! The Moon is full!!!) Bye M-O-R-O-N (If you think hard you may discover what the acronym stands for) ps. everything is allowed in a "just" cause- is that smart? well, is MS smart for funding the SCO legal demands against Linux users??? everything is allowed in a "just" cause- is that smart?[ Go to top ] - Posted by: dimiter dimitrov - Posted on: March 21 2004 07:23 EST - in response to Rolf Tollerud If Java and Open Source zealots try to attack this problem with untruths (=lies) like "Microsoft makes inferior products" or Microsoft is criminal" or Microsoft does not have right to respect" then your effort will be in vain.Did I said any of these, did you actualy read my post? The time has passed when it is possible to manipulate the IT community.Yet you fail to realize it cause it seems that you try to do just that. The truth is, as everybody capable of independent though can reason out in 5 minutes, is that "Microsoft makes far better products"The truth is that Microsoft makes good enough products, though they are far from perfect. Microsoft is less criminal than most people and companies" and Microsoft have earned a great deal of respect. If you could only admit that, then we maybe could try to find some solution to the real problem.I admit that MS is a big respectable company. They plan their corporate strategy to make the best possible profits. Nothing wrong with all this, until you start reading their user manuals which scream PROPAGANDA. I feel personally offended by the MS style of writing and treating me like a moron who can't tell crap from food. And what was that real problem you say? In my previous post I just made a note that the namespaces are not exactly meant for versioning. progress[ Go to top ] If you read Roger Voss original post it was he that talked about the Java classpath and "DLL hell" in the same breath. But if you want I can divide the issue into two. - Posted by: Rolf Tollerud - Posted on: March 21 2004 07:42 EST - in response to dimiter dimitrov 1) Classpath".By the "idiotic and ridiculous classpath" I am referring to the lame idea of tying the class names and hierarchy with the file system, causing untold grief to hundreds of 1000's of beginners. 2) Version conflict. Java has a greater trouble with version conflicts between different VM's that MS ever had with "DLL hell". But the difference is that MS has solved their DLL problem with .NET while Java still have the same problem. Regards Rolf Tollerud progress?[ Go to top ] - Posted by: dimiter dimitrov - Posted on: March 21 2004 09:05 EST - in response to Rolf Tollerud 1) Classpath".By the "idiotic and ridiculous classpath" I am referring to the lame idea of tying the class names and hierarchy with the file system, causing untold grief to hundreds of 1000's of beginners.That's interesting, because this was one of the few things I actually got right from the first time. Do you suggest that pouring all files in a single directory is better? Btw, if it's your style of working, you don't have to change it - the compiler creates the output structure for you (though I prefer to keep my packages in different folders.) If your grief is about the output being spread in different folders, well this seems to be pretty straightforward compared to registry with CLASSID for each interface and another GUID for each COM server. 2) Version conflict. Java has a greater trouble with version conflicts between different VM's that MS ever had with "DLL hell". But the difference is that MS has solved their DLL problem with .NET while Java still have the same problem.Actually in ideal world, the problem shouldn't exist to begin with. The registry is designed to keep multiple versions of each component if each application instantiated exactly the version it was designed for it would be ok. Unfortunately, in the real world there are just too many versions to test with (anybody know how many versions of comctl are out there?) So the applications instatiate the controls by "short name". All nice and fine, since the APIs are compatible. Except for the times when it doesnt work. As for the .NET solution, most projects I've seen deploy their applications in a single folder. In this folders you have a few files, each of them containing more or less encapsulated set of classes and some metadata. Does it remind you of something? Since the assemblies are in the same folder, there's no need of classpath, but you can easily do this in java if you specify the jar's dependencies in the manifest (yet very few people do this.) Let me tell you what will happen next. After 1-2 major releases of .NET, MS would suggest everybody should move to their brand new super-mega-release of long-horn-net-KSink++/2 featuring the long craved by the community implementation of RFC2324. It would break compatibility just like Java2 did, but hey - your GUI frontend would still be running! Then the more sensible developers would start making fuss about the need to run applications requiring different versions of .NET and (in the best case) MS would catter to their needs by providing an embeddable .NET environment, much like SUN's JRE. And in 12 years, you would get to a state, where each application would package it's own .NET environment just in case to avoid conflicts. Of course I might be wrong and MS could be the only exception to the rule... regards, dimiter most people are dragged screaming and kicking into the future[ Go to top ] Hi Dimiter, - Posted by: Rolf Tollerud - Posted on: March 21 2004 11:07 EST - in response to dimiter dimitrov Mentioning the future? When I do that I usually get bashed thoroughly :) But I don't mind talking about the future. It is incredible to me that you and the "Java camp" in general do not see what is coming up. Theserverside has opened TSS.NET, and Javalobby? Look, Henk van Jaarsveld: "Why not DEVELOPER lobby (without java)" Rick Ross: "I completely agree, but I thought it would be better to move stepwise towards that. I did, however, acquire the domains developerlobby.org and developerslobby.org" How many was at JavaOne last year? Will there be a JavaOne this year? The Gnome desktop? Subject of this thread and called "Java desktop" by Sun for some reason, they are discussing remaking it with C# and GTK#! (Sun will not take lightly to that :-). It will happen. There no less than 3 different ways to obtain an easy migration path and gradual refactoring from Java to .NET, depending on your needs. 1) Precompile the Java code directly into .NET code with IKVM. 2) Open it directly i Visual Studio.NET if it is 1.1 compatible. 3) Use Microsoft's Java-to-C# Converter Why is it not so easy to convert the other way? Because Java is a subset of C# of course. C#/.NET is the natural evolution path for Java developers. Do your kids care about if they are feed with money earned on the java, .NET or any other platform? See the writing in the wall. Regards Rolf Tollerud most people are dragged screaming and kicking into the future[ Go to top ] Rolf, - Posted by: gilles cadignan - Posted on: March 22 2004 04:41 EST - in response to Rolf Tollerud Why don't you just leave us alone and post your comments on TSS.NET ? I don't see any advantage in reading that kind of M$-monomaniac pieces of sh*t. We (i-e the readers of theserverside.com) are waiting for parctical information about our enterprise domain. "Your enterprise Java Community" this is it. Regards, gilles- no arguments, apparently[ Go to top ] gilles: "..information about our enterprise domain" - Posted by: Rolf Tollerud - Posted on: March 22 2004 06:41 EST - in response to gilles cadignan The title of this thread is: "Future of desktop development: Java, Mono, C++, ?" Regards Rolf Tollerud the only one who screams and kick is you...[ Go to top ] Do your kids care about if they are feed with money earned on the java, .NET or any other platform? Ok, if your kid DOES NOT care about having his/her baby bottle funded with java-money... why are you so histerycal? if you baby DO care about .NET-money... shouldn't you be discussing at tss.net? why arent you there? lack of interesting stuff to share with your MS-peers? - Posted by: Star Trooper - Posted on: March 22 2004 07:14 EST - in response to Rolf Tollerud We know you dont like Java and are teenager-like fan of MS. OK, good on you. Why do you keep coming here? Is it rational? Do you come here just for the satisfaction of proving yourself able to write and moving your lips at the same time? Again... Does your baby care? Would he be proud of you? So buddy, come easy, relax, we do need sometimes a clown around to make us laugh from time to time since our WORK is quite demanding and stressful. But you aren't that clown. Sorry, you are doing your best but you aren't. Just get over it. If you want to share technical arguments in a civilized manner, OK, YOU ARE WELCOME. Otherwise, You will be just our second-rate troll. I just wonder why TSS.COM does not cut you off DESPITE ALL PEOPLE REQUESTING THEM TO DO SO. Maybe... some vested interest? Now...back to work!! Classpaths, beginners, versioning[ Go to top ] Rolf, - Posted by: Gerry G - Posted on: March 22 2004 11:01 EST - in response to Rolf Tollerud Yes, classpath does cause some *beginners* to trip up when they first start programming in Java, but that doesn't mean it's bad, just that it's not obvious right away and it needs to be learned. I really don't think the Microsoft's scheme is any better, and if you ask me there's even more to learn (more APIs, the registry, more layers, etc.) with more possibilities of trip-up. And it's not just the "DLL hell" problem that Microsoft has had, but registry issues, cleanup of files scattered around the HD, and broken shortcuts. Taken as a whole, in my personal experience I've had much more trouble with poorly written software for Windows than I've had with Java. That might be because I know Java well enough, but when I do have problems the fix for Java software is usually much easier than the fix for Microsoft software. I have versioning conflicts with both Java and Microsoft. But it's much more likely that I can run a Java 1.0 GUI client program on my PC today than a GUI client program written for Windows 95. Yes, Java does have some issues with versioning, and so does Microsoft-based programs, but it's less an issue with the design of the technology than the capability of the programmer to write his/her software correctly. And with Java you can usually fix the problem very quickly with a tweak or download of the appropriate version of a library, whereas with Windows software (.NET or otherwise) you have registry cleanup to deal with (if you don't uninstall the software in order to 'fix' it), and most likely a rebuild of the software. And there's likely a lot more dependecies to deal with than with a Java program, which increases complexity, as well as more APIs to learn/remember/master. Java has two good things going for it right now -- server-side java using .WAR and .EAR files significantly reduces deployment issues related to classpath; and Java WebStart (for client-side software) now appropriately manages both JVMs and software versions without resorting to a global registry that is difficult if not impossible to manage by hand. And I don't think that Miguel's process of getting a Java client written using Mono/IKVM was easier than the standard Java way. Plus, you end up with a .EXE file and you're stuck -- if you want to patch the software you have to rebuild and replace the whole thing or use a complicated patching program, either of which is a pain. These are my opinions, of course, and based on my experiences using and programming with Sun-based Java, I feel more comfortable using it and programming with it than I do with Microsoft/.NET/Mono. I'll still investigate and try out Microsoft/Mono now and then and keep myself open to changing my opinions, but at the moment I know where I stand and you haven't managed to give me good enough reasons to think Microsoft does installation/management/versioning any better than Java. I'd rather hear more convincing arguments and descriptive details than statements like "idiotic and ridculous" or "lame idea". Why do you say those things and how can you back them up with useful details? Thanks for your input, though. nothing for those with a faint heart[ Go to top ] Gerry, - Posted by: Rolf Tollerud - Posted on: March 22 2004 12:09 EST - in response to Gerry G It is the current situation that is interesting, will Gnome be done in C#, as Miguel de Icaza wants, or will it be done with Java as Staffan Gustafsson outlines? This is the 10 000 dollar question, a kind of blackmail from Open Source. "If you do not release Java we will use the Microsoft clone". When I started with programming I had no idea that I would be involved in a thriller scheme that makes a Hollywood movie look like an afternoon tea party. A real cliff-hanger! And then some people say that life in IT is boring :) IMO, Open Source has come to roads end. If they choose Java, Linux is almost certain to fail- On the other hand if they choose Linux- then they have to abandon Java! We have to wait for the sequel- try not to bite your nails.. (Who is the helpless heroine in this movie?) Regards Rolf Tollerud Open Source is not at road's end...[ Go to top ] - Posted by: Gerry G - Posted on: March 22 2004 19:26 EST - in response to Rolf Tollerud Open Source has come to roads end. If they choose Java, Linux is almost certain to fail- On the other hand if they choose Linux- then they have to abandon Java!I'm not sure what gave you the impression that Open Source is hanging in the balance. Open Source, Linux, and Java have coexisted peacefully for years and years. I don't expect any of them to suddenly wither and die. And I certainly don't think that a software project like Gnome has the power to make or break Java. Yeah, there's been some ranting from GPL advocates over licensing, but they don't have any power over anyone to make people use or not use a software technology, much less one like Java that has proven itself to be successful in both the Commericial and Open Source realms and appears to be growing. What the GPL advocates are hoping to gain is more mindshare for the GPL that they would hope to use as leverage on even more technologies and software projects. You have to remember that the GPL was created with as much a political agenda as a practical one. But the "Open Source" community seems to be able to thrive on more licenses than just GPL or even GPL/LGPL, so GPL is certainly not the be-all/end-all of software licensing. And your logic escapes me -- how are Java and Linux equivalent and why on earth would "Open Source" want to choose between them? Saying that the Gnome project is the totality of the "Open Source" community would be a faulty assumption. There's as much "Open Source" software running on Windows and MacOS as there is on Linux. KDE and other desktops are just as popular as Gnome is. Java runs on all three major desktop OSes. Linux is *not* a desktop environment, Gnome is *not* Linux, Java is just a programming language, and you're doing apples-to-oranges-to-strawberries comparisons. And you still haven't submitted any proof or evidence of your assertions. Do you have links to news articles, interviews, market research, project roadmaps, SourceForge statistics, or anything else to back up your claims. Here's a couple of SourceForge stats on programming language:. Java has 11,000+ projects, and C# has about 1300, which is less than both Visual Basic and Assembly, of all things. C and C++ of course have more than Java with about 13,000 each, but that's only about 15% difference. GPL has a 70% share of the "OSI Approved" licences on SourceForge (), so it's definitely strong. Now if you look at projects by Operating System (), things get interesting. The POSIX family, which inlucdes Linux, has 31,000 projects, Microsoft has 19,000, and 'OS-Independent' has 19,000, too. If you look into POSIX, you'll find that Linux has a hair under 20,000 projects -- nearly the same as Microsoft and 'OS-Independent'. Oh, BTW, did you notice that the most active project on SourceForge this week is "Azureus - BitTorrent Client - A Java Based BitTorrent Client"? #7 is "Compiere ERP + CRM Business Solution", which is also a Java-only project. #14 is a Java-only game. Looking across the list, it's evident there is a large mix of project types and technologies. #20 is a Visual Basic tool! But, of course, SourceForge isn't the only place where open-source projects live. Plenty live on their own websites. Java.net has a SourceForge-like setup with CVS and everything, and many projects choose to live there instead of SourceForge. And all of this ignores the commercial side of things, which has a lot to do with just how large a user and programmer base any one language, OS, or technology has. I'm sure you've done your homework, though, Rolf, and have plenty of details to add to the conversation. I'd rather hope you're not just making outrageous statements but have plenty of useful points to add to a constructive discussion. Linux will be promoted at the cost of Java[ Go to top ] Hi Gerry, - Posted by: Rolf Tollerud - Posted on: March 22 2004 22:21 EST - in response to Gerry G Didn't you read Staffan Gustafsson post? We need to move on It is not only about Gnome, Gnome is only the start. Staffan: "In 2-4 years, Microsoft will deliver an all/mostly managed environment, giving the developers on the MS platform a high productivity environment. Both on the client side(avalon), command line (monad) and on the server side (indigo/asp.net)" It is all about how Linux shall meet this challenge. Do you really think Java will be up to it? Icaza doesn't appear to think so. Gerry: "Java has 11,000+ projects, and C# has about 1300" You just forgot a little important bit of info, that Sourceforge is in essence an "enemy country" to MS developers, and that they have their own "Sourceforge" at Workspaces. Why don't you add the 5805 .NET projects (more then 4000 C#) at Workspaces in just one year to the list? Regards Rolf Tollerud Workspaces point taken, SourceForge not enemy territory[ Go to top ] Steffan makes some interesting points, I agree. But I think he overstates the threat of Microsoft somewhat. And trying to galvanize the Open Source community to back just one technology just won't happen because it's nature is to be driven by diversity. Steffan is thinking from a vendor or standards-group viewpoint, which just won't happen in Open Source land. It's like pushing string, herding cats, etc. - Posted by: Gerry G - Posted on: March 23 2004 11:26 EST - in response to Rolf Tollerud It may just be me but I don't think that Longhorn will be any more successful than Windows XP, and my bet is early on it will be less so. With each OS Microsoft has had an increasingly tough time trying to get people to upgrade or to select a new PC that has the new OS installed instead of the old OS they're replacing. And the fact that Longhorn will likely, from what I understand, not be natively backward compatible with WinXP and prior, that means people will have to run their older apps in emulation mode (MS owns Virtual PC) or purchase/upgrade their apps to a native Longhorn version. That is going to give people a lot more pause, and they'll be weighing their options and their pocketbook much more carefully. But I'm sure Longhorn will have some really cool technology and lots of whiz-bang features. I would never suggest that ANY OS be banned outright, and I agree with the general open-source philosohy of "let the market decide", so I'm all for giving Longhorn a shot at the limelight. I just think Microsoft has a much tougher job this time. In the interim, I agree that Linux has a long ways to go to be considered a better option, but I think that by the time Linux will have made itself quite comparable. Investment by companies and government into Linux is increasing every day. Sun just announced they plan a 3D desktop environment, obviously planned to rival Longhorn. Novell is back in the game with SuSE and is hoping to steal Microsoft's playbook and become even more of a player than Red Hat. I didn't know about the Workspaces community. But I did say previously that there are lots of personal websites and other places where Open Source development takes place. But please don't ignore the fact that I called out the fact that there are 19,000+ Windows projects, and about the same number of OS-independent projects, a large amount of which we could safely predict run well on Windows. I'd hardly call SourceForge enemy territory for Windows/MS developers. There's certainly no restrictions or drawbacks for Windows/MS projects being hosted there, unless you think of CVS as one (which some do), and not any text on the site that implies they aren't welcome, either. Danger of Longhorn[ Go to top ] - Posted by: Staffan Gustafsson - Posted on: March 23 2004 12:59 EST - in response to Gerry G ... And the fact that Longhorn will likely, from what I understand, not be natively backward compatible with WinXP and prior, that means people will have to run their older apps in emulation mode ...This is (unfortunately??) not correct. Win32 apps will run just as they do on XP today. They just don't get the wiz-bang features of Longhorn. Gerry, you are correct that I don't have a full understanding of how the thinking goes in the Open Source community. What I do see is that Microsoft, for the first time ever in my opinion, is getting it's act together. End to end. And I strongly beleive that both Linux and java as technologies has their futures threatened by Microsoft dominance, if they don't provide a compelling alternative to Longhorn. Maybe I do overstate the danger of Longhorn, but I'd hate to be proven correct just by us not getting our act together in time... Regards, Staffan nothing for those with a willy[ Go to top ] Rolf: (Who is the helpless heroine in this movie?) - Posted by: Cameron Purdy - Posted on: March 22 2004 23:22 EST - in response to Rolf Tollerud Um ... let me guess ... it's you? Rolf, you really confuse me sometimes, but now I see where it's coming from. Peace, Cameron Purdy Tangosol, Inc. Coherence: Clustered JCache for Grid Computing! the poor girl is lying on the track with the train coming[ Go to top ] Oh then. I see I have to spell it out. I forgot that not everybody can imagine that it is possible to see things from both sides! Well, from your side it looks like this: - Posted by: Rolf Tollerud - Posted on: March 23 2004 02:40 EST - in response to Cameron Purdy Open Source is the common decent people MS is the villain the has lured the people Sun is the Hero Java is the Heroine. So I give you this task, Describe how it looks like from my point of view :) Regards Rolf Tollerud everything is allowed in a "just" cause- is that smart?[ Go to top ] - Posted by: gerhard haak - Posted on: March 22 2004 08:33 EST - in response to Rolf Tollerud Microsoft is less criminal than most people and companiesSurely, Rolf, you are kidding. Most companies haven't been found guilty of serious violations of the Sherman Antitrust Act! Any company with a slogan such as "Dos ain't done 'till lotus don't run" has serious ethical issues. Remember the Frog and the Scorpion fable - its in their nature. sometimes I have scruples..[ Go to top ] You know, now and then I take pity on the Java community, what a terrible year for Java and shall I come here and bash the already lying? Why did I hack upon the Java class path- its stupid but after all only a bagatelle, Rolf you are mean.. - Posted by: Rolf Tollerud - Posted on: March 22 2004 08:54 EST - in response to gerhard haak But then comes a post like yours again and all reservations pass away! No mercy. Regards Rolf Tollerud hahaha, you just have leaces....[ Go to top ] shall I come here and bash the already lying? poor buddy, he does not have anything better to do than "bashing the one already dying". Ethical? - Posted by: Star Trooper - Posted on: March 22 2004 09:32 EST - in response to Rolf Tollerud I just wonder if his curremt behaviour is caused by some horrible thing that happened when he was a kid... <Buddy, get over it... Santa claus DOES NOT exist> No mercy Why TSS.COM DOES NOT CUT THIS TROLL OFF? mmmm... VERY, VERY suspicious... sometimes I have scruples..[ Go to top ] Hey Rolf, - Posted by: Henrique Steckelberg - Posted on: March 22 2004 10:28 EST - in response to Rolf Tollerud how is your java-like MVC web framework open source project going at gotdotnet's "workspaces"? Any news about it? Any source code already posted? ;) Good to see that after bashing open source community so much, your mind has changed and you decided to join the fun! I am sure you're going to change your mind about other things too. Welcome to the real world! Regards, Henrique Steckelberg as if the .NET Runtime were[ Go to top ] any different. - Posted by: Patrick Schriner - Posted on: March 22 2004 07:34 EST - in response to Rolf Tollerud I think it´s even worse: Like not implementing features on Windows XP, but still including them in the Runtime... We need to move on[ Go to top ] I think Havoc's thoughts must be taken under serious consideration. - Posted by: Staffan Gustafsson - Posted on: March 22 2004 08:08 EST - in response to Arnaldo Riquelme I think there are a number of things to consider. 1. Productivity in a managed language is much higher than it is in for example C. 2. In 2-4 years, Microsoft will deliver an all/mostly managed environment, giving the developers on the MS platform a high productivity environment. Both on the client side(avalon), command line (monad) and on the server side (indigo/asp.net) 3. To compete with that Linux, as a platform, need an equally productive environment. If that is Java or C# doesn't might not matter from this perspective (although I hope for java, as I'm working for a company building one of the JVM's. 4. Choosing C# might help Microsoft in some way, by removing the "only runs on windows" argument. I personally believe it is critical for the Linux community to realize the need for a managed main stream programming language. In the long run, it's the only way to stay competitive compared to the Microsoft platform. I java is choosen, (or maybe before it can be) there are a number of obstacles to overcome. Licencing (The licence from Sun does not allow us to go Open Source for example) is one big issue. Another is that a lot more expressiveness might be needed in some cases (the java.lang.Process does not even give you access to a pid, for example). This however should be easy for the Open Source community to overcome. But I ask all of you to not just dismiss what Microsoft is doing in Longhorn as vaporware, but to start perceive is as something that requires action from our part. I've been tracking thier longhorn plans fairly closely, and it is _not_ something we can safely ignore. Start asking "What counter measures should we put up?" My answer would be to integrate the Java platfrom more closely in the Linux platform, and provide a unified Java/Linux alternative to what Microsoft provides. This is not a path without obstacles, but it is the best I can come up with. Regards, Staffan
http://www.theserverside.com/discussions/thread.tss?thread_id=24562
CC-MAIN-2015-35
refinedweb
10,431
69.52
Red Hat Bugzilla – Bug 467224 tix can't cope with hex colors Last modified: 2008-11-05 23:06:03 EST Description of problem: when running following little example, I get error 'unknown color name "{#d9d9d9}"' #!/usr/bin/python from Tkinter import * import Tix root = Tix.Tk() win = Frame() win.pack() shl = Tix.ScrolledHList(win) Version-Release number of selected component (if applicable): tix-8.4.2-5.fc9.x86_64 tkinter-2.5.1-26.fc9.x86_64 tcl-8.5.2-2.fc9.x86_64 How reproducible: always Additional info: seems to be fixed upstream, see tix-8.4.2-6.fc9 has been submitted as an update for Fedora 9. tix-8.4.2-6.fc9 has been pushed to the Fedora 9 testing repository. If problems still persist, please make note of it in this bug report. If you want to test the update, you can install it with su -c 'yum --enablerepo=updates-testing update tix'. You can provide feedback for this update here: tix-8.4.2-6.fc9 has been pushed to the Fedora 9 stable repository. If problems still persist, please make note of it in this bug report.
https://bugzilla.redhat.com/show_bug.cgi?id=467224
CC-MAIN-2018-17
refinedweb
194
68.47
I feel it is time for some updates on my game as I really did not say much about it. So I would like to introduce you to the concept of a game I have been wanting to make for years. The game is called Orbis. The general idea behind the game is Asteroids with a twist. So ultimately I will be making a Asteroids clone with a few twists to spice up an old game I use to love to play at the Arcades or even on the Atari!!!! I am not sure if I am ready to really detail out all the features quite yet as I am not sure exactly what will make it into the game just yet. So we will leave it at Asteroids with a twist for now till I flesh out more of the concepts. I also decided to make some tool changes for the game. I decided I would stay with C++ even though after my first foray back into C++ I wanted to scream back to C. Ultimately I ditched QTCreator and MinGW. For some reason I was having issues with MinGW on Windows 8 so I decided to install Visual Studio 2013 Express Windows Desktop edition. I must say I am really impressed. I also decided to stick with SFML. To use SFML with VS2013 I needed to rebuild the library and building SFML 2.1 did not work out to well so I ended up going with the Git repo and building from there. So far so good. So here is what my new environment looks like. - Visual Studio 2013 Express Windows Desktop - CMake 2.8.12.1 - SFML (master) - Git Version Control (on BitBucket) Now a bit on the progress. Not much honestly. Much of my time is taken up by school and on top of it I am trying to get back into the groove of C++ after spending a few years in the world of C. So bear with me we will get there. The first task I really wanted to get done was make sure SFML actually worked and it did. From there I felt the most important thing I should get out of the way is Resource Management because this is something I really can't have a game without. Sadly this was probably not the best place to start when I am trying to get my C++ groove back but none the less I think I was successful. My goal here was to put together a cache for my resources. This will be the core of ensuring all resources are properly freed up when no longer needed and will also be the core of my TextureAtlas system which is what I will be building next. I really needed this to be generic because SFML has many types of resources. So this resource cache is built to handle sf::Image, sf::Texture, sf::Font, and sf::Shader. There may be a few things but this is what I can think of off the top of my head. It will not handle music because sf::Music handles everything very differently so I will need to take a different approach for music. I also wanted to ensure that the memory of the cache was handled automatically. Since I am not in the world of C and the fun void* generic programming world I figured I might as well try to use some C++11. So my first foray into C++ after years and years of not touching it includes Templates, and some C++11. In other words AHHHH MY EYES!!!! Sorry for no comments but here is the code I came up with using unique_ptr for the resource which gets stored in a map. The actual key to the map will be implemented as a enum elsewhere so I can index into the cache to get what is needed. There are 4 methods. 2 load_resource methods and 2 get_resource methods there is no way to remove a resource at this point as I am not sure I need it yet for this game at least. One load_resource takes care of the basic loadFromFile. sf::Shader as a extra param and so can sf::Texture so the overloaded load_resource takes care of that. get_resource just returns the resource and there is a overloaded version to be called in case the cache is a const. Again sorry for no comments I feel the code is simple enough to not need any. #ifndef RESOURCECACHE_H #define RESOURCECAHCE_H #include <memory> #include <map> #include <string> #include <stdexcept> template <typename Resource, typename ResourceID> class ResourceCache { public: void load_resource(ResourceID id, const std::string& file); template <typename Parmeter> void load_resource(ResourceID id, const std::string& file, const Parmeter& parameter); Resource& get_resource(ResourceID); const Resource& get_resource(ResourceID) const; private: std::map<ResourceID, std::unique_ptr<Resource>> resources; }; template <typename Resource, typename ResourceID> void ResourceCache<Resource, ResourceID>::load_resource(ResourceID id, const std::string& file) { std::unique_ptr<Resource> resource(new Resource()); if (!resource->loadFromFile(file)) throw std::runtime_error("ResourceCache::load_resource: Failed to load (" + file + ")"); resources.insert(std::make_pair(id, std::move(resource))); } template <typename Resource, typename ResourceID> template <typename Parameter> void ResourceCache<Resource, ResourceID>::load_resource(ResourceID id, const std::string& file, const Parameter& parameter) { std::unique_ptr<Resource> resource(new Resource()); if (!resource->loadFromFile(file, parameter)) throw std::runtime_error("ResourceCache::load_resource: Failed to load (" + file + ")"); resources.insert(std::make_pair(id, std::move(resource))); } template <typename Resource, typename ResourceID> Resource& ResourceCache<Resource, ResourceID>::get_resource(ResourceID id) { auto resource = resources.find(id); return *resource->second; } template <typename Resource, typename ResourceID> const Resource& ResourceCache<Resource, ResourceID>::get_resource(ResourceID id) const { auto resource = resources.find(id); return *resource->second; } #endifHere is the main.cpp file which I used for my functional test as well so you can see it in use. #include <SFML/Graphics.hpp> #include "ResourceCache.h" enum TextureID { Background }; int main() { sf::RenderWindow window(sf::VideoMode(250, 187), "SFML Works!"); ResourceCache<sf::Texture, TextureID> TextureCache; TextureCache.load_resource(TextureID::Background, "./Debug/background.png"); sf::Texture bkg = TextureCache.get_resource(TextureID::Background); sf::Sprite bkg_sprite(bkg); while (window.isOpen()) { sf::Event event; while (window.pollEvent(event)) { if (event.type == sf::Event::Closed) window.close(); } window.clear(); window.draw(bkg_sprite); window.display(); } return 0; }Like stated this is my first foray back into C++ so feel free to let me know if you see anything obviously wrong with the ResourceCache class. Much appreciated in advance. Until Next Time. Hey. I just a question about the code. I am curious about why you chose to use a unique pointer to store the resource instead of, say, a shared pointer. It seems to me that the resource is intended to be shared so a shared pointer would seem right, and then you could provide weak pointers to those components that wanted to use the resource. In that way, if the resource cache is deleted the components are not left with dangling references. Also, you probably want to check the results of those find() calls but you already know that
http://www.gamedev.net/blog/468/entry-2258936-a-bit-about-my-game-and-some-slow-progress/?k=880ea6a14ea49e853634fbdc5015a024&setlanguage=1&langurlbits=blog/468/entry-2258936-a-bit-about-my-game-and-some-slow-progress/&langid=2
CC-MAIN-2015-11
refinedweb
1,172
63.8
public class Solution { public int longestValidParentheses(String s) { return ltr(s, 0, s.length()); } public int ltr(String s, int start, int end) { int left = start; int openLeft = 0; int max = 0; for(int i = start; i < end; i++) { if(s.charAt(i) == '(') openLeft++; else openLeft--; if(openLeft < 0) { int length = i - left; if(length > max) max = length; left = i + 1; openLeft = 0; } } if(openLeft == 0) { int length = end - left; if(length > max) max = length; } { int length = rtl(s, left, end); if(length > max) max = length; } return max; } public int rtl(String s, int start, int end) { int right = end; int openRight = 0; int max = 0; for(int i = end - 1; i >= start; i--) { if(s.charAt(i) == ')') openRight++; else openRight--; if(openRight < 0) { int length = right - (i + 1); if(length > max) max = length; right = i; openRight = 0; } } if(openRight == 0) { int length = right - start; if(length > max) max = length; } return max; } } I am pretty sure the code can be shorten to maybe half of the current size...... but why bother... The idea is very simple, 2 iteration at most. First time from left to right, second time from right to left. The second iteration is only needed if the first iteration has ends with unclosed left bracket. so when the first iteration ends (left to right), we have 2 scenarios: 1, all left brackets are closed (every left bracket matches a right bracket) 2, some left brackets are open (couldn't find enough right brackets to finish them). In the first case, things are perfect, we just return the max value. In the second case, we start the second iteration from right to left. This time, we try to find left brackets to match right brackets. Remember, the condition to start the second iteration is that we are having more left brackets than right brackets. Therefore, we know each right bracket will guarantee to find a left bracket to form a pair. // LeetCode, Longest Valid Parenthese // 两遍扫描,时间复杂度O(n),空间复杂度O(1) // @author 曹鹏() class Solution { public: int longestValidParentheses(string s) { int answer = 0, depth = 0, start = -1; for (int i = 0; i < s.size(); ++i) { if (s[i] == '(') { ++depth; } else { --depth; if (depth < 0) { start = i; depth = 0; } else if (depth == 0) { answer = max(answer, i - start); } } } depth = 0; start = s.size(); for (int i = s.size() - 1; i >= 0; --i) { if (s[i] == ')') { ++depth; } else { --depth; if (depth < 0) { start = i; depth = 0; } else if (depth == 0) { answer = max(answer, start - i); } } } return answer; } }; try this test case '()(()))(' , i think the answer should be 2 ,but the program gives me a 6 Thank you for explaining this to zhiyuan.MA.chn. I think 6 is the right answer for the given input. hi I am agree with you! I used two scan for the input : one from left to right and another from right to left. the time complexity is O(n) and space complexity is O(1) Oh, the post code(@beijunyi) is not similar with @Qili1 , Not SAME logic !!! please pay attention ! the first's backward's interval is [postForwardLastStartPos, end] , while the second use [0, end] ! They are all right , but you can't mess the interval ! because I got errors for mess it. ... Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect.
https://discuss.leetcode.com/topic/7376/why-people-give-conclusion-that-this-cannot-be-done-with-o-1-space-my-ac-solution-o-n-run-time-o-1-space
CC-MAIN-2017-47
refinedweb
555
77.67
> On Wednesday 26 February 2003 10:48 pm, Mark Bucciarelli wrote: > > > Stupid question ... what's the proper way to retrieve the value of the name > > attribute using pnode.getAttributeNS()? > > Nope. To get at an attribute without a namespace prefix, you must use > getAttribute, not getAttributeNS. Noooooooooo! :-) Please do not mix up getAttribute and getAttributeNS. If you use namespaces, stick to the latter. Your problem, as I mentioned, was that you should have been using xml.dom.EMPTY_NAMESPACE, which is None, not ''. > Another interesting thing I found is if an attribute does have a prefix, then > the xmlns declaration must be in a parent tag--it cannot be in the same tag > as the attribute. Hmm? Not true at all. > So, there is no way to get the value of the name attribute from the following > xml: > > > Maybe this isn't legal xml. Course it is, and works just fine for me. If you insist on using 4DOM: >>> from xml.dom.ext.reader import Sax2 >>> reader = Sax2.Reader() >>> doc = reader.fromString("<root><test xmlns: </root>") >>> test_node = doc.documentElement.firstChild >>> test_node.getAttributeNS(u'', u'name') u'abc' BTW, using 4Suite's Domlette, which is *much* faster than 4DOM you would use: >>> from Ft.Xml.Domlette import NonvalidatingReader >>> doc = NonvalidatingReader.parseString("<root><test xmlns:</root>","file:bogus.xml") >>> test_node = doc.documentElement.firstChild >>> test_node.attributes[(u'', u'name')].value u'abc' May I suggest my Python/XML articles? They cover some of the issues you've run into: There is also a lot of relevant stuff on my Akara site: -- Uche Ogbuji Fourthought, Inc. The open office file format - 4Suite Repository Features - XML class warfare - See you at XML Web Services One -
https://mail.python.org/pipermail/xml-sig/2003-March/009083.html
CC-MAIN-2017-39
refinedweb
278
59.9
There are some changes planned in how things are going to be done around here. I want to streamline things a bit more, and make it easier for people to get involved. One of the major changes is to move all JeeLabs open source software to GitHub: The main reason for doing this, is that it makes it much easier for anyone to make changes to the code, regardless of whether these are changes for personal use or changes which you’d like to see applied to the JeeLabs codebase. For the upcoming Arduino IDE 1.0 release (which appears to break lots of 0022 projects), I’ve moved and converted a couple of JeeLabs libraries so far: - Ports and RF12 have been merged into a single new library called JeeLib - the EtherCard and GLCDlib libraries have been moved without name change - all *.pdefiles have been renamed to *.ino, the new 1.0 IDE filename extension - references to WProgram.hhave been changed to Arduino.h - the return type of the write()virtual function has been adjusted - some (char)casts were needed for byteto fix unintended hex conversion If you run into any other issues while using this code with the new Arduino IDE 1.0beta2, let me know. So what does all this mean for you, working with the Arduino IDE and these JeeLabs libraries? Well, first of all: if you’re using IDE 0022, there’s no need to change anything. The new code on GitHub is only for the next IDE release. The subversion repositories and ZIP archives on the libraries page in the Café will remain as is until at least the end of this year. New development by yours truly will take place on GitHub, however. This applies to all embedded software as well as the JeeMon/JeeRev stuff. The new JeeLib should make it easier to use Ports and RF12 stuff – just use #include <JeeLib.h>. Note that you don’t need to sign up with GitHub to view or download any of the JeeLabs software. The code stored there is public, and can be used by anyone. Just follow the zip or tar links in the README section at the bottom of the project pages, or use git to “clone” a repository. To follow all my changes on GitHub, you can use this RSS feed. To follow just the changes to JeeLib in slightly more detail, this feed should do the trick. One of the features provided by GitHub is “Issue Tracking”, i.e. the ability to file bugs, comment on them, and see what has been reported and which ones are still open. This too is open to anyone, but you have to be signed up and logged in to GitHub to submit issues or discuss them. For questions and support, please continue to use the JeeLabs forum as before. But if you’re really pretty sure there’s a bug in there, you’re welcome to use the issue trackers (as you know, Mr Murphy and me tend to sprinkle bugs around from time to time, just to keep y’all sharp and busy ;) And if you’d like to suggest a change, consider forking the code on GitHub and submitting a “pull request”. This is GitHub-speak for submitting patches. Small changes (even multiple ones) are more likely to go in than one big sweeping request to change everything. I’m open for suggestions (in fact, I’ve got a couple of patches from people still waiting to be applied), but please do keep in mind that code changes often imply doc changes, as well as making sure nothing breaks under various scenarios. All in all, I hope that GitHub will help us all keep better track of all the latest changes to the software, work together more actively to fix bugs and add more functionality. I haven’t heard of GitHub ever going offline, but if it ever does, I’ll make sure that the latest code is also available from the JeeLabs servers as backup. Update – Here’s an excellent article on how to collaborate via Git and GitHub. Hello, it seems the holidays were not only spent on bike and sightseeing. A lot of thinking (perhaps due to be offline) took place. Congratulations.
http://jeelabs.org/2011/09/03/the-bits-have-moved/
CC-MAIN-2015-35
refinedweb
715
77.47
Sample Source Code: ParallelDots Intent Analysis Followers ParallelDots Intent Analysis tells whether the underlying intention behind a sentence is an opinion, news, marketing, complaint, suggestion, appreciation, or a query. The type of intention is returned is based on the ParallelDots proprietary data set. This code sample is for use with the ParallelDots Intent Analysis API. This code sample is for use with the ParallelDots Intent Analysis API. from paralleldots.config import get_api_key import requests import json def get_intent( text ): apikey = get_api_key() if not apikey == None: if type( text ) != str: return "Input must be a string." elif text in [ "", None ]: return "Input string cannot be empty." url = "" r = requests.post( url, params={ "apikey": apikey, "text": text } ) if r.status_code != 200: return "Oops something went wrong ! You can raise an issue at." r = json.loads( r.text ) r["usage"] = "By accessing ParallelDots API or using information generated by ParallelDots API, you are agreeing to be bound by the ParallelDots API Terms of Use:" return r else: return "API key does not exist"
https://www.programmableweb.com/sample-source-code/paralleldots-intent-analysis/followers
CC-MAIN-2017-34
refinedweb
170
57.57
C++/Simple Math Contents Simple C++ Math[edit] Math in C++ is very simple. Keep in mind that C++ mathematical operations follow a particular order much the same as high school math. For example, multiplication and division take precedence over addition and subtraction. The order in which these operations are evaluated can be changed using parentheses. Adding, Subtracting, Multiplying and Dividing[edit] #include <iostream> using namespace std; int main() { int myInt = 100; myInt = myInt / 10; //myInt is now 10 myInt = myInt * 10; //myInt is back to 100 myInt = myInt + 50; //myInt is up to 150 myInt = myInt - 50; //myInt is back to where it started myInt = myInt + 100 * 2; // myInt is now 300 because multiplication takes precedence over addition myInt = (myInt + 100) * 2; // myInt is now 800 because we have changed the precedence using parentheses myInt -= 10; // myInt is now 790 because this line is the short-hand for myInt = myInt - 10; myInt = myInt % 100; // myInt is now 90 because % is modulus operator cout << myInt << endl; cin.get();//Taking one character or waiting after displaying output return 0; //Passing message to the Operating System saying that the code has been successfully execute. } //C++ arithmetic operators // + (add) // - (subtract) // / (divide) // * (multiply) // % (modulus division) 4 % 5 = 4 the remainder is returned 6 % 5 = 1 // += (add and assign) // -= (subtract and assign) // /= (divide and assign) // *= (multiply and assign) // %= (mod and assign) C++ math library[edit] The C++ math library is actually C's math library. It is easy to use and is accessed by including cmath. #include <cmath> Math functions[edit] Now that we have the C math library let's use some neat functions. Square Root[edit] 1 #include <iostream> 2 #include <cmath> 3 4 using namespace std; 5 6 int main() 7 { 8 float myFloat = 0.0f; //the f (requires decimal) tells the compiler to treat this real number as a 32 bit float 9 //and not as a 64 bit double. this is more of a force of habit than a requirement 10 cout << "Enter a number. ENTER: "; 11 cin >> myFloat; 12 cout << "The square root of " << myFloat << " is " << sqrt(myFloat) << endl; 13 cin.clear(); 14 cin.sync(); 15 cin.get(); 16 17 return 0; 18 } Powers[edit] 1 #include <iostream> 2 #include <cmath> 3 4 using namespace std; 5 6 int main() 7 { 8 float myFloat = 0.0f; 9 10 cout << "Enter a number. ENTER: "; 11 cin >> myFloat; 12 cout << myFloat << " in the power of 2 is " << pow(myFloat, 2) << endl; 13 cout << myFloat << " in the power of 3 is " << pow(myFloat, 3) << endl; 14 cout << myFloat << " in the power of 0.5 is " << pow(myFloat, 0.5) << endl; 15 cin.clear(); 16 cin.sync(); 17 cin.get(); 18 19 return 0; 20 } Trigonometry[edit] Note: Trigonometric functions in cmath use RADIANS. #include <iostream> #include <cmath> using namespace std; int main() { float myFloat = 0.0f; cout << "enter a number. ENTER: "; cin >> myFloat; cout << "sin(" << myFloat << ") = " << sin(myFloat) cout << "cos(" << myFloat << ") = " << cos(myFloat) << endl; cout << "tan(" << myFloat << ") = " << tan(myFloat) << endl; cin.clear(); cin.sync(); cin.get(); return 0; }
https://en.wikiversity.org/wiki/C%2B%2B/Simple_Math
CC-MAIN-2017-51
refinedweb
506
70.63
The best way to learn React is to re-create Hello World but for React. Let’s learn all there is to know about building a simple Hello World app in React! What We’re Building This tutorial will thoroughly explain everything there is to know about creating a new React app in the quickest way possible. If you’re someone who wants to learn how to spin up a brand new React app, then this tutorial is for you. I’ve summarized the most important details for each step under each of the headings so you can spend less reading and more coding. By the end of this React Hello World tutorial you’ll have a running React app and have learned how to do the following: - Generate a New React App Using Create React App - Run the React App - Understand the Folder Structure - Install Additional React Libraries - Create a Hello World React Component - Use the Hello World React Component - Wrapping Up Generate a New React App Using Create React App - Create React App (CRA) is a tool to create a blank React app using a single terminal command. - CRA is maintained by the core React team. Configuring a modern React app from scratch can be quite intricate, and requires a fair amount of research and tinkering with build tools such as Webpack, or compilers like Babel. Who has time for that? It’s 2019, so we want to spend more time coding and less time configuring! Therefore, the best way to do that in the React world is to use the absolutely fantastic Create React App tool. Open up your terminal and run the following command: npx create-react-app hello-world This generates all of the files, folders, and libraries we need, as well as automatically configuring all of the pieces together so that we can jump right into writing React components! Once Create React App has finished downloading all of the required packages, modules and scripts, it will configure webpack and you’ll end up with a new folder named after what we decided to call our React project. In our case, hello-world. Open up the hello-world directory in your favorite IDE and navigate to it in your terminal. To do that, run the following command to jump in to our Hello World React app’s directory. cd hello-world Run the React App - Start the React app by typing npm start into the terminal. You must be in the root folder level (where package.json is!). - Changes made to the React app code are automatically shown in the browser thanks to hot reloading. - Stop the React app by pressing Ctrl + C in the terminal. I know what you’re thinking: “How can I jump straight into a React component and start coding?”. Hold your horses! 🐴 Before we jump into writing our first Hello World React component, it’d be nice to know if our React app compiles and runs in the browser with every change that we make to the code. Luckily for us, the kind folks over at Facebook who develop Create React App have included 🔥 hot reloading out of the box to the generated React project. Hot reloading means that any changes we make to the running app’s code will automatically refresh the app in the browser to reflect those changes. It basically saves us that extra key stroke of having to refresh the browser window. You might not think hot reloading is important, but trust me, you’ll miss it when it’s not there. To start our React app, using the same terminal window, run the following command: npm start Let’s back pedal for just one moment because it’s important that we understand what every command does and means. So, what on earth does npm start mean? Well, npm stands for Node Package Manager which has become the de facto package manager for the web. If all goes well, a new browser tab should open showing the placeholder React component, like so: If for whatever reason the browser does not appear, or the app doesn’t start, load up a new browser window yourself and navigate to:. Create React App runs the web app on port 3000. You can change this if you want to by creating a new file named .env in your root project directory and add the following to it: PORT=3001 You can even access the running React app from another laptop or desktop that’s on the same network by using the network IP address, as shown below: Understand the Folder Structure Open the hello-world project folder in your IDE or drag the whole folder to the IDE shortcut (this usually opens up the project). You’ll see three top level sub-folders: - /node_modules: Where all of the external libraries used to piece together the React app are located. You shouldn’t modify any of the code inside this folder as that would be modifying a third party library, and your changes would be overwritten the next time you run the npm install command. - /public: Assets that aren’t compiled or dynamically generated are stored here. These can be static assets like logos or the robots.txt file. - /src: Where we’ll be spending most of our time. The src, or source folder contains all of our React components, external CSS files, and dynamic assets that we’ll bring into our component files. At a high level React has one index.html page that contains a single div element (the root node). When a React app compiles, it mounts the entry component — in our case, App.js — to that root node using JavaScript. Have you heard the term SPA? (I’m not talking about a place where you relax with two 🥒 cucumbers over your eyeballs). SPA stands for “Single Page App” and it applies to web apps that use a single HTML file as the entry point (hence the term “Single Page”). JavaScript is then handles things like routing or component visibility. Install Additional React Libraries One of the more important files inside of our React project is the package.json file. Think of this file as the React app’s configuration file. It is central to providing any metadata about our project, adding any additional libraries to our project, and configuring things like run scripts. The package.json file for a fresh React app created with CRA looks like this: { "name": "hello-world", "version": "0.1.0", "private": true, "dependencies": { "react": "^16.10.1", "react-dom": "^16.10.1", "react-scripts": "3.1" ] } } For example, instead of writing an entire routing library from scratch, (which would take a very long time) we could simply add a routing library (React Router, for example) to our project by adding it to the package.json file, like so: ... "dependencies": { "react": "^16.10.1", "react-dom": "^16.10.1", "react-router-dom": "^5.1.2", "react-scripts": "3.1.2" }, ... Once you have typed the library name followed by the version of that library that you’d like to install, simply run the npm install command in the terminal, like so: npm install This will download the library (or libraries if you added multiple to the package.json file) and add it to the node_modules folder at the root level of you project. Alternatively, you can add a library to a React project by typing npm install followed by the name of the library you wish to add, into the terminal: npm install react-router-dom Once that library is installed, we can simple import it into any React component. Create a Hello World React Component - A React component is written as either a .JSX file or .JS file. - A React component name and filename is always written in Title Case. - The component file contains both the logic and the view, written in JavaScript and HTML, respectively. - JSX enables us to write JavaScript inside of HTML, tying together the component’s logic and view code. Go ahead and create a new file under the /src directory. We’ll stick to convention and name our new React component HelloWorld.js. Next, type or copy the following code into the file. import React from 'react'; const HelloWorld = () => { function sayHello() { alert('Hello, World!'); } return ( <button onClick={sayHello}>Click me!</button> ); }; export default HelloWorld; This is a very simple React component. It contains a button, which when clicked, shows an alert that says “Hello, World!”. Yes, it’s trivial, but the Hello World React component above is actually a really good example of a first React component. Why? Because it has both view code and logic code. Let’s explore the view code first, inside of the return statement: ... return ( <button onClick={sayHello}>Click me!</button> ); ... This component contains one button that when clicked calls a function named sayHello which is declared directly above the return statement. Any function that this component’s View code calls will likely be inside of the same component code. I say it’s likely because there are occasions where you may reference functions that are contained outside of the Component, for example, if passed down through props. Use the Hello World React Component - Imports are made at the top of a React component. - React components are imported into other React components before using them. - Using a component is as simple as declaring it inside of tags, for example: <HelloWorld /> Open up App.js. It should look something; Go ahead and delete everything apart from the div wtith the App class. Then, import our new HelloWorld React component at the top of the file, alongisde the other imports. Finally, use the HelloWorld component by declaring it inside of the return statement: import React from 'react'; import HelloWorld from './HelloWorld'; import './App.css'; function App() { return ( <div className="App"> <HelloWorld /> </div> ); } export default App; Save App.js (🔥 Hot reloading takes care of reloading the running app in the browser, remember?) and jump on back to your browser. You should see our HelloWorld component now displayed. Go ahead, click the button and you’ll see something like this: Wrapping Up Well, there you have it. A complete beginning-to-end tutorial on building your first React component. I hope you enjoyed it. If you have any questions about this tutorial or indeed getting started with React, leave a comment below. See you next time! 💻 More React Tutorials Thanks James. Well written introduction article. Good one to get introduced to React. Found it to be very helpful. Really good tutorial! I could do it so easy and I learn more than I expected for a Hello World
https://upmostly.com/tutorials/react-hello-world-your-first-react-app
CC-MAIN-2020-29
refinedweb
1,782
63.39
Barcode Software barcode library c# 7: Advanced Native Code Techniques with Visual Studio .NET in visual C# Integration Code 3/9 in visual C# 7: Advanced Native Code Techniques with Visual Studio .NET Many NAS systems are built on hardware RAID, providing an easy and cost-effective way to expand your original server storage in a highly fault-tolerant way. However, it pays to look closely at exactly what you are buying some are built on RAID 0, which is not fault-tolerant at all and actually increases your risk. We only brie y mentioned Storage Area Networks (SANs) earlier, and we won t mention them again. Although they are excellent, fast, exible, and highly faulttolerant, they are only for those with really large IT budgets at this point. Plus, they can be rather tricky to implement and con gure. Given the strong advances in NAS, we think NAS provides a better solution for those running on realistic budgets. using define .net framework crystal report to attach barcode with asp.net web,windows application BusinessRefinery.com/barcode java api barcode scanner generate, create barcodes service none with java projects BusinessRefinery.com/ barcodes 6 using panel winforms to encode bar code with asp.net web,windows application BusinessRefinery.com/ barcodes using barcode drawer for visual studio .net crystal report control to generate, create barcodes image in visual studio .net crystal report applications. find BusinessRefinery.com/ barcodes n Note In the code listings in this chapter, any code that s within classes is indented, as with the use office excel barcodes integrated to deploy barcode in office excel object BusinessRefinery.com/ barcodes using barcode printer for jasper control to generate, create barcodes image in jasper applications. ms BusinessRefinery.com/ bar code When you want to talk to a specific database, you usually need to connect to it . At the very least, most of the time this involves specifying the location of the database . For many scenarios, connecting also requires managing security (with user names and passwords) . More advanced scenarios might also require dealing with such issues as connection pooling and crystal reports 2013 qr code using support visual .net crystal report to connect qr-codes on asp.net web,windows application BusinessRefinery.com/Quick Response Code qr size dll with .net BusinessRefinery.com/qr-codes Select Group Policy Management from the Administrative Tools menu. Select the Group Policy object you want to lter and click the Scope tab. On the Scope tab in the Security Filtering section, click Add and locate the groups or users who should have the policy applied to them, as shown in Figure 20-8. Make your selection and click OK. using barcode generator for word microsoft control to generate, create qr code image in word microsoft applications. service BusinessRefinery.com/QR Code java qr code generator download using barcode writer for servlet control to generate, create denso qr bar code image in servlet applications. batch BusinessRefinery.com/qr-codes SELECT id, N.c1.value('declare namespace VI= ""; (VI:Company)[1]','NVARCHAR(30)') AS Company, N.c1.value('declare namespace VI= ""; (VI:Creator)[1]','NVARCHAR(30)') AS Creator FROM dbo.VisioDocs CROSS APPLY doc.nodes('declare namespace VI= ""; /VI:VisioDocument/VI:DocumentProperties') AS N(c1); qrcode size simplify with office excel BusinessRefinery.com/qr barcode qrcode data classes in .net BusinessRefinery.com/qr codes Sample of Visual Basic Code Protected Sub Button_Command(ByVal sender As Object, _ ByVal e As System.Web.UI.WebControls.CommandEventArgs) _ Handles Forward.Command, Up.Command, Back.Command Select Case e.CommandName Case "Back" FeedbackLabel.Text = "Back" Exit Select Case "Up" FeedbackLabel.Text = "Up" Exit Select use microsoft excel code 128 code set c development to draw code-128b in microsoft excel adjust BusinessRefinery.com/code 128c crystal reports data matrix barcode using company visual studio .net crystal report to draw barcode data matrix on asp.net web,windows application BusinessRefinery.com/DataMatrix Encapsulating an AjAX Client Control as a Custom Control rdlc barcode 128 using border rdlc report files to compose barcode standards 128 on asp.net web,windows application BusinessRefinery.com/Code 128 Code Set A crystal reports 2008 barcode 128 generate, create uss code 128 procedure none in .net projects BusinessRefinery.com/ANSI/AIM Code 128 A DropDownList control named ChoiceDropDownList. Add three ListItem controls to the DropDownList (one for each choice). A Button control named submitbutton. use asp.net aspx barcode code 128 creator to make code 128 code set a for .net programming BusinessRefinery.com/code-128b use word document 3 of 9 integrated to attach barcode 3 of 9 on word document background BusinessRefinery.com/Code 39 Extended 3 . . Select the AutoFormat option on the Configuration menu . Here you have the opportunity to apply a couple of predefined styles to the FormView . The example accompanying this text uses the Classic formatting style . 4 . . Enable paging by selecting Enable Paging on the FormView Configuration menu . Set the HeaderText property (in the Visual Studio Properties pane) to give the FormView a title (perhaps something like .NET Reference Authors and Titles ) . 5 . . After configuring the FormView, Visual Studio will show you a representation of the format the query will use when it is rendered to the browser: code 39 barcode generator c# generate, create code39 pixel none with visual c#.net projects BusinessRefinery.com/USS Code 39 how to use code 39 barcode font in crystal reports using barcode implement for .net framework control to generate, create barcode 39 image in .net framework applications. type BusinessRefinery.com/Code39 Silverlight 3 adds H.264 decoder support so that the <MediaElement> in Silverlight can play back H.264-encoded content. This is an important step for companies that have invested in digitizing their assets using this format and that would like to take advantage of Silverlight for building rich user interfaces. If you haven t played with H.264 before, you can do so with the full version of Expression Encoder 2, which, with Service Pack 1 added, supports encoding in this format. If you don t have Expression Encoder 2, there s a free encoder at. The rest of this section discusses encoding files using Expression Encoder. With Expression Encoder, you can import files from many different formats. This example uses a MOD file as commonly produced by many camcorders. Start Expression Encoder and click the Import button. Point it at your file, and you ll see the file being loaded and prepared by Expression Encoder. Before you encode, you can select the encoder profile to use in the Encode tab, as shown in Figure 10-14. If success against the strategy requires training, those needs should be described, including analysis of how the training supports the strategy. Then, right-click the Default Web Site node and click Add Application on the shortcut menu . (The illustration shows how to perform this operation in IIS 7 .5 . If you re using earlier versions of IIS, the screen will look slightly different though you can add new virtual directories in the same way .) IIS will ask you to provide a name for the application/virtual directory: sh a ring code bet ween silv erlight a nd wpf 23 Assembly Loading and Reflection public sealed class Program { public static void Main() { // No cast needed since new returns an Employee object // and Object is a base type of Employee. Object o = new Employee(); // Cast required since Employee is derived from Object. // Other languages (such as Visual Basic) might not require // this cast to compile. Employee e = (Employee) o; } } For example, A appears 3 times, and there are 0 rows with a col1 value smaller than A. B appears 2 times, and there are 3 rows with a col1 value smaller than B. And so on. The next step (which produces the output shown in Table 4-25) is to expand the number of rows or create sequentially numbered copies of each row. You achieve this by creating a derived table out of the previous query and joining it to the Nums table as follows, based on n <= dups: SELECT col1, dups, smaller, n Figure 9-4 The new page of the Active Directory Installation Wizard, showing the additional options for the domain controller installation. Registered Back-End Users Registered back-end user groups contain the various administrative users of the site. Administrators have the ability to change access and permissions, alter the site template, create new sections and categories, install new components, and other functions. The three groups for back-end users are as follows: Manager: The manager group has the lowest authority in the administrative pyramid. Members of this group have limited access to the administrator Control Panel, and can confirm registration for users and perform basic maintenance such as categorizing an article or managing sections and categories. The Stocks table contains daily stock prices. More Code 39 on C# c# barcode reader free: Figure 7-3: Hitting a data breakpoint 245 in visual C# Draw Code 39 Extended in visual C# Figure 7-3: Hitting a data breakpoint 245 how to generate barcode in asp.net c#: EAX EBX ECX EDX ESI in visual C# Embed barcode 3/9 in visual C# EAX EBX ECX EDX ESI c# barcode reader free: __asm NOP __asm NOP in .net C# Printer Code 3 of 9 in .net C# __asm NOP __asm NOP barcode library c#: MemCMP_GreaterThan // If szSrc szDest // The memory blocks are equal. in c sharp Creator 39 barcode in c sharp MemCMP_GreaterThan // If szSrc szDest // The memory blocks are equal. c# barcode reader free: Figure 7-13: The stack displayed in the Visual Studio .NET debugger Memory window in C# Encoder 3 of 9 in C# Figure 7-13: The stack displayed in the Visual Studio .NET debugger Memory window barcode library c#: BugslayerUtil in c sharp Insert ANSI/AIM Code 39 in c sharp BugslayerUtil c# barcode reader free: [ 0] WTExample!wmain 232 [ 0] WTExample!wmain in visual C#.net Generator Code39 in visual C#.net [ 0] WTExample!wmain 232 [ 0] WTExample!wmain c# code 39 reader: Win32 Thread ID ThreadObj State PreEmptive GC GC Context Domain Lock Count APT Exception Alloc in C# Develop 3 of 9 in C# Win32 Thread ID ThreadObj State PreEmptive GC GC Context Domain Lock Count APT Exception Alloc c# barcode reader free: ref System.Array custom in visual C# Encoding USS Code 39 in visual C# ref System.Array custom Transitions Run suspension in .net C# Draw barcode 39 how to generate barcode in asp.net c#: in-process in C# Incoporate barcode 3/9 in C# in-process c# barcode reading library: PUSH ECX CALL method. POP POP POP EDX ECX EAX CFlowTrace::FuncEnter in visual C#.net Generation Code39 in visual C#.net PUSH ECX CALL method. POP POP POP EDX ECX EAX CFlowTrace::FuncEnter how to generate barcode in asp.net c#: ( (PWSTR)pImage- in C#.net Printing barcode 3/9 in C#.net ( (PWSTR)pImage- c# itextsharp create barcode: still in c sharp Encode Code 3 of 9 in c sharp still c# barcode reading library: 13-1: Options Option 0 in .net C# Deploy ANSI/AIM Code 39 in .net C# 13-1: Options Option 0 c# itextsharp create barcode: 14: Debugging Windows Services and DLLs That Load into Services in c sharp Include Code 39 in c sharp 14: Debugging Windows Services and DLLs That Load into Services c# barcode reading library: Critical-section functions in C# Insert Code-39 in C# Critical-section functions c# barcode reading library: pImportDesc- OriginalFirstThunk in C# Deploy ANSI/AIM Code 39 in C# pImportDesc- OriginalFirstThunk c# barcode reading library: lInitialCount lMaximumCount lpName in visual C# Add Code39 in visual C# lInitialCount lMaximumCount lpName c# itextsharp create barcode: LPCTSTR szString in c sharp Implement ANSI/AIM Code 39 in c sharp LPCTSTR szString Articles you may be interested how to generate barcode in visual basic 2010: Preventing class instantiation in visual basic.net Make Code128 in visual basic.net Preventing class instantiation vb.net barcode reader code: Part II Designing Types in C#.net Create PDF 417 in C#.net Part II Designing Types c# barcode reader open source: Primitive Thread Synchronization Constructs in C#.net Generate barcode pdf417 in C#.net Primitive Thread Synchronization Constructs create barcode in asp.net c#: Data Modi cation in .net C# Drawer Quick Response Code in .net C# Data Modi cation barcode generator in asp.net code project: Part IV in .NET Assign Denso QR Bar Code in .NET Part IV visual basic 6 barcode generator: Denotes parameters new with .NET 3.5. in .NET Print QR Code 2d barcode in .NET Denotes parameters new with .NET 3.5. barcode generator dll c#: Capability List in Office Word Draw Quick Response Code in Office Word Capability List barcode maker vb.net: Out-of-Line Constraints in Java Embed DataMatrix in Java Out-of-Line Constraints free barcode generator source code in vb.net: The Implementation of a Before Advice in Java Generation Data Matrix in Java The Implementation of a Before Advice barcode library vb net: n n n n in .NET Add qr barcode in .NET n n n n c# itextsharp create barcode: A. Page_Load B. Page_Render c. Page_PreRender D. Page_PreInit in c sharp Produce QR Code in c sharp A. Page_Load B. Page_Render c. Page_PreRender D. Page_PreInit qr code generator vb net: Entity Customization: Relationships in visual basic.net Generator QR-Code in visual basic.net Entity Customization: Relationships c# print barcode: ch a pter si x in vb Printer Code 3/9 in vb ch a pter si x generate 2d barcode c#: Inside Microsoft SQL Server 2008: T-SQL Querying in c sharp Display Quick Response Code in c sharp Inside Microsoft SQL Server 2008: T-SQL Querying code 39 vb.net: ch a pter si x in visual basic Use Code-39 in visual basic ch a pter si x c# code to create barcode: of Contents in .NET Printing QR in .NET of Contents c# code 128 generator: Conditional Formatting and Data Visualizations in visual C#.net Integrating Code 128 Code Set B in visual C#.net Conditional Formatting and Data Visualizations how to generate barcode in vb.net 2010: Lesson 3 in .NET Develop Denso QR Bar Code in .NET Lesson 3 print barcode with vb.net: Using Forms Authentication in .NET Encoder barcode 128a in .NET Using Forms Authentication barcode generator github c#: Testing Signals and Slots Using QTest in c sharp Implement DataMatrix in c sharp Testing Signals and Slots Using QTest
http://www.businessrefinery.com/yc1/253/33/
CC-MAIN-2022-05
refinedweb
2,426
57.06
After another pat on the back from our manager, we get a notification that a new task has been assigned to us: “While most users find it useful, some have asked if they can disable the spinner. Add a feature that turns off the spinner functionality once a certain combination of keys has been pressed within a time period”. So in this lesson, we will look at building an observable factory, that can be initialized with a certain list of key codes, and will fire whenever that key combo gets pressed quickly, within a 5 second window. We will be introducing the fromEvent, concat and takeWhile operators. Was wondering what will happen if you press the A key twice? It should start the combo again. Was wondering what will happen if you press the A key twice? It should start the combo again. Very good point - I guess you've seen the next lesson by now - where we'll answer this! Hi I was following along and I got to the point where I am supposed to watch the console log (5:15) and I am not getting any log output in my console, either on the interval or the keypresses. I've tried this in both Firefox and Chrome, incognito mode, disabled all extensions, still no luck. I cloned lesson 12 branch to my machine and added the logging code, still no luck. If it matters I am using WebStorm as my IDE on a Macbook running OSX Catalina. EDIT: I was able to get this working by exporting the keyCombo() function from EventCombo.js and importing it into TaskProgressService.js. I will admit that I am not the greatest user of the console, so if there's a way to directly monitor only EventCombo.js I'd appreciate some feedback on how. Hi Colin! The reason I suspect it didn't work for you initially is because the EventCombo.js file is not imported by anything. So whenever you compile your app, the combo code doesn't even make it into the final product that runs in the browser. So if you add this line to TaskProgressService.js it should start working: import './EventCombo'; Which is why it probably started working when you imported keyCombo in TaskProgressService.js. If you want to practice on the EventCombo.js in isolation, I prepared this stackblitz for you: Thanks Rares. I've really enjoyed your course - I think you do a great job explaining things and you have an enjoyable speaking style. I also have admired your animations - can you share what you're using to create them? Thanks for the really nice words, Colin! They made me smile in real life. I'm happy you found this useful. The animations are made by a really talented Egghead.io crew member using After Effects - see this tweet:
https://egghead.io/lessons/rxjs-build-an-event-combo-observable-with-rxjs
CC-MAIN-2021-21
refinedweb
476
73.88
0 I am trying to figure out a way to write a program that takes the "digital product" of a number (has to be positive). Like if I had 123456789, it would take 1*2*3*4*5*6*7*8*9 which is 362880. It would then take 3*6*2*8*8 (leaves out zeros) which is 2304. Then 2*3*4 which is 24. Then 2*4 which is 8. So it would print out "123456789 --> 362880 --> 2304 --> 24 --> 8". I have done this iteratively: public class DigProd { public static void main(String args[ ]) { long a = 123456789; int ai = DigProdIterative(a); System.out.println("Digital Product of a: " + ai); int ar = DigRootRecursive(a); System.out.println("Digital Product of a: " + ar); } public static int DigProdIterative(long a) { String str = String.valueOf(a); int first = 1; int y = (int) a; while(str.length() != 1) { str = String.valueOf(y); if(str.length() == 1) { System.out.print(y + "\n"); } else { System.out.print(y + " --> "); } y = 1; for(int i = 0; i < str.length(); i++) { int b = str.charAt(i); b = b - 48; //for ascii code if(b != 0) { first = (int) str.charAt(i); first = first - 48; y = y * first; } } } return y; } but I now need to do it recursively: public static int DigRootRecursive(long x) { for() { } } Someone please help me with this! I have tried doing it on my own but this is too hard!! PLEASE HELP!!
https://www.daniweb.com/programming/software-development/threads/74757/help-needed
CC-MAIN-2017-26
refinedweb
237
78.25
"HELO/EHLO domain of sending mail account" should be "HELO/EHLO host .domain of machine running Mozilla" RESOLVED FIXED Status People (Reporter: briggs, Assigned: ch.ey) Tracking ({fixed1.7}) Firefox Tracking Flags (Not tracked) Details Attachments (1 attachment, 20 obsolete attachments) The argument to the SMTP HELO command is supposed to the be fully qualified domain name corresponding to the IP address of the interface on which the host is originating the SMTP connection. The current system uses the domain component only, not the full host name. That is, from host xxx.parc.xerox.com, the HELO command should be "HELO xxx.parc.xerox.com", not "HELO parc.xerox.com" as it currently does. p.s., there's no Networking: SMTP in the component list. Reporter what version of Mozilla are you using? Have you tried with the latest nightly builds and with a new Profile/ does the problem still appear? Summary: SMTP "HELO domain" should be "HELO host.domain" → SMTP: "HELO domain" should be "HELO host.domain" Reassigning to mailnews, where this bug belongs Assignee: neeti → mscott Component: Networking → Networking - SMTP Product: Browser → MailNews QA Contact: tever → esther Marking NEW. Status: UNCONFIRMED → NEW Ever confirmed: true I recommend sending a bracketed dotted quad: HELO [128.2.15.1] > HELO [128.2.15.1] That should be a last resort. It's deprecated in the relevant RFCs. You're supposed to send the fully qualified primary name associated with the IP address of the interface on which you are speaking. This SHOULD match the fully qualified name retrieved with a reverse DNS lookup -- e.g., in your example, the PTR type query on 1.15.2.128.in-addr.arpa. These days, some sites block mail from anyone (typically spammers) who won't HELO with the FQDN for the host/interface. Address literals are not deprecated in draft-ietf-drums-smtpupd-13.txt, the latest SMTP standard. They are perfectly legal. (HELO, on the other hand, is deprecated in favor of EHLO.) The argument to HELO/EHLO provides no useful information. Obtaing the correct FQDN of the client is extremely difficult on some MUA platforms. Blocking sites based on the argument to HELO/EHLO is not a rational anti-spam strategy, as spammers can get this information just as easily if not more easily as legitimate MUAs can. The drums draft. If you CAN get the FQDN of the client, you should; if you can't, you should send the address literal. Note that my original complaint was that for a host whose name was host.a.b, the mozilla SMTP client was issuing a "HELO a.b", or "EHLO a.b", neither of which conform with any possible reading of ANY of the SMTP specs. BTW, I didn't find any correct way to find host's FQDN. I can get hostname using NSPR, but I cannot get domain name of the host mozilla's running on (currently, mozilla uses user's email address to get domain name, but this is not always the same as localhost's domain name) *** Bug 159875 has been marked as a duplicate of this bug. *** ..or bug 151674 ? *** Bug 179905 has been marked as a duplicate of this bug. *** Can't we use our own ip-address, and then do a reverse-lookup ? Or would that be too difficult on multi-home hosts ? Doing reverse name lookups stinks. The current way of doing it is incorrect though: (RFC 2821) The domain name given in the EHLO command MUST BE either a primary host name (a domain name that resolves to an A RR) or, if the host has no name, an address literal as described in section 4.1.1.1. Maybe someone ought to check what other clients do. As for the address literal, I think that should be doable, as the string has to be sent over a network socket, so you could get the source address from that socket. Since we can't do reverse lookups (unreliable and slow), my vote is for the ip-address too. It's more correct that the current domainname. We can use getsockname() to get the address from the socket that was opened to the mail-server. The fix has to be done in nsSmtpProtocol.cpp (functions nsSmtpProtocol::LoginResponse and nsSmtpProtocol::ExtensionLoginResponse, for HELO and EHLO) I traced the smtp connection from mozilla mail to my mail server today (Mozilla 1.2.1 Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.2.1) Gecko/20030119) and got the following: 220 starwolf.biz ESMTP Sendmail 8.11.6/8.11.6; Tue, 28 Jan 2003 14:43:52 -0500 EHLO (none) 501 5.0.0 Invalid domain name HELO (none) 501 5.0.0 Invalid domain name MAIL FROM:<"bartof"@(none)> 250 2.1.0 <"bartof"@(none)>... Sender ok RCPT TO:<bartof@rpi.edu> 550 5.7.1 <bartof@rpi.edu>... Relaying denied I don't know where it got (none) from for the EHLO and HELO lines, becuase I know that my hostname is set to bartof.no-ip.com locally It's the default in the new account wizard. It is a well known fact (I hope) that mozilla puts this address (domain taken from 1st mail account) into EHLO. The problem is, it should not. Can anyone tell me, why fixing this problem takes THAT long? Anyone (including me) can not use Messenger if his account domain is different than actual host ip domain... Lukasz > Can anyone tell me, why fixing this problem takes THAT long? Because it's assigned to someone who was on sabbatical for most of the last year. If this is not easily fixed the correct way, is it possible to at least provide an override in the prefs.js to set the HELO string on a per-SMTP server basis. Remember that the HELO FQDN has nothing to do with the sender's email address, which is what Moz currently uses to compute the HELO string. If the hostname of my outbound IP is abc.acme.com and my sender email address is john@generic.org, then the HELO string should still be "abc.acme.org". Also as I manage SMTP servers and am imvolved in writing anti-SPAM filtering software, I can tell you that SMTP servers which enforce the standards with regards to the HELO/EHLO parameters are currently very effective at eliminating a lot of SPAM. So I think it is very important that the Moz MUA tries to follow the standard exactly so SPAM filters don't block it. Based on dupes this bug is All/All OS: Solaris → All Hardware: Sun → All Deron, Lukasz, putting the interfaces ip of the machine which runs Mozilla in the HELO/EHLO request shouldn't be the problem. Resolve it to an FQDN is much more difficult. But it's nearly impossible to get a ip the SMTP server likes if the machine running Mozilla is behind a NAT router or similar (-> e.g. 192.168.0.7). Would adding the current ip of the machine to HELO/EHLO satisfy SPAM filters? At least if the machine directly dialed in and has a unique ip? Christian, Deron's suggestion of adding a possibility to explicitly set FQDN for EHLO would solve at least my problem perfectly :) I get an ip always from the same domain, as lots of other ppl getting ips from a provider's dhcp, that always resolve to the same domain. I think we could live with solution that is good in >90% cases; possibility of manually setting FQDN would solve the rest. The only annoying situation I can imagine is when used ip is not correctly resolvable (by any means we could use) AND _at the same time_ its FQDN differs at least from time to time. Lukasz Yes, at a minimum, and default, it should place the IP address of the interface used into the HELO command. This is acceptable per the SMTP RFC if there is no easy or reliable way to get an FQDN. And it can't be [127.0.0.1] unless you're running your own MTA (which makes this problem irrelavent anyway). Just be sure to follow the RFC's for the syntax (both for IPv4 and IPv6). However if you do want to use a reverse DNS query to try to get a FQDN I see no problem with using that answer, only revert to IP address if you get no DNS answer. If DNS returns a "wrong" answer, well that's not Mozilla's fault. I would not worry about the issues that NAT's may raise, that's outside the scope of what a client application should have to worry about (or even know about). If a NAT point has to be crossed between Mozilla and the SMTP server, then it is the responsiblity of the router/firewall administrator to do the appropriate poxying (or better yet run an MTA which is multihomed across the NAT point). I guess my point is that you MUST figure out which interface will be used and then ATTEMPT to determine if there is a unique and valid reverse DNS entry to find an FQDN. If you can't determine the FQDN use the IP of the interface. This lookup should be done for each SMTP connection, as it may change in DHCP environments. A manual override in prefs.js might still be nice though. Slightly offtopic, but I guess I can also provide some example of bogus HELO's I find this can catch about 30%-40% of all spam and almost never thows out legit MTAs. Furthermore when running a gateway MTA, I can also catch spam coming from an outside address claiming it's from inside (e.g., HELO inside.pc.acme.com). If Mozilla follows the RFC, then it will look less like these spammers. I just want to add my 2 cents for those who underestimate the importance of this bug. Mozilla doesn't just send the domain name instead of the host name - it sends the domain extracted from the sender's e-mail address, which amounts to forgery of the headers, which may be *illegal* in some jurisdictions. I know a guy who uses Mozilla to write e-mails from his yahoo.com address. His messages have a header: Received: from yahoo.com (his.real.hostname [192.168.0.2]) Spamassassin marks those messages as having a forged yahoo header, and rightly so. That's 4.9 points in Spamassassin 2.53 even after taking into account a valid Mozilla signature: X-Spam-Status: Yes, hits=4.9 required=5.0 tests=CONFIRMED_FORGED,FORGED_RCVD_TRAIL,FORGED_YAHOO_RCVD, RCVD_FAKE_HELO_DOTCOM,USER_AGENT_MOZILLA_UA version=2.53 That's 0.1 below the default limit. What should I tell that guy? Stop using Mozilla for Mail? Flags: blocking1.4b? Summary: SMTP: "HELO domain" should be "HELO host.domain" → "HELO domain of sending mail account" should be "HELO host.domain of machine running Mozilla" Flags: blocking1.4b? → blocking1.4b- This is _not_ a bug. Mozilla mail works perfectly this way. In fact it works better than using the proposed idea. Please close this bug report and forget the idea. If the machine from you are sending mail doesn't have a FQDN and the mail server requires a FQDN in HELO, the proposed idea will fail. Resolving the name it's a very bad idea: - It's something from other layer (DNS/IP) not from SMTP - It breaks when the name of the computer is not FQDN (as many dial-ins do) and the SMTP server does strict EHLO/HELO checking as stated before. - It breaks computers with a TCP tunnel to another host from the connection is originated if the relay does strict EHLO/HELO checking. - It breaks computers using NAT, the host that sees the server is not the one that sends the message if the relay does strict EHLO/HELO checking. - It's considered spyware as you are sending information some companies or people don't want to say: the internal structure of the network. Netscape Mail has done this for years and it has worked always. Netscape Mail is the reference SMTP client implementation. > Netscape Mail is the reference SMTP client implementation. Up to there, your comment could be viewed as reasonable.... *** Bug 209999 has been marked as a duplicate of this bug. *** OK, Netscape's mail servers now apparently filter out mails with bogus HELO lines. People are reporting mail sent from Mozilla to @netscape.com addresses bouncing due to this bug. This makes mailnews unusable as dogfood for developers working on the project, which is rather sub-optimal. Could we get some traction on this, please? jgmyer's suggestion of "HELO [ip]" could be implemented using the nsIDNSService's myIPAddress attribute. I implemented EHLO [ip] using nsIDNSService's GetMyIPAddress() in April for testing purposes. It worked and still does as far as I can test. But I requeued it because 1. sending a FQHN was the major aim and 2. the result wasn't compatible with IPv6. Code to send the ip-address literal instead the domain of sending mail account with HELO and EHLO. IPv4 style only. I kept the current code to have a fallback if GetMyIPAddress() fails for some reason. Better to use the connection's local IP address. A multihomed client could easily send the wrong address if it uses the one from the DNS client. Regardless of how this is ultimately solved, I would still highly recommend that the EHLO identification be overridable via a configurable preference on a per-SMTP server basis. This is unfortunately part of the SMTP spec which the IETF failed to provide any reasonable solution (and why almost all MTA software also makes this a configurable setting). Having a per-SMTP server preference will allow those who have unusal multi-homed, natted, proxied, or servers with agressive spam filters to at least have a workaround available to them. Again as this can actually prevent the ability to send mail at all in some network settings, we at least need some form of workaround soon! So unless this is easy to fix correctly (which I suspect it is not), I would like to see a configurable preference in as soon as possible, and work on obtaining the real network interface identity later. John, you're welcome to write a patch doing this. At the moment I don't know how to as this seems to be a more-gecko-thing. Deron, I'll add such a pref soon. The pref itself is no problem. But implementing it two questions came up. Should it be a per server pref or a general one? Having a per server pref it's more flexible but bothersome if it has to be changed. While for a new SMTP UI (see) I did a per server, I don't think this is necessary. Doing a general pref I've no clue where to put it in a UI. Christian, I would still argue that this should be per-SMTP server since a different network path may be taken for each and the DNS for them may see a different "world" (think an internal mail server behind a NATing firewall and a public one out on the internet, secured against relaying of course, and perhaps a third reached through a VPN). If you're going to provide a config to help out those with tricky networking scenarios, then you should do it per server I think... unless of course that's too hard to do. As far as a GUI, that would be great, but I wouldn't be too upset if it just has to be manually added in to the prefs file (or via about:config). I'll leave that decision to the Mozilla developers. It's certainly an advanced option though. You need to present it in a way that won't confuse users...it's changing the SMTP EHLO greeting, not any email addresses in any way. That needs to be very clear. Also about your GUI mockups, "userdefined" is probably not needed (of course anything entered on this screen is obviously "user defined"). Perhaps the word "custom"? Also rather than "computername", I'd probably stick with "hostname". Computer name is often associated with a SMB (Windows) node name, and not a DNS FQDN. Oh, and this may not be your doing, but doesn't "user name and password" belong under the "security" tab? Anyway thanks for working on this while the network gurus contemplate the harder problem. A per server pref isn't much harder to do and I understand your statement. > As far as a GUI, that would be great, but I wouldn't be too upset if it > just has to be manually added in to the prefs file (or via about:config). Ok, so I'll omit this for now. > Also rather than "computername", I'd probably stick with "hostname". I chose "computername" because there's already the hostname pref for smtpservers though in the UI it's named "Server Name". I don't like "computername" too but the panel in bug 202468 is just a raw suggestion. > but doesn't "user name and password" belong under the "security" Hm, yes, that should be better. I note that for a new version, but as stated above, the panel is just a raw concept for a new all-in-one SMTP-UI. > Anyway thanks for working on this while the network gurus contemplate the > harder problem. Having a makeshift isn't always good because it can lead to "why looking at this, it's working". But we've a real problem if some servers don't accept Mozilla mails, so that should be ok. So here's the patch with the new pref added. The pref is named mail.smtpserver.smtpx.machine_name where x is the server No. We can talk about machine_name - but hostname is already given. Maybe FQDN or machine_domain or something. Attachment #126113 - Attachment is obsolete: true Eliminated helper var myIPLiteral (which wasn't freed). Attachment #126450 - Attachment is obsolete: true Comment on attachment 126920 [details] [diff] [review] suggested patch v3 I'm not into network foo but: I think you should construct the [address] at about line 300. I think your string contatenation foo is wrong. I think machineName should have a setter. I think you should add a pref mail.smtpserver.default.machine_name (although with an empty value) so that you can change the default for all the servers at once. > I think you should construct the [address] at about line 300. Er, why that? We've GetUserDomainName() to create this string. > I think your string contatenation foo is wrong. As you might have seen in the last bug I'm not good in string objects. I could also do m_machineName.Assign("["); m_machineName.Append(myIP); m_machineName.Append("]"); if that's better. > I think machineName should have a setter. Hm, I can't see why as long as we've no UI. > I think you should add a pref mail.smtpserver.default.machine_name > (although with an empty value) so that you can change the default for > all the servers at once. No prob to do. I left this out because we have no getDefaultCharPref() there. Should I add such a helper function or would it be ok to just do a hardcoded prefs->CopyCharPref(mail.smtpserver.default.machine_name, machineName) in GetMachineName()? Patch v3 with changes addressing Neils remarks 2 to 4. The setter is unused for now but ready for implementing a UI. Attachment #126920 - Attachment is obsolete: true Comment on attachment 127528 [details] [diff] [review] patch v4 >+ if(!m_machineName.IsEmpty()) >+ return (const char *)m_machineName; >+ else No else after return. Also, use .get() rather than (const char *) cast -- the latter is deprecated (see string/public/nsXPIDLString.h). > if (m_runningURL) > { >Index: mailnews/compose/src/nsSmtpProtocol.h >=================================================================== >RCS file: /cvsroot/mozilla/mailnews/compose/src/nsSmtpProtocol.h,v >retrieving revision 1.45 >diff -u -r1.45 nsSmtpProtocol.h >--- mailnews/compose/src/nsSmtpProtocol.h 8 May 2003 18:54:25 -0000 1.45 >+++ mailnews/compose/src/nsSmtpProtocol.h 11 Jul 2003 12:20:46 -0000 >@@ -173,6 +173,7 @@ > PRUint32 m_addressesLeft; > char *m_verifyAddress; > nsXPIDLCString m_mailAddr; >+ nsXPIDLCString m_machineName; Looks like people have been violating the modeline, both here (tabs) and elsewhere (two-space c-basic-offset) -- and in all these files. Thanks for upholding it. May be worth fixing and using diff -w for reviewers. Sspitzer, what do you think? > NS_IMETHODIMP >+nsSmtpServer::GetMachineName(char * *machineName) >+{ >+ nsresult rv; >+ nsCAutoString pref; >+ NS_ENSURE_ARG_POINTER(machineName); >+ nsCOMPtr<nsIPref> prefs(do_GetService(NS_PREF_CONTRACTID, &rv)); >+ if (NS_FAILED(rv)) >+ return rv; >+ getPrefString("machine_name", pref); >+ rv = prefs->CopyCharPref(pref.get(), machineName); >+ if (NS_FAILED(rv)) >+ { >+ rv = prefs->CopyCharPref("mail.smtpserver.default.machine_name", machineName); >+ if (NS_FAILED(rv)) >+ *machineName = nsnull; >+ } Should failures be suppressed here? Don't they mean out-of-memory or something worse (and therefore worth returning)? >+nsSmtpServer::SetMachineName(const char * machineName) >+{ >+ nsresult rv; >+ nsCAutoString pref; >+ nsCOMPtr<nsIPref> prefs(do_GetService(NS_PREF_CONTRACTID, &rv)); >+ if (NS_FAILED(rv)) >+ return rv; >+ getPrefString("machine_name", pref); >+ if (machineName) >+ return prefs->SetCharPref(pref.get(), machineName); >+ else No else after return. /be >+ if(!m_machineName.IsEmpty()) >+ return (const char *)m_machineName; >+ else > No else after return. Hm, I find it more clear to see what's the alternative to the if. But ok, no prob to change. > Also, use .get() rather than (const char *) cast -- the latter is > deprecated (see string/public/nsXPIDLString.h). Ah, ok. I used what is used down in the current function. > Looks like people have been violating the modeline, both here (tabs) and > elsewhere (two-space c-basic-offset) -- and in all these files. Thanks for > upholding it. May be worth fixing and using diff -w for reviewers. I'm using modeline params in new functions and single new lines but preserving the mode used in functions I'm changing. Strictly following the modeline or changing current lines has been criticized by reviewers in the past. What sould be the benefit of a -w here? >Should failures be suppressed here? Don't they mean out-of-memory or something >worse (and therefore worth returning)? For the > rv = prefs->CopyCharPref(pref.get(), machineName); I'd say clearly no. And for > rv = prefs->CopyCharPref("mail.smtpserver.default.machine_name", machineName); I don't know. > Hm, I find it more clear to see what's the alternative to the if. You and others weren't using it consistently, it overindents the else clause, which is often the normal-termination case (the bulk of the body of a function or method, frequently), and it doesn't read right to anyone who is thinking about control flow expressed in a C-like language (I was schooled in Pascal, I've been there -- don't mean to disparage). It's important to execute mentally when reading. > But ok, no prob to change. Thanks. > What sould be the benefit of a -w here? diff -wu will not show trivial whitespace changes, so you can expand tabs and reindent at will. Reviewers may stipulate minimal change near a freeze, but should be open to modeline conformance fixes otherwise. You should attach a diff -u version of the patch for applying, not for reviewing, if you do fix lots of whitespace glitches. cvs diff -p is good to use in conjunction with -u, btw. If you add diff -pu to your .cvsrc file, you'll always get it (I use -pu8). It shows function or method (or goto-label) tags introducing each hunk of diffs. I'm still not convinced that suppressing pref allocation errors is the right thing. Is there an ambiguity problem where some failure codes mean "pref not found" while others mean "out of memory" or something worse? /be > diff -wu will not show trivial whitespace changes, so you can expand tabs and > reindent at will. Er, yes, I know what it does. And I use it sometimes (e.g. if reindenting complete blocks), but I meant in this case. Anyway, I now got what you meant in your first comment. And I agree. > cvs diff -p is good to use in conjunction with -u, btw. Hey, that -p is great. I'll add this to the default. > I'm still not convinced that suppressing pref allocation errors is the right > thing. Is there an ambiguity problem where some failure codes mean "pref not > found" while others mean "out of memory" or something worse? I'm not convinced too. But CopyCharPref() returns NS_ERROR_UNEXPECTED if the pref wasn't found. That shouldn't happen for mail.smtpserver.default.machine_name as we have a default in mailnews.js. CopyCharPref can never "fail" as such for a known char pref, the best it can do is to set your pointer to null when it fails to strdup the value. Perhaps you could do better by setting the pointer to null anyway before calling CopyCharPref, that way you don't have to check the return value at all. Neil, shouldn't strdup failure cause NS_ERROR_OUT_OF_MEMORY to be returned? On small memory (no VM) systems, this can happen. /be yes please Er, I fear I don't understand. Timeless, yes please what? Neil, do you think of something like that: rv = prefs->CopyCharPref(pref.get(), machineName); if (NS_FAILED(rv)) { *machineName = nsnull; prefs->CopyCharPref("mail.smtpserver.default.machine_name", machineName); } return NS_OK; rv = prefs->CopyCharPref(pref.get(), machineName); if (NS_FAILED(rv)) { *machineName = nsnull; prefs->CopyCharPref("mail.smtpserver.default.machine_name", machineName); } return NS_OK; What is so hard to understand? (1) The above ignores the (single-use rv variable, in the excerpt) failure result, suppressing out-of-memory errors wrongly. (2) The second CopyCharPref's return value is completely ignored, also wrong. If you wish to cope with default or *unset* prefs, you have to test for a default or empty string value. A failure result is an error, which should be propagated. /be So I've to propagate the rv to the calling function. Ok, easy to do. But in case of the first CopyCharPref(), the pref might not be set. So need to test for an error which is not out of memory, yes? rv = prefs->CopyCharPref(pref.get(), machineName); if (rv == NS_ERROR_UNEXPECTED) { *machineName = nsnull; rv = prefs->CopyCharPref("mail.smtpserver.default.machine_name", machineName); } return rv; Testing if machineName is empty after CopyCharPref() wouldn't bring anything cause that can have multiple reasons. And then? The calling code: rv = smtpServer->GetMachineName(getter_Copies(m_machineName)); if (NS_FAILED(rv)) return; Looks good -- PREF_ERROR is returned by PREF_CopyCharPref for no such pref name, and it is mapped to NS_ERROR_UNEXPECTED. /be Ok, so next try. I hope, this patch addresses all issues and requests. This is a -puw patch, -pu on request. Attachment #127528 - Attachment is obsolete: true > nsSmtpServer::GetMachineName(char * *machineName) please use: nsSmtpServer::GetMachineName(char * *aMachineName) this is how we mark arguments so that they aren't confused with local variables. fwiw i filed Bug 213692 about the wrong allocator being used for the preferences code. Assignee: mscott → ch.ey So if I change this variables name, do you think the patch will survive the review? Changed argument name of GetMachineName()/SetMachineName(). This is a -puw patch, -pu on request. Attachment #129273 - Flags: superreview?(timeless) Attachment #129273 - Flags: review?(brendan) Comment on attachment 129273 [details] [diff] [review] patch v6 Please use module owner or peer for reviews. /be Attachment #129273 - Flags: review?(brendan) → review?(sspitzer) Comment on attachment 129273 [details] [diff] [review] patch v6 i'm not an sr (there's a list) neil is about to vacation i just got back and need to catch up on bugmail (so i'm not going to do a real review until i catch up - i.e. when i get *this* bugmail, about 9000bugmails from now) >Index: mailnews/compose/src/nsSmtpProtocol.cpp >+static NS_DEFINE_CID(kDNSServiceCID, NS_DNSSERVICE_CID); please use the contractid instead of the cid. >@@ -332,6 +337,26 @@ const char * nsSmtpProtocol::GetUserDoma >+ m_machineName.Assign("["); >+ m_machineName.Append(myIP); >+ m_machineName.Append("]"); I'd prefer a single string operation, but neil didn't like attachment 126920 [details] [diff] [review]'s code... Attachment #129273 - Flags: superreview?(timeless) Attachment #129273 - Flags: superreview?(sspitzer) Attachment #129273 - Flags: review?(timeless) Attachment #129273 - Flags: review?(sspitzer) >+ m_machineName.Assign("["); >+ m_machineName.Append(myIP); >+ m_machineName.Append("]"); m_machineName = NS_LITERAL_CSTRING("[") + myIP + NS_LITERAL_CSTRING("]"); Timeless: Why should I use the contractid instead the cid? I find the cid more describing and it's used in sources too. Yes, const char kDNSService_CONTRACTID[] = "@mozilla.org/network/dns-service;1"; works, but why to use and how to know this? Darin: That doesn't work because myIP is char *. If there's a workaround, I don't know it. CIDs identify implementations, contract-ids identify sets of interfaces. Use the latter unless you *really* mean to dependon a particular implementation. Here, you just want the DNS service, latest implementation. So contract-id is best. The raw char* can be wrapped in an nsDependentCString, no? I love our verbose string classes. /be Now using contract-id and new string assign to m_machineName according to Timeless and Darin (thanks Brendan for explanation). This is a -puw patch, -pu for checkin on request. Now using contract-id and new string assign to m_machineName according to Timeless and Darin (thanks Brendan for explanation). This is a -puw patch, -pu for checkin on request. Attachment #129273 - Attachment is obsolete: true Attachment #130408 - Flags: review?(timeless) Attachment #129273 - Flags: superreview?(sspitzer) Attachment #129273 - Flags: review?(timeless) Comment on attachment 130408 [details] [diff] [review] patch v7 >+ // hope this works even on machines without _PR_GET_HOST_ADDR_AS_NAME darin didn't answer this question? :( >+ nsCOMPtr<nsIDNSService> dns = do_GetService(kDNSService_CONTRACTID, &rv); >+ if (NS_SUCCEEDED(rv)) >+ { >+ char *myIP; >+ rv = dns->GetMyIPAddress(&myIP); this leaks ^^, use an xpidlcstring instead then you don't need the nsdependentcstring below >+ if (NS_SUCCEEDED(rv)) >+ { >+ m_machineName = NS_LITERAL_CSTRING("[") + nsDependentCString(myIP) + NS_LITERAL_CSTRING("]"); >+ return m_machineName.get(); >+ } >+ } >+nsSmtpServer::GetMachineName(char * *aMachineName) >+ nsCOMPtr<nsIPref> prefs(do_GetService(NS_PREF_CONTRACTID, &rv)); >+ if (NS_FAILED(rv)) >+ return rv; for this failure you don't null out aMachineName >+ rv = prefs->CopyCharPref(pref.get(), aMachineName); >+ if (rv == NS_ERROR_UNEXPECTED) >+ { >+ *aMachineName = nsnull; >+ rv = prefs->CopyCharPref("mail.smtpserver.default.machine_name", aMachineName); for this failure you do. should you be consistent? >+ } >+ return rv; >+} The patch looks good Attachment #130408 - Flags: review?(timeless) → review- it looks like mac classic is the only environment that defines _PR_GET_HOST_ADDR_AS_NAME, so we shouldn't have to worry. >it looks like mac classic is the only environment that defines >_PR_GET_HOST_ADDR_AS_NAME, so we shouldn't have to worry. Ok, so I don't. >>+ char *myIP; >>+ rv = dns->GetMyIPAddress(&myIP); >this leaks ^^, use an xpidlcstring instead then you don't need the >nsdependentcstring below Done, is now + nsXPIDLCString myIP; + rv = dns->GetMyIPAddress(getter_Copies(myIP)); >>+ if (rv == NS_ERROR_UNEXPECTED) >>+ { >>+ *aMachineName = nsnull; >>+ rv = prefs->CopyCharPref("mail.smtpserver.default.machine_name", aMachineName); >for this failure you do. should you be consistent? Yes, I should. I must admit I initially just copied nsSmtpServer::GetUsername that is inconsistent (and lacks to handle out-of-memory errors). Since GetMachineName's rv is checked and the calling function exited if wrong, I could omit nulling aMachineName out. Yeah, don't null out params on failure, in general (QueryInterface is a special case; there may be one or two others like it). /be Ok, so here's another revision of the patch addressing all issues. This is a -puw patch again, -pu for checkin on request. Attachment #130408 - Attachment is obsolete: true Comment on attachment 130493 [details] [diff] [review] patch v8 some comments: 1) + nsCOMPtr<nsIPref> prefs(do_GetService(NS_PREF_CONTRACTID, &rv)); nsIPref is deprecated 2) + if (rv == NS_ERROR_UNEXPECTED) I'd prefer: if (NS_FAILED(rv)) I'm not sure if prefs is guaranteed to return that exact error code. 3) why is there a setter? is there part of the patch I'm missing? shouldn't this be: "readonly attribute string machineName;" looking upwards, brendan commented about the pref stuff, and steered you away from if (NS_FAILED(rv)), because it will mask the out of memory errors. yowsa, there's a lot of mailnews code that does it this way. (probably elsewhere too) > 1) > > + nsCOMPtr<nsIPref> prefs(do_GetService(NS_PREF_CONTRACTID, &rv)); > > nsIPref is deprecated > > Uh, nsSmtpServer.cpp if full of it. But ok, I see. > 3) why is there a setter? is there part of the patch I'm missing? > > shouldn't this be: > > "readonly attribute string machineName;" See Neil's comment #42. But if a patch with UI (there should be one sooner or later) has the chance to go through, I can also do one. > looking upwards, brendan commented about the pref stuff, and steered you away > from if (NS_FAILED(rv)), because it will mask the out of memory errors. > > yowsa, there's a lot of mailnews code that does it this way. (probably > elsewhere too) There's already code for every call or if I did and I nevertheless got advice not to do this and that. Because of the lack of experience I don't know who's right (and maybe noone can say). Comment on attachment 130493 [details] [diff] [review] patch v8 I suppose I should have flagged nsIPref, but if you look at the file you'd see that all but one of the pref users use nsIPref. I think we can leave that fix for a file cleanup. if you post a new patch and seth sr's it and the only change is the nsIPref then you can carry my r= forward, otherwise perhaps seth will just sr this and leave the file cleanup for later. Attachment #130493 - Flags: review+ I'd like to do a planned cleanup with a new service function for the whole nsSmtpServer as a separate task but get this patch in now. So Seth, if nsIPref is your only caveat, shouldn't this be acceptable and let this patch have your sr? BTW, will the patch fix Thuderbird as well? Lukasz BTW no 2. Which version is the patch supposed to be implemented in? 1.5 I hope? You know, it is really nice to use and support Mozilla - when at least the basics work. Lukasz Lukasz, 1.5 just branched, so this patch won't make it there - maybe 1.6a. Thunderbird - I think yes, the code base is still quite the same. New try. This version now uses nsIPrefService/nsIPrefBranch instead of nsIPref. The new function GetPrefBranch() can also be used when migrating all other nsSmtpServer functions to nsIPrefBranch. The use of the removed GetMyIPAddress() has been replaced by combination of GetMyHostName() and Resolve(). It looks more complex but is the same the old function did in background. And I added provides a UI for the machine name. But unfortunately that's not how a pref branch should be used... nsCOMPtr<nsIPrefBranch> mKeyBranch; nsCOMPtr<nsIPrefBranch> mDefaultBranch; nsSmtpServer::SetKey(const char * aKey) { mKeyBranch = nsnull; mKey = aKey; return NS_OK; } nsresult /* NOT NS_IMETHODIMP */ nsSmtpServer::EnsurePrefBranch() { nsresult rv = NS_OK; if (!mDefaultBranch || !mKeyBranch) { nsCOMPtr<nsIPrefService> prefService = do_GetService(NS_PREFSERVICE_CONTRACTID, &rv); if (NS_SUCCEEDED(rv) && !mDefaultBranch) { rv = prefService->GetBranch("mail.smtpserver.default.", getter_AddRefs(mDefaultBranch)); } if (NS_SUCCEEDED(rv) && !mKeyBranch) { nsAutoCString prefBranchName = "mail.smtpserver."; prefBranchName += mKey; prefBranchName += "."; rv = prefService->GetBranch(prefBranchName.get(), getterAddRefs(mKeyBranch)); } return rv; } nsresult nsSmtpServer::GetCharPref(char * pref, char * *result) { nsresult rv = EnsurePrefBranch(); if (NS_SUCCEEDED(rv)) { rv = keyBranch->GetCharPref(pref, result); if (rv == NS_ERROR_UNEXPECTED) rv = defaultBranch->GetCharPref(pref, result); } return rv; } NS_IMETHODIMP nsSmtpServer::GetMachineName(char * *aMachineName) { return GetCharPref("machine_name", aMachineName); } Hm, I did know that the power lies within GetBranch()'s aPrefRoot parameter. But didn't see how to use it, resp. wanted to avoid tow nsIPrefBranch vars. So I did want I found in several other files. But ok, your code is nice though the hardwired get of the default value isn't what I'd do in this case. I'll use it. New version using EnsurePrefBranch() from comment #81. I didn't put get of user value and default value in one function because this combination is only used in GetMachineName(). Attachment #132789 - Attachment is obsolete: true Comment on attachment 132869 [details] [diff] [review] patch v10 the ui text isn't good enough to ship with a product, but i think a new bug could be filed to clean that up. if an sr objects to that then well, try again :). Attachment #132869 - Flags: review?(timeless) → review+ Well, any suggestions for a better text are welcome. Comment on attachment 132869 [details] [diff] [review] patch v10 Offhand "Report machine name as:" Report is of course not ideal, but ... The string I used in the patch is from KMail and accurate. Yes, it's quite long, but better this than confusing a user. Or what's the point of your criticism? Hm, what about "Identify this machine as:" The string in your patch will confuse our target audience. (and it's too long) Comment 87's text seems ok to me. Status: NEW → ASSIGNED Maybe just "Send as hostname in HELO/EHLO:" Lukasz > "Send as hostname in HELO/EHLO:" Ehm, no. I'd like this for myself. But nearly no user knows what HELO/EHLO is. And which hostname? The servers hostname or what? The problem is to describe what's meant understandable for all kind of users without more than four words ... My point was, if somebody wants to use this option, he/she must know what is going on. On the other hand, if somebody knows what the problem is, he/she may be mislead by description like "Identify this machine as". The servers' error messages to erroneous FQDN in EHLO do not point to the solution. But when you find the reason for them, "Send as hostname in HELO/EHLO" is instantly obvious. Just my 2PLN. Lukasz Yes, Lukasz, you can be right with this. Though I don't know what such error messages say if the server don't like the FQDN. As I wrote, I'd feel comfortable with this for myself. We could use it for alpha, wait and see. A help entry should also be added for this UI element. FYI, I have come across the following types of answers (different servers from different SMTP providers): - Relaying denied - You are not authorized to use this server - Wrong account name/password - Some generic errors like "Server error", but I do not remember exactly the error texts - And sometimes Mozilla goes into endless loop, exchanging the same command sequence with server over and over again. Lukasz Er, did the servers send these messages because of a wrong FQDN? And endless loop? That can happen with an old release if the password is wrong and saved. We've changed that in the newer releases. Yes. I checked all my accounts with Outlook Express and was able to use SMTP with the same credentials as provided in Mozilla. I can not imagine other reason for these errors. In time, the responses from the same server sometimes changed (server upgrades?), but I cannot recollect single message like "Wrong host name" or "Wrong FQDN in EHLO" or like. Maybe the loop is no more in the current version. This bug plagues me from early Mozillas, I have not checked everything every release. And I stopped checking since I track this bug. Sorry if some of this is re-hashing old comments do we really want a failure to get the machine name to abort the initialize process? + rv = smtpServer->GetMachineName(getter_Copies(m_machineName)); + if (NS_FAILED(rv)) + return; I think user-defined, not userdefined. Am I missing the point of having a default machine name? Are we ever going to set it? It's specific to each user's machine, isn't it? So no default is ever going to be meaningful. "mail.smtpserver.default.machine_name" I marginally prefer computer name over machine name - maybe because that's what Windows calls it and windows users would be our most confused users :-) +... > + rv = smtpServer->GetMachineName(getter_Copies(m_machineName)); > + if (NS_FAILED(rv)) > + return; > > do we really want a failure to get the machine name to abort the initialize > process? Since a rv!=0 should mean out-of-memory here, I think yes. > Am I missing the point of having a default machine name? Are we ever going to > set it? It's specific to each user's machine, isn't it? So no default is ever > going to be meaningful. Hm, a small net behind a NAT? Then you could set the default to a (from outside) resolvable name. > I think user-defined, not userdefined. > > "mail.smtpserver.default.machine_name" > > I marginally prefer computer name over machine name - maybe because that's > what Windows calls it and windows users would be our most confused users :-) Whatever you want - in both cases. >... I'd like to have the pref cleared if aMachineName is empty. So what about this: +; What do you think about the string in the UI? Which would you prefer? Or even better without NS_ENSURE_ARG_POINTER(aMachineName); but if (aMachineName && *aMachineName) New version, new luck to get it in. This version now uses computer_name instead of machine_name, the UI says "Identify this computer as:" and SetComputerName() has been changed according to comment #98. Attachment #132869 - Attachment is obsolete: true It looks OK, but I don't see the need for the whole default pref branch thing. Doesn't the pref code do that for you? No, if the pref doesn't exist, GetCharPref() return failure and a null string. Maybe there's a way to instruct the pref service code to take the default if the pref doesn't exist. But neither do I nor seems Neil to know it (as he posted the code in comment #81). I presume this is a similar approach to ? since we're defaulting the name to the empty string anyway, I still don't see the need for complicating the code this way. I'd really rather not add more code for this feature... I'm not sure what you're talking about. The default code in EnsurePrefBranch() are two lines: + if (NS_SUCCEEDED(rv) && !mDefaultBranch) { + rv = prefService->GetBranch("mail.smtpserver.default.", getter_AddRefs(mDefaultBranch)); The code in GetComputerName() are atmost two lines: + if (rv == NS_ERROR_UNEXPECTED) + rv = mDefaultBranch->GetCharPref("computer_name", aComputerName); And please see the possible reason in my comment #97. An admin could change the default in an installation script. And lastly we'll need the default code at least in EnsurePrefBranch() at the latest when moving away from nsIPref. But if you want I can remove these lines and handle NULL for m_computerName in GetUserDomain(). I wonder how it's possible Mozilla was ever created when it takes months to settle such a small ****... Lukasz, with useless Mozilla Mail for years now... (In reply to comment #25) > Sorry, I don't understand. Are You think what "HELO 66.55.44.33" isn'f RFC-proper ? I think You are wrong. Please see RFC2821, 4.1.3 Address Literals: IPv4-address-literal = Snum 3("." Snum) "[]" isn't present in specifically. "HELO 66.55.44.33" is right when 66.55.44.33 unresolved. > IPv4-address-literal = Snum 3("." Snum) more sorry, i miss 4.1.2 Command Argument Syntax :-( thanks for Briggs. I'm wrong. Sergey, glad you caught your mistake. It is a very easy one to make as the BNF is spread across several sections (I myself had to reread the RFC carefully several times). Fortunately for antispam purposes though it seems that many spammers also tend to make your same mistake of missing the required brackets, while most legitimate MTA's/MUA's seem to have gotten it correct. That's why I mentioned it as a possible indicator of spam, and hence why Mozilla should do it correctly if it must resort to IP literals. Changing subject (adding EHLO) to make it easier to find. Summary: "HELO domain of sending mail account" should be "HELO host.domain of machine running Mozilla" → "HELO/EHLO domain of sending mail account" should be "HELO/EHLO host.domain of machine running Mozilla" *** Bug 238890 has been marked as a duplicate of this bug. *** *** Bug 239104 has been marked as a duplicate of this bug. *** (In reply to comment #111) > *** Bug 239104 has been marked as a duplicate of this bug. *** I don't understand why I had not found this bug :/ Sorry for the duplicate. Well, is there some kind of status information and when this bug will be fixed / the patches are applied? Never. Plain and simple. Especially considering this bug is 3 years old. After some deep investigation I discovered, that this bug is maintained by MozillaAdversariesTeam (R)(TM) to make sure, that anybody can point out this bug as a proof Mozilla is as buggy as hell. And that Mozilla bug patching is as defined process as weather changes. Follow my steps and buy TheBat for your e-mail needs, 'cos this bug is gonna to stay. Lukasz Flags: blocking1.8a+ Flags: blocking1.7+ Flags: blocking1.8a+ Flags: blocking1.7+ I'll put in a patch for this, when I get a moment this is exactly the same as Christian's patch, but w/o the default value stuff. If later on it becomes clear that a default value is useful, we should do it in such a way that we use common routines for all the pref getting/setting, in the interest of cutting down on bloat. We should clean up this file in general to share more of the boiler-plate pref code (and use nsIPrefBranch throughout). Comment on attachment 145095 [details] [diff] [review] fix w/o default value for machine name can you omit the first hunk of compose/src/nsSmtpServer.cpp (@@ -85,6 +85,7 @@) That patch would be ok for me, I can live with it. I'm strongly against adding this kind of UI for the average user to fill out. They aren't (and shouldn't) have to fill out a computer name or IP address when setting up an SMTP server. I don't see any such UI in outlook or outlook express. I'm pretty sure eudora doesn't either. If anything, we should have a low level necko call where we can ask what the IP address or computer name is for the users's machine. I'm sure there is a windows API call to do just this, if someone knows of it please speak up here. What about GTK2 on linux? Christian, what was wrong with your original patch () which gets the IP address from the DNS server and uses that with the HELO command? did it just need some formatting tweaks to adhere to the RFC? That seems like the correct approach to me. We should not have UI for this feature, it is way to advanced. We should try to do the right thing without the user having to fill in this information in a geeky advanced UI panel. It isn't something a normal user should be filling in. Scott, the patch implements getting the hostname and resolve its address. The pref with UI for a computer name to present is only additionally and I implemented it after some requests. It's not mandatory to fill something in. For me nothing was wrong with my original patch. And it was RFC compliant. Read all the comments what people (reviewers among them) didn't like. My patch is around for nearly a year now and people still come up with "new" ideas and requests to change it. I swear that if we reduce it to simply "EHLO [IP]" people come up with "but in KMail I can enter a name" stuff again. I think it would be time to fix this bug because it makes the creators of the best mail client(tm) look like they wouldn't know how to use SMTP commands... please :) The default should be to use the ipaddress that was used to open the connection to the SMTP-server, not GetMyIPAddress or something similar. That doesn't work on multi-homed hosts or hosts with many ipaddresses (f.i. with dual IPv4/IPv6 network stacks). Just open the connection first, then use getsockname(). That ipaddress should be turned into a FQDN with a DNS-lookup. If this is the default, then most users would report the correct hostname in their SMTP-headers, and this would help in the war against SPAM (SMTP-servers should insists on that !). Now, Mozilla is breaking all the rules, we send only the domainname, not even a FQDN. If people need to override that (f.i. when they're behind a NAT that doesn't translates), they can do that as a hardcoded preference, but I wouldn't encourage that, so I don't think a GUI is needed. BTW : unless you have a static (and public) ipaddress, you will always provide a *fake* name (in a preference, GUI, or from gethostname), because the ipaddress/name will be different everytime. If your SMTP-server really wants to check the header, then you still can't send a mail. So we still need this 'automatic' setting! It looks everyone knows how to do it but dump me is stumpling around for almost a year nw. Why don't you do it yourself? I've to dig in how to use getsockname() (Passing socket id from SMTP protocol handler? That's fun.), how to distinguish v4 from v6 addresses a.s.o.. (In reply to comment #125) > I've to dig in how to use getsockname() (Passing socket id from SMTP protocol > handler? That's fun.), how to distinguish v4 from v6 addresses a.s.o. You want this here, I believe: this assumes that the mailnews code there uses nsISocketTransport... Here's a different strategy after talking to Jo and Christian Eyrich. It removes the UI and just does the following: 1) Get the IP address from the current socket transport 2) Do a DNS lookup on this IP address (does this meet Jo's reverse DNS lookup requirement?) 3) Use the resulting IP address from the DNS lookup if it succeeded, otherwise fall back to the IP address we got for the current socket transport. Do we have to do any special casing for IPv6? I would think the call to PR_NetAddrToString takes care of all that. fwiw my mail server runs on 127.0.0.1 (actually it's an ssh forward). i guess that means i'll always resolve to localhost. proposal of comment 128 insufficient for users behind NAT routers. Nelson, we're almost out of time for 1.7, and I would rather have a patch that works for most configs, and get a NAT bug fixed later. What does the client's DNS think of the untranslated IP address returned by getsockname? If reverse-lookup gives back a name other than one that maps to the NAT'd address on the mail server end, what can be done? How can this situation be detected on the mail client host? /be (In reply to comment #126) >. > Nelson, I program access servers and border gateways (= comparable to NAT) for a living. The problem is that when you have a SMTP-server that wants to check the EHLO header (or a service like Spamcop that wants to check the SMTP-header later), then you want to provide accurate data. But that's actually impossible if you have NAT, so we still need to provide the data manually. And even if you don't have NAT, you still have to provide the *real* ipaddress and name, which will probably change every time if you log on, unless you have a static ipaddress, or use dynamic DNS (in which case you have to provide th ename manually, the reverse DNS-llokup won't find it). There's only 1 solution to this problem, and that is to use a NAT that will transparently change the EHLO header or (better) a NAT that implements a SMTP server on its own (or a transparent proxy, so that it can accept your headers, and add its own. But that's very rare, at least for the NAT's that the public use. I'm using a transparent proxy in my own software. Scott, we will always have people that will have trouble with pedantic SMTP-servers (they can't use any version of Mozilla or Netscape at the moment). We can use the current proposal as a default, and we can provide a preference to override that for people who need it. Attachment 145096 [details] [diff] wasn't so bad after all. We only need to change the default, becuase that wasn't correct (we don't call GetMyHostName anymore, but use getsockname). I would vote not to provide a GUI for it, though. + aResult.Assign(NS_LITERAL_CSTRING("[") + resolvedIP + NS_LITERAL_CSTRING("]")); hmm. shouldn't you only use [] for IP addresses, and leave them out for domain names? Er, no Scott. Firstly socketTransport->GetAddress() gets the address of the remote end, not ours. And secondly, EHLO with IPv6 must no use "[IP]" but "IPv6:IP". I can't test IPv6 nets so I don't know if the system functions already return the prefix or if we have to put it there. Nelson, I know about the NAT problem, see comment #22. But 1. everything's better than the current obviously wrong one and 2. give us a hint how to get our outside IP and we'll implement it. > Er, no Scott. Firstly socketTransport->GetAddress() gets the address of the > remote end, not ours. Argh, this is my fault for misleading mscott. We were looking at nsServerSocket::GetAddress, which does calls PR_GetSockName (which calls getsockname, not getpeername). We should have been looking at nsSocketTransport::GetAddress. Two methods with the same name and opposite function. Darin, save us! Everyone stay cool -- this bug will be fixed for 1.7 or my head will explode. /be Re: comment 126 An ISP that does this is not implementing SMTP properly. RFC 2821 sec. 4.1.4 para. 6 says:. Unless you know of an ISP that actually does this, it's a waste of time to try to work around it. Re: comment 131 Brendan, I agree *completely* that this bug should be fixed in 1.7, even if the fix doesn't work for some users behind NAT routers. A 95% fix is better than a 0% fix. IMO, reverse DNS should not be attempted for addresses in the "private address" space defined in RFC 1918, which is what most systems behind a NAT router will find. mozilla can detect the NAT problem by checking the result of getsockname against those 3 subnets and treating them differently. Re: comment 132 and comment 132=4 I disagree with the remark that "There's only 1 solution to this problem,". A large portion of the NAT-using (or NAT-victim :) population would be satisfied if they could have a way (perhaps via a preference) to specify the value that should be sent in the HELO/EHLO request. Remember, we're trying to find solutions that work for large subsets of the user base. We're NOT trying to hold off all solutions until the perfect one for all users can be found. (Right, Brendan?) Re: comment 126, There are broadband ISPs that are doing this now as an anti-spam measure. Their attitude is that stopping spam is much more important than RFC-compliance for this issue. In some cases, the ISPs see it as a necessary measure to keep themselves out of various RBLs. Given that there *IS* a relatively simple solution that would enable mozilla users to workaround this problem without either (a) finding new ISPs (they may have only one choice for broadband ISP), or (b) replacing their NAT boxes with ones that rewrite outgoing HELO/EHLO (do any exist?), I think mozilla should implement such a solution. But I wouldn't hold up 1.7 for it. After talking to Darin and Brendan, this patch creates a new method on nsISocketTransport: [noscript] PRNetAddr getSelfAddr(); which returns the value of PR_GetSockName on the open smtp socket. For clarity, I also renamed getAddress to be getPeerAddr to avoid confusion. Still to do: IPv6 formatting of the string Attachment #137764 - Attachment is obsolete: true Attachment #145096 - Attachment is obsolete: true Attachment #145727 - Attachment is obsolete: true Comment on attachment 145845 [details] [diff] [review] another attempt >+++ netwerk/base/public/nsISocketTransport.idl 11 Apr 2004 03:25:40 -0000 >@@ -65,7 +65,13 @@ > * Returns the IP address for the underlying socket connection. This > * attribute is only defined once a connection has been established. Existing comment, I realize, but you clone it below. Nit: "only defined" should be "defined only". Fix that and then the question is, what's the return value if there is no established connection? >+NS_IMETHODIMP >+nsSocketTransport::GetSelfAddr(PRNetAddr *addr) >+{ >+ nsresult rv = NS_OK; >+ if (mFDconnected) >+ PR_GetSockName(mFD, addr); >+ else >+ rv = NS_ERROR_FAILURE; // trying to get the address before we have reached the connected state? >+ >+ return rv; > } How about early return for errors, check PR_GetSockName's r.v., and no rv local needed: { if (!mFDconnected || PR_GetSockName(mFD, addr) == PR_FAILURE) return NS_ERROR_FAILURE; return NS_OK; } Also, uniform four-space indentation. /be Great, Scott. That works though I couldn't test it on a multihomed machine. It looks we've no flag or so to see what version the IP is. So would this reasonable (a port and therefore a colon shouldn't be present in the IP, right)? if (strchr(resolvedIP, ':')) format IPv6 else format IPv4 I'd also like to modify the comment to something like +void nsSmtpProtocol::GetUserDomainName(nsACString& aResult) { + // should return the interface ip of the SMTP connection Also I'm not sure if we should reverse resolve the interface IP to a name at all. The resulting name may be resolveable locally but not from outside (NAT). While the IP also may not have any meaning outside, it should be considered as ok by most servers. But I think the probability to generate problems is higher when sending out an unresolveable domain name. This updated patch addresses Brendan's review comments and changes the comment in nsSmtpProtocol per Christian. Christian, I'm not sure about the DNS look up on our IP address either. I can't see what that is doing for us. In my limited testing scenarios (at home with a router and at work) the DNS lookup of my IP address is ALWAYS giving me back the same IP address. I don't think we'll ever get back a 'name' instead of an IP address here because the PRNetAddr object returned from the DNS request only holds IPv4 or IPv6 strings. So it would never give us a name anyway. My preference is to take that part out, and coded to handle IPv6 and get this into 1.7. Attachment #145845 - Attachment is obsolete: true In hope to push things forward, this is a modified version of Scott's last patch. It adds generation of EHLO argument for IPv6 addresses (see my comment #140) and removes the resolve as its benefit is disputable. Attachment #146107 - Attachment is obsolete: true Comment on attachment 146701 [details] [diff] [review] another attempt v3 No no, stupid me. That doesn't even compile. Attachment #146701 - Attachment is obsolete: true Next try, see comment #142 but tested this time. Christian, thanks for finishing off this patch. I just applied your patch but I can no longer send mail with it. At least for me, we never caclulate a name to pass to the EHLO command anymore. the log looks like: 0[294280]: SMTP Send: EHLO 0[294280]: SMTP entering state: 0 0[294280]: SMTP Response: 501 Syntax: EHLO hostname (In reply to comment #140) [...] > Also I'm not sure if we should reverse resolve the interface IP to a name at > all. The resulting name may be resolveable locally but not from outside (NAT). You shouldn't care about NAT in my opinion: A NAT user normally has an SMTP Server to send mail out, regardless of what helo/ehlo header is set. Either the client authenticates through an AUTH command with valid credentials or it is allowed to send from his natted IP. In both cases an SMTP server rejecting a relay due to an invalid helo/ehlo should be regarded as badly configured. If a NAT user doesn't authenticate in no way and wants to send non-local mail, the server should deny in any case, unless it's an open relay (which would not do header check most probably). Sending local mail without authentication could be denied, because of a malformed helo/ehlo header. This case may happen rarely, since a regular eMail user gets any form of valid authentication from his/her ISP. Thus for NAT users the contents of the helo/ehlo header is not important. Security concerns like the one, that internal network information is presented to the outside world, are not related to the MUA. If security is an issue, one should use a firewall, doing Message-ID Masquerade and hiding internal addresses, etc. For non-NAT communication the proposed way of reverse looking up the IP Address and using IP litterals in square brackets, is a perfect solution. The user configurable option is pure luxury, provided my arguments are coherent. Comment on attachment 146908 [details] [diff] [review] another attempt v4 > At least for me, we never caclulate a name to pass to the EHLO command anymore. What the heck? Sorry for muddling this up - again. I did apply your "another attempt" but only the nsSmtpProtocol.cpp part from the "updated patch". So I missed that you changed GetSelfAddr() in the latter and it worked for me. The patch I uploaded is the "updated patch" with the part for nsSmtpProtocol.cpp replaced. To cut a long story short nsSocketTransport::GetSelfAddr(PRNetAddr *addr) { if (mFDconnected || PR_GetSockName(mFD, addr) == PR_FAILURE) return NS_ERROR_NOT_AVAILABLE; else return NS_OK; } always returns error and therefore no IP address is used. I guess you meant something like if (mFDconnected && PR_GetSockName(mFD, addr) == PR_SUCCESS) return NS_OK; else return NS_ERROR_NOT_AVAILABLE; BTW, in PR_NetAddrToString() I saw the address tested for IPv6 with PR_AF_INET6 == addr->raw.family Maybe we also can use this instead strchr(ipAddressString, ':') If iaddr->raw.family is PR_AF_INET6, does ipAddressString always contain a IPv6 address (and vice versa)? Or might an address like 0:0:0:0:0:FFFF:129.144.52.38 become 129.144.52.38 in PR_NetAddrToString? actually I meant to say if (!mFDConnected || ....) My original tree with this patch has a !, but the patch I attached to this bug doesn't. Zoinks. Good catch. > if (!mFDConnected || ....) Yes, that's the other alternative. I had a blackout (yes, another one), thinking in this case the second condition would get evaluated if mFDConnected is false. And I still prefer the "positive assumption" I posted, but that's a personal preference. Can someone (Darin?) can anything about the question how to evaluate the address for IPv6? Comment on attachment 146908 [details] [diff] [review] another attempt v4 We'll need Darin to review the network changes anyway. Darin can you also answer Christian's IPv6 question if you know the answer :) I'll sr after Darin okay's the changes we made to nsISocketTransport. Attachment #146908 - Flags: review?(darin) fixed on the 0.6 tbird branch Comment on attachment 146908 [details] [diff] [review] another attempt v4 else after return in GetSelfAddr, wahhhh! /be please change the IID of netwerk/base/public/nsISocketTransport.idl when changing it, so users of it won't crash when this interface changes. Comment on attachment 146908 [details] [diff] [review] another attempt v4 >Index: mailnews/compose/src/nsSmtpProtocol.cpp >+const char kDNSService_CONTRACTID[] = "@mozilla.org/network/dns-service;1"; nit: nsNetCID.h defines NS_DNSSERVICE_CONTRACTID, so perhaps you want to use that instead? >+void nsSmtpProtocol::GetUserDomainName(nsACString& aResult) ... >+ nsresult rv = NS_OK; > >+ nsCOMPtr <nsIDNSService> dns = do_GetService(kDNSService_CONTRACTID, &rv); nit: you don't need to initialize |rv| here. it will be assigned a value by do_GetService. >Index: netwerk/base/public/nsISocketTransport.idl > * Returns the IP address for the underlying socket connection. This >+ * attribute is defined only once a connection has been established. nit: how about this instead: "Returns the IP address of the socket connection peer." >+ * Returns the IP address on the initiating end. This >+ * attribute is defined only once a connection has been established. nit: looks like attribute could be moved up to the preceding line. >Index: netwerk/base/src/nsSocketTransport2.cpp >+NS_IMETHODIMP >+nsSocketTransport::GetSelfAddr(PRNetAddr *addr) >+{ >+ if (mFDconnected || PR_GetSockName(mFD, addr) == PR_FAILURE) >+ return NS_ERROR_NOT_AVAILABLE; // trying to get the address before we have reached the connected state? >+ else >+ return NS_OK; > } this code is a problem. mFD has to be protected by mLock, otherwise you could crash if mFD is closed while this thread is inside PR_GetSockName. here's a better implementation: NS_IMETHODIMP nsSocketTransport::GetSelfAddr(PRNetAddr *addr) { // we must not call any PR methods on our file descriptor // while holding mLock since those methods might re-enter // socket transport code. PRFileDesc *fd; { nsAutoLock lock(mLock); fd = GetFD_Locked(); } if (!fd) return NS_ERROR_NOT_CONNECTED; nsresult rv = (PR_GetSockName(fd, addr) == PR_SUCCESS) ? NS_OK : NS_ERROR_FAILURE; { nsAutoLock lock(mLock); ReleaseFD_Locked(fd); } return rv; } NOTE: I have not tested this function, but I believe that it should work :) Attachment #146908 - Flags: review?(darin) → review- Comment on attachment 146908 [details] [diff] [review] another attempt v4 >Index: mailnews/compose/src/nsSmtpProtocol.cpp >+ // turn it into a string >+ char ipAddressString[64]; >+ if (PR_NetAddrToString(&iaddr, ipAddressString, sizeof(ipAddressString)) == PR_SUCCESS) >+ { >+ if (strchr(ipAddressString, ':')) // IPv6 style address? >+ aResult.Assign(NS_LITERAL_CSTRING("[IPv6:")); >+ else >+ aResult.Assign(NS_LITERAL_CSTRING("[")); >+ >+ aResult.Append(nsDependentCString(ipAddressString) + NS_LITERAL_CSTRING("]")); >+ } >+ } since you have a PRNetAddr, i recommend using the |af| field to determine the address family instead of searching for a ':' in the address string. - if (strchr(ipAddressString, ':')) // IPv6 style address? + if (iaddr.raw.family == PR_AF_INET6) // IPv6 style address? FYI, in the past Necko stored IPv4 addresses as IPv4-mapped IPv6 address. since the switch to using getaddrinfo under the hood, we now no longer use IPv4-mapped IPv6 addresses, so testing the address family should give you correct results. you might want to add this assertion: NS_ASSERTION(PR_IsNetAddrType(&iaddr, PR_IpAddrV4Mapped) == PR_FALSE, "unexpected IPv4-mapped IPv6 address"); Ok, next round. Thank you for looking through this and answering my question. This patch incorporates everything from comment #153 to #155. As far as I was able to test, GetSelfAddr() does what it should - but Scott's implementation did too since the difference only shows in the worst case scenario if I understood correctly. Attachment #146908 - Attachment is obsolete: true Comment on attachment 147184 [details] [diff] [review] another attempt 5 We can get rid of this altogether can't we since we don't use it? +#include "nsIDNSService.h" + nsCOMPtr <nsIDNSService> dns = do_GetService(NS_DNSSERVICE_CONTRACTID, &rv); + if (NS_SUCCEEDED(rv)) Comment on attachment 147184 [details] [diff] [review] another attempt 5 Oh, er, yes of course. Does everything else now ok for all of you? yeah everything else looks good to me and you've fixed all of Darin's comments. If you can post one last patch that removes the call to the DNS service, I'll put an sr on that patch and I presume Darin will be okay putting an r on it as well. Then we can finally close this one out. Thanks Christian! So, here it is. Attachment #147184 - Attachment is obsolete: true Comment on attachment 147315 [details] [diff] [review] another attempt 6 Thanks for making these changes Christian. Attachment #147315 - Flags: superreview+ Attachment #147315 - Flags: review?(darin) Comment on attachment 147315 [details] [diff] [review] another attempt 6 >+ /** >+ * Returns the IP address on the initiating end. This attribute >+ * is defined only once a connection has been established. >+ */ >+ [noscript] PRNetAddr getSelfAddr(); I think this comment should say "Returns the IP address of the initiating end." That is, just change "on" to "of" r=darin with that minor change. Attachment #147315 - Flags: review?(darin) → review+ fixed on the trunk Status: ASSIGNED → RESOLVED Closed: 16 years ago Resolution: --- → FIXED Comment on attachment 147315 [details] [diff] [review] another attempt 6 a=chofmann for 1.7 Attachment #147315 - Flags: approval1.7? → approval1.7+ Product: MailNews → Core. *** Bug 238275 has been marked as a duplicate of this bug. *** Product: Core → MailNews Core (In reply to Jerry Baker from comment #166) >. Then your Spam against needs to have its scoring corrected. You're asking to change the behavior of Thunderbird, which does the right thing, because your Spam filters are broken. Two wrongs don't make a right.
https://bugzilla.mozilla.org/show_bug.cgi?id=68877
CC-MAIN-2019-35
refinedweb
11,384
66.03
Code. Collaborate. Organize. No Limits. Try it Today. There.: Description using Description = NUnit.Framework.DescriptionAttribute; will cause all appearances of Description to be recognized only in NUnit. You need to add all the corresponding MSTest attributes to get the IDE runner to recognize things. Just add both attributes to each class/method, e.g. [TestClass,TestFixture]. [TestClass,TestFixture] [SetupFixture] [AssemblyInitialize]<br />[AssemblyCleanup] SetupFixture AssemblyIntialize AssemblyCleanup [Setup] [TearDown] [TestFixtureSetUp] [ClassInitialize] [TestFixtureTearDown] [ClassCleanup] [TestFixture] [TestClass] [TestInitialize] [Cleanup] [TestCleanup] [Test] [TestMethod]: public class Test() { public Test() { Setup(); } ~Test() { TearDown(); } public virtual void Setup() { } public virtual void TearD. If you happen to be using TestContext, this will be another conflict, since both frameworks have a same-named object. The MS static initialization methods have it as a parameter, too, whereas for the NUnit framework, it's a static object you can always access. An easy solution is just to alias the MS one, since you probably haven't written any code against it yet, e.g.:. MsTestContext Visual Studio won't give you the testing tools until you add this to the .csproj file of your test project. It goes under Project/PropertyGroup. .csproj > You are now done. If you've been careful, nothing you've done will in any way break this when running under NUnit, and all these tests will now run directly in the Visual Studio IDE as well. ... and you should be good to go. While it may take a little bit of work to update large existing test suites, it's mostly search and replace. For new work, just build this into your template, and it's already.
http://www.codeproject.com/Articles/292902/How-to-run-NUnit-tests-in-Visual-Studio-2010-MSTes
CC-MAIN-2014-23
refinedweb
269
56.45
IPython magic functions One of the cool features of IPython are magic functions - helper functions built into IPython. They can help you easily start an interactive debugger, create a macro, run a statement through a code profiler or measure its’ execution time and do many more common things. Don't mistake IPython magic functions with Python magic functions (functions with leading and trailing double underscore, for example __init__ or __eq__) - those are completely different things! In this and next parts of the article, whenever you see a magic function - it's an IPython magic function. Moreover, you can create your own magic functions. There are 2 different types of magic functions. The first type - called line magics - are prefixed with % and work like a command typed in your terminal. You start with the name of the function and then pass some arguments, for example: In [1]: %timeit range(1000) 255 ns ± 10.3 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each) My favorite one is the %debug function. Imagine you run some code and it throws an exception. But given you weren’t prepared for the exception, you didn’t run it through a debugger. Now, to be able to debug it, you would usually have to go back, put some breakpoints and rerun the same code. Fortunately, if you are using IPython there is a better way! You can run %debug right after the exception happened and IPython will start an interactive debugger for that exception. It’s called post-mortem debugging and I absolutely love it! The second type of magic functions are cell magics and they work on a block of code, not on a single line. They are prefixed with %%. To close a block of code, when you are inside a cell magic function, hit Enter twice. Here is an example of timeit function working on a block of code: In [2]: %%timeit elements = range(1000) ...: x = min(elements) ...: y = max(elements) ...: ...: 52.8 µs ± 4.37 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) Both the line magic and the cell magic can be created by simply decorating a Python function. Another way is to write a class that inherits from the IPython.core.magic.Magics. I will cover this second method in a different article. Creating line magic function That’s all the theory. Now, let’s write our first magic function. We will start with a line magic and in the second part of this tutorial, we will make a cell magic. What kind of magic function are we going to create? Well, let’s make something useful. I’m from Poland and in Poland we are use Polish notation for writing down mathematical operations. So instead of writing 2 + 3, we write + 2 3. And instead of writing (5 − 6) * 7 we write * − 5 6 71. Let’s write a simple Polish notation interpreter. It will take an expression in Polish notation as input, and output the correct answer. To keep this example short, I will limit it to only the basic arithmetic operations: +, -, *, and /. Here is the code that interprets the Polish notation: def interpret(tokens): token = tokens.popleft() if token == "+": return interpret(tokens) + interpret(tokens) elif token == "-": return interpret(tokens) - interpret(tokens) elif token == "*": return interpret(tokens) * interpret(tokens) elif token == "/": return interpret(tokens) / interpret(tokens) else: return int(token) Next, we will create a %pn magic function that will use the above code to interpret Polish notation. from collections import deque from IPython.core.magic import register_line_magic @register_line_magic def pn(line): """Polish Notation interpreter Usage: >>> %pn + 2 2 4 """ return interpret(deque(line.split())) And that’s it. The @register_line_magic decorator turns our pn function into a %pn magic function. The line parameter contains whatever is passed to the magic function. If we call it in the following way: %pn + 2 2, line will contain + 2 2. To make sure that IPython loads our magic function on startup, copy all the code that we just wrote (you can find the whole file on GitHub) to a file inside IPython startup directory. You can read more about this directory in the IPython startup files post. In my case, I’m saving it in a file called: ~/.ipython/profile_default/startup/magic_functions.py (name of the file doesn’t matter, but the directory where you put it is important). Ok, it’s time to test it. Start IPython and let’s do some Polish math: In [1]: %pn + 2 2 Out[1]: 4 In [2]: %pn * - 5 6 7 Out[2]: -7 In [3]: %pn * + 5 6 + 7 8 Out[3]: 165 Perfect, it works! Of course, it’s quite rudimentary - it only supports 4 operators, it doesn’t handle exceptions very well, and given that it’s using recursion, it might fail for very long expressions. Also, the queue module and the interpret function will now be available in your IPython sessions, since whatever code you put in the magic_function.py file will be run on IPython startup. But, you just wrote your first magic function! And it wasn’t so difficult! At this point, you are probably wondering - Why didn’t we just write a standard Python function? That’s a good question - in this case, we could simply run the following code: In [1]: pn('+ 2 2') Out[1]: 4 or even: In [1]: interpret(deque('+ 2 2'.split())) Out[1]: 4 As I said in the beginning, magic functions are usually helper functions. Their main advantage is that when someone sees functions with the % prefix, it’s clear that it’s a magic function from IPython, not a function defined somewhere in the code or a built-in. Also, there is no risk that their names collide with functions from Python modules. Conclusion I hope you enjoyed this short tutorial and if you have questions or if you have a cool magic function that you would like to share - drop me an email or ping me on Twitter! Stay tuned for the next parts. We still need to cover the cell magic functions, line AND cell magic functions and Magic classes. Footnotes
https://switowski.com/python/ipython/2019/02/01/creating-magic-functions-part1.html
CC-MAIN-2019-18
refinedweb
1,032
64.3
import java.applet.*; import java.awt.*; import java.util.Date; import java.text.DateFormat; /** An Applet to display the current time */ public class TimeDate extends Applet implements... import java.applet.*; import java.awt.*; import java.util.Date; import java.text.DateFormat; /** An Applet to display the current time */ public class TimeDate extends Applet implements... I am still waiting for my Hexadecimal answer on a different thread ............ in the mean time only a country full of twitters would want to keep the Fahrenheit system.. Water the most useful... Bugger in New Zealand means something like "what a mistake" or mess I once had a web site oh-bugger.net.nz but people overseas thought I had a sexual problem, so I changed it to dubdubdub. Anyway... Now I am not sure why I posted this thread? I suppose the best idea might be to test to see if that Long variable could fit down into a Int, and if it is outside the range of -2147483648 to... I am learning java (or should I say, filling in the holes) from a reputable learning source (say no more) it said, if fact I am going to feed it back to source I made my purchase from, they said ... Very good, but the time is a hour late, however most likely it is because we have day light saving (Summer in New Zealand) There are a lot of dreamer running the radio station, wanting a white... By the way, how do you send error messages to a text file I tried a few like javac work.java >> error.txt even javac work.java 2>error.txt someone once told me How does the code go? As for the second program there is a error with for (int f= getFahrenheit(); f <= 212; f= getFahrenheit()) System.out.println(f+"F = "+convertToC(f)+"C"); I might try I find this a interesting problem, I have not checked out the link this time, I did read it once before, one would think that since the OAK project failed & it was internet that saved the day and... For a start off I sure your class name should start with a capital letter, (Java Convention ) so public class maxMinValues should read public class MaxMinValues To answer JavaPF's question ... Hi I should of said in the thread subject "A SCJA member of this forum" I was going to get ready to sit the SCJP but time goes on and my learning Java in 21 days book gets stuck in week two, So...
http://www.javaprogrammingforums.com/search.php?s=2c5ba491ec12171dbd4ada630053c3de&searchid=1074004
CC-MAIN-2014-41
refinedweb
423
68.3
One. Go ahead and open the project from where we left off, or download the completed project from last week here. I have also prepared an online repository here for those of you familiar with version control. Draw Cards Action All reactions are actually just actions that are played as a triggered event instead of being played manually by a user interaction. This means that I will create it as a subclass of GameAction just like we did before with the ChangeTurnAction. The purpose of the action is to serve as a context for the system that will apply it. What information is necessary? In this case, we will need to know which player is performing the action and how many cards should be drawn. After the action is performed, it would also be nice if the action could indicate which cards were drawn, so that a view somewhere else could display the drawn cards correctly. public class DrawCardsAction : GameAction { public int amount; public List<Card> cards; public DrawCardsAction(Player player, int amount) { this.player = player; this.amount = amount; } } In the code above, I used a simple “amount” field to hold the target amount of cards that a player is “supposed” to draw. I then have a List of “cards” to hold the result – whatever was successfully drawn. Note that the count of drawn “cards” may not match the target “amount” desired, for example if you tried to draw a card, but your deck was already empty. Player System Next we will create a system to handle the application of the Game Action onto the Game model itself. You could choose to organize this in a variety of ways. I decided to put it in a system for the Player, because when I describe the action itself, I would describe it as a “Player draws a card”, and so if the Player is the one performing the action, then the Player system makes the most sense as the location it occurs. Should any one system become too long (such as more than a few 100 lines of code) I would of course reconsider the location of each bit of code and potentially create additional systems with smaller focus. public class PlayerSystem : Aspect, IObserve { // ... Add Code Here } The Player System class inherits from Aspect. You should know by now that this superclass allows the system to be attached to the same container object as all of our other systems so that they can work together if needed. We also implement the IObserve interface, which will allow it to register and unregister for notifications at appropriate times. public void Awake () { this.AddObserver (OnPerformChangeTurn, Global.PerformNotification<ChangeTurnAction> (), container); this.AddObserver (OnPerformDrawCards, Global.PerformNotification<DrawCardsAction> (), container); } public void Destroy () { this.RemoveObserver (OnPerformChangeTurn, Global.PerformNotification<ChangeTurnAction> (), container); this.RemoveObserver (OnPerformDrawCards, Global.PerformNotification<DrawCardsAction> (), container); } In this case, the notifications will be used initially to listen for the ChangeTurnAction which it will use as the trigger to initiate its own DrawCardsAction. It will also listen for the performance of its own action in order to actually apply the logic at the correct time. void OnPerformChangeTurn (object sender, object args) { var action = args as ChangeTurnAction; var match = container.GetAspect<DataSystem> ().match; var player = match.players [action.targetPlayerIndex]; DrawCards (player, 1); } In the notification handler for the performance of changing turns, we figure out which player is becoming the active player and pass it along to another method that handles creating the actual action context, using the correct player, and a fixed number of cards to draw based on changing turns. void DrawCards (Player player, int amount) { var action = new DrawCardsAction (player, amount); container.AddReaction (action); } The “DrawCards” method was separated on its own because there are likely to be many “triggers” for actually drawing a card(s) and this will allow me to keep my code a little more DRY. The parameters we will need to create the DrawCardsAction are passed directly to the method. Once the action is created, it is also automatically added as a reaction to the action system via the extension method on the container. void OnPerformDrawCards (object sender, object args) { var action = args as DrawCardsAction; action.cards = action.player [Zones.Deck].Draw (action.amount); action.player [Zones.Hand].AddRange (action.cards); } Finally we have the notification handler for actually applying the logic of drawing a card(s). I determine the number of cards to draw based on the action’s context. Then I use another extension method on a List to handle randomly taking elements from a collection (I will show the code for this next). Note that I assign them to the action itself so that views and/or reactions can know “what” cards were taken. Next, we add the drawn cards to the player’s “hand”. There are at least two additional points that will need to be considered here in the future. First, what happens when you successfully draw a card but your hand is full? It could be that the card is “destroyed” – moved to the discard pile. The other issue is to consider what happens when you try to draw a card(s) and did not have enough left in your deck to draw? It could be that the player takes some sort of penalty, such as fatigue damage. In both cases, these should be considered addional reactions to the intended action of drawing a card. List Extensions In the Player System, you may have wondered how I was using a “Draw” method on the List class. I did this by adding a new extension in my pre-existing “Common/Extensions/ListExtensions” class. The methods follow: public static T Draw<T> (this List<T> list) { if (list.Count == 0) return default(T); int index = UnityEngine.Random.Range (0, list.Count); var result = list [index]; list.RemoveAt (index); return result; } public static List<T> Draw<T> (this List<T> list, int count) { int resultCount = Mathf.Min (count, list.Count); List<T> result = new List<T> (resultCount); for (int i = 0; i < resultCount; ++i) { T item = list.Draw (); result.Add (item); } return result; } There are two overloaded implementations of the Draw method. The first does not accept a “count” parameter, assuming that you only want to draw one card. It can be convenient because the result does not need to be wrapped by another object (a List). The second version does take a “count” of cards to try to take. The final results are returned in a List – which could be empty if there were no cards to draw. This allows the call to be safe in that you don’t need to worry about out of bounds errors on the collection you are drawing from. One interesting note about these methods: because the items are drawn at random from the entire collection, the collection itself never needs to be shuffled. This is an on-going truth. For example, you might have imagined needing to shuffle a deck both at the beginning of a game as well as if a game action caused cards to be added to the deck during gameplay. In either case, by using the “Draw” method, each card has the same chance of being picked. Later on for demonstration purposes, I will name the cards, in order, so that it is more evident that random cards can be drawn while no shuffling is necessary. Game Factory Don’t forget – because we added a new system, we need to add it to the factory in order for it to be included as part of the container. game.AddAspect<PlayerSystem> (); Board View In the scene hierarchy, I have added a component marking where the concept of a board would appear. In this case I decided to add a reference at this level to one of my reusable scripts called a SetPooler. A pooler is something I created to aid in the reuse of expensive objects (GameObjects) rather than needing to constantly destroy and re-create them. Without using a pooler, a battle could easily instantiate many cards, but if you use a pooler, you can limit that number because cards that have been discarded can be reused to display newer cards in the future. All I need to add to this script is a new field: public SetPooler cardPooler; I then created a new child GameObject in the scene (in edit mode) called the “Card Pooler” that had a SetPooler component attached. I used the “Card View” prefab as the reference assigned to this pooler. All of the other settings can be left at default, although you may wish to pre-populate it with a few instances – I set mine at 10. Finally I manually connect the BoardView’s “cardPooler” reference to the component instance just created. Deck View Because I have a visual reference to the concept of a player’s deck, I also want to be able to visually approximate how many cards remain in the deck. In other words, as a player draws cards, the width of the deck should slowly shrink until no cards remain. To handle this, I created a method called “ShowDeckSize” that expects a normalized value (0-1) indicating how much of the deck should be visible: public void ShowDeckSize (float size) { squisher.localScale = Mathf.Approximately (size, 0) ? Vector3.zero : new Vector3 (1, size, 1); } Card View I also added a bunch of functionality to the view for displaying the cards themselves. For example, I added a reference to the Card model that needs to be displayed. When drawing cards, I want to support the ability to see both the back and front of the card. While a card is on the deck, it should be face-down, and I should only see it as face-up, if it is a card I am drawing. When my opponent draws a card, I should not be able to see it until he plays it. public bool isFaceUp { get; private set; } public Card card; private GameObject[] faceUpElements; private GameObject[] faceDownElements; void Awake () { faceUpElements = new GameObject[] { cardFront.gameObject, healthText.gameObject, attackText.gameObject, manaText.gameObject, titleText.gameObject, cardText.gameObject }; faceDownElements = new GameObject[] { cardBack.gameObject }; Flip (isFaceUp); } public void Flip (bool shouldShow) { isFaceUp = shouldShow; var show = shouldShow ? faceUpElements : faceDownElements; var hide = shouldShow ? faceDownElements : faceUpElements; Toggle (show, true); Toggle (hide, false); Refresh (); } void Toggle (GameObject[] elements, bool isActive) { for (int i = 0; i < elements.Length; ++i) { elements [i].SetActive (isActive); } } void Refresh () { if (isFaceUp == false) return; manaText.text = card.cost.ToString (); titleText.text = card.name; cardText.text = card.text; var minion = card as Minion; if (minion != null) { attackText.text = minion.attack.ToString (); healthText.text = minion.maxHitPoints.ToString (); } else { attackText.text = string.Empty; healthText.text = string.Empty; } } Draw Cards View Next, we need to add the code that observes the DrawCardsAction notification and presents the results to our users. Note that this could have been placed just about anywhere, such as in the BoardView, or PlayerView component scripts. Adding it to the PlayerView might be the most intuitive since it is the PlayerSystem that performs the logic. However, since there are two PlayerView instances in a scene then we would need multiple listeners and would need to add extra code to “ignore” the action where the player didn’t match. The BoardView might have been another good choice, because it could listen to the notification one time for all players, and then just trigger the matching player to take over. I sort of liked that idea as well, but imagined that the BoardView may end up responsible for far too many tasks. In the end I decided to simply add a new component specific to this action. I created a new script in the Components folder called “DrawCardsView”. I also attached this new component to the same GameObject that the BoardView is attached to, so that I could easily get a reference to the board and its children player views. public class DrawCardsView : MonoBehaviour { void OnEnable () { this.AddObserver (OnPrepareDrawCards, Global.PrepareNotification<DrawCardsAction> ()); } void OnDisable () { this.RemoveObserver (OnPrepareDrawCards, Global.PrepareNotification<DrawCardsAction> ()); } void OnPrepareDrawCards (object sender, object args) { var action = args as DrawCardsAction; action.perform.viewer = DrawCardsViewer; } IEnumerator DrawCardsViewer (IContainer game, GameAction action) { yield return true; // perform the action logic so that we know what cards have been drawn var drawAction = action as DrawCardsAction; var boardView = GetComponent<BoardView> (); var playerView = boardView.playerViews [drawAction.player.index]; for (int i = 0; i < drawAction.cards.Count; ++i) { int deckSize = action.player[Zones.Deck].Count + drawAction.cards.Count - (i + 1); playerView.deck.ShowDeckSize ((float)deckSize / (float)Player.maxDeck); var cardView = boardView.cardPooler.Dequeue ().GetComponent<CardView> (); cardView.card = drawAction.cards [i]; cardView.transform.ResetParent (playerView.hand.transform); cardView.transform.position = playerView.deck.topCard.position; cardView.transform.rotation = playerView.deck.topCard.rotation; cardView.gameObject.SetActive (true); var showPreview = action.player.mode == ControlModes.Local; var addCard = playerView.hand.AddCard (cardView.transform, showPreview); while (addCard.MoveNext ()) yield return null; } } } Because this is a MonoBehaviour, I can simply use the “OnEnable” and “OnDisable” methods to add and remove notification listeners. In this case I am observing the “prepare” phase of the DrawCardsAction as an opportunity to attach a “viewer” to the “perform” phase of the same action. In the viewer method itself I make the very first statement a return statement with a value of “true” – which causes the “perform” key frame to trigger. This means that the Player System would apply the logic, and the DrawCardsAction should have its “cards” field updated so we know which cards have successfully been drawn. Next, I can cache some references such as getting the correct PlayerView which matches the player who is actually drawing cards. I then loop over the number of cards that need to be drawn. Within each loop I determine how many cards are left in the deck and scale the deck view appropriately. Then I use my SetPooler to “Dequeue” a new card view instance (automatically creating new objects if necessary). I parent the view instance to the GameObject in the scene that represents the location for the player’s hand, but I set its world position and rotation to match the top of the player’s deck. You can think of this as the first keyframe in a tween so that we can animate the card from the deck to our hand. However, the animation needs to be different depending on whether or not it is the local player or the opponent that is drawing the card. Furthermore, there are several actions that can cause a card to need to be put in a players hand besides drawing a card from a deck. Therefore, I implemented the rest of this viewer’s animation in another script called the HandView. Hand View In the Hand View, I created a public method “AddCard” which accepts a transform reference of a GameObject (that should also have a CardView component attached). It also takes a parameter called “Show Preview” that indicates whether or not the card should animate straight into the player’s hand, or if it should take a small detour so that the player can see what was drawn. public IEnumerator AddCard (Transform card, bool showPreview) { if (showPreview) { var preview = ShowPreview (card); while (preview.MoveNext ()) yield return null; } cards.Add (card); var layout = LayoutCards (); while (layout.MoveNext ()) yield return null; } I created a “ShowPreview” method to handle the display of a drawn card to a user before sliding a card into place among the other cards in the hand. It Tweens the card view from “wherever” it currently is (in our case it will be located at the deck), and animates it moving to the same position as another GameObject that appears in the scene hierarchy – we have cached a reference to it called the “activeHandle”. While a card is between the deck and the display location, we will be checking its rotation. Whenever we determine that the card is physically face-up based on the rotation angle, we tell the CardView component to update itself appropriatly. After reaching the activeHandle position, the card remains still at that location for a second to give a user plenty of time to see it. IEnumerator ShowPreview (Transform card) { Tweener tweener = null; card.RotateTo (activeHandle.rotation); tweener = card.MoveTo (activeHandle.position, Tweener.DefaultDuration, EasingEquations.EaseOutBack); var cardView = card.GetComponent<CardView> (); while (tweener != null) { if (!cardView.isFaceUp) { var toCard = (Camera.main.transform.position - card.position).normalized; if (Vector3.Dot (card.up, toCard) > 0) cardView.Flip (true); } yield return null; } tweener = card.Wait (1); while (tweener != null) yield return null; } Finally, I have added a “Layoutcards” method which adjusts the position of all of the cards in the hand so that they can make room for the newly drawn card. IEnumerator LayoutCards (bool animated = true) { var overlap = 0.2f; var width = cards.Count * overlap; var xPos = -(width / 2f); var duration = animated ? 0.25f : 0; Tweener tweener = null; for (int i = 0; i < cards.Count; ++i) { var canvas = cards [i].GetComponentInChildren<Canvas> (); canvas.sortingOrder = i; var position = inactiveHandle.position + new Vector3 (xPos, 0, 0); cards [i].RotateTo (inactiveHandle.rotation, duration); tweener = cards [i].MoveTo (position, duration); xPos += overlap; } while (tweener != null) yield return null; } Game View System There is one final step to do before trying everything out. At the moment, we haven’t actually created a deck of cards for either player. We need some sort of placeholder data for now, and I want it to have random values for all of the card fields like mana cost, attack and health, etc so that we can see whether or not the views update properly. In addition, I want to name each card in order so that we can see that cards are drawn as if the deck was shuffled, even though we didn’t need to shuffle it. Open up the GameViewSystem script and update the Temp_SetupSinglePlayer method as follows: void Temp_SetupSinglePlayer() { var match = container.GetMatch (); match.players [0].mode = ControlModes.Local; match.players [1].mode = ControlModes.Computer; foreach (Player p in match.players) { for (int i = 0; i < Player.maxDeck; ++i) { var card = new Minion (); card.name = "Card " + i.ToString(); card.cost = Random.Range (1, 10); card.maxHitPoints = card.hitPoints = Random.Range (1, card.cost); card.attack = card.cost - card.hitPoints; p [Zones.Deck].Add (card); } } } Demo At this point we have successfully added everything necessary to implement drawing a card, both for the model behind the scenes and for the view to display it to the user. Save your scene and project and then press “Play” for the “Game” scene. When you press the “End Turn” button you should see the opponent draw a card before it is your turn again. When your turn begins you will also draw a card, but the animation should be different – you should see a preview of your card before it lands face up in your own hand. You wont be able to change turns again until the entire action sequence has played out at which point the game state will be idle again. You can keep drawing cards as long as you like, and should notice the size of the deck shrink over time. Eventually the deck will be depleted and the view for the deck will disappear from the screen. You can continue changing turns, but no additional cards will be drawn. Summary In this lesson, we created our first action-sequence, by causing a new action to occur as a result of another action. Now, whenever a turn is changed, a player will draw a card. We implemented everything necessary on the models, systems and views to make a complete and playable demo. – Drawing Cards” Hi John, amazing tutorial so far. I’m struggling to understand the loop of this archtecture, do you have a link or maybe another post on the blog that explains the workflow for the Game Systems, Actions and notifications? I kinda understand the notification system, but after rereading the code it’s getting harder to follow up with so many systems working simultaneously. Cheers from Brazil! Much of the game systems in this tutorial are unique to this game, due to the complexity of its design. If there is something particular you are struggling with, feel free to ask here or on my forums and I will try to elaborate. I have talked about actions and notifications in other blog posts such as this mini 3 part series where I originally created the notification center: I have also used similar architecture patterns in other tutorial project series. If something doesn’t click in one, maybe it would help to try another such as the Tactics RPG. Good luck!
https://theliquidfire.com/2017/10/02/make-a-ccg-drawing-cards/
CC-MAIN-2021-10
refinedweb
3,432
54.63
One of the most common analysis done with genetic sequences is a dot-plot. This is where we plot two sequences against each other – in the X and Y direction to get a sense of the similarity between them. The dot-plot provides a simple, visual tool which can quickly identify consensus between the sequences. While it was originally developed for the genetic field, the actual concept can be applied to any style of data – allowing professors to detect plagiarism for example. .NET Bio doesn’t provide any direct support for dot-plots, but it’s fairly easy to do. In this post, we will build a simple dot-plot program to compare two sequences (or a sequence against itself), and then we will enhance it with sliding windows to reduce the noise often produced due to the low number of symbols being compared (A/T/C/G). To start with, we will use a WPF application – since I’d like to take advantage of the Model-View-ViewModel pattern, I am going to include my library of helpers () but any MVVM helper library or routines would work here. You can get this from NuGet if you install the NuGet package manager into Visual Studio 2010. We will also use .NET Bio (), the open-source bioinformatics library from Microsoft Research. This library contains the core classes we will need to load, parse and interpret sequence data. Introduction to Dot-Plots The algorithm behind dot-plots is actually quite simple. We will take the first sequence of data and lay it down on the X axis of our graph. We will then take the second sequence and lay it along the Y-axis. Then, we fill in the actual plot by comparing each coordinate position. Where the X and Y are the same, we fill in that cell with a dot – where they are different we leave it blank. As an example, consider the following data: T G C C T G G C G G C C G T A G C G C G G T G G T C C C A C T x x x x x G x x x x x x x x x x x x C x x x x x x x x x x x … When you compare the same sequence against itself you will see a diagonal line produced in the data where X == Y for each position. In our program, once we load the two sequences, we will generate a byte array of matches. As a simplistic example, consider the following code which takes two sequences (_sequences[0] and _sequences[1]) and generates a byte[] of 0x00 and 0xff: private void CalculateDataPlot() { const byte ON = 0xff; const byte OFF = 0x00; long width = _sequences[0].Count; long height = _sequences[1].Count; _plotData = new byte[height,width]; Parallel.For(0, height, row => { for (long column = 0; column < width; column++) { _plotData[row,column] = _sequences[0][column] == _sequences[1][row] ? ON : OFF; } }); OnPropertyChanged(() => PlotData); } We can then easily take this produced data and data bind it to an Image control in WPF using a ValueConverter to create a new BitmapSource: public class ByteArrayToImageConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { byte[,] values = value as byte[,]; int height = values.GetLength(0); int width = values.GetLength(1); byte[] buffer = new byte[height*width]; int i = 0; for (int row = 0; row < height; row++) { for (int col = 0; col < width; col++) { buffer[i++] = values[row, col]; } } return BitmapSource.Create(width, height, 96, 96, PixelFormats.Gray8, null, buffer, width); } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } } Loading a very simple sequence and comparing it against itself reveals the following image: Notice the heavy amount of noise in the produced image? The problem is we have a ton of random matches – in fact we have a probability of a 1/4 (25%) match given an alphabet of 4 characters! This background noise is completely uninteresting in the data analysis, in fact it’s downright distracting to what we’d like to see. To remove/reduce this noise, what we need to do is apply a filter to the data by forcing the size of the window required to produce a dot to be larger than it’s current value (1). We’ll do that next week – stay tuned! Here’s the 1st draft of the code — Sequence Dot Plot Part
http://julmar.com/blog/open-source/dot-plotting-with-net-bio/
CC-MAIN-2017-22
refinedweb
750
59.23
0 Hi guys I am trying to build a program which scans data from a text file, but I keep getting an error when trying to compile it. The code import javax.swing.*; import java.awt.*; import java.awt.event.*; import java.util.Scanner; import java.io.*; class anything extends JFrame { public anything() { File stuff = new File(stuff.txt); Scanner scan = Scanner(stuff); int length; int count = 0; double place; while(scan.hasNextDouble()) { place = scan.NextDouble(); count++; } This isn't the whole program, but is the part which has Scanner. The errors cannot find symbol variable txt cannot find symbol method Scanner(java.io.File) cannot find symbol method NextDouble() So what is the problem, I have a feeling it has to do with the importing of scanner or something.
https://www.daniweb.com/programming/software-development/threads/148925/problem-using-java-util-scanner
CC-MAIN-2017-17
refinedweb
130
75.61
Hi, I tried to run numpy, but get several problems. I work with the following s/w: OS - Window 7 professional Python v.3.3.2 Numpy 1.7.0 I installed Numpy and after python setup.py install copied a new folder Bulid\Numpy into C:\Python33\Lib\dite-packages But when I run any module from numpy: for example: import fft I have got the following error: ImportError: Error importing numpy: you should not try to import numpy from its source directory; please exit the numpy source tree,and relaunch you python interpreter from there Does anybody know how to solve this problem? Sincerely ok21
http://forums.devshed.com/python-programming-11/numpy-946466.html
CC-MAIN-2015-14
refinedweb
107
60.85
43544/how-to-make-the-instance-ip-static-in-nature-aws Hey @Laksha! By default, all instances have a dynamic public IP address which changes every time you stop or restart the instance. You could use ElasticIP to assign a static IP to your instance. Follow these steps: On the left side, go to Network and Security -> ElasticIP -> Allocate New IP Assign this new address to an instance import boto3 ec2 = boto3.resource('ec2') instance = ec2.create_instances( ...READ MORE Using the AWS Console. You can follow .. Hey @Robii, Start with some basic research about ...READ MORE You can begin using Boto 3. Boto is ...READ MORE OR
https://www.edureka.co/community/43544/how-to-make-the-instance-ip-static-in-nature-aws
CC-MAIN-2019-30
refinedweb
107
70.39