text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Opened 6 years ago Closed 6 years ago #9919 closed defect (fixed) AttributeError: 'NoneType' object has no attribute 'splitlines' Description I am getting the following error with Trac 0.13dev-r10991 running on apache httpd ( 2.8 (WSGIProcessGroup /trac/trac1 WSGIApplicationGroup %{GLOBAL})) and newest XmlRpcPlugin: 2012-03-22 12:56:47,923 Trac[web_ui] DEBUG: RPC incoming request of content type 'text/xml' dispatched to <tracrpc.xml_rpc.XmlRpcProtocol object at 0x7f719e8ffcd0> 2012-03-22 12:56:47,924 Trac[build\bdist.win32\egg\trac\web\main] "/usr/local/lib/python2.6/dist-packages/TracXMLRPC-1.1.2_r11306-py2.6.egg/tracrpc/web_ui.py", line 69, in process_request self._rpc_process(req, protocol, content_type) File "/usr/local/lib/python2.6/dist-packages/TracXMLRPC-1.1.2_r11306-py2.6.egg/tracrpc/web_ui.py", line 139, in _rpc_process proto_id = protocol.rpc_info()[0] File "/usr/local/lib/python2.6/dist-packages/TracXMLRPC-1.1.2_r11306-py2.6.egg/tracrpc/xml_rpc.py", line 76, in rpc_info return ('XML-RPC', prepare_docs(self.__doc__)) File "/usr/local/lib/python2.6/dist-packages/TracXMLRPC-1.1.2_r11306-py2.6.egg/tracrpc/util.py", line 49, in prepare_docs return ''.join(l[indent:] for l in text.splitlines(True)) AttributeError: 'NoneType' object has no attribute 'splitlines' I tried connecting with eclipse mylyn (newest update). User anonymous has permission XML_RPC. Attachments (0) Change History (4) comment:1 follow-up: 2 Changed 6 years ago by comment:2 follow-up: 3 Changed 6 years ago by I suspect you are running Python with 'optimize' settings on. Good guess! That was the problem. I had set WSGIPythonOptimize 2 in my wsgi.conf. Thank you for your immediate and informative answer !! Among other things this strips away docstrings that the plugin uses, and no doubt lots of other parts of Trac and plugins rely on. Well, we are running Trac now for almost a year in production mode and hadn't any problems until now. We're using AccountManagerPlugin and some self-made plugins (see list). Is Trac really using the named features (docstrings, etc.) that extensive?! Trac++ relies on runtime introspection to such a large extent that running Python with optimization switches is generally discouraged (and unsupported). What's Trac++? Haven't heard of that. comment:3 Changed 6 years ago by Is Trac really using the named features (docstrings, etc.) that extensive?! For much of documentation needs, yes. Most documentation (like help for wiki macros). And for RPC the API docs will not be very useful without it. However, perhaps the plugin function should at least be made not to fail. Like just returning an empty string if there is no text. Reopening ticket to remember to get that done. What's Trac++? Haven't heard of that. Heh. Trac and all related dependencies, plugins and various enhancements = "and more". I didn't feel like listing them all... comment:4 Changed 6 years ago by (In [11410]). Looks like it hasn't got anything to do with Trac or the Trac version, and the code in the plugin is verified and correct. I suspect you are running Python with 'optimize' settings on. Among other things this strips away docstrings that the plugin uses, and no doubt lots of other parts of Trac and plugins rely on. Trac++ relies on runtime introspection to such a large extent that running Python with optimization switches is generally discouraged (and unsupported). Could you check optimization options for your config? Likely WSGIPythonOptimizesetting for mod_wsgi. Check docs:
https://trac-hacks.org/ticket/9919
CC-MAIN-2017-47
refinedweb
578
51.55
What is an abstract class? This type of question is often asked by interviewers, so due to that I have decided to write this article to help job seekers and anyone who wants to learn about abstract classes. So let us start from the basics so beginners can also understand. What is Abstract Class ? If the class is created for the purpose of providing common fields and members to all subclasses then this type of class is called an abstract class. Syntax of creating an abstract class An abstract class is created in C# using the abstract keyword. Example abstract public class Pen { } Use of Abstract Class There may sometimes be a situation where it is not possible to define a method in a base class and instead every derived class must override that method. In this situation abstract classes or methods are used.. Some of the key points of abstract classes are: - A class may inherit only one abstract class. - Members of an abstract class may have any access modifier. - Abstract class methods may OR may not have an implementation. - Such a type of class cannot be instantiated but it allows the other classes to inherit from it, in other words from an abstract class. - Abstract classes allow definition of fields and constants. Now let us see the abstract class with an example. Use the following procedure to create a console application: - Open Visual Studio from Start - - All programs -- Microsoft Visual Studio. - Then go to to "File" -> "New" -> "Project..." then select Visual C# -> Windows -> Console application. - After that specify the name such as abstract class or whatever name you wish and the location of the project and click on the OK button. The new project is created. Add the namespace at the top of the program.cs class file, as in: using System; Now run the application. The following output will be displayed: Summary In the preceding article ,I have briefly explained abstract classes to make them understandable to beginners and newcomers. If you have any suggestion regarding this article then please contact me.
https://www.compilemode.com/2015/05/abstract-class-in-c.html
CC-MAIN-2020-24
refinedweb
345
64.71
: <img /> vs. <img></img>. 0.0.4 2013-02-20 - Fix a bug where we compared ->name() instead of ->localname(). - This caused <db:info /> not to match as <info /> if db: and the default namespace are the same. 0.0.3 2013-01-06 - Add t/00basic.t to get some diagnosis of the libxml2 versions. - Made the minimum required XML::LibXML version 2.0014 to avoid some test issues. 0.0.2 2012-12-05 - Sped up the tests in t/xml_compare1.t by not loading external DTDs from w3.org. - - Thanks to Slaven Rezic (SREZIC) for the report and the analysis. - TODO : this may be a useful feature to add to the options hash-ref. 0.0.1 2012-12-04 - First version.
https://metacpan.org/changes/distribution/Test-XML-Ordered
CC-MAIN-2017-17
refinedweb
125
79.26
Re-Imagining Linux Platforms to Meet the Needs of Cloud Service Providers Watch→ Kernel.org Mirrors Full Changelog Ok, it's out there now, again there may well have been patches I missed, I concentrated on the fundamental ones (ie starting LSM merge, and most noticeably probably merging Rik's rmap patch through Andrew Morton along with other work by Andrew). AGP split up and cleaned up, IDE patches 99-100 and 4GB FAT32 support. And the inevitable USB updates, of course. Linus ---- Summary of changes from v2.5.26 to v2.5.27 ============================================ <lists@mdiehl.de>: o USB: patch to make USB_ZERO_PACKET work in ohci-hcd.c <mark@alpha.dyndns.org>: o USB: ov511 1.61 for 2.5.25 <stuartm@connecttech.com>: o USB: usbserial.c fixup Andrew Morton <akpm@zip.com.au>: o minimal rmap o leave truncate's orphaned pages on the LRU o avoid allocating pte_chains for unshared pages o VM instrumentation o O_DIRECT open check o restore CHECK_EMERGENCY_SYNC. Again o inline generic_writepages() o alloc_pages cleanup o direct_io mopup o remove add_to_page_cache_unique() o writeback scalability improvements o readahead optimisations David Brownell <david-b@pacbell.net>: o USB: usbnet queuing Greg Kroah-Hartman <greg@kroah.com>: o agpgart: Split agpgart code into separate files o agpgart: fix syntax error in the i8x0 file o agpgart: renamed the agp files to make more sense o agpgart: added agp prefix to the debug printk o LSM: move the struct msg_msg and struct msg_queue definitions out of the msg.c file to the msg.h file o LSM: move struct shmid_kernel out of ipc/shm.c to include/linux/shm.h o agpgart: added "-agp" to the .c files that are for specific hardware types, based on mailing list comments o USB: removed the usb-ohci driver, as it is no longer being used o LSM: change BUS_ISA to CTL_BUS_ISA to prevent namespace collision with the input subsystem o LSM: Add all of the new security/* files for basic task control o LSM: Enable the security framework. This includes basic task control hooks o LSM: for now, always set CONFIG_SECURITY_CAPABILITIES to y Hirofumi Ogawa <hirofumi@mail.parknet.co.jp>: o Add 4G-1 file support to FAT32 Linus Torvalds <torvalds@home.transmeta.com>: o Remove "tristate" for CONFIG_SECURITY_CAPABILITIES, make it unconditional for now. Martin Dalecki <dalecki@evision-ventures.com>: o 2.5.26 IDE 99 o IDE 100 Neil Brown <neilb@cse.unsw.edu.au>: o MD - Remove bdput calls from raid personalities o MD - Remove dead consistancy checking code from multipath o MD - Get multipath to use mempool o MD - 27 - Remove state field from multipath mp_bh structure o MD - Embed bio in mp_bh rather than separate allocation o MD - Don't "analyze_sb" when creating new array o MD - Use symbolic names for multipath (-4) and linear (-1) o MD - Get rid of find_rdev_all o MD - Rdev list cleanups o MD - Pass the correct bdev to md_error o MD - Move md_update_sb calls o MD - Set desc_nr more sanely o MD - Remove concept of 'spare' drive for multipath o MD - Improve handling of spares in md o MD - Add raid_disk field to rdev o MD - Add in_sync flag to each rdev o MD - Add "degraded" field to md device o MD - when writing superblock, generate from mddev/rdev info o MD - Don't maintain disc status in superblock o MD - Remove old_dev field o MD - nr_disks is gone from multipath/raid1 o MD - Remove number and raid_disk from personality arrays o MD - Move persistent from superblock to mddev o MD - Remove dependance on superblock o MD - Remove the sb from the mddev o MD - Change partition_name calls to bdev_partition_name were possible o MD - Get rid of dev in rdev and use bdev exclusively Oliver Neukum <oliver@neukum.name>: o USB: lots of locking and other SMP race fixes Rusty Russell <rusty@rustcorp.com.au>: o drivers/usb/* designated initializer rework Trond Myklebust <trond.myklebust@fys.uio.no>: o Fix NFS locking bug o Fix typo in net/sunrpc/xprt.c Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled.
https://www.linuxtoday.com/developer/2002072000526NWKNDV
CC-MAIN-2018-47
refinedweb
686
52.09
Ticket #467 (closed defect: fixed) Attachments Change History comment:2 Changed 9 years ago by ferdy Also, a first look at the code shows that there is a VERY small window where signal_handler would try to Lock the same lock (tasks_mutex) that the main paludis thread has already locked. Thus leading to a deadlock. Now, once you see it hung, attach gdb to it and post backtraces of every thread. - ferdy comment:3 Changed 9 years ago by ciaranm I think it's the new run_command handler that's screwing things up... We did have a very small window, in that I stopped noticing it ever locking up, up until a few days ago. comment:4 Changed 9 years ago by ciaranm This consistently hangs for me: nice sudo paludis -i1 hilite & sleep 3 ; sudo killall paludis ; fg Can't figure out what it is. It's related to the new run_command handler, but not the way I thought. This doesn't help: diff --git a/paludis/util/system.cc b/paludis/util/system.cc index 00befb2..cf17b0f 100644 --- a/paludis/util/system.cc +++ b/paludis/util/system.cc @@ -529,6 +529,14 @@ paludis::run_command(const Command & cmd) throw RunCommandError("select failed: " + stringify(strerror(errno))); else if (0 == retval) { + int status(-1); + if (0 != waitpid(child, &status, WNOHANG)) + { + Log::get_instance()->message(ll_warning, lc_no_context) << "Child process " << child << + " appears to have exited abnormally with exit status " << status; + return (WIFSIGNALED(status) ? WTERMSIG(status) + 128 : WEXITSTATUS(status)); + + } Log::get_instance()->message(ll_debug, lc_context) << "Waiting for child " << child << " to finish"; continue; } comment:5 Changed 9 years ago by ciaranm - Status changed from new to closed - Resolution set to fixed r4294. Unix sucks. comment:6 Changed 9 years ago by Mellen - Cc tais.hansen@… added - Status changed from closed to reopened - Resolution fixed deleted I've attached a backtrace of paludis-0.26.0_alpha9 hanging on ctrl-c while calculating dependencies on "sudo paludis --install --pretend --compact everything". Changed 9 years ago by Mellen - attachment gdb-paludis.log added Updated backtrace as ciaranm requested on irc. Provide more info. Does it hang ALWAYS? Or do you have to press ctrl+c twice to make it hang?
http://paludis.exherbo.org/trac/ticket/467
CC-MAIN-2016-50
refinedweb
359
56.76
meta data for this page Discussion about TITE Wiki LTY in title bar ⇒ LUT or maybe the title simple as “LUT TITE Wiki”. “TITE” is used short international name for department ? Feature request: page list plugin To automatically list pages in name space. Navigation feature request Navigation bar that shows the namespace hierarchy. Now it is hard to get into namespace main start space. – hevi Namespace for the users Now it is only one page, that is not enought. –hevi User link up and user management is provided by the Monobook layout plugin, not by the dokuwiki. Changed to :wiki:user:<loginname>:start . This is hard to use too, too long hierarchy. :wiki namespace should contain wiki system related pages not users. Monobook hardcodes the :wiki prefix in functions, so it is not easily changed.
https://www.it.lut.fi/wiki/doku.php/talk/admin/faq
CC-MAIN-2020-50
refinedweb
135
67.76
Eh guys, i'm launching a new Syndication format. yeah. i'll call it JYS (for Jean Yves Syndication). Why not? everybody's doing this nowadays! Please, if you haven't done it today, read the RSS2.0 spec. It's so simple that a French could understand it. And furthermore it works for some time now. Some people caused confusion by breaking RSS0.9x evolution calling a new format RSS1.0, with no backward compatibility and reimposing an RDF syntax that had be taken away since the first days of RSS. (If 10 people exploit the RDF syntax of RSS1.0 in their app, please tell us how) And now, some people (the same?!) want to create more and more confusion by creating a new format from scratch. Do you think RSS is so much complicated and broken that it has to be rebuilt from scratch ? No way ! The only small justifications for this undeavor are of this level : "We don't know if html is allowed in the title tag". Oh yeah ! let's build a new format then ! I can now see a trend. Look at XML-RPC/SOAP. Dave insisted in keeping the format simple, but others wanted more and more and more features. We have now a complicated SOAP, and a very easy and powerful XML-RPC that everybody loves to use. Note that XML-RPC has some problems too (it is ascii based for example), but having a frozen spec that works simply for most cases is a lot better that letting it grow the SOAP way. Let's froze RSS2.0 (namespaces let everyone extend it if necessary), and explicite more what is the common uses of some tags contents. Some people will still be pissed in 10 years because RSS is not RDF based. But eh maybe they'll still be pissed about it in 20 years when nobody remembers what RDF was. Note : You don't build formats with insults and mean feelings. Some people should stop acting like 11 years old kids. Previous Day / Jour Précédent Where : Nantes, France Form : AIM : jystervinou ICQ : 143406028 MSN : jy@stervinou.com
http://radio.weblogs.com/0001103/2003/06/26.html
crawl-002
refinedweb
358
75.71
NAME getdtablesize - get descriptor table size SYNOPSIS #include <unistd.h> int getdtablesize(void); Feature Test Macro Requirements for glibc (see feature_test_macros(7)): getdtablesize(): _BSD_SOURCE || _XOPEN_SOURCE >= 500 DESCRIPTION getdtablesize() returns the maximum number of files a process can have open, one more than the largest possible value for a file descriptor. RETURN VALUE The current limit on the number of open files per process. ERRORS On Linux, getdtablesize() can return any of the errors described for getrlimit(2); see NOTES below. CONFORMING TO SVr4, 4.4BSD (the getdtablesize() function first appeared in 4.2BSD). It is not specified in POSIX.1-2001; portable applications should employ sysconf(_SC_OPEN_MAX) instead of this call. NOTES getdtablesize() is implemented as a libc library function. The glibc version calls getrlimit(2) and returns the current RLIMIT_NOFILE limit, or OPEN_MAX when that fails. The libc4 and libc5 versions return OPEN_MAX (set to 256 since Linux 0.98.4). SEE ALSO close(2), dup(2), getrlimit(2), open(2) COLOPHON This page is part of release 2.77 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.ubuntu.com/manpages/hardy/en/man2/getdtablesize.2.html
CC-MAIN-2015-27
refinedweb
191
58.89
0 Hi everyone.... I want to develop a music lexicon (tyoe band name --> give description) and I want to add terms (bandName and description) I use ArrayList but I want to change. I have also the code that read a text file (bands.txt) but I dont know haw it works... Any suggestion? private void addTerms () { addTerm("Anathema","A music band from Liverpool, UK."); } import java.io.*; public class ReadFile { public static void main(String[] args) { try { //Sets up a file reader to read the file passed on the command line one character at a time FileReader input = new FileReader("bands.txt"); //Filter FileReader through a Buffered read to read a line at a time BufferedReader bufRead = new BufferedReader(input); String line; // String that holds current file line int count = 0; // Line number of count while((line = bufRead.readLine()) != null){ System.out.println(count+": "+line); line = bufRead.readLine(); count++ } bufRead.close(); } catch (ArrayIndexOutOfBoundsException e){ /* If no file was passed on the command line, this expception is generated. A message indicating how to the class should be called is displayed */ System.out.println("Usage: java ReadFile filename\n"); }catch (IOException e){ // If another exception is generated, print a stack trace e.printStackTrace(); } } }
https://www.daniweb.com/programming/software-development/threads/322338/problem-import-text-and-conbined-with-arraylist
CC-MAIN-2016-50
refinedweb
201
56.15
PageStack vs pageStackWindow element - Which one should I use and when/why? - Does pageStackWindow contain pageStack? I mean does pageStackWindow instantiate pageStack element? Do I have to instantiate it myself? I guess I don't have to. - Can I use both in Symbian and in MeeGo? More on pageStack More on pageStackWindow - srikanth_trulyit last edited by "1. Which one should I use and when/why?" The docs has it all "Applications can be constructed using the PageStackWindow that brings the page navigation, statusbar, toolbar, and platform's common look and feel." "2. Does pageStackWindow contain pageStack? I mean does pageStackWindow instantiate pageStack element? Do I have to instantiate it myself? I guess I don’t have to" PageStack is a primitive way to manage stack of application pages. I suggest using PageStackWindow as it takes the extra effort of instantiating required application window components. You can access pagestack via pageStack property. "3. Can I use both in Symbian and in MeeGo?" Yes. Thanks, Srikanth Hi Srikanth, Thanks for your answer. I wasn't sure which one of the two should I use. If pageStackWindow is newer I will use it instead. I got confused because most of the documentation uses pageStack in combination with window instead. The thing that is also bothering me is than when i use pageStackWindow and I change my orientation I get the error message: "ReferenceError: Can’t find variable: statusBar" When I use pageStack in combination with window I don't see this message. I have described this here: - srikanth_trulyit last edited by Using PageStackWindow will do the combination of window and pageStack. PageStackWindow is introduced newly in QtQuick 1.1 hence most documents prior to it refer to 1.0 implementation. By the way, how are you accessing statusBar? It is not provided from PageStackWindow or Window. Thanks, Srikanth You can easly recreate this error. Just create new project using Qt Quick components. It will create Hello World app. When you run it and rotate the screen you will get this error in console view. “ReferenceError: Can’t find variable: statusBar” I haven't added any code. My main.qml look like this @import QtQuick 1.1 import com.nokia.symbian 1.1 PageStackWindow{ id:window initalPage: MainPage {tools: toolBarLayout} showStatusBar:true showToolBar:true ToolBarLayout{ .... } }@ If I comment out showStatusBar:true I have noticed I don't get this message anymore. Am I doing something wrong here? I am using Qt SDK 1.1.4. Qt Creator 2.4.0 Based on Qt 4.7.4 (32 bit) Built on Dec 16 2011 at 03:25:42 I have Windows 7 OS
https://forum.qt.io/topic/12915/pagestack-vs-pagestackwindow-element
CC-MAIN-2022-27
refinedweb
438
61.12
![if !IE]> <![endif]> Operating System–Provided Atomics Using the information in this chapter, it should be possible for you to write some fundamental atomic operations such as an atomic add. However, these operations may already be provided by the operating system. The key advantage of using the operating system–provided code is that it should be correct, although the cost is typically a slight increase in call overhead. Hence, it is recommended that this code be taken advantage of whenever possible. gcc provides the operations such as __sync_fetch_and_add(), which fetches a value from memory and adds an increment to it. The return value is the value of the variable before the increment. Windows defines InterlockedExchangeAdd(), which provides the same operation, and Solaris has a number of atomic_add() functions to handle different variable types. Table 8.1 on page 311 provides a mapping between the atomic operations provided by gcc, Windows, OS X, and Solaris. An asterisk in the func-tion name indicates that it is available for multiple different types. The code in Listing 8.15 uses the gcc atomic operations to enable multiple threads to increment a counter. The program creates ten threads, each of which completes the function work(). The original program thread also executes the same code. Each thread increments the variable counter 10 million times so that at the end of the program, the variable holds the value 110 million. If the atomic add operation is replaced with a nor-mal unprotected increment operation, the output of the program is unpredictable because of the data race that this introduces. Listing 8.15 Using Atomic Operations to Increment a Counter #include <stdio.h> #include <pthread.h> volatile int counter=0; void *work( void* param ) { int i; for( i=0; i<1000000; i++ ) { __sync_fetch_and_add( &counter, 1 ); } } int main() { int i; pthread_t threads[10]; for( i=0; i<10; i++ ) { pthread_create( &threads[i], 0, work, 0 ); } work( 0 ); for( i=0; i<10; i++ ) { pthread_join( threads[i], 0 ); } printf( "Counter=%i\n", counter ); } The main advantage of using these atomic operations is that they avoid the overhead of using a mutex lock to protect the variable. Although the mutex locks usually also have a compare and swap operation included in their implementation, they also have overhead around that core operation. The other difference is that the mutex lock operates at a dif-ferent location in memory from the variable that needs to be updated. If both the lock and variable are shared between threads, there would typically be cache misses incurred for obtaining the lock and performing an operation on the variable. These two factors combine to make the atomic operation more efficient. Atomic operations are very effective for situations where a single variable needs to be updated. They will not work in situations where a coordinated update of variables is required. Suppose an application needs to transfer money from one bank account to another. Each account could be modified using an atomic operation, but there would be a point during the entire transaction where the money had been removed from one account and had not yet been added to the other. At this point, the total value of all accounts would be reduced by the amount of money in transition. Depending on the rules, this might be acceptable, but it would result in the total amount of money held in the bank being impossible to know exactly while transactions were occurring. The alternative approach would be to lock both accounts using mutex locks and then perform the transaction. The act of locking the accounts would stop anyone else from reading the value of the money in those accounts until the transaction had resolved. Notice that there is the potential for a deadlock situation when multiple transactions of this kind exist. Suppose an application needs to move x pounds from account A to account B and at the same time another thread in the application needs to move y pounds from account B to account A. If the first thread acquires the lock account A and the second thread acquires the lock on account B, then neither thread will be able to make progress. The simplest way around this is to enforce an ordering (perhaps order of memory addresses, low to high) on the acquisition of locks. In this instance, if all threads had to acquire lock A before they would attempt to get the lock on B, only one of the two threads would have succeeded in getting the lock on A, and consequently the dead-lock would be avoided. Related Topics Copyright © 2018-2020 BrainKart.com; All Rights Reserved. Developed by Therithal info, Chennai.
https://www.brainkart.com/article/Operating-System---Provided-Atomics_9518/
CC-MAIN-2019-30
refinedweb
777
51.38
I've periodically come across this SPException: "This task is currently locked by a running workflow and cannot be edited" when using the SPWorkflowTask.AlterTask method, even when it seems that the workflow is not in fact locked, and is instead patiently listening for an OnTaskChangedEvent. It turns out that this exception is thrown when the WorkflowVersion of the task list item is not equal to 1, which, if you believe the error message is the same thing as checking to see if the workflow is locked. Only it isn't - apparently sometimes at least, the Workflow version is non zero and the workflow is not locked (the InternalState flag of the workflow does not include the Locked flag bits). I'm not sure why this is occurring - maybe the error message is misleading - but the following code demonstrates a dodgy sort of a workaround that I've found useful. I've no idea if this is a good idea or not, so please treat with skepticism... using System; using Microsoft.SharePoint; using Microsoft.SharePoint.Workflow; using System.Collections; using System.Threading; namespace DevHoleDemo { public class WorkflowTask { public static bool AlterTask(SPListItem task, Hashtable htData, bool fSynchronous, int attempts, int millisecondsTimeout) { if ((int)task[SPBuiltInFieldId.WorkflowVersion] != 1) { SPList parentList = task.ParentList.ParentWeb.Lists[new Guid(task[SPBuiltInFieldId.WorkflowListId].ToString())]; SPListItem parentItem = parentList.Items.GetItemById((int)task[SPBuiltInFieldId.WorkflowItemId]); for (int i = 0; i < attempts; i++) { SPWorkflow workflow = parentItem.Workflows[new Guid(task[SPBuiltInFieldId.WorkflowInstanceID].ToString())]; if (!workflow.IsLocked) { task[SPBuiltInFieldId.WorkflowVersion] = 1; task.SystemUpdate(); break; } if (i != attempts - 1) Thread.Sleep(millisecondsTimeout); } } return SPWorkflowTask.AlterTask(task, htData, fSynchronous); } } } 18 comments: Thank you so much for the tip, your code did the trick. I think the reason WorkflowVersion changes is because there's a different version of the workflow dll. So when you recompile your workflow and DLL is changed in the GAC while there are still running workflows, trying to finish any of those workflows might give you "This task is currently locked.." error. Elena. Thank you very much. I had the same problem, your solution works fine. How would I use this code snippet? Do you have an example of that? You could use it as you would the SPWorkflowTask.AlterTask method, adding the attempts and timeout parameters. Here's someone who was having issues and posted some code: Thank u very very much! With this I was able to resurrect some workflows having randmonly this issue. What I would like is to find out the reason. Why this attribute is left with values like 1024, 8096? It's cleary a flag. I think I will take a look in the WSS 3.0 assemblies to find out when is set. Cheers, Tibi The solution is really great but in our scenario the DLL was same, we never changed it but were facing the issue. Thanks anyway!!! I'm having this problem with my workflow! Can you please give me some directions how to use the code snippet above?? Hey Friend I have same problem, but in my case, workflow is not firing OnTaskChanged event and workflow is locked for ever i dont know how to resolve this problem, can you help me? Hi there, I didn't really know how to implement the code you posted, but I decided to fish around and try some things out manually. I know it's highly frowned upon, but I was able to run some UPDATE queries on the AllUserData table that stored those workflows. I changed the non-1 versions to 1, and it solved my problem (for now). Here's what I did. UPDATE dbo.AllUserData SET tp_WorkflowVersion='1' WHERE tp_WorkflowVersion != '1' AND tp_DirName = 'quality/Lists/Document Review Tasks' You would change tp_DirName to whatever directory/list/task list name you are dealing with. If anyone can shed some light as to how to implement the code in this blog, that would be very helpful! Hi, I have same problem in my case ontaskchanged not fire and workflow locked for ever. i can solve my problem by removing onTaskCreated Activity from my workflow and now my workflow is working properly. my previous workflow CreatetaskWithContentType onTaskCreated While onTaskChanged End While completeTask my workflow now CreatetaskWithContentType While onTaskChanged End While completeTask hi I solved my problem by removing onTaskCreated Activity from workflow I have same problem, but in my case, workflow is not firing OnTaskChanged event and workflow is locked for ever Hi thx so much this saved my day! I included your method in my full example to notify a workflow from an external event ( Thanks very much for your assistance on this matter. I had to dynamically set a field on the workflow task item which required a item.Update() method call. This update was then causing a lock. By setting the task workflow version back to 1 this removed the problem. In my case I had this problem on a workflow that was an out-of-the-box MOSS workflow. I found the comment from Mind to be very helpful. Of course this is highly discouraged, but it does seem to work perfectly.. SELECT * FROM dbo.AllUserData WHERE tp_WorkflowVersion != 1 AND tp_ID = #### Nothing here worked for me, until I found this: Unlock all locked workflow tasks with a single PowerShell command, ridiculously easy. Nothing here worked for me, until I found this: Unlock all locked workflow tasks with a single PowerShell command, ridiculously easy. Nothing here worked for me, until I found this: Unlock all locked workflow tasks with a single PowerShell command, ridiculously easy. حامد باقرزاده,oh man...u rly helped me. I removed onTaskCreated Activity and it's work. But i can't understand why it's work now :D
http://geek.hubkey.com/2007/09/locked-workflow.html
CC-MAIN-2020-29
refinedweb
954
66.13
I'm trying to find a query through which I can pull the records from the database... Below is the example class Apple < AR::Base has_many :bananas has_many :events, as: :eventable end class Banana < AR::Base belongs_to :apple has_many :events, as: :eventable end class Event < AR::Base belongs_to :eventable, polymorphic: true end You need to combine the apple events and the apple's bananas' events into one collection. The simple approach, if an array suffices, is to call: def events_for apple apple.events + apple.bananas.map(&:events).flatten end If you want the collection to be an ActiveRecord Relation, you could take the following (arel union based) approach: def events_for apple apple_events = apple.events banana_ids = apple.bananas.pluck(:id) banana_events = Event.where(eventable_type: 'Banana', eventable_id: banana_ids ) all_events = Event.from(" ( ( #{apple_events.to_sql} ) union ( #{banana_events.to_sql} ) ) #{Event.table_name} ").distinct end end
https://codedump.io/share/WJFuvYKtN02u/1/rails-query-related-to-polymorphic-association
CC-MAIN-2016-50
refinedweb
139
58.08
13 October 2011 10:27 [Source: ICIS news] SINGAPORE (ICIS)--PetroChina is planning to conduct trial runs at its 3m tonne/year liquefied natural gas (LNG) terminal at ?xml:namespace> “Construction is close to an end,” the source said. The terminal has two 160,000 cubic metres (cbm) LNG tanks and the company has a plan to double its capacity to 6m tonne/year in the second phase of construction, according to the source Dalian LNG terminal is set to receive long-term cargoes from “The LNG purchasing agreement between PetroChina and “Therefore Dalian LNG terminal is set to start official operation from 2012,” he added. Dalian LNG terminal is the second LNG terminals belonging to PetroChina. The company started trial run at its 3.5m tonne/year terminal at Rud
http://www.icis.com/Articles/2011/10/13/9499651/petrochina-to-start-trial-running-dalian-lng-terminal-in.html
CC-MAIN-2014-52
refinedweb
131
50.87
An ASCII-art diagram library for graphs. It supports both parsing existing diagrams and rendering graphs out as an ASCII diagram. You can use it via sbt: libraryDependencies += "com.github.mdr" %% "ascii-graphs" % "0.0.3" Graph layoutGraph layout import com.github.mdr.ascii.layout._ val graph = Graph( vertices = List( "V1", "V2", "V3", "V4", "V5", "V6", "V7"), edges = List( "V1" -> "V2", "V7" -> "V1", "V1" -> "V3", "V1" -> "V4", "V2" -> "V5", "V2" -> "V6")) val ascii = Layouter.renderGraph(graph) println(ascii) This would produce: +---+ |V7 | +---+ | v +-------+ | V1 | +-------+ | || ----- |-------- | --- | v | | +-----+ | | | V2 | | | +-----+ | | | | | | --- --- | | | | | | v v v v +---+ +---+ +---+ +---+ |V5 | |V6 | |V4 | |V3 | +---+ +---+ +---+ +---+ Layout is Sugiyama-style layered graph drawing, and supports multi-edges, cycles, and vertex labels, but not self-loops or edge labels. Other ASCII layout librariesOther ASCII layout libraries - Vijual (Clojure): - Graph::Easy (Perl): Graph parserGraph parser The graph parser is intended for constructing test DSLs, particularly for data which would be much more comprehensible in ASCII art than constructed through regular programming language expressions. For example, directed graphs, trees, 2D games, object graphs, and so on. Typical usage is to parse the diagram into Diagram/ Box/ Edge objects, and then convert those objects into whatever your specific test data happens to be. SyntaxSyntax BoxesBoxes A Box is drawn as follows: +------+ | | | | | | +------+ It can contain text: +---------------+ |The quick brown| |fox jumps over | |the lazy dog. | +---------------+ Or other boxes: +-----+ |+---+| ||+-+|| ||| ||| ||+-+|| |+---+| +-----+ EdgesEdges An Edge connects two boxes: +-----+ | | | |--------- | | | +-----+ | | | | | +-----+ +-----+ | | | | | |-----| | | | | | +-----+ +-----+ Edges can have an arrow at either or both ends: +-----+ | | | |--------- | | | +-----+ | ^ | | v +-----+ +-----+ | | | | | |<--->| | | | | | +-----+ +-----+ You can connect a child box to its parent: +--------------+ | | | +-----+ | | | | | | | |----| | +-----+ | | | +--------------+ Edges can cross using a +: +-----+ | | | | | | +-----+ | +-----+ | +-----+ | | | | | | |-----+---->| | | | | | | +-----+ | +-----+ v +-----+ | | | | | | +-----+ LabelsLabels Edges can have an associated label: +-----+ | | | | | | +-----+ | |[label] | +-----+ +-----+ | | [label] | | | |---------| | | | | | +-----+ +-----+ The label's [ or ] bracket must be adjacent (horizontally or vertically) to part of the edge. UsageUsage import com.github.mdr.ascii._ val diagram = Diagram(""" +-+ ---------|E|---------- | +-+ | [9]| [6]| | | +-+ [2] +-+ [11] +-+ |F|-------|C|--------|D| +-+ +-+ +-+ | | | [14]| [10]| [15]| | | | +-+ +-+ | |A|-------|B|---------- +-+ [7] +-+ """) // Print all the vertices neighbouring A along with the edge weight: for { box ← diagram.allBoxes.find(_.text == "A") (edge, otherBox) ← box.connections() label ← edge.label } println(box + " ==> " + label + " ==> " + otherBox)
https://index.scala-lang.org/mutcianm/ascii-graphs/ascii-graphs/0.0.6?target=_2.11
CC-MAIN-2021-43
refinedweb
333
53.61
I have the below diagram wired up but the LED Matrix is not active at all (no blink no nothing, never lit up). The dashed light Blue (SDA on B5) and Orange (SCL on B6) lines represents my second try when I connected the LED Matrix on the GSM Breakout and not directly on the GSM-BLE module (I do prefer to use them on the GSM-BLE module and not on the breakout) I am using Arduino IDE and this are the included library's and code (below) #include <TinyWireM.h> #include "Adafruit_LEDBackpack.h" #include "Adafruit_GFX.h" @dbelam you told me this previously: what are you saying is related with the I2C library? - #include <TinyWireM.h> (this is written by Adafruit for Trinket and Gemma modules specially)what are you saying is related with the I2C library? - #include <TinyWireM.h> (this is written by Adafruit for Trinket and Gemma modules specially)I2C devices have a somewhat unique address, and your module's address can be changed with jumpers - so check the arduino code and the module's documentation, and make the 2 address match. Oh, and the "write" and "read" address is different by 1 bit! I do have all this working on Adafruit Flora with the below code but I want to port it on the RePhone with Arduino. If the I2C library is the issue, then will the LLedMatrix.h written by Seeed for their Xsadow LED work? ... edMatrix.h Thank you! for any help Code: Select all /* This example code is in the public domain. */ #include <TinyWireM.h> #include "Adafruit_LEDBackpack.h" #include "Adafruit_GFX.h" const int buttonPin = D3; // button pin set on GSM Breakout pad A1 Adafruit_8x8matrix matrix = Adafruit_8x8matrix(); // Variables will change: int buttonPushCounter = 0; // counter for the number of button presses int buttonState = 0; // current state of the button int lastButtonState = 0; // previous state of the button void setup() { // initialize the button pin as a input: pinMode(buttonPin, INPUT); matrix.begin(0x70); // pass in the address } static const uint8_t PROGMEM // X and square bitmaps hearth_bmp[] = { B00111100, B01000010, B10100101, B10000001, B10100101, B10011001, B01000010, B00111100 }, smile_bmp[] = { B00100100, B01011010, B10011001, B10000001, B10000001, B01000010, B00100100, B00011000 }, flower_bmp[] = { B11000011, B10100101, B01100110, B00011000, B01111011, B10101110, B11001100, B00001000 }; void loop() { // read the pushbutton input pin: buttonState = digitalRead(buttonPin); // compare the buttonState to its previous state if (buttonState != lastButtonState) { // if the state has changed, increment the counter if (buttonState == HIGH && lastButtonState == LOW) { // if the current state is HIGH then the button // went from off to on: buttonPushCounter++; } else { // if the current state is LOW then the button // went from on to) { matrix.clear(); matrix.drawBitmap(0, 0, hearth_bmp, 8, 8, LED_ON); matrix.writeDisplay(); buttonState = HIGH; if (buttonPushCounter % 1 == 0){ matrix.clear(); matrix.drawBitmap(0, 0, smile_bmp, 8, 8, LED_ON); matrix.writeDisplay(); buttonState = HIGH; if (buttonPushCounter % 2 == 0){ matrix.clear(); matrix.drawBitmap(0, 0, flower_bmp, 8, 8, LED_ON); matrix.writeDisplay(); buttonState = HIGH; } else { matrix.clear(); } } } }
https://forum.seeedstudio.com/viewtopic.php?p=24546
CC-MAIN-2019-51
refinedweb
481
61.06
Java.io.Writer.write() Method Advertisements Description The java.io.Writer.write(int c) method writes a single character. The character to be written is contained in the 16 low-order bits of the given integer value; the 16 high-order bits are ignored. Declaration Following is the declaration for java.io.Writer.write() method public void write(int c) Parameters c -- int specifying a character to be written Return Value This method does not return a value Exception IOException -- If an I/O error occurs Example The following example shows the usage of java.io.Writer.write() method. package com.tutorialspoint; import java.io.*; public class WriterDemo { public static void main(String[] args) { int c = 70; // create a new writer Writer writer = new PrintWriter(System.out); try { // write an int that will be printed as ASCII writer.write(c); // flush the writer writer.flush(); // write another int that will be printed as ASCII writer.write(71); // flush the stream again writer.flush(); } catch (IOException ex) { ex.printStackTrace(); } } } Let us compile and run the above program, this will produce the following result: FG
http://www.tutorialspoint.com/java/io/writer_write_char.htm
CC-MAIN-2014-41
refinedweb
183
59.5
well first of all, I'm using windows and new compiler(Bloodshed DevCpp YAY )) well here's my situation i cant really continue unless there's a solution... The problem is under function void make_file. I tried using the loop thing and lib string isn't really helping(or i just didnt know the potential of it) anyways i hope someone can help me. i cant make the file name unless its char, and i placed the supposed to be file name in struct name.data[1] and its a string(why string cuz i concatenated it). and converting it to char is probably my first prob. Mind that my problem is only focused on function void make_file. others works fine. but suggestion swill be accepted, if your wondering what the hell i;m doing... just imagine cut file from one directory like "c:\" to "d:\" with file creating, deleting files and extras Thanks in advance Code:#include <iostream> #include <fstream> #include <string> using namespace std; // string data 1 is from/old and data 2 is to/new kk? struct info{string data[2];}location,name,msg; void create(); void open(); void del(); void make_file(); int main() { int select = 0; while(select<1||select>4) { system("cls"); cout<<"Select File Option: \n"; cout<<"[1] - Create File \n"; cout<<"[2] - Open File \n"; cout<<"[3] - Delete File \n"; cout<<"[4] - Exit \n"; cout<<"Execute: "; cin>>select; switch(select) { case 1: create(); break; case 2: open(); break; case 3: del(); break; case 4: system("cls"); cout<<"Thank you... \n"; cin.get(); break; default: cout<<"Please Enter Correctly"; system("pause"); break; } } system("pause"); return 0; } void create() { string filename; system("cls"); cout<<"Create File: \n\n"; cout<<"LocatioN: "; cin>>location.data[1]; cout<<"\nFilename: "; cin>>filename; //concat found here filename += ".txt"; name.data[1] = filename; cout<<"\nMessage: "; cin>>msg.data[1]; make_file(); } void open() { system("cls"); } void del() { system("cls"); } // here's the problem... void make_file() { char we[50]; we=name.data[1]; cout<<we; }
http://cboard.cprogramming.com/cplusplus-programming/129921-txtfile-title%3Dchar-concatenating%3Dstring-must-convert-string-char-%3D-need-help.html
CC-MAIN-2015-48
refinedweb
332
54.22
Let's say this is my function: def function(x): return x.str.lower() A B C D 0 1.67430 BAR 0.34380 FOO 1 2.16323 FOO -2.04643 BAR 2 0.19911 BAR -0.45805 FOO 3 0.91864 BAR -0.00718 BAR 4 1.33683 FOO 0.53429 FOO 5 0.97684 BAR -0.77363 BAR B D df.apply(function, axis=1) Just subselect the columns from the df, by neglecting the axis param we operate column-wise rather than row-wise which will be significantly as you have more rows than columns here: df[['B','D']].apply(function) this will run your func against each column In [186]: df[['B','D']].apply(function) Out[186]: B D 0 bar foo 1 foo bar 2 bar foo 3 bar bar 4 foo foo 5 bar bar You can also filter the df to just get the string dtype columns: In [189]: df.select_dtypes(include=['object']).apply(function) Out[189]: B D 0 bar foo 1 foo bar 2 bar foo 3 bar bar 4 foo foo 5 bar bar Timings column-wise versus row-wise: In [194]: %timeit df.select_dtypes(include=['object']).apply(function, axis=1) %timeit df.select_dtypes(include=['object']).apply(function) 100 loops, best of 3: 3.42 ms per loop 100 loops, best of 3: 2.37 ms per loop However for significantly larger dfs (row-wise) the first method will scale much better
https://codedump.io/share/S69u40NBo5bg/1/pandas-how-to-apply-a-function-to-different-columns
CC-MAIN-2016-50
refinedweb
242
71.21
Today is the second day of being stuck at home due to heavy (by Christchurch standards) snow which has meant that I have had more time to focus on writing up the details of replacing a modification with a combination of a custom control, web-services and jscripts, that and learning about the fun of digging your driveway out. While working through the existing modification I wondered about look up fields in M3, the ones with those little arrows to the right The thing that I was really interested in was what control they use? So for the purposes of expedience I just created a quick script which retrieved the control object and did a GetType() on it. Much to my surprise it was a plain old TextBox 😦 So, my next thought was were Lawson doing some XAML magic? I had a quick look at the Microsoft documentation on XAML and quickly got side tracked and bored and decided to assume that there was something more interesting going on…not that XAML isn’t interesting, it just gets a little arcane when you are trying to figure out how to reverse engineer something. I started looking for interesting controls in the Visual Studio object browser and that didn’t get far, so I started looking at the RenderEngine – perhaps there was some magic happening there. I couldn’t see anything but one thing stuck out…”AddTextBoxHistory” – I haven’t seen any textboxes in Smart Office that really have a history of commands so I wondered if it was a control that was being reused for something other than the original intended purpose (yup, a random leap). A bit of a search through the Object Browser yielded pay-dirt. “IsHistory” as part of the MForms.ControlTag object. Gears started turning, small amounts of smoke escaping my ears as I remembered I had come across some issues when I had commandeered a <UIElements>.Tag field. ( – and to think, I had almost had the name of that object in the subject of my post! :-)) This ControlTag needed investigation, and investigate I did – there are only three properties but there are several exposed public objects, and what do we have? “IsBrowseable” This tag is set to true if the field is browsable and false if not. I’m guessing, but I suspect you need to use the AddElement method from MForms.Render.RenderEngine if you wanted to add your own control with the little wee triangle, setting the IsBrowseable flag after the control has been added doesn’t do anything. Other interesting data in this object “ReferenceField” and “ReferenceFile” – these, at least in the case of our modification identify the column and table respectively in the database that the field corresponds to! Code as follows, you’ll just need to change the TextBox that you are searching for to something that makes sense in your environment. import System; import System.Windows; import System.Windows.Controls; import MForms; package MForms.JScript { class Test { public function Init(element: Object, args: Object, controller : Object, debug : Object) { var content : Object = controller.RenderEngine.Content; // WEBKRF is the TextBox we are interested in var tbControl = ScriptUtil.FindChild(content,"WEBKRF"); if(null != tbControl) { if(null != tbControl.Tag) { // show the type of the Tag object MessageBox.Show("Type: " + tbControl.Tag.GetType()); // extract the ControlTag object var ctTag : ControlTag = tbControl.Tag; // show some interesting information MessageBox.Show("IsBrowsable: " + ctTag.IsBrowsable); MessageBox.Show("ReferenceField: " + ctTag.ReferenceField); MessageBox.Show("ReferenceFile: " + ctTag.ReferenceFile); } } } } } Have fun! 🙂 Pingback: ControlTag « M3 ideas
https://potatoit.kiwi/2011/08/16/controltag/
CC-MAIN-2017-39
refinedweb
583
51.18
Your V-Play Sample Launcher App download has started! This app allows you to quickly test and run all the open-source examples and demo games available with the V-Play SDK. The RubeParser creates QML items based on the description of a JSON file exported by the RUBE level editor. More... The RubeParser creates QML items based on the description of a JSON file exported by the RUBE level editor. Add the RubeParser to your scene and call the setupLevelFromJSON function like in this example: import VPlay 2.0 import QtQuick 2.0 import "Rube" GameWindow { Scene { id: scene EntityManager { id: entityManager entityContainer: level } Item { id: level PhysicsWorld { id: physicsWorld } } RubeParser { id: parser entitiesFolder: "entities/" Component.onCompleted: { console.debug("load finished, start parsing") parser.setupLevelFromJSON( Qt.resolvedUrl("../assets/sidescroller.json"), physicsWorld, level ) } } } } Just call setupLevelFromJSON(). The parameters are: jsonPath- a path to a .json file as string world- the id of your game's physics world levelContainer- the id of an item, the created RubeBodies will be children of this item At first, the JSON file is read and loaded into a JSON-object in the memory by the setupLevelFromJSON() function. Then, this function calls parseJSON() passing through all the parameters. This function reads the settings from the JSON and applies them to the physics world. The parseJSON() function calls the createBodies() with a list of body objects from the JSON, the world and the levelContainer. The createBodies() uses the RubeManager to create objects from the Rube Components. If the custom property "qmlType" is set it tries to create an object of this type. Then the signal internalInit is emitted on this new body causing it to initialize its properties from the transmitted JSON. For each created body the addFixturesToBody() function is called, which creates objects of the different fixture components and attaches them to the body. Each fixtures has its internalInit signal emitted. When all fixtures are attached to the body the fixturesFinished signal is emitted on it. Afterwards, the initialized signal is emitted on the body giving the users a chance to react to the finished object. When all bodies are finished the createImages() function is called, which does the same as the createBodies() but with images. If the image belongs to a body (as said in the JSON) it is assigned as a child to this body. The the renderOrder aka. z in Qt is applied and the image if it doesn't belong to any body or to its parent body. This is because the z property works only among siblings. Then, internalInit and initialized are emitted on the image just like the body before. In short: Just call clearLevel. All created RubeBodies with their fixtures and all RubeImages will be destroyed. Following properties are read from the JSON and applied to the World given as parameter: Following properties are not yet supported: Following properties are read from the JSON and applied to the RubeBody instances: The RubeBody contains a list of fixtures Body::fixtures, too. Following properties are read from the JSON and applied to all the different RubeFixture instances: Following properties are read from the JSON and applied to the RubeFixtureCircle instances: Following properties are read from the JSON and applied to the PolygonFixture instances: The RUBE fixture type "line" and "loop" are not supported at the moment. Following properties are read from the JSON and applied to the RubeImage instances: Joints are not yet supported Set this folder to the relative path where the game entities are located. The game entities are set with the qmlType custom property in RUBE. If your entities are in "qml/entities/", and your current QML file is in the qml folder, set the path to "entities/". Note: By default, it is set to "../qml/entities" relative to the assetsFolder (the folder where your rube .json file is located). However, when you create a publish build of your game and protect your QML files, you need to set this path explicitly. Destroys all RubeBodies with their fixtures and all RubeImages. Reads the given JSON file from jsonPath and creates RubeBodies and RubeImages according to the definition. All created objects will be placed in levelContainer. The world property holds a reference to the game's PhysicsWorld. Voted #1 for:
https://v-play.net/doc/vplay-rubeparser/
CC-MAIN-2017-09
refinedweb
711
55.13
The Evoked data structure: evoked/averaged data¶ This tutorial covers the basics of creating and working with evoked data. It introduces the Evoked data structure in detail, including how to load, query, subselect, export, and plot data from an Evoked object. For info on creating an Evoked object from (possibly simulated) data in a NumPy array, see Creating MNE’s data structures from scratch. Page contents As usual we’ll start by importing the modules we need: import os import mne Creating Evoked objects from Epochs¶ Evoked objects typically store an EEG or MEG signal that has been averaged over multiple epochs, which is a common technique for estimating stimulus-evoked activity. The data in an Evoked object are stored in an array of shape (n_channels, n_times) (in contrast to an Epochs object, which stores data of shape (n_epochs, n_channels, n_times)). Thus to create an Evoked object, we’ll start by epoching some raw data, and then averaging together all the epochs from one condition: sample_data_folder = mne.datasets.sample.data_path() sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample', 'sample_audvis_raw.fif') raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False) events = mne.find_events(raw, stim_channel='STI 014') # we'll skip the "face" and "buttonpress" conditions, to save memory: event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3, 'visual/right': 4} epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7, event_id=event_dict, preload=True) evoked = epochs['auditory/left'].average() del raw # reduce memory usage Out: 320 events found Event IDs: [ 1 2 3 4 5 32] 289 matching events found Applying baseline correction (mode: mean) Not setting metadata Created an SSP operator (subspace dimension = 3) 3 projection items activated Loading data for 289 events and 601 original time points ... 0 bad epochs dropped Basic visualization of Evoked objects¶ We can visualize the average evoked response for left-auditory stimuli using the plot() method, which yields a butterfly plot of each channel type: Like the plot() methods for Raw and Epochs objects, evoked.plot() has many parameters for customizing the plot output, such as color-coding channel traces by scalp location, or plotting the global field power alongside the channel traces. See Visualizing Evoked data for more information about visualizing Evoked objects. Subselecting Evoked data¶ Unlike Raw and Epochs objects, Evoked objects do not support selection by square-bracket indexing. Instead, data can be subselected by indexing the data attribute: print(evoked.data[:2, :3]) # first 2 channels, first 3 timepoints Out: [[ 5.72160572e-13 3.57859354e-13 3.98040833e-13] [-2.75128428e-13 -3.15309907e-13 -5.83186429e-13]] To select based on time in seconds, the time_as_index() method can be useful, although beware that depending on the sampling frequency, the number of samples in a span of given duration may not always be the same (see the Time, sample number, and sample index section of the tutorial about Raw data for details). Selecting, dropping, and reordering channels¶ By default, when creating Evoked data from an Epochs object, only the “data” channels will be retained: eog, ecg, stim, and misc channel types will be dropped. You can control which channel types are retained via the picks parameter of epochs.average(), by passing 'all' to retain all channels, or by passing a list of integers, channel names, or channel types. See the documentation of average() for details. If you’ve already created the Evoked object, you can use the pick(), pick_channels(), pick_types(), and drop_channels() methods to modify which channels are included in an Evoked object. You can also use reorder_channels() for this purpose; any channel names not provided to reorder_channels() will be dropped. Note that channel selection methods modify the object in-place, so in interactive/exploratory sessions you may want to create a copy() first. evoked_eeg = evoked.copy().pick_types(meg=False, eeg=True) print(evoked_eeg.ch_names) new_order = ['EEG 002', 'MEG 2521', 'EEG 003'] evoked_subset = evoked.copy().reorder_channels(new_order) print(evoked', 'MEG 2521', 'EEG 003'] Similarities among the core data structures¶ Evoked objects have many similarities with Raw and Epochs objects, including: They can be loaded from and saved to disk in .fifformat, and their data can be exported to a NumPy array(but through the dataattribute, not through a get_data()method). Pandas DataFrameexport is also available through the to_data_frame()method. You can change the name or type of a channel using evoked.rename_channels()or evoked.set_channel_types(). Both methods take dictionarieswhere the keys are existing channel names, and the values are the new name (or type) for that channel. Existing channels that are not in the dictionary will be unchanged. SSP projector manipulation is possible through add_proj(), del_proj(), and plot_projs_topomap()methods, and the projattribute. See Repairing artifacts with SSP for more information on SSP. Like Rawand Epochsobjects, Evokedobjects have copy(), crop(), time_as_index(), filter(), and resample()methods. Like Rawand Epochsobjects, Evokedobjects have evoked.times, evoked.ch_names, and infoattributes. Evoked data¶ Single Evoked objects can be saved to disk with the evoked.save() method. One difference between Evoked objects and the other data structures is that multiple Evoked objects can be saved into a single .fif file, using mne.write_evokeds(). The example data includes just such a .fif file: the data have already been epoched and averaged, and the file contains separate Evoked objects for each experimental condition: sample_data_evk_file = os.path.join(sample_data_folder, 'MEG', 'sample', 'sample_audvis-ave.fif') evokeds_list = mne.read_evokeds(sample_data_evk_file, verbose=False) print(evokeds_list) print(type(evokeds_list)) Out: [<Evoked | 'Left Auditory' (average, N=55), [-0.1998, 0.49949] sec, 376 ch, ~4.9 MB>, <Evoked | 'Right Auditory' (average, N=61), [-0.1998, 0.49949] sec, 376 ch, ~4.9 MB>, <Evoked | 'Left visual' (average, N=67), [-0.1998, 0.49949] sec, 376 ch, ~4.9 MB>, <Evoked | 'Right visual' (average, N=58), [-0.1998, 0.49949] sec, 376 ch, ~4.9 MB>] <class 'list'> Notice that mne.read_evokeds() returned a list of Evoked objects, and each one has an evoked.comment attribute describing the experimental condition that was averaged to generate the estimate: for evok in evokeds_list: print(evok.comment) Out: Left Auditory Right Auditory Left visual Right visual If you want to load only some of the conditions present in a .fif file, read_evokeds() has a condition parameter, which takes either a string (matched against the comment attribute of the evoked objects on disk), or an integer selecting the Evoked object based on the order it’s stored in the file. Passing lists of integers or strings is also possible. If only one object is selected, the Evoked object will be returned directly (rather than a length-one list containing it): right_vis = mne.read_evokeds(sample_data_evk_file, condition='Right visual') print(right_vis) print(type(right_vis)) (Right visual) 0 CTF compensation matrices available nave = 58 - aspect type = 100 Projections have already been applied. Setting proj attribute to True. No baseline correction applied <Evoked | 'Right visual' (average, N=58), [-0.1998, 0.49949] sec, 376 ch, ~4.9 MB> <class 'mne.evoked.Evoked'> Above, when we created an Evoked object by averaging epochs, baseline correction was applied by default when we extracted epochs from the class: Raw object (the default baseline period is (None, 0), which assured zero mean for times before the stimulus event). In contrast, if we plot the first Evoked object in the list that was loaded from disk, we’ll see that the data have not been baseline-corrected: evokeds_list[0].plot(picks='eeg') This can be remedied by either passing a baseline parameter to mne.read_evokeds(), or by applying baseline correction after loading, as shown here: evokeds_list[0].apply_baseline((None, 0)) evokeds_list[0].plot(picks='eeg') Notice that apply_baseline() operated in-place. Similarly, Evoked objects may have been saved to disk with or without projectors applied; you can pass proj=True to the read_evokeds() function, or use the apply_proj() method after loading. Combining Evoked objects¶ One way to pool data across multiple conditions when estimating evoked responses is to do so prior to averaging (recall that MNE-Python can select based on partial matching of /-separated epoch labels; see Subselecting epochs for more info): left_right_aud = epochs['auditory'].average() print(left_right_aud) Out: <Evoked | '0.50 * auditory/left + 0.50 * auditory/right' (average, N=145), [-0.29969, 0.69928] sec, 366 ch, ~5.3 MB> This approach will weight each epoch equally and create a single Evoked object. Notice that the printed representation includes (average, N=145), indicating that the Evoked object was created by averaging across 145 epochs. In this case, the event types were fairly close in number: Out: [72, 73] However, this may not always be the case; if for statistical reasons it is important to average the same number of epochs from different conditions, you can use equalize_event_counts() prior to averaging. Another approach to pooling across conditions is to create separate Evoked objects for each condition, and combine them afterward. This can be accomplished by the function mne.combine_evoked(), which computes a weighted sum of the Evoked objects given to it. The weights can be manually specified as a list or array of float values, or can be specified using the keyword 'equal' (weight each Evoked object by \(\frac{1}{N}\), where \(N\) is the number of Evoked objects given) or the keyword 'nave' (weight each Evoked object by the number of epochs that were averaged together to create it): left_right_aud = mne.combine_evoked([left_aud, right_aud], weights='nave') assert left_right_aud.nave == left_aud.nave + right_aud.nave Keeping track of nave is important for inverse imaging, because it is used to scale the noise covariance estimate (which in turn affects the magnitude of estimated source activity). See The minimum-norm current estimates for more information (especially the Whitening and scaling section). For this reason, combining Evoked objects with either weights='equal' or by providing custom numeric weights should usually not be done if you intend to perform inverse imaging on the resulting Evoked object. Other uses of Evoked objects¶ Although the most common use of Evoked objects is to store averages of epoched data, there are a couple other uses worth noting here. First, the method epochs.standard_error() will create an Evoked object (just like epochs.average() does), but the data in the Evoked object will be the standard error across epochs instead of the average. To indicate this difference, Evoked objects have a kind attribute that takes values 'average' or 'standard error' as appropriate. Another use of Evoked objects is to represent a single trial or epoch of data, usually when looping through epochs. This can be easily accomplished with the epochs.iter_evoked() method, and can be useful for applications where you want to do something that is only possible for Evoked objects. For example, here we use the get_peak() method (which isn’t available for Epochs objects) to get the peak response in each trial: for ix, trial in enumerate(epochs[:3].iter_evoked()): channel, latency, value = trial.get_peak(ch_type='eeg', return_amplitude=True) latency = int(round(latency * 1e3)) # convert to milliseconds value = int(round(value * 1e6)) # convert to µV print('Trial {}: peak of {} µV at {} ms in channel {}' .format(ix, value, latency, channel)) Out: Trial 0: peak of 159 µV at 35 ms in channel EEG 003 Trial 1: peak of -45 µV at 569 ms in channel EEG 005 Trial 2: peak of -46 µV at 648 ms in channel EEG 015 Total running time of the script: ( 1 minutes 48.467 seconds) Estimated memory usage: 755 MB Gallery generated by Sphinx-Gallery
https://mne.tools/stable/auto_tutorials/evoked/plot_10_evoked_overview.html
CC-MAIN-2020-34
refinedweb
1,893
54.83
After many applications written using Scala’s Futures, Akka Actors or Monix,… Cats-effect is now my favourite stack to write Scala programs. Why is that? Well, it makes your code easier to write and reason about while providing good performances. Intro In functional programming an effect is a context in which your computation operates: Optionmodels the absence of value Trymodels the possibility of failure Futuremodels the asynchronicity of a computation - … These effects have nothing to do with side-effects. They’re just contexts in which your code operates. On the other hand a side-effect happens when a computation interacts with the outside world like printing something on the screen, reading or writing to a file … or something as simple as getting the current time or updating a variable outside the current scope. All these operations, when replayed, produce different results or leave the outside world in a different state. The side-effects are the things that make a program hard to reason about because they are not referential transparent. An expression is said to be referential transparent when it can be replaced by its value without changing the program. Let’s consider this basic example: // referential transparent def n = 3 println(s"$n + $n") // prints "3 + 3" println("3 + 3") // replaces n by its value and prints the same thing // not referential transparent def n = { print("n") 3 } println(s"$n + $n") // prints "nn3 + 3" // not the same as println("3 + 3") The same reasoning applies to Scala’s Future as just creating the Future is enough to kick-off the computation (which is also memoized) // here f1 and f2 run concurrently val f1 = Future { ... } val f2 = Future { ... } for { a <- f1 b <- f2 } yield (a, b) // here f1 and f2 run sequentially for { a <- Future { ... } b <- Future { ... } } yield (a, b) This is why side-effects are hard to reason about. Yet we need them because this is how a program interacts with the world. This is what makes a program useful. If the above program would not be able to place an order it wouldn’t be so useful. So can we have our cake and it eat too? Can we have side-effects and referential transparency? Well, yes if you wrap them in a context. This context is called IO and it’s the effect to deal with side-effect. IO indicates that the computation interact with the outside world (Input/Output) and this is the core of the cats-effect library. Principle How is that possible? Well there is only one way to make a code that performs side-effects referentially transparent and it’s to not run it! This is exactly what the IO effect does. It wraps a computation but doesn’t run it. val n = IO { print("n") 3 } This code doesn’t print anything (unlike Future). It’s just a description of what needs to be done and nothing happens until you explicitly run this computation. Now you can see why this is so powerful. By not running the code straight away we gain a lot in composition. We can assemble a whole program without running anything, and only run it, once we’re ready. You run an IO by calling one of the “unsafe” method that it provides: unsafeRunSync: runs the program synchronously unsafeRunTimed: runs the program synchronously but abort after a specified timeout (useful for testing) unsafeRunAsync: runs the program asynchronously and execute the specified callback when done unsafeToFuture: runs the program and produces the result as a Scala Future (useful to integrate with other libraries/framework). - … Conditionals By not running the code straight away we get additional capabilities. E.g. it’s possible to decide to run a computation after declaring it: val isWeekday = true for { _ <- IO(println("Working")).whenA(isWeekday) _ <- IO(println("Offwork")).unlessA(isWeekday) } yield () Asynchronous IO So far we have only created synchronous IO. i.e. computations that can be run straight away on the current thread. However sometimes you need to interact with remote systems and in this case you need an asynchronous IO. An asynchronous IO is created by passing a callback that is invoked when the computation completes. The signature of IO.async may look scary def async[A](k: (Either[Throwable, A] => Unit) => Unit): IO[A] but it just takes a function that given a callback Either[Throwable, A] => Unit returns Unit This is especially useful to convert between a Java Future to IO def fromCompletableFuture[A](f: => CompetableFuture[A]): IO[A] = IO.async { callback => f.whenComplete { (res: A, error: Throwable) => if (error == null) callback(Right(res)) else callback(Left(error)) } } The same logic applies to convert a Scala Future to IO but there is already a IO.fromFuture method available. Note that this method takes a IO[Future[A]] as an argument to avoid passing an already completed future. Brackets IO being lazy (it doesn’t evaluate strait away) it’s possible to make sure all the resources used in the computation are released properly no matter the outcome of the computation. Think of it as a try / catch / finally. In cats-effect this is called Bracket // acquire the resource IO(new Socket("hostname", 12345)).bracket { socket => // use the socket here } { socket => // release block socket.close() } The bracket makes sure that the release block is called whatever happens with the resources in the usage block. There’re more variants on this. E.g. bracketCase allows to consider the outcome of the execution when releasing the resource Resources Resource builds on top of Bracket. It allows to acquire resources (and making sure they are released properly) before running your computation. Moreover Resource is composable so that you can acquire all your resources at once. def acquire(s: String) = IO(println(s"Acquire $s")) *> IO.pure(s) def release(s: String) = IO(println(s"Releasing $s")) val resources = for { a <- Resource.make(acquire("A"))(release("A") b <- Resource.make(acquire("B"))(release("B") } yield (a ,b) resources.use { case (a, b) => // use a and b } // release code is automatically invoked when computation finishes Cancellation Remember the asynchronous IO that we created by passing a continuation (or callback). The signature was Callback => Unit (where Callback is Either[Throwable, A] => Unit). Now if instead of having a Callback => Unit we have a Callback => IO[Unit] we have a computation that we can run ( IO[Unit]) to cancel the async task. Back to our Java CompletableFuture we now have: def fromCompletableFuture[A](f: => CompletableFuture[A]): IO[A] = IO.cancellable { callback => f.whenComplete(res: A, error: Throwable) => if (error == null) callback(Right(res)) else callback(Left(error)) IO(f.cancel(true)) } Now if we cancel our IO[A] what’s going to happen is that the cancellation token ( IO(f.cancel())) is going to run, trying to cancel the underlying future. Note the cancellation is a concurrent action and can only be enforced at asynchronous boundaries. You must also consider what could happen if you cancel (e.g. close or release a resource) while the computation is running (e.g. the resource is being used). This point is very well explained in the cats-effect documentation. The cats effect type-classes So far we’ve only focused on IO which is at the heart of cats-effect, and as we’ve seen it’s a really powerful beast. That’s ok to use in a simple application effect but it’s not recommended for writing more complex application or libraries and services. Instead it’s always preferable to use the least powerful type-class needed for the job. That makes it clear what abstraction is needed and as a bonus it also make testing easier. Bracketsafely acquire and release resources Syncsuspend a synchronous computation (stack-safe) Asyncfor asynchronous computations running outside of the main program Concurrentconcurrently start or cancel computations Effectlazy evaluation of asynchronous computation ConcurrentEffectcancellation and concurrent executions of asynchronous computation LiftIOconverts from IOto F Effect provide a “safe” run methods as opposed to the unsafeRun methods available with other classes. unsafeRun do run the computation and return the result (or the result wrap in an already running context, e.g. Future) … Safe run methods on the other hand do not execute any code but show an intent to run the code. They return a SyncIO instance which can be converted to F with a LiftIO instance.null Concurrency Fibers Everything that supports Concurrent can be started. When you start an asynchronous computation you get back a Fiber. A Fiber holds the result of a computation and can be think of as a “green” or “logical” thread. They are light-weight (unlike JVM Threads) so it’s possible to start many of them and like threads they can be started and joined. Async boundaries An important point to understand is that cancellation can only happen when the execution crosses an asynchronous boundary. But what is an asynchronous boundary exactly? Well, it’s just when you give the processor a chance to do a context shift. And for that cats-effect provides a ContextShift with 2 methods: shiftcreates a logical fork (places an async boundary) evalOnruns the computation on another ExecutionContext Unlike Future which created a context shift for every operation ( map, flatMap, …) and resulted in pretty poor performances, cats-effect gives you back control when to place such boundaries, achieving far better performances at the same time. Concurrency helpers Cats-effect provides a bunch of useful data structures to deal with concurrency. Ref The first one is called Ref and is similar to an AtomicRef. It always holds a value (can’t be empty but can hold an Option ;-)). Deferred Deferred is like a Promise. It is created empty and can be completed only once. When a consumer calls get it blocks (no, it doesn’t block a thread but its computation is suspended) until a producer calls complete and provides a value. complete can only be called once. MVar MVar is like a Queue of size 1 where consumer block when empty and producers block when full. Semaphore Manages a number of permit. Users block on acquire when no permits are available. And more… Cats-effect also provide a Clock service which is just a wrapper to get the current time. It can be mocked easily to facilitate testing. Quite basic but always useful. And for anything more complex (Queues and streams) there is the awesome fs2 library. Conclusion For me cats-effect is a game changer. It provides simple and powerful abstraction that takes most of the complexity away to reason about a program while offering very decent performances out-of-the-box. If you’re still using Scala Future and the likes, give it a try, you won’t be coming back (plus the documentation is really good).
https://www.beyondthelines.net/programming/cats-effect-an-overview/
CC-MAIN-2019-30
refinedweb
1,797
55.13
last week in October wasn't the smoothest for the W3C HTML Working Group. First, a notable blog entry criticized their handling of XML namespaces, leading to a formal objection. On top of that, Tim Berners-Lee blogged that new and separate HTML and forms Working Groups would be chartered to "incrementally" update HTML, in contrast with the groups' present approach. More on that later. As has always been the case, XML Annoyances aims to stimulate discussion on XML topics by challenging entrenched views. This article digs beneath the surface issues and encourages others to do the same. The objectionable technical issue relates to what is commonly called a chameleon schema, that is, the ability for elements defined in a vocabulary to appear in more than one namespace -- in this case, for XForms elements to appear without additional qualification in XHTML. True enough, this seems to fly in the face of the goal of globally unique namespace-qualified elements. As one blogger writes: If you believe there is a single, more important, absolute requirement in the land of XML than that of the proper usage of XML namespaces: You obviously don't understand XML. Similarly, in a section dedicated to XML namespaces, O'Reilly's XML Hacks by Michael Fitzgerald asserts in hack #59: Though controversial, XML namespaces are a necessity if you want to manage XML documents in the wild. Controversial, yes. But a "necessity"? Many such statements are treated as an axiomatic bedrock. Instead of blithely accepting these aphorisms, let's look at some specific evidence. Nearly any contemporary XML reference will include material on XML namespaces to help novices get up to speed on a potentially tricky topic. Beginners need to be shown powerful examples of problems that led to the requirement to have namespaces in XML in the first place. Let's examine a few real-world examples from my bookshelf. The previously mentioned coverage in XML Hacks is interesting in that it doesn't even show a multiple-namespace example. O'Reilly's excellent XML in a Nutshell, Third Edition, by Elliotte Rusty Harold and W. Scott Means, devotes an entire chapter to namespaces. The opening example uses a mythical catalog of paintings' markup language, which has conflicts on elements named title, date, and description. Even then, the description elements are equivalent in content model and overall purpose, only appearing in different contexts -- one describes a page and one describes a painting. title date description Another highly regarded work, Ted Leung's Professional XML Development with Apache Tools dives in and addresses namespaces on the third page. The example? An Apache-XML-Book vocabulary, wherein one "can easily imagine the element name, title, or author being used in another XML grammar, say one for music CDs." author The point here is not to criticize the writing of these books. Quite the opposite; I'm relying on the talented writing in these books to make my point. Perhaps there's something funny going on with the problem statement of a technology that relies on unrealistic examples to justify its existence. Numerous similar examples exist on your bookshelf, too. Post your favorite good or bad examples in the comments section below. For such an indispensable layer of XML processing, compelling real-world examples are hard to find. By the end of the 1990s, the markup world looked bright. The XML Recommendation was still fresh and new, and the Namespaces in XML Recommendation fresher and newer still. Combined, the two would usher in a new era of stricter error-checking, leaving behind the messy habits of HTML authors past. Unfortunately, many of the early implementations failed to properly implement namespaces in a manner conforming to the spec. Internet Explorer 5, in particular, set an unfortunate precedent. In a 1999 article, Tim Bray writes:. html This kind of thinking persisted. Even years later, in 2002, Ben Hammersley wrote about namespace problems revolving around the RSS 2.0 spec, made worse by a flaw in Dave Winer's reference implementation in a software package called Radio. The problem? This is because, as many other people might not realise either, it forgets that the namespace prefix can change. A proper XML parser takes it only as a reference to the URI. It is the URI that matters. Even today, the concept of an arbitrary, changeable mapping from short prefixes to URLs is confusing and nonintuitive to many. Further examples are welcome in the comments section. Pages: 1, 2 Next Page © , O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
http://www.xml.com/pub/a/2006/11/08/cracks-in-the-foundation.html
CC-MAIN-2014-35
refinedweb
775
54.93
Okay, we are on the third assignment in Java, Java, Java. I do not understand the book that well and while I have been searching google for something explained in simpler terms it hasn't came up. We have two files: OneRowNim2.java and Assign3.java The specifications were: You are to create 2 java files: OneRowNim2.java and Assign3.java. The OneRowNim2.java file is based on the code in the book that is in page 131 (Figure 3-16) of Chapter 3. Remember that the code in the book is for a class named OneRowNim and you need to change the class name to OneRowNim2. (You need also to change the 3 constructors to follow the name of the class so the code would not cause any errors.). Also we need to allow up to a maximum of "4" sticks to be taken not just the mximum of 3 sticks like the code does in page 131. Make sure you change that accordingly in the code. Once the changes are done the code is ready to be compiled. Create a separate java program which you will name Assign3.java. Write a main method that will create a game which you will name game1 and which starts out with "9" sticks. Remember to use the code of page 133 (figure 3-17). We need to modify the code so that we will read the numbers "582423264" one digit at a time. We also need to declare an int "Counter" before the while loop that will count how many numbers from the above digits were smaller or equal to 4. Then you will primt the value of the Counter after the loop ends. Do not forget to drop both the OneRowNim2.java and the Assign3.java in the dropbox. Click on the next link to do so: DropBox for Assignment3 Do not forget to compile and run to verify that the class Assign3 is running. PS. Output of Execution : Number of Sticks left: 9 Next turn by player 1 input the digits '582423264' one digit at a time and hit return 5 Number of Sticks left: 9 Next turn by player 1 input the digits '582423264' one digit at a time and hit return 8 Number of Sticks left: 9 Next turn by player 1 input the digits '582423264' one digit at a time and hit return 2 Number of Sticks left: 7 Next turn by player 2 input the digits '582423264' one digit at a time and hit return 4 Number of Sticks left: 3 Next turn by player 1 input the digits '582423264' one digit at a time and hit return 2 Number of Sticks left: 1 Next turn by player 2 input the digits '582423264' one digit at a time and hit return 3 Number of Sticks left: -2 Next turn by player 1 Counter = 4 Game won by by player 1 Now I have OneRowNim2.java as: /* * File: OneRowNim2.java * Description: One Row Nim2 */ public class OneRowNim2 { private int nSticks = 9; private int player = 1; public OneRowNim2() { } public OneRowNim2(int sticks) { nSticks = sticks; } public OneRowNim2(int sticks, int starter) { nSticks = sticks; player = starter; } public boolean takeSticks(int num) { if (num < 1) return false; else if ( num > 4) return false; else { nSticks = nSticks - num; player = 4 - player; return true; } } public int getSticks() { return nSticks; } public int getPlayer() { return player; } public boolean gameOver() { return (nSticks <= 0); } public int getWinner() { if (nSticks < 1) return getPlayer(); else return 0; } public void report() { System.out.println("Number of sticks left:" + getSticks()); System.out.println("Next turn by player" + getPlayer()); } } And Assign3.java as: /* * File: Assign2.java * Author: Leah Stevens * Description: Assignment3 */ import java.util.Scanner; public class Assign3 { public static void main (String [ ] args) { Scanner sc = Scanner.create(System.in); OneRowNim game1 = new OneRowNim(9); while(game1.gameOver() == false) { game1.report(); System.out.print('Input 1, 2, 3, 0r 4: ") int sticks = sc.nextInt(); game1.takeSticks(sticks); System.out.println(); } game1.report(); System.out.print("Game won by player "); System.out.println(game1.getwinner()); } } I am having problems with the Assign3.java I believe that OneRowNim.java is correct but if someone wants to check it I dont mind ^.^ Im not asking for anyone to do this for me, but if anyone has any sites that could explain how to go about it I would appreciate it. The main thing I dont understand, is in what order would I go about placing the counter, loop and such. The book is very unclear on what exactly the code does, and the instructor says first we learn how to program then we understand what it means. It is hard for me to learn that way, so if anyone can direct me to something that would explain what the different codes affect it would be much appreciated.
https://www.daniweb.com/programming/software-development/threads/229223/i-would-like-to-b-understand-b-my-homework-xd
CC-MAIN-2017-39
refinedweb
803
69.82
#include <coherent_dht.hpp> This implements a processor consistent cache coherent distributed hash table. Each machine has a part of the hash table as well as a cache. The system implements automatic cache invalidation as well as automatic cache subscription (currently through a rather poor heuristic). This class also implements some functionality which provide more fine-grained control over the invalidation policy, allowing stronger consistency levels to be implemented on top of this class. Definition at line 68 of file coherent_dht.hpp. datatype of the data map. maps from key to the value Definition at line 73 of file coherent_dht.hpp. Puts out a prefetch request for this key. Definition at line 384 of file coherent_dht.hpp. Attaches a modification trigger which is signalled whenever the local cache/storage of any index is updated or invalidated. Only one modification trigger can be attached. The trigger is only signalled after the update/invalidation is complete. The trigger is called with three parameters, the key, the value as well as a boolean flag "is_in_cache". The "key" is the key of the entry which was just updated/invalidated. The "value" is the current value of the entry. This is only set if is_in_cache is true Note that the call to the trigger is not locked and the "is_in_cache" flag could very well be outdated when the call is issued. The flag should therefore not be treated as "truth" but simply as a hint about the state of the internal cache. Definition at line 133 of file coherent_dht.hpp. acquire a lock on the key. The lock should be released using end_critical_section() as soon as possible. There is no guarantee that the lock on this key is fine-grained. Definition at line 151 of file coherent_dht.hpp. Detaches the modification trigger Definition at line 142 of file coherent_dht.hpp. Releases a lock on the key as a acquired by begin_critical_section() Definition at line 166 of file coherent_dht.hpp. Gets the value associated with the key. returns true on success. get will read from the cache if data is already available in the cache. If not, get will obtain the data from across the network Definition at line 325 of file coherent_dht.hpp. Returns true of the key is current in the cache Definition at line 366 of file coherent_dht.hpp. Returns the machine responsible for storing the key Definition at line 357 of file coherent_dht.hpp. Push the current value of the key to all machines. If async=true, when this call returns, all machines are guaranteed to have the most up to date value of the key. Definition at line 273 of file coherent_dht.hpp. Sets the key to the value if the key belongs to a remote machine. It is guaranteed that if the current machine sets a key to a new value subsequent reads will never return the previous value. (i.e. it will return the new value or later values set by other processors). Definition at line 186 of file coherent_dht.hpp. Forces synchronization of this key This operation is synchronous. When this function returns all machines are guarnateed to have the updated value Definition at line 232 of file coherent_dht.hpp. Subscribes to this key. This key will be a permanent entry in the cache and can not be invalidated. Key modifications are automatically sent to this machine. Definition at line 417 of file coherent_dht.hpp.
http://select.cs.cmu.edu/code/graphlab/doxygen/html/classgraphlab_1_1coherent__dht.html
CC-MAIN-2015-18
refinedweb
566
50.12
Prev Java Exception Experts Index Headers Your browser does not support iframes. Re: After deserialization program occupies about 66% more RAM From: Robert Klemme <shortcutter@googlemail.com> Newsgroups: comp.lang.java.programmer Date: Tue, 19 Sep 2006 14:16:11 +0200 Message-ID: <4na5ccF9est5U1@individual.net> This is a multi-part message in MIME format. --------------000303000500040801080806 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit On 19.09.2006 10:42, setar wrote: User "Eric Sosman" wrote: My program stores in RAM dictionary with about 100'000 words. This dictionary occupies about 380MB of RAM. [...] ... thus using an average of 3800 bytes per word! What are you storing: bit-map images of the printed text? I not only store text of words but also many more information about them, for example: translation to english, synonyms, hypernyms, hyponyms (ontology) and language. For each mentioned elements (they are actually phrases of words not single words) I also store phrase parsed to component words with information about type of connection between words and phase text generated by concatenating parsed words (it can be different). I will try to decrease amount of memory used by one word (phase) but I estimated that on average one word must occupy at least 700 bytes. Except of these I have three indices to be able to search words. Serialization blows up strings. You can see with the attached program if used with a debugger (I tested with 1.4.2 and 1.5.0 with Eclipse). You can see that (1) copies of strings do not share the char array any more and (2) that the char array is larger than that of the original even though only some characters are used (the latter is true for 1.4.2 only, so Sun actually has improved this). Kind regards robert --------------000303000500040801080806 Content-Type: text/plain; name="SharingTest.java" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="SharingTest.java" package serialization; import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; import java.io.IOException; import java.io.ObjectInputStream; import java.io.ObjectOutputStream; public class SharingTest { /** * @param args * @throws IOException in case of error * @throws ClassNotFoundException never */ public static void main( String[] args ) throws IOException, ClassNotFoundException { String root = "foobar"; Object[] a1 = { root, root.substring( 3 ) }; Object[] a2 = { root, root.substring( 3 ) }; ByteArrayOutputStream byteOut = new ByteArrayOutputStream(); ObjectOutputStream objectOut = new ObjectOutputStream( byteOut ); objectOut.writeObject( a1 ); objectOut.writeObject( a2 ); objectOut.close(); ByteArrayInputStream byteIn = new ByteArrayInputStream( byteOut.toByteArray() ); ObjectInputStream objectIn = new ObjectInputStream( byteIn ); Object[] c1 = ( Object[] ) objectIn.readObject(); Object[] c2 = ( Object[] ) objectIn.readObject(); // breakpoint here System.out.println( c1 == c2 ); for ( int i = 0; i < c1.length; ++i ) { System.out.println( i + ": " + ( c1[i] == c2[i] ) ); } } } --------------000303000500040801080806-- Generated by PreciseInfo ™ "We, the Jews, not only have degenerated and are located at the end of the path, we spoiled the blood of all the peoples of Europe ... Jews are descended from a mixture of waste of all races." -- Theodor Herzl, the father and the leader of modern Zionism:
http://preciseinfo.org/Convert/Articles_Java/Exception_Experts/Java-Exception-Experts-060919151611.html
CC-MAIN-2021-49
refinedweb
498
52.26
Introducing C# - Oct 31, 2003 Escaping Characters In some cases, characters cannot be displayed directly. This can lead to problems. The next line shows what problems might occur: Console.WriteLine ("This is a quotation mark: " "); We try to display a quotation mark, which leads to an error because the compiler cannot translate the code. The compiler has no chance to do so because there's no way to find out which quotation mark means what. To tell the C# compiler which character to display on screen, we can use a backslash. In the next example, you can see how the most important symbols can be displayed: using System; class Hello { public static void Main() { Console.WriteLine("Single quote: \'"); Console.WriteLine("Quotation mark: \""); Console.WriteLine("Backslash: \\"); Console.WriteLine("Alert: \a "); Console.WriteLine("Backspace: -\b"); Console.WriteLine("Formfeed: \f"); Console.WriteLine("Newline: \n"); Console.WriteLine("Carriage Return: \r"); Console.WriteLine("Tabulator: before\tafter"); Console.WriteLine("Tab: \v"); Console.WriteLine("binary 0: \0"); } } Backspaces are importantin contrast to other characters, backspaces make it possible to delete other characters. Not all characters lead to real output. In case of binary zeros, nothing is displayed on screen: [hs@localhost csharp]$ mcs hello6.cs; mono hello6.exe Compilation succeeded Single quote: ' Quotation mark: " Backslash: Alert: Backspace: - Formfeed: Newline: Carriage Return: Tabulator: before after Tab: binary 0: Escaping characters is essential. Particularly when dealing with external data structures, databases, or XML, escaping is important. We'll get back to escaping later in this book. Symbols for escaping special characters are important when talking about paths. Paths can contain a number of backslashes. Especially on Microsoft Windows systems, this is interesting subject matter that can be the root of evil. Therefore, C# provides a simple mechanism for escaping backslashes in a path. Let's look at an example: using System; class Hello { public static void Main() { String error = "c:\new_pics"; Console.WriteLine("Error: " + error + "\n"); String correct = @"c:\new_pics"; Console.WriteLine("Correct: " + correct); } } The first string contains a hidden line feed that causes trouble when displaying the text on the screen. In the second example, we use verbatim stringsthis is a special kind of string that escapes only quotation marks. The advantage of verbatim strings is that the programmer need not worry about symbols other than quotation marks. Let's see what comes out when the program is started: [hs@localhost mono]$ mono path.exe Error: c: ew_pics Correct: c:\new_pics
http://www.informit.com/articles/article.aspx?p=101656&seqNum=3
CC-MAIN-2013-48
refinedweb
403
52.15
17 September 2009 16:33 [Source: ICIS news] HOUSTON (ICIS news)--Economies in emerging markets such as China, India and Brazil are growing significantly faster than developed markets and offer the best opportunities for near-term growth, a DuPont executive said on Thursday at the Credit Suisse Chemical and Ag Science Conference. “Our long-term goal is to penetrate all major markets to the same degree that we have in the US,” said Thomas Connelly, executive vice president and chief innovation officer for the US chemical producer. “This will take some time, but offers significant headroom for growth.” DuPont said significant increases in population and globalisation were transforming markets, adding that sales in emerging markets grew by 80% in the past five years at an annual growth rate of 16%. Those revenues made up nearly one-third of the company’s $30.5bn (€20.7bn) sales in 2008. The company said that 2009 emerging market sales would be down by about 10% from $9bn in 2008, but said such annual sales were projected to reach $13bn by 2012. “Our approach to growth outside the ?xml:namespace> In In In response to that demand, DuPont said it opened an R&D centre in “We initially establish relationships with strong local supply chains and build a base of business with our existing offers,” Connelly said. “As we gain market insight, we use our science capabilities to develop products that are tailored to the unique demands of the local market.” (
http://www.icis.com/Articles/2009/09/17/9248412/emerging-markets-offer-best-growth-opportunities-us.html
CC-MAIN-2015-22
refinedweb
247
55.88
13.4. Tuple Assignment with unpacking¶ Python has a very powerful tuple assignment feature that allows a tuple of variable names on the left of an assignment statement to be assigned values from a tuple on the right of the assignment. Another way to think of this is that the tuple of values is unpacked into the variable names. This does the equivalent of seven assignment statements, all on one easy line. One requirement is that the number of variables on the left must match the number of elements in the tuple.. Earlier we were demonstrating how to use tuples as return values when calculating the area and circumference of a circle. Here we can unpack the return values after calling the function. Python even provides a way to pass a single tuple to a function and have it be unpacked for assignment to the named parameters. If you run this, you will be get an error caused by line 7, where it says that the function add is expecting two parameters, but you’re only passing one parameter (a tuple). In line 6 you’ll see that the tuple is unpacked and 5 is bound to x, 4 to y. Don’t worry about mastering this idea yet. But later in the course, if you come across some code that someone else has written that uses the * notation inside a parameter list, come back and look at this again. Note Unpacking into multiple variable names also works with lists, or any other sequence type, as long as there is exactly one value for each variable. For example, you can write x, y = [3, 4]. 13.5. Unpacking Into Iterator Variables¶ Multiple assignment with unpacking is particularly useful when you iterate through a list of tuples or lists. For example, a dictionary consists of key-value pairs. When you call the items() method on a dictionary, you get back a sequence of key-value pairs. Each of those pairs is a two-item tuple. (More generally, we refer to any two-item tuple as a pair). You can iterate over the key-value pairs. Each time line 4 is executed, p will refer to one key-value pair from d. A pair is just a tuple, so p[0] refers to the key and p[1] refers to the value. That code is easier to read if we unpack the key-value pairs into two variable names. More generally, if you have a list of tuples that each has more than two items, and you iterate through them with a for loop pulling out information from the tuples, the code will be far more readable if you unpack them into separate variable names right after the word for. Check your Understanding tuples-4-1: If you want a function to return two values, contained in variables x and y, which of the following methods will work? - Make the last two lines of the function be "return x" and "return y" - As soon as the first return statement is executed, the function exits, so the second one will never be executed; only x will be returned - Include the statement "return [x, y]" - return [x,y] is not the preferred method because it returns x and y in a list and you would have to manually unpack the values. But it is workable. - Include the statement "return (x, y)" - return (x, y) returns a tuple. - Include the statement "return x, y" - return x, y causes the two values to be packed into a tuple. - It's not possible to return two values; make two functions that each compute one value. - It is possible, and frequently useful, to have one function compute multiple values. - You can't use different variable names on the left and right side of an assignment statement. - Sure you can; you can use any variable on the right-hand side that already has a value. - At the end, x still has it's original value instead of y's original value. - Once you assign x's value to y, y's original value is gone. - Actually, it works just fine! - Once you assign x's value to y, y's original value is gone. tuples-4-2: Consider the following alternative way to swap the values of variables x and y. What’s wrong with it? # assume x and y already have values assigned to them y = x x = y v1, v2, v3, and v4, to the following four values: 1, 2, 3, 4. pokemon. For every key value pair, append the key to the list p_names, and append the value to the list p_number. Do not use the .keys() or .values() methods. track_medal_countsand assign the list to the variable name track_events. Do NOT use the .keys() method.
https://runestone.academy/runestone/static/fopp/Tuples/TupleAssignmentwithunpacking.html
CC-MAIN-2019-26
refinedweb
799
70.23
Traffic Collision 2 3D action driving game3.80 / 5.00 2,083 Views Hero Agency Manage and train heroes to build up a world-renowned adventure agency!3.76 / 5.00 18,298 Views Hello Newgrounders! I'm starting work on a new series (Again XD)! I need two male voice actors to help me out! I need a guy who can sound like VideogameDunkey and a guy who can sound like Critical. If you can, PM me with your Email address and I'll send you the script. If I think your voice is perfect for the role, then your hired! Thanks to anyone who wants to help :D Semi paid position, looking for voice tallent. For full project details, please visit the URL below: eboot/ Hello voice actors of Newgrounds! I require myself one male and one female voice actor, for an animation I want to make. Now I've put all the information on this project in a neat .pdf (), but I understand this might be a bit suspicious, with virusses and what-not. If you'd feel more comfortable with a .docx I've also provided that.() If you don't feel comfortable on downloads then you might wanna check the public version out here : osuqZjSmQmEwkVnvTCg/edit I've added a screencap and I'm looking forward to your submissions! Yo!, Animator here looking for Two Voice actors for Semi periodic Animated Shorts! One male One female, looking for Teen-ish to Young Adult Voice Ranged. The Male would shoot for this Character: No specific voices in mind atm, so feel free to take a shot at it!. Female voice would shoot for: , same as the male no specifics, so please take a shot =)!. If interested or have any Questions feel free to PM me, or E-mail me at adrian@candyrag.com for more Info! Thank you for your time~ -Adrian Casting for Flash RPG "Primal Champions" - - Lots of Roles! Download this Audition Packet for the Audition lines and images of the characters <audition packet. Send the auditions to me at Redharvestng@yahoo.com ok, i am currently making a two episode miniseries for class, it is based on a story i wrote a while back and i'm gonna need voice actors, the story is still pending and the characters names are pending changes so i will give a short description of the characters who will be in it or sure so as i can see if you can give an appropriate voice story- the story takes place in a sort of fantasy world, its like a modern dark ages/medieval time 1-Red (between 20 and 25 years old), He is the impatient eldest of two brothers, arrogant and a very skilled swordsman and fighter. he can have any woman he wants (based on his skills and looks) but has little interest in anything other than honing and displaying his skills. he cares for his brother but doesn't express it well often leading to conflict. he has no job and makes his money as a sort of bounty hunter. Voice- Preferably with an "im better than you" attitude that can still be changed to regretful or sympathetic without it seeming unusual 2-Blue (between 19 and 24) the genius younger of the two brothers, he quietly tends to keep to himself and rarely leaves the house. Blue is a scientist/inventor, he makes things upon request as it is his job. he wishes that he could be a fighter like his brother but any time he tries he ends up hurting himself as he is not suited for the field. he cares for yet is often Jealous of his brother. blue's achievements are overshadowed by anything Red does. little by little his sanity depletes from his jealousy voice- relatively shy and reserved, but can change to impatient and pushy. voice 2- once he loses his sanity his voice must follow suit, something less deep pitch and i guess the word stringy, so not deep but not high. anyway, he will also need a laugh that shows he has lost his mind, nothing left but a soul of hate, i would prefer it if the laugh was drawn out but maybe a regular version as well 3-Advanced blue (????) there will be a brief time gap and Blue will return (it will be blue but it also wont be blue, i cant really explain without spoiler) regardless, he will be different from his previous self greatly, his intelligence will remain the same if not increased, he is devious, deceitful and out for revenge (possibly more) his skills in combat now rival his brothers but he also has unusual abilities to add to his dangerousness. sly, unpredictable/spontaneous and merciless, he will use anyone he needs to in order to achieve his goals. he almost always has a smile or smirk on his face and very rarely loses his calm, but when he does then it may mean trouble once i get the script under way i can request a few lines for trials, but in the mean time if you are interested you can PM me, you can try saying some lines you make up or read somewhere in the mean time if you like i may show you what the characters look like upon request when i get them sketched out THE GROK SQUAD ----- The Grok Squad is a group of sixteen little alien researchers stranded on Earth. While trying to find a way home, they decide to start learning more about our world to prepare an impromptu field report to their home culture. The show is essentially a fun educational show where each new topic is looked at through multiple lenses -- the scientific alien, the jock alien, the social alien, the artistic alien, and so on. ----- VOICES: One thing to remember with these characters is that they are all smart and friendly. Some are a little more introverted, others are much more outgoing. They are very curious which occasionally gets them into trouble, but they do their best not to cause trouble intentionally. Imagine a mix of an innocent inquisitive child with a trained scientist. Each has their own area of interest and unique way of looking at the world, but overall they are positive, upbeat, fun-loving, and have a thirst for knowledge. Just as important, none of the accents should be TOO thick. These characters need to be able to explain complex topics in ways an average person can understand, and the accents, while adding flavor, should not subtract from the comprehension. Character design artwork is available here: If anyone has questions about like, personalities, that aren't clear from the images or descriptions, let me know! EDIT: I have posted this same offer to my Facebook friends, but no one has specified any parts yet, so they are all still equally open and I will indicate otherwise if that changes! MALES: SCRUMP (Spatial/Dimensional): Brooklyn, Italian, Gangster, Gravely, New York, New Jersey (Danny Devito, Frank Stallone, Tony Clifton, Al Pacino, Chazz Palminteri) VINDALOO (Vocal/Linguistic): Arkansas, Georgia, Politician, Deep Southern Drawl (Bill Clinton, Roscoe and Boss Hogg, Foghorn Leghorn, Futurama's Hyperchicken Lawyer) HORNSWOGGLE (Mechanical/Dexterous): New England, Boston (Norm Abrahm's New Yankee Workshop, Peter Griffin, Alan Alda) DIDGERIDOO (Kinesthetic/Athletic): Black, Urban (Will Smith, Michael Jordan) MOOG (Naturalistic/Environmental): Australian, Ocker (Crocodile Hunter Steve irwin, Paul Hogan, Yahoo Derious) YONKERS (Interpersonal/Social): Canadian, Minnesota (Bob and Doug McKenzie, Fargo, Drop Dead Gorgeous) GIMBLE (Intrapersonal/Emotional): Midwest, Northwest, Quiet, Reserves, Friendly (Michael Cera, Tobey Maguire, Winnie the Pooh, Piglet) IPSWITCH (Literary/Textual): British, Southern English, Formal RP (David Mitchell, Ricky Gervaise, Stephen Fry, John Cleese, Wadsworth the Robot) FEMALES: NUDNIK (Visual/Graphic): High rising terminal (Hippie, Valley Girl, Luna Lovegood, Cat Valentine from Victorious, Phoebe and Ursula from Friends/Mad About You, Cloudcuckoolander) POLLIWOG (Aural/Rhythmic): Country Western (Reba McIntyre, Holly Hunter) KATZENJAMMER (Existential/Spiritual): Southern Irish, Gaelic (Elvish) WURTZEL (Analytical/Empirical): Queens, Yiddish, Jewish (Less Annoying Fran Drescher) CADDYWAMPUS (Creative/Synthetical): Chicano, Latino (Carla from Scrubs) TREACLE (Mathematical/Logical): Hindi, Indian, Pakistani (Raj -- Big Bang Theory, Samir -- Office Space) BOROGOVE (Symbolic/Metaphorical): Jamaican (Cool Runnings) FLINK (Factual/Memorial): Mid/Trans-Atlantic (Katherine Hepburn, Joan Crawford) ----- AUDITION SCRIPT NOTE: The following was voted the second most funniest joke in the world, according to Wikipedia. As it combines critical thinking with humor, it seemed a great sample script to play with animation using these characters, I'd like a full version of the script all in the same accent but with different inflections. Example: If you decide to do Wurtzel, you would do all the lines with the Queens/Yiddish accent, but she might do her regular voice for the narrator, a stern, deeper voice for Holmes, and a wistful dreamy voice for Watson, but they all would still sound like her. I'll cut and splice after I get 16 good recordings, either with one line from each character, or do a separate animation for each character telling the story, or combine two or three characters -- one as the narrator, one playing Holmes and one playing Watson. We'll see once I know what I have to work with! NARRATOR: Sherlock Holmes and Doctor Watson were going camping. They pitched their tent under the stars and went to sleep. Sometime in the middle of the night Holmes woke Watson up and said: HOLMES: Watson, look up at the sky, and tell me what you see. NARRATOR: Watson replied: WATSON: I see millions and millions of stars. NARRATOR: Holmes said: HOLMES: And what do you deduce from that? NARRATOR: Watson replied: WATSON: Well, if there are millions of stars, and if even a few of those have planets, itâEUTMs quite likely there are some planets like Earth out there. And if there are a few planets like Earth out there, there might also be life." NARRATOR: And Holmes said: HOLMES: Watson, you idiot, it means that somebody stole our tent! ----- DEADLINE: I will set an initial deadline of 11:59PM, January 31st, 2013. If I need to extend the date more, I'll do so at that time (sixteen voices is a lot to ask for!) ----- Please send attached files OR links to files on something such as Soundcloud (don't forget to enable downloading) to my email: jasonleeholm@gmail.com Hello everyone, I'm attempting to make my first flash and would like the assistance of two voice actors/actress. I would like the help of one male who will be playing the 'father' so anyone who has a fatherly, adult sounding voice would be greatly appreciated. Secondly for the role of the babysitter I would need the assistance of a female to play her part. She has been drawn to be in her younger teens and her I think the best way to describe her would be that she's upbeat and addresses adults with good manners. Included in the link is a rough WIP of the animation with sub titles so those interested can have a look and see if they might be willing to play the part. Please PM me if you wish to help me with my first flash. Cheers. I've written a movie script and put together a team for the storyboards. An artist, 4 professional producers, 2 signed rock musicians and a composer for the Film Score. We're currently working with a movie studio and part of the contract allows me to source outside material for the DVD concept Package that would be presented to executives and Licensee's. The budget I'm working with is extremely tight so the payment is in experience. If all goes to plan we can negotiate payment once finances become available. Plot: 19 year old Shaun starts off as an ordinary outcast until he unveils a secret of God's. He an his friends must decide which side they will serve during an epic battle between good and evil.! Voice recording submissions: "Hi my name is____(name of character)__________" If you can do more than one voice that would be great! I'm really new to voice acting but if you are willing to take up a female noobie.....I'm game.! Roles that are currently Available: Mrs. Paschar Supporting Character: Angie (Female) Supporting Character: Samantha (Female) Ms. Basquali - (Female) Mrs. Pendagras (teacher) Henny - (Female) Nurse's 1, 2 & 3 (Females) At 1/14/13 04:56 PM, XTREEMMAK wrote: Semi paid position, looking for voice tallent. For full project details, please visit the URL below: eboot/ I remember seeing this on voiceactingalliance a few times I think I even auditioned once or twice. I am sorry you either aren't finding the right voices/ losing voice actors. Your project seems very well put together and I hope you are able to see it through; I know I would like to see it. I don't mind giving it another try, I will try to get an audition in as soon as I can. These are the characters we have left. Please send your auditions to Natesmickle84@hotmail.ca Thanks! Ms. Basquali - 40'S (Female) Mrs. Pendagras - mid 30'S (teacher) Mrs. Paschar - 40's (Female) Henny - mid 20'S (Female) Hello! I am looking for two voices for my new, short film: 1. A deep, villainy voice. Can be cracky. 2. A light, woman voice. Actually, it doesn't have to be all that light, it can be a comically masculine voice. :) If you're interested, PM me and I'll send you the lines and my E-mail adress. Thanks in advance, - Nikolaj > Insert naive and probably tragically foreshadowing signature here < Hey guys, so basically as the title says I need a VA who can do a deep black man voice. Message me if your up for the job and I will go trough auditions and pick the best one. Cheers Yes 1. I need a VA that can sound like Sexy adult spanish male for an Adult Swim network bumper. I also need a VA that can sound like a sexy female for the same project. 2. "delicious!" "fresh" Try and sound as sensuous as possible. 3. email submissions to bentfingers1@gmail.com with the subject VO 4. TONIGHT! hey. Gender: Male Age: 13 (Relatively deep voice, can change pitch) Microphone: Samson - Meteor Mic VA History: None. New to the scene Languages (Other than English): N/A Accents: British, Mexican (Can do others, just not very well) Notes: I am also a "Brony", and have no problem with cursing. I think that's all that needs to be known. Peace. "I've done nothing productive all day." fuck fuck fuck posted in the wrong thread my bad shit. "I've done nothing productive all day." At 1/27/13 10:21 PM, imratherdashingokay wrote: Age: 13 (Relatively deep voice, can change pitch) Microphone: Samson - Meteor Mic Accents: British, Mexican (Can do others, just not very well) Notes: I am also a "Brony", and have no problem with cursing. Peace. You are an enormous faggot and you're posting in the wrong thread. Quit giving the youth a bad name. @metraff @NG_Artists Support Newgrounds Classifieds: commission animation "Metraff I'm going to fuck you." - Sodamachine I DONT NEED VOICE ACTORS ANYMORE, GAAH!!! > Insert naive and probably tragically foreshadowing signature here < I'm making a animation called "Ageless: Fall of VigiI". I'm planning this episode to be about 10-15 minutes long. I am looking for a female voice actress to play a Ninja Named Ava. Its a main role. If you're interested, PM me. I have a presentation on my page as well. (ClimbLadders) Hi Guys, I'm looking for Some voice actors for a short Pokemon animation. These are the VA parts: Ash Ketchum Misty Battle opponent Enraged Man Script and storyboards are complete and can be sent to potential VA to read from. The animation is a comedy and comprises of a few short sketches that run for 1minute 20 seconds total. Thank you in advance to anyone who can help me with this as I've been trying to get an animation made for an extremely long time! I got bored one day and wrote a script for an audio drama based in the star wars universe. I show it to a friend who said it was good I should rewrite it and try to produce it. I will be taking a web animation course later this year so i will try and animate it once i get the sounds mixed together. The story is about a pair of slave girls in Jabba's Palace leading up to the events of Episode 6. There are six roles all together, seven if you count the narration. Record your auditions in .MP3 format, just be sure they are clearly labeled so I know what character you are auditioning for. Send your lines to Spectre1988@hotmail.com and make sure the e-mail is titled 'Star Wars Project lines' so I will know it's not a spam bot or something. Try to have you auditions in by February 16th. I would suggest auditioning for all of them, or all the ones you think you can manage. I have been over the net and a few roles have been filled but there are still several open. Female roles Name: Lyn Me. Age: 21 Voice type: Soft and sultry Character: A professional singer with a great body who likes to show it off. Left home to see the galaxy and maybe meet her childhood hero Boba Fett. Line 1: So you are the new girl. Fortuna told me about you. Line 2: I think they would rather watch, and so would I. Name: Yarna. Age: 40-ish Voice type: Deeper then average and soft. Character: A dancer who's husband was killed by pirates who then sold her and her children to Jabba. Line 1: DonâEUTMt talk like that! There is always hope. Just a little longer and it will all be over. Line 2: It's time Oola. Male roles Name: Bib Fortuna. Age: 30-ish Voice type: Slightly higher then normal Character; A sly and devious schemer, always looking to gain more power and back stab the competition. Line 1: He is, most powerful person in this system and among the most influential in the Outer Rim. Line 2: Oh great and mighty one, I bring you a rare gift from my home world. Name: Jabba Age: 800+ Voice Type: Very deep, will probably have to use the editor to change it to fit so don't worry too much. Character: Powerful and evil crime lord, cruel and sadistic. Line 1: Bo shudda! Line 2: ChoyâEUTMsa dtay wonna wanga? Send your audition lines to spectre1988@hotmail.com. I need a girl (Or boy if you can do this) to voice act for me. I need someone who can sound like a little girl that's around 5-8 years of age. Message me inbox and I'll give you the script. I need a girl (Or boy if you can do this) to voice act for me. I need someone who can sound like a little girl that's around 5-8 years of age. Message me inbox and I'll give you the script. We're looking for narrator voice for a large RPG game. The game is comedic in tone and we're open to a range of voices. Some examples of games that have narrators similar to what we're looking for: Bastion The Cave Trine PM me if interested with a link or some other way of listening to your entry. We are willing to negotiate payment. Thanks I need a voice actor with a good microphone. for this animation im working on. would really be helping me out. At 2/8/13 01:07 PM, ToonLink-PC wrote: I need a voice actor with a good microphone. for this animation im working on. would really be helping me out. What is the animation? Can you give some more details? I have a good microphone and I'm interested. :) At 2/8/13 01:07 PM, ToonLink-PC wrote: I need a voice actor with a good microphone. for this animation im working on. would really be helping me out. I have top-tier microphone(s). Could you be more specific, though? : "Sorry, but 'FUCK.als' already exists" At 2/8/13 01:07 PM, ToonLink-PC wrote: I need a voice actor with a good microphone. for this animation im working on. would really be helping me out. Hello ToonLink-PC, I recently read your post which requested a voice actor with a good microphone, I have a Yeti-Blue pod-cast microphone with the ability to remove most of the background noise. [like this] There is one problem I do have: I read based on grammar, so you might want to consider putting your pauses in the right places, for example, if I read your post. (Reads post) ;)
http://www.newgrounds.com/bbs/topic/816629/91
CC-MAIN-2016-44
refinedweb
3,528
70.63
Why Would Somebody Program Treepython Instead of Python Treepython is a variation of python which works in my structure editor textended. I am planning to write a computer game using this new language this February. In this post I will be adding new semantic patterns into treepython. It hopefully explains my motivation to break up with plaintext programming. Here's the samples/clearscreen.t+ from textended-edit. When I run it, it grabs the editor display and colors it black. The 0.0, 0.0, 0.0, 1.0 represents the black color here. Programmers see their colors like this, which stands to explain where the term "programmer art" is coming from. We could do a slightly better job by representing the color in the way they appear in an image manipulation software. This is called hexadecimal notation. In python you could create a function which constructs the needed format from a string, and input the color using that function as a helper. It would look like this: OpenGL.GL.glClearColor(*rgba_to_vec4("#000000")) The star stands for variable number of arguments. The rgba_to_vec4 produces 4 values in a list, but glClearColor wants 4 values. The star expands the list to fill 4 argument slots. This is slightly more readable, because you can copy the string into your preferred color chooser to see the color. But it doesn't come even close to what I can do in the treepython: If you coded in trees, you could represent the color right here. Next I'm going to show how that happens. First we annotate a string with symbol float-rgba. This will represent our hexadecimal colors and should translate to a tuple, each channel represented by a floating point number. If we try to evaluate the program, it returns us an error. The editor highlights the bad construct and shows the error message on the right. Extending Treepython with semantics The error message tells that it cannot be recognised as an expression. Lets extend treepython to recognize strings that are labeled float-rgba. @semantic(expr, String("float-rgba")) def float_rgba_expression(env, hexdec): channels = [c / 255.0 for c in hex_to_rgb(hexdec)] + [1.0] return ast.Tuple( [ast.Num(x, lineno=0, col_offset=0) for x in channels[:4]], ast.Load(), lineno=0, col_offset=0) def hex_to_rgb(value): lv = len(value) return tuple(int(value[i:i + lv // 3], 16) for i in range(0, lv, lv // 3)) Treepython is a translator between textended tree structures and python ast, so the above code resembles a lisp macro. It's up to the language's implementor to decide how his language is extended. Next if we try to run the program, the program crashes and produces the following error message to the terminal: Traceback (most recent call last): File "main.py", line 550, in <module> paint(time.time()) File "t+", line 0, in paint TypeError: this function takes at least 4 arguments (1 given) I intend to get it overlay this error over the file, just like it did small while ago. Anyway you may see why it's happening here. We are passing a tuple to the call. That there's vararg implemented may give you a false sense of completeness of this project. I just coded in the support for variable argument semantics while writing this blog post. Extending Treepython's layouter with semantics Here's we got the code to extend our layouter with the float-rgba string semantics: ) return hpack([Glue(2), ImageBox(12, 10, 4, None, rgba)] + prefix + text) def hex_to_rgb(value): lv = len(value) return tuple(int(value[i:i + lv // 3], 16) for i in range(0, lv, lv // 3)) Would you have rather wanted a border around it? No problem: ) yield Padding( hpack([ImageBox(12, 10, 4, None, rgba)] + prefix + text), (1, 1, 1, 1), Patch9('assets/border-1px.png')) yield Glue(2) As you may see now, we've improved readability of our program. Why show the hexadecimal if you can just show the color? That's because the editor needs a visual form to change the color. I think I'll be able to loosen that requirement later on. I modified an existing language, but all these changes could have been isolated apart and introduced into the file with a directive, like this: To support some other things, such asadding anonymous functions into a language that doesn't support them would not be as easy. Many languages might still end up to not support extensibility at all. But there are clearly less costs to implementing new semantics in the first place... ...Well you could do this kind of things in lisp of course! But when was the last time your lisp looked like python?
http://boxbase.org/entries/2015/jan/26/treepython/
CC-MAIN-2018-34
refinedweb
791
65.01
Introduction Computing has evolved over time and more and more ways have come up to make computers run even faster. What if instead of executing a single instruction at a time, we can also execute several instructions at the same time? This would mean a significant increase in the performance of a system. Through concurrency, we can achieve this and our Python programs will be able to handle even more requests at a single time, and over time leading to impressive performance gains. In this article, we will discuss concurrency in the context of Python programming, the various forms it comes in and we will speed up a simple program in order to see the performance gains in practice. What is Concurrency? When two or more events are concurrent it means that they are happening at the same time. In real life, concurrency is common since a lot of things happen at the same time all the time. In computing, things are a bit different when it comes to concurrency. In computing, concurrency is the execution of pieces of work or tasks by a computer at the same time. Normally, a computer executes a piece of work as others wait their turn, once it is completed, the resources are freed and the next piece of work begins execution. This is not the case when concurrency is implemented as the pieces of work to be executed don't always have to wait for others to be completed. They are executed at the same time. Concurrency vs Parallelism We have defined concurrency as the execution of tasks at the same time, but how does it compare to parallelism, and what is it? Parallelism is achieved when multiple computations or operations are carried out at the same time or in parallel with the goal of speeding up the computation process. Both concurrency and parallelism are involved with performing multiple tasks simultaneously, but what sets them apart is the fact that while concurrency only takes place in one processor, parallelism is achieved through utilizing multiple CPUs to have tasks done in parallel. Thread vs Process vs Task While generally speaking, threads, processes, and tasks may refer to pieces or units of work. However, in detail they are not so similar. A thread is the smallest unit of execution that can be performed on a computer. Threads exist as parts of a process and are usually not independent of each other, meaning they share data and memory with other threads within the same process. Threads are also sometimes referred to as lightweight processes. For example, in a document processing application, one thread could be responsible for formatting the text and another handles the autosaving, while another is doing spell checks. A process is a job or an instance of a computed program that can be executed. When we write and execute code, a process is created to execute all the tasks that we have instructed the computer to do through our code. A process can have a single primary thread or have several threads within it, each with its own stack, registers, and program counter. But they all share the code, data, and memory. Some of the common differences between processes and threads are: - Processes work in isolation while threads can access the data of other threads - If a thread within a process is blocked, other threads can continue executing, while a blocked process will put on hold the execution of the other processes in the queue - While threads share memory with other threads, processes do not and each process has its own memory allocation. A task is simply a set of program instructions that are loaded in memory. Multithreading vs Multiprocessing vs Asyncio Having explored threads and processes, let us now delve deeper into the various ways a computer executes concurrently. Multithreading refers to the ability of a CPU to execute multiple threads concurrently. The idea here is to divide a process into various threads that can be executed in a parallel manner or at the same time. This division of duty enhances the speed of execution of the entire process. For example, in a word processor like MS Word, a lot of things are going on when in use. Multithreading will allow the program to autosave the content being written, perform spell checks for the content, and also format the content. Through multithreading, all this can take place simultaneously and the user does not have to complete the document first for the saving to happen or the spell checks to take place. Only one processor is involved during multithreading and the operating system decides when to switch tasks in the current processor, these tasks may be external to the current process or program being executed in our processor. Multiprocessing, on the other hand, involves utilizing two or more processor units on a computer to achieve parallelism. Python implements multiprocessing by creating different processes for different programs, with each having its own instance of the Python interpreter to run and memory allocation to utilize during execution. AsyncIO or asynchronous IO is a new paradigm introduced in Python 3 for the purpose of writing concurrent code by using the async/await syntax. It is best for IO-bound and high-level networking purposes. When to use Concurrency The advantages of concurrency are best tapped into when solving CPU-bound or IO-bound problems. CPU-bound problems involve programs that do a lot of computation without requiring networking or storage facilities and are only limited by capabilities of the CPU. IO-bound problems involve programs that rely on input/output resources which sometimes may be slower than the CPU and are usually in use, therefore, the program has to wait for the current task to release the I/O resources. It is best to write concurrent code when the CPU or I/O resources are limited and you want to speed up your program. How to use Concurrency In our demonstration example, we will solve a common I/O bound problem, which is downloading files over a network. We will write non-concurrent code and concurrent code and compare the time taken for each program to complete. We will download images from Imgur through their API. First, we need to create an account and then register our demo application in order to access the API and download some images. Once our application is set up on Imgur, we will receive a client identifier and client secret that we'll use to access the API. We'll save the credentials in a .env file since Pipenv automatically loads the variables from the .env file. Synchronous Script With those details, we can create our first script that will simply download a bunch of images to a downloads folder: import os from urllib import request from imgurpython import ImgurClient import timeit client_secret = os.getenv("CLIENT_SECRET") client_id = os.getenv("CLIENT_ID") client = ImgurClient(client_id, client_secret) def download_image(link): filename = link.split('/')[3].split('.')[0] fileformat = link.split('/')[3].split('.')[1] request.urlretrieve(link, "downloads/{}.{}".format(filename, fileformat)) print("{}.{} downloaded into downloads/ folder".format(filename, fileformat)) def main(): images = client.get_album_images('PdA9Amq') for image in images: download_image(image.link) if __name__ == "__main__": print("Time taken to download images synchronously: {}".format(timeit.Timer(main).timeit(number=1))) In this script, we pass an Imgur album identifier and then download all the images in that album using the function get_album_images(). This gives us a list of the images and then we use our function to download the images and save them to a folder locally. This simple example gets the job done. We are able to download images from Imgur but it does not work concurrently. It only downloads one image at a time before moving on to the next image. On my machine, the script took 48 seconds to download the images. Optimizing with Multithreading Let us now make our code concurrent using Multithreading and see how it performs: # previous imports from synchronous version are maintained import threading from concurrent.futures import ThreadPoolExecutor # Imgur client setup remains the same as in the synchronous version # download_image() function remains the same as in the synchronous def download_album(album_id): images = client.get_album_images(album_id) with ThreadPoolExecutor(max_workers=5) as executor: executor.map(download_image, images) def main(): download_album('PdA9Amq') if __name__ == "__main__": print("Time taken to download images using multithreading: {}".format(timeit.Timer(main).timeit(number=1))) In the above example, we create a Threadpool and set up 5 different threads to download images from our gallery. Remember threads execute on a single processor. This version of our code takes 19 seconds. That is almost three times faster than the synchronous version of the script. Optimizing with Multiprocessing Now we will implement Multiprocessing over several CPUs for the same script to see how it performs: # previous imports from synchronous version remain import multiprocessing # Imgur client setup remains the same as in the synchronous version # download_image() function remains the same as in the synchronous def main(): images = client.get_album_images('PdA9Amq') pool = multiprocessing.Pool(multiprocessing.cpu_count()) result = pool.map(download_image, [image.link for image in images]) if __name__ == "__main__": print("Time taken to download images using multiprocessing: {}".format(timeit.Timer(main).timeit(number=1))) In this version, we create a pool that contains the number of CPU cores on our machine and then map our function to download the images across the pool. This makes our code run in a parallel manner across our CPU and this multiprocessing version of our code takes an average of 14 seconds after multiple runs. This is slightly faster than our version that utilizes threads and significantly faster than our non-concurrent version. Optimizing with AsyncIO Let us implement the same script using AsyncIO to see how it performs: # previous imports from synchronous version remain import asyncio import aiohttp # Imgur client setup remains the same as in the synchronous version async def download_image(link, session): """ Function to download an image from a link provided. """ filename = link.split('/')[3].split('.')[0] fileformat = link.split('/')[3].split('.')[1] async with session.get(link) as response: with open("downloads/{}.{}".format(filename, fileformat), 'wb') as fd: async for data in response.content.iter_chunked(1024): fd.write(data) print("{}.{} downloaded into downloads/ folder".format(filename, fileformat)) async def main(): images = client.get_album_images('PdA9Amq') async with aiohttp.ClientSession() as session: tasks = [download_image(image.link, session) for image in images] return await asyncio.gather(*tasks) if __name__ == "__main__": start_time = timeit.default_timer() loop = asyncio.get_event_loop() results = loop.run_until_complete(main()) time_taken = timeit.default_timer() - start_time print("Time taken to download images using AsyncIO: {}".format(time_taken)) There are few changes that stand out in our new script. First, we no longer use the normal requests module to download our images, but instead we use aiohttp. The reason for this is that requests is incompatible with AsyncIO since it uses Python's http and sockets module. Sockets are blocking by nature, i.e. they cannot be paused and execution continued later on. aiohttp solves this and helps us achieve truly asynchronous code. The keyword async indicates that our function is a coroutine (Co-operative Routine), which is a piece of code that can be paused and resumed. Coroutines multitask cooperatively, meaning they choose when to pause and let others execute. We create a pool where we make a queue of all the links to the images we wish to download. Our coroutine is started by putting it in the event loop and executing it until completion. After several runs of this script, the AsyncIO version takes 14 seconds on average to download the images in the album. This is significantly faster than the multithreaded and synchronous versions of the code, and pretty similar to the multiprocessing version. Performance Comparison Conclusion In this post, we have covered concurrency and how it compares to parallelism. We have also explored the various methods that we can use to implement concurrency in our Python code, including multithreading and multiprocessing, and also discussed their differences..
https://stackabuse.com/concurrency-in-python/
CC-MAIN-2019-43
refinedweb
1,994
54.12
SYNOPSYS #include <genlib.h> void GENLIB_SC_TOP(insname, symetry) char *insname; char symetry; man2html: unable to open or read file man1/alc_origin.1 PARAMETERS - insname - Name to be given to the instance on the model - symetry - Geometrical operation to be performed on the instance before beeing placed DESCRIPTIONSC_TOP add an instance in the current cell. The bottom left corner of the abutment box of the instance is placed, after beeing symetrized and/or rotated, toward the top left corner of the abutment box of the "reference instance". The newly placed instance becomes the "reference instance". The placement takes place only if the netlist is up to day, because the model of the instance is seeked there, in order to ensure rotation take place - SY_RM - Y becomes -Y, and then a negative 90 degrees rotation take place ERRORS"GENLIB_SC. -_TOP("ins2", SYM_X); /* Save all that on disk */ GENLIB_SAVE_PHSC(); }
https://manpages.org/genlib_sc_top
CC-MAIN-2022-21
refinedweb
146
51.28
Opened 7 years ago Closed 6 years ago #5933 closed defect (fixed) sqlite2mysql script problem Description Hi I am getting following error when tried to use the above script to conver sqlite db to mysql for trac. python sqlite2mysql -h Traceback (most recent call last): File "sqlite2mysql", line 36, in <module> from MySQLdb import ProgrammingError ImportError: No module named MySQLdb Plz help me for the same Attachments (0) Change History (2) comment:1 Changed 7 years ago by anonymous comment:2 Changed 6 years ago by anonymous - Resolution set to fixed - Status changed from new to closed Note: See TracTickets for help on using tickets. You need to install the mysql library in python for that import to work, in ubuntu the package is named python-mysqldb. If you are on another os, find out how to install libraries into python.
https://trac-hacks.org/ticket/5933
CC-MAIN-2016-22
refinedweb
141
61.84
In our last post we have learnt about the addition of two matrices.Today we shall learn about the multiplication of two matrices.Multiplication of two matrices is little complicated as compared to the addition of the two matrices.One of the basic condition for the multiplication of two matrices is number of columns of the first matrix equal to the number of rows of second matrix.This is the main condition for the multiplication of two matrices. 1. Input:We enter the number of rows,columns of two matrices.Later we enter the elements of the two matrices. 2. Explanation:while multiplying two matrices,we need to multiply the rows,columns.To get the element of the first row,first column element of the resultant matrix……we need to multiply the corresponding first row elements of first matrix with first column elements of second matrix and then add them.In the same way if we need to get the nth row,ith column…..then we need to multiply the corresponding nth row elements of the first matrix with ith column of the second matrix and then add them. From the above picture we can explain that we got the first row,first column element as 1*5+7*2=19….In the same way first row,second column element as 1*6+2*8=22.In the same way we can get the value of the element we need. 3.Output:We need to display the resultant matrix of multiplication. From the above explanation we shall write the code for multiplication. import java.util.*; class multiplication { int i,j,sum=0; void multi() { Scanner in = new Scanner(System.in); System.out.println("Enter the number of rows of first matrix:"); int m=in.nextInt(); System.out.println("Enter the number of columns of first matrix:"); int n=in.nextInt(); int[][] mat1 = new int[m+1][n+1]; for(i=1;i < m+1;i++) { for(j=1;j < n+1;j++) { System.out.println("Enter the element of "+ i +" row "+ j +" column:"); mat1[i][j] = in.nextInt(); }// end of j loop }// end of i loop System.out.println("Enter the number of rows of second matrix:"); int p=in.nextInt(); System.out.println("Enter the number of columns of second matrix:"); int q=in.nextInt(); int[][] mat2 = new int[p+1][q+1]; for(i=1;i < p+1;i++) { for(j=1;j < q+1;j++) { System.out.println("Enter the element of "+ i +" row "+ j +" column:"); mat2[i][j] = in.nextInt(); }// end of j loop }// end of i loop if(n==q)//comparing coulmns with rows { int[][] resltmat = new int[m+1][q+1]; for ( i = 1 ; i < m+1 ; i++ ) { for ( j = 1 ; j < n+1 ; j++ ) { for (int k = 0 ; k < p+1 ; k++) sum = sum + mat1[i][k]*mat2[k][j]; resltmat[i][j] = sum; sum = 0; } // end of the loop } // end of the loop System.out.println("Resultant matrix is:\n"); for(i=1;i < m+1;i++) { for(j=1;j< q+1;j++) { System.out.print(resltmat[i][j] + " "); } // end of j loop System.out.println("\n"); } // end of i loop } else System.out.println("Matrix Multiplication can't be performed because columns not equal to rows"); }//end of function }// end of class class matmul { public static void main(String arg[]) { Scanner in = new Scanner(System.in); multiplication ob = new multiplication(); ob.multi(); } } You can download the source code:download Well written post with good indentation and comments. Check line number 8,10,17,21,23,30 and 58. These statements are not completely visible. Please take care of this. Read this :A short article on how to learn java programming Java programming tutorial
https://letusprogram.com/2013/08/27/multiplication-of-two-matrix-in-java/
CC-MAIN-2021-17
refinedweb
623
58.48
(Almost) out-of-the box logging Project description (Almost) out-of-the box logging. Frustum is a wrapper around the standard’s library logging, so you don’t have to write the same boilerplate again. Install: pip install frustum Usage: from frustum import Frustum # Initialize with verbosity from 1 to 5 (critical to info) frustum = Frustum(verbosity=5, name='app') # Register all the events that you want within frustum frustum.register_event('setup', 'info', 'Frustum has been setup in {}') # Now you can use the registered events in this way frustum.log('setup', 'readme') # The previous call would output: # INFO:app:Frustum has been setup in readme # into your stdout (as per standard logging configuration) Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/frustum/0.0.2/
CC-MAIN-2019-26
refinedweb
136
51.18
I have this method inside a class that extends an abstract class. This is a two part problem. PART 1: my toString method is loaded with errors below. I have imported java.util.* already. I've obviously missed one thing that is causing the whole method to give me an error. This same class is copied from an abstract class I created. Should my abstract contain less info and could that be causing the error? LongDistanceCall.java:45: class, interface, or enum expected public String toString() { ^ LongDistanceCall.java:49: class, interface, or enum expected String startTimeOutput = startTime.toString(startTime); //send PreciseTime objects to 'toString()' ^ LongDistanceCall.java:50: class, interface, or enum expected String endTimeOutput = endTime.toString(endTime); ^ LongDistanceCall.java:52: class, interface, or enum expected costOutput += this.computeCost(); ^ PART 2: My class extend and abstract class. I cant determine what I'm supposed to put in my abstract class to help define my class constructor. Here's my class constructor. Should it just be blank, since I have to classes that implement the abstract? public class MyConstuctor extends AnAbstractClass { protected object o; protected object o2; public myConstructor(Object o, Object o2 ) { this.o = o; this.o2 = o2; } //end constructor } Thanks in advance for your help.
http://www.javaprogrammingforums.com/whats-wrong-my-code/10360-method-issues.html
CC-MAIN-2014-35
refinedweb
205
61.63
Here’s something I discovered the other day: ActiveRecord infers model attributes from the database. Each attribute gets a getter, setter, and query method. A query method? I know this is true for boolean columns but non boolean columns? class User < ActiveRecord::Base end users (id, name) def test_should_say_if_it_does_or_does_not_have_a_name_when_sent_name? user = User.new assert ! user.name? user.name = '' assert ! user.name? user.name = 'name' assert user.name? end You get that query method for free for each of your ActiveRecord attributes. From this passing test it looks as if for strings its implemented in terms of String#blank? This is nice because it can save you an extra message. if user.name.blank? end becomes if ! user.name? end
http://robots.thoughtbot.com/i-did-not-know-that
CC-MAIN-2014-35
refinedweb
118
69.18
.\" $NetBSD: vgrindefs.5,v 1.4 1997/10/20 03:01:28 lukemefs.5 8.1 (Berkeley) 6/6/93 .\" .Dd June 6, 1993 .Dt VGRINDEFS 5 .Os BSD 4.2 .Sh NAME .Nm vgrindefs .Nd language definition data base for .Xr vgrind 1 .Sh SYNOPSIS .Nm .Sh DESCRIPTION The .Nm file contains all language definitions for .Xr vgrind 1 . The data base is very similar to .Xr termcap 5 . .Sh FIELDS The following table names and describes each field. .Pp .Bl -column Namexxx Tpexxx .Sy Name Type Description .It "pb str regular expression for start of a procedure" .It "bb str regular expression for start of a lexical block" .It "be str regular expression for the end of a lexical block" .It "cb str regular expression for the start of a comment" .It "ce str regular expression for the end of a comment" .It "sb str regular expression for the start of a string" .It "se str regular expression for the end of a string" .It "lb str regular expression for the start of a character constant" .It "le str regular expression for the end of a character constant" .It "tl bool present means procedures are only defined at the top lexical level" .It "oc bool present means upper and lower case are equivalent" .It "kw str a list of keywords separated by spaces" .El .Pp .Sh EXAMPLES The following entry, which describes the C language, is typical of a language entry. .Bd -literal C|c:\ :pb=^\ed?*?\ed?\ep\ed?\e(\ea?\e):bb={:be=}:cb=/*:ce=*/:sb=":se=\ee":\e :lb=':le=\ee':tl:\e :kw=asm auto break case char continue default do double else enum\e extern float for fortran goto if int long register return short\e sizeof static struct switch typedef union unsigned while #define\e #else #endif #if #ifdef #ifndef #include #undef # define else endif\e if ifdef ifndef include undef: .Ed .Pp Note that the first field is just the language name (and any variants of it). Thus the C language could be specified to .Xr vgrind 1 as "c" or "C". .Pp Entries may continue onto multiple lines by giving a \e as the last character of a line. Capabilities in .Nm are of two types: Boolean capabilities which indicate that the language has some particular feature and string capabilities which give a regular expression or keyword list. .Sh REGULAR EXPRESSIONS .Nm uses regular expression which are very similar to those of .Xr ex 1 and .Xr lex 1 . The characters `^', `$', `:' and `\e' are reserved characters and must be "quoted" with a preceding .Ql \e if they are to be included as normal characters. The metasymbols and their meanings are: .Bl -tag -width indent .It $ the end of a line .It \&^ the beginning of a line .It \ed a delimiter (space, tab, newline, start of line) .It \ea matches any string of symbols (like .* in lex) .It \ep matches any alphanumeric name. In a procedure definition (pb) the string that matches this symbol is used as the procedure name. .It () grouping .It \&| alternation .It ? last item is optional .It \ee preceding any string means that the string will not match an input string if the input string is preceded by an escape character (\e). This is typically used for languages (like C) which can include the string delimiter in a string by escaping it. .El .Pp Unlike other regular expressions in the system, these match words and not characters. Hence something like "(tramp|steamer)flies?" would match "tramp", "steamer", "trampflies", or "steamerflies". .Sh KEYWORD LIST The keyword list is just a list of keywords in the language separated by spaces. If the "oc" boolean is specified, indicating that upper and lower case are equivalent, then all the keywords should be specified in lower case. .Sh FILES .Bl -tag -width /usr/share/misc/vgrindefs -compact .It Pa /usr/share/misc/vgrindefs File containing terminal descriptions. .El .Sh SEE ALSO .Xr troff 1 , .Xr vgrind 1 .Sh HISTORY The .Nm file format appeared in .Bx 4.2 .
http://opensource.apple.com/source/developer_cmds/developer_cmds-53.1/vgrind/vgrindefs.5
CC-MAIN-2013-20
refinedweb
675
69.99
Let's Chat Application Using SignalR in MVC SignalR is an open and free library which can be used to integrate real-time functionality in your web applications. Join the DZone community and get the full member experience.Join For Free Introduction Here, I will be demonstrating an application to chat, including private chat, using SignalR. First, we need to know what SignalR is! SignalR is an open and free library which can be used to integrate real-time functionality in your web applications. There are a lot of areas where SignalR can come in handy to make your application better, more integrated, and more responsive to the end user. Real-time means having your server respond as quickly as possible to the client when a request is made. For example, we may have a requirement to show the user uploading a file and the percentage of that file that has been uploaded to the server. Or, perhaps we have a scenario where we have to show progress to the end user for uploading and processing a CSV file with 'n' number of rows, where each row has some validation. The end user may be wondering what is going on in the back-end, so it'd be great if we could show him, similar to a progress window, how many rows have been processed and how many are left! Here comes the SignalR magic! Most of us think SignalR would be useful in making chat applications, but it has much more than just chat! I don’t think the makers of SignalR would have a thought in mind to make a chat application out of it! Enough of the story! Let's get into a bit of theory! Theory We will look at a simple image below and from it, try and gain some knowledge about the flow: Nowadays, every application needs to load a server response in real time to gain traction in the market, as user expectations are higher. Remote Procedure Calls (RPC) is the concept that takes place in SignalR internally. SignalR provides an API which helps in making the RPC between the server and the client. Actually, from the client side, server side functions are called using JavaScript, once the connection to the server is set. The SignalR API also helps create connections and manage them when required. In simple terms, SignalR provides the connection between server and client, letting the server call the functions on the client side and the client side to call the server side. That somehow is called "Server-Push". SignalR starts with HTTP and then changes to a WebSocket if the connection is available. From. Being a full-duplex communication, it has low latency. These are the considerations made with SignalR which make it more efficient. SignalR decides the transport based on the browser; i.e. which required transport the browser supports. We will discuss the kinds of transports next: HTML 5 Transports - WebSockets we have already discussed. This transport is considered to be true-persistent, creating a two-way connection between client and server if the browsers support it. - Server Sent events are also called Event Source, which is supported by all browsers except IE. Comet Transports Comet is a web application model in which a long-held HTTP request allows the server to post data to a client (browser). - Forever frame This is supported by Internet Explorer only. Creates a hidden frame making a request to an endpoint on the server. The server keeps pinging the client or sends a connection resets. Practical We will be creating a chat application in order to explain the flow of SignalR. We install SignalR, create a hub with which the client will interact. The client calls the server methods and in return the server responds and interacts with the client. You can directly add a new project in VS for the SignalR or create an MVC project and install the SignalR package/libraries from NuGet. PM > Install-Package Microsoft.AspNet.SignalR This downloads all the dependencies required for SignalR. After a successful installation, the above DLLs or packages are installed into your project. There will be a class file which needs to be added to the root of your project, which would look like:(); } } } This is an OWIN based application. Every OWIN application will have a startup.cs class, where the component for the application pipeline is added. The OWIN attribute specifies the type of property, specifying the project’s start up and the configuration method, and also sets up the SignalR mapping for the App. There will be another two script files that will be added as we install the packages for SignalR. These script files are mandatory to be loaded onto the .cshtml page in order to activate SignalR. Let's look into the code straight away: We need to add a new hub class inside a Hub folder. Let's name that LetsChatHub.cs, which would look like: using System; using System.Collections.Generic; using System.Linq; using System.Web; using Microsoft.AspNet.SignalR; namespace LetsChatApplication.Hubs { public class LetsChatHub : Hub { public void Send(string name, string message,string connId) { Clients.Client(connId).appendNewMessage(name, message); } } } The above send method accepts the parameters, name (which you will give once you navigate to the URL), and the message (which the user will send from the UI). The other parameter is connId, which would help us have a private chat and not send the message to every user who navigates to the site. To allow every user access to every message, the code you change is below: namespace LetsChatApplication.Hubs { public class LetsChatHub : Hub { public void Send(string name, string message) { Clients.All.appendNewMessage(name, message); } } } The Send method is requested from the client with the parameters after the connection is set on the client side. Once the server receives the request, it processes and sends back the response to the client, using appendNewMessage. This appendNewMessage method is added on the client side to receive the response and display it in the UI to the client. You need to add a controller. Let's call it "LetsChat" with an action "LetsChat", and add a view to that. The client side code would look like below: @{ ViewBag. <div class="form-group col-xl-12"> <label class="control-label">Your connection Id</label><br /> <input type="text" class="col-lg-12 text-primary" id="frndConnId" placeholder="Paste your friend's connection Id" /><br /><br /> <label class="control-label">Your Message</label><br /> <textarea type="text" class="col-lg-10 text-primary" id="message"></textarea> <input type="button" class="btn btn-primary" id="sendmessage" value="Send" /><br /><br /> <img src="~/Content/smile.jpg" width="20" height="20" id="smile" style="cursor:pointer"/> <img src="~/Content/uff.jpg" width="20" height="20" id="ufff" style="cursor:pointer" /> <div class="container chatArea"> <input type="hidden" id="displayname" /> <ul id="discussion"></ul> </div> </div> <br /> <input type="hidden" id="connId" /> <!--Reference the autogenerated SignalR hub script. --> @section scripts { <script src="~/Scripts/jquery-1.10.2.min.js"></script> <script src="~/Content/sweetalert.min.js"></script> <script src="~/Scripts/jquery.signalR-2.2.0.min.js"></script> <script src="~/signalr/hubs"></script> <script> //var userName = ""; //varEnter your name:</span>", type: "input", html: true, showCancelButton: true, closeOnConfirm: true, animation: "slide-from-top", inputPlaceholder: "Your Name" }, function (inputValue) { userName = inputValue; if (inputValue === false) return false; if (inputValue === "") { swal.showInputError("You need to type your name!"); return false; } $('#displayname').val(inputValue); }); // Set initial focus to message input box. $('#message').focus(); $('#message').keypress(function (e) { if (e.which == 13) {//Enter key pressed $('#sendmessage').trigger('click');//Trigger search button click event } }); $("#smile").click(function () { }); // Start the connection. $.connection.hub.start().done(function () { $('#sendmessage').click(function () { // Call the Send method on the hub. var connId = $("#connId").val(); var frndConnId = $("#frndConnId").val(); var finalConnId = frndConnId == "" ? $.connection.hub.id : frndConnId; chat.server.send($('#displayname').val(), $('#message').val(), finalConnId); $("#connId").val($.connection.hub.id); if (frndConnId == "") { swal("You connection Id", $.connection.hub.id, "success"); } // Clear text box and reset focus for next comment. $('#discussion').append('<li><strong>' + htmlEncode($('#displayname').val()) + '</strong>: ' + htmlEncode($('#message').val()) + '</li>'); $('#message').val('').focus(); }); }); }); // This optional function html-encodes messages for display in the page. function htmlEncode(value) { var encodedValue = $('<div />').text(value).html(); return encodedValue; } </script> } Let's Chat We have a normal UI in place to add your message and a send button to call the server methods. Let's understand the code above part by part. var chat = $.connection.letsChatHub; Here, we set the connection to the Hub class. As you notice here, letsChatHub is the same hub class file name which we added to set up the server. The convention is that the method or class name starts with a lowercase letter. From here, we use chat to access the Send method. $.connection.hub.start().done(function () { $('#sendmessage').click(function () { // Calls the Send method on the hub. chat.server.send($('#displayname').val(), $('#message').val(), finalConnId); chat.server.send is self-explanatory. It sets the chat connection to call the server Send method once the connection is set and started. chat.client.appendNewMessage = function (name, message) { // } This is called when the Server receives the request and calls back the method on the client side. How the Sample Would Work The sample provided for download has a few instructions to follow: - When you navigate to the LetsChat/LetsChat route, an alert pops up asking you for the name with which you would like to chat. - and use it while chatting will be able to see and send messages to you. - When you friend navigates, he generates his connection ID and shares it with you in order to set up your connection completely. Thus, you need to have the connID to whom you will be sending and vice-versa with your friend as well. Then just chat! If you would like to send a message to all and make that common, then use the Clients.All code snippet to send all. Another interesting thing I figured out is the use of @section scripts{} that lets the SignalR scripts render on your page. Using the @section scripts provides your code with good style. Share and Send Files Using SignalR? Oops! Nice question, right? It is ideally not advised. I would not recommend to send or share files using SignalR. There is always a better way to accomplish that. The idea would be using an API, you can have an upload area, and use SignalR to show the progress. Once the upload completes, update the user regarding the completion and generate a download link on the UI for the users to download and view the file. This is not always the best idea, but is just another idea. Conclusion This is just a simple Chat application which you can use to chat with your friends if you host on Azure or any other domain. But again, SignalR is not limited. There are a lot of other effective uses of SignalR. I will be sharing more uses of SignalR in the next few articles. Published at DZone with permission of Suraj Sahoo, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/lets-chat-application-using-signalr-in-mvc
CC-MAIN-2021-43
refinedweb
1,873
58.18
28 May 2010 11:07 [Source: ICIS news] SINGAPORE (ICIS news)--Shanghai Coking & Chemical was running its coal-based methanol plants at an average of 70-80% capacity after a turnaround of its 450,000 tonnes/year plant in Wujing this month, a company source said on Friday. The unit was shut for two weeks for a turnaround in early May. The company’s other methanol plant of 350,000 tonnes/year capacity in the same area had been running normally during the shutdown of the other unit, the source said. Chinese methanol prices were heard around $230-240/tonne (€186-194/tonne) CFR (cost and freight) ?xml:namespace> Other methanol producers in ($1= €0.81)
http://www.icis.com/Articles/2010/05/28/9363166/chinas-shanghai-coking-runs-methanol-plant-at-70-80.html
CC-MAIN-2014-42
refinedweb
116
67.89
Understanding Components in Ember 2By Lamin Sanneh This article was peer reviewed by Edwin Reynoso and Nilson Jacques. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be! Components are a vital part of an Ember application. They allow you to define your own, application-specific HTML tags and implement their behavior using JavaScript. As of Ember 2.x components will replace views and controllers (which have been deprecated) and are the recommended way to build an Ember application. Ember’s implementation of components adheres as closely to the W3C’s Web Components specification as possible. Once Custom Elements become widely available in browsers, it should be easily to migrate Ember components to the W3C standard and have them usable by other frameworks. If you’d like to find out more about why routable components are replacing controllers and views, then check out this short video by Ember core team members Yehuda Katz and Tom Dale. The Tab Switcher Application More from this author To get an in-depth understanding of Ember components, we will build a tab-switcher widget. This will comprise a set of tabs with associated content. Clicking on a tab will display that tab’s content and hide that of the other tabs. Simple enough? Lets begin. As ever, you can find the code for this tutorial on our GitHub repo, or on this Ember Twiddle, if you’d like to experiment with the code in your browser. The Anatomy of an Ember Component An Ember component consists of a Handlebars template file and an accompanying Ember class. The implementation of this class is required only if we need extra interactivity with the component. A component is usable in a similar manner to an ordinary HTML tag. When we build our tab switcher component, we will be able to use it like so: {{tab-switcher}}{{/tab-switcher}} The template files for Ember components live in the directory app/templates/components. The class files live in app/components. We name Ember components using all lowercase letters with words separated by hyphens. This naming is by convention so we avoid name clashes with future HTML web components. Our main Ember component will be tab-switcher. Notice I said main component because we will have several components. You can use components in conjunction with others. You can even have components nested inside another parent component. In the case of our tab-switcher, we will have one or more tab-item components like so: {{#each tabItems as |tabItem| }} {{tab-item item=tabItem setSelectedTabItemAction="setSelectedTabItem" }} {{/each}} As you can see, components can also have attributes just like native HTML elements. Create an Ember 2.x Project To follow along with this tutoial, you’ll need to create an EMber 2.x project. Here tabswitcher Navigate to that directory and edit the bower.json file to include the latest version of the Ember, ember-data and ember-load-initializers: { "name": "hello-world", "dependencies": { "ember": "^2.1.0", "ember-data": "^2.1.0", "ember-load-initializers": "^ember-cli/ember-load-initializers#0.1.7", ... } } Back in the terminal run: bower install Bower might prompt you for a version resolution for Ember. Select the 2.1 version from the list provided and prefix it with an exclamation mark to persist the resolution to bower.json. Next start Ember CLI’s development server: ember server Finally navigate to and check the version your browser’s console. Creating the Tab Switcher Component Let’s create a tab switcher component using Ember’s built in generator: ember generate component tab-switcher This will create three new files. One is a Handlebars file for our HTML ( app/templates/components/tab-switcher.hbs), the second is a JavaScript file for our component class ( app/components/tab-switcher.js), the final one is a test file ( tests/integration/components/tab-switcher-test.js). Testing the component is beyond the scope of this tutorial, but you can read more about that on the Ember site. Now run ember server to load up the server and navigate to. You should see a welcome message titled “Welcome to Ember”. So why isn’t our component showing up? Well, we haven’t used it yet, so lets do so now. Using the Component Open the application template app/templates/application.hbs. Add in the following after the h2 tag to use the component. {{tab-switcher}} In Ember, components are usable in two ways. The first way, called inline form, is to use them without any content inside. This is what we’ve done here. The second way is called block form and allows the component to be passed a Handlebars template that is rendered inside the component’s template wherever the {{yield}} expression appears. We will be sticking with the inline form throughout this tutorial. This still isn’t displaying any content on the screen, though. This is because, the component itself doesn’t have any content to show. We can change this by adding the following line to the component’s template file ( app/templates/components/tab-switcher.hbs): <p>This is some content coming from our tab switcher component</p> Now when the page reloads (which should happen automatically), you will see the above text displayed. Exciting times! Create a Tab Item Component Now that we have setup our main tab-switcher component, let’s create some tab-item components to nest inside it. We can create a new tab-item component like so: ember generate component tab-item Now change the handlebars file for the new component ( app/templates/components/tab-item.hbs) to: <span>Tab Item Title</span> {{yield}} Next, let’s nest three tab-items inside our main tab-switcher component. Change the tab-switcher template file ( app/templates/components/tab-switcher.hbs) to: <p>This is some content coming from our tab switcher component</p> {{tab-item}} {{tab-item}} {{tab-item}} {{yield}} As mentioned above, the yield helper will render any Handlebars template that is passed in to our component. However, this is only useful if we use the tab-switcher in its block form. Since we are not, we can delete the yield helper altogether. Now when we view the browser we will see three tab-item components, all saying “Tab Items Title”. Our component is rather static right now, so lets add in some dynamic data. Adding Dynamic Data When an Ember application starts, the router is responsible for displaying templates, loading data, and otherwise setting up application state. It does so by matching the current URL to the routes that you’ve defined. Let’s create a route for our application: ember generate route application Answer “no” to the command line question to avoid overwriting the existing application.hbs file. This will also generate a file app/routes/application.js. Open this up and add a model property: export default Ember.Route.extend({ model: function(){ }); }); A model is an object that represents the underlying data that your application presents to the user. Anything that the user expects to see should be represented by a model. In this case we will add the contents of our tabs to our model. To do this alter the file like so: import Ember from 'ember'; export default Ember.Route.extend({ model: function(){ var tabItems = [ { title: 'Tab 1', content: 'Some exciting content for the tab 1' }, { title: 'Tab 2', content: 'Some awesome content for the tab 2' }, { title: 'Tab 3', content: 'Some stupendous content for the tab 3' } ]; return tabItems; } }); Then change the tab-switcher template file ( app/templates/components/tab-switcher.hbs) to: {{#each tabItems as |tabItem| }} {{tab-item item=tabItem }} {{/each}} Next, change the content of the tab-item template file ( app/templates/components/tab-item.hbs) to: <span>{{item.title}}</span> {{yield}} Finally change the tab-switcher usage in the application.hbs file to: {{tab-switcher tabItems=model}} This demonstrates how to pass properties to a component. We have made the item property accessible to the tab-item component template. After a page refresh, you should now see the tab item titles reflecting data from the models. Adding Interactions Using Actions Now let’s make sure that when a user clicks on a tab-item title, we display the content for that tab-item. Change the tab-switcher template file ( app/templates/components/tab-switcher.hbs) to: {{#each tabItems as |tabItem| }} {{tab-item item=tabItem {{selectedTabItem.content}} </div> This change assumes that we have a tabItem property on the tab-switcher component. This property represents the currently selected tab-item. We don’t currently have any such property so lets deal with that. Inside a regular template, an action bubbles up to a controller. Inside a component template, the action bubbles up to the class of the component. It does not bubble any further up the hierarchy. We need a way to send click actions to the tab-switcher component. This should happen after clicking any of its child tab-item components. Remember I said that actions get sent to the class of the component and not further up the hierarchy. So it seems impossible that any actions coming from child components will reach the parent. Do not worry because this is just the default behavior of components and there is a workaround to circumvent it. The simple workaround is to add an action to the tab-switcher template ( app/templates/components/tab-switcher.hbs) like so: {{#each tabItems as |tabItem| }} <div {{action "setSelectedTabItem" tabItem}} > {{tab-item item=tabItem {{selectedTabItem.content}} </div> And to change the tab-switcher class file ( app/components/tab-switcher.js) to look like export default Ember.Component.extend({ actions: { setSelectedTabItem: function(tabItem){ this.set('selectedTabItem', tabItem); } } }); At this point if you view our app in the browser, it will work as expected. However, this workaround does not address the fact an action only bubbles up to the class of the component, so let’s do it in a way that does. Keep the changes in app/components/tab-switcher.js, but revert app/templates/components/tab-switcher.hbs back to its previous state: <div class="item-content"> {{selectedTabItem.content}} </div> {{#each tabItems as |tabItem| }} {{tab-item item=tabItem setSelectedTabItemAction="setSelectedTabItem" }} {{/each}} Now let’s change the tab-item template to: <span {{action "clicked" item }}>{{item.title}}</span> {{yield}} And the tab-item class file to: export default Ember.Component.extend({ actions:{ clicked: function(tabItem){ this.sendAction("setSelectedTabItemAction", tabItem); } } }); Here, you can see that we have added an action handler to deal with clicks on the tab-item title. This sends an action from the tab-item component to its parent, the tab-switcher component. The action bubbles up the hierarchy along with a parameter, namely the tabItem which we clicked on. This is so that it can be set as the current tab-item on the parent component. Notice that we are using the property setSelectedTabItemAction as the action to send. This isn’t the actual action name that gets sent but the value contained in the property — in this case setSelectedTabItem, which is the handler on the parent component. Conclusion And that brings us to the end of this introduction to Ember components. I hope you enjoyed it. The productivity benefits of using reusable components throughout your Ember projects cannot be understated (and indeed throughout your projects in general). Why not give it a try? The source code for this tutorial is available on GitHub. Are you using components in Ember already? What have been your experiences so far? I’d love to hear from you in the comments. - 1 Back to Basics: JavaScript Operators, Conditionals & Functions - 2 How to Improve Site Performance (and Conversions) with Dareboost - 3 How to Migrate Your WordPress Site to A New Hosting Provider - 4 Fetching Data from a Third-Party API with Vue.js and Axios - 5 Three Keys to Being a Productive Software Engineer
https://www.sitepoint.com/understanding-components-in-ember-2/
CC-MAIN-2017-26
refinedweb
1,987
56.86
Odoo Help This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers. [v7] Lot Upstream and Downstream Traceability By the definition that I found in documentation for openerp 6:.) In version 7 both buttons do the same thing and by looking at the code and can't see how could they do anything different. def action_traceability(self, cr, uid, ids, context=None): """ It traces the information of a product @param self: The object pointer. @param cr: A database cursor @param uid: ID of the user currently logged in @param ids: List of IDs selected @param context: A standard dictionary @return: A dictionary of values """ lot_id = ids if context is None: context = {} type1 = context.get('type', 'move_history_ids2') field = context.get('field', 'tracking_id') obj = self.pool.get('stock.move') ids = obj.search(cr, uid, [(field, 'in',lot_id)]) cr.execute('select id from ir_ui_view where model=%s and field_parent=%s and type=%s', ('stock.move', type1, 'tree')) view_ids = cr.fetchone() view_id = view_ids and view_ids[0] or False import pdb;pdb.set_trace() value = { 'domain': "[('id','in',["+','.join(map(str, ids))+"])]", 'name': ((type1=='move_history_ids2') and _('Upstream Traceability')) or _('Downstream Traceability'), 'view_mode': 'tree', 'view_type': 'tree', 'res_model': 'stock.move', 'field_parent': type1, 'view_id': (view_id,'View'), 'type': 'ir.actions.act_window', 'nodestroy':True, } return value Am I missing something or is it not working as it should!
https://www.odoo.com/forum/help-1/question/v7-lot-upstream-and-downstream-traceability-86493
CC-MAIN-2016-44
refinedweb
239
59.4
KuroSeiMember Content count26 Joined Last visited Community Reputation580 Good About KuroSei - RankMember - Hu. I didnt know you could do a static assert inside a union/struct etc. :| Neat, i always love it when i learn something new. :) Now i only wish i could change the compiler warning regarding non-trivially_copyable types. Saying e.g. the destructor is deleted is unfortunately uninformative if you dont know unions. Back to topic: Too bad. I will have to remove the comment now. :P - I just found that little gem in a collaborative hobby project. /** @brief Enables easy access to bytes in a struct for array/struct conversion. @details stype should be trivially_copyable. Its your duty to make sure it is. Otherwise you might be in undefined-behavior-land, which is - trust me on that - not a amusement park. @tparam stype The type to be wrapped. */ template<typename stype> union Bytes { stype data; unsigned char bytes[sizeof(stype)]; }; I dont know if that is good style (its in a "details" namespace) but its quite convenient to have something like that. Unity Lighting voxel planets KuroSei replied to SpikeViper's topic in General and Gameplay ProgrammingI'd either assign each face a light value or each vertex and blend them for smoother lighting depending on the number of vertices. Per vertex close up, per face a little farther out maybe? To calculate light levels a modified flood fill should work just fine if you dont have to many light levels. Gesture Recognition Approach KuroSei posted a topic in Artificial IntelligenceHello fellow gamedevs, i have a quick question regarding gesture recognition and hope for some of you to be better informed than i am. (Which is most likely the case.) What i did first: Me beeing new to the field at first took a very naive approach, sampling my sensor x times per second and then did a likelihood analysis (via a Levensthein adaption) - which was painfully slow and not really robust (for obvious reasons). What i do now: My new approach differs quite a bit of that. Now i have a list of sensor data, a time value how fast the state has to be reached to be satisfied and a fuzzyness factor. Every samplepoint in my list then basicly acts as a state and i store which state was last satisfied. A state is satisfied, as soon as its predecessor is satisfied (state -1 is always) and we reached its ancestor in the given time without leaving the capsule spanned by the two samplepoints and the fuzzynessfactor(as radius). The question(s) beeing... Does this approach have a name so i can do some readup on that? I am not very well acquainted with the terminology as many of you might have guessed by now, so i couldnt find anything so far. Also: Does this even sound like a feasible approach? It works for very simple gestures and i am currently in the process of testing more advanced ones, but it also clearly has its shortcomings... I hope someone is able and willing to help, if you need more information i will provide this as soon as possible. Thanks in advance and warm regards, Kuro PS: I almost forgot to tell! My sensor data is continous and clamped (since its angular data). Positional data is also planned but should work similar. Looking for Multiplayer 3d game engine with tools KuroSei replied to techrandy's topic in For BeginnersYou have no clue what you are getting yourself into.:) As far as i know there is no such thing as a "(M)MORPG Do-It-Yourself-Development Kit for Dummies". Multiplayer Games are hard work. Multiplayer realtime 3D games are even harder. Further you arent really specific in your needs at all. The needs of a Shooter might differ greatly from those of a ... say RTS. Thus the engines would differ greatly in terms of rendering mechanics, networking etc. I am with no means an expert on this topic though and i might underestimate your expertise greatly by packing you with the cliche "I want to write an awsome multiplayer game after i mastered HTML now!" guy... I Also dont mean to discourage you in anyway. Its just... if it fails - which has a great chance to be the cas specially in a 5 year project - you possibly wasted a lot of time for nothing. Of course: You can learn a lot in the process if you do it right. But if learning isnt the goal than it is better achieved by smaller projects where failure isnt such a great trigger for frustration. When you realize how dumb a bug is... KuroSei replied to Sik_the_hedgehog's topic in Coding HorrorsExporting a tree-structure as a visual output to another tool from a tool that doesnt provide positional data... So yeah... Calculate them from the structure. Not that hard to do, right? But i wasted like a shitton of time because the target tool had the undocumented feature to set everything with a x or y coordinate of 0 (totally valid value...) to 100 / 100. Searched for ages in my code and then just tried adding 1 to everything... and suddenly it worked. Biggest derp in my recent programming carreer. :/ Voxel Rendering Engine Tips KuroSei replied to Butabee's topic in Graphics and GPU ProgrammingThis is not quite true, honestly. Polygonbased voxel rendereres are still the fastest you can get at the moment with the graphics pipeline beeing optimized for such. Sure, if you have tiny tiny cubes you've got a lot of stuff to render, but you can batch and optimize. Merge larger faces etc. That isnt too complex, especially if you use appropriate datastructes like an octree. In fact with an octree you got the "merged faces" as a bonus to the (in case of a sparse tree) really compact (comparably) memory usage. If you want to get smooth you could as well roll with something like marching cubes (beeing the most known) or move to even more sophisticated algorithms like surface nets or other dual algorithms which optimize mesh topology to some extend. With my (arguably modest) testing on this subject i found Polygonbased Voxel rendering to be a lot faster than the given alternatives like raytracing/marching etc. Share the most challenging problem you solved recently! What made you feel proud? KuroSei replied to yusef28's topic in GDNet LoungeNot so much of a "Boy, how cool", but i dont want this thread to die. :P. ) :P So far its running pretty fast concerning its a very naive approach and just meant as a prove of concept... But a remake is in the forge atm and its goin good :P "Complex" cases like a 6-Node Path are simplified atm thus details are lost, but this method (with the right implementation) Also allows for sharp features as tested with rotated cubes. This is a little fiddly, tho :P I would post a picture of it, but as wireframe its just messy ( As usual: Iso surface with shit tons of triangles ) and i didnt implemented lighting for it yet so its just a clump of... well... Colored Triangles... Every color beeing the same. :P If there is intereset i might post more details or try to implement a quick and dirty lighting solution for actual usefull pictures.... ^^" DONT DIE, THREAD! The world needs peo... topics like you! (Or at least i do, against the boredom!) DX11 Hardware Voxel Terrain (your opinion) KuroSei replied to Hawkblood's topic in General and Gameplay ProgrammingIt is indeed for generation on the CPU but i guess it still applies for the GPU. Problematic with this approach is btw that you would need to implement a volumebased physicsengine or a physicapproach on the GPU that works with your extracted meshes etc. Its hard to do, i think, if you need physics, that is. But i didnt implement any of those algorithms on the GPU and i dont plan to, so i cant help that much with that. I just wanted to point you to some maybe helpful things :) DX11 Hardware Voxel Terrain (your opinion) KuroSei replied to Hawkblood's topic in General and Gameplay ProgrammingWell its a set of lookup tables. I guess its not that important that it is super-readable and it serves its purpose well as is. DX11 Hardware Voxel Terrain (your opinion) KuroSei replied to Hawkblood's topic in General and Gameplay ProgrammingHow much control? Depends on your noise of course. You can do pretty amazing things with multiple noise steps and some pseudo randomly distributed precalculated models like e.g. trees. Youtube has some nice vids to that. If you go for only perlin noise for example you have persistence, frequency, etc. as parameters to play with and your seed as a starting point. When it is generated you dont need to know the info, do you? Otherwise: Voxelspace -> Worldspace calculations. LOD... Thats tricky. Its of course possible e.g. by means of the Trans Voxel Algorithm as proposed by Eric Langyel. But the topic is not as trivial as it should be. I am fiddling around with it, myself. :P Performance Regression KuroSei replied to theoutfield's topic in AngelCodeIf I am not totally mistaken you could calculate that without the loop entirely. 3 * (n+1) * (41 * n + 304) and the result divided by 200 where n is the upper limit for the loop. Plus I'm not sure wether you even need those values or simply used the equotation for testing. :X I'm quite tired so sorry if I'm offtopic. I made some testing: I am mistaken. You need to use n-1. For n = 10 i get 100.94 period9 and with my equotation for n=9 i get 100.95 With ( 3*n*(41*n+263) ) /200 i get 100.95 for n = 10. It also checks for other values, so i guess Its fine now. X) - I worked myself through the any-addon. It seemed quite hilarious what effort one has to put in, but now that it works, its not as bad as i thought at the beginning. The mechanics are quite simple but thats what makes it hard for me to comprehend. Anyway, keep up the nice work, Andreas. I fell in lvoe with angelscript, even tho i rly feel kinda stupid sometimes. Well, thats life, isn't it? - I will look into that. Thanks so far. - I encountered a similar problem. It has to do with "lamba" arguments and is exactly what you encountered, i guess. The problem is, that i need to store the reference. Simplified my problem is this: I have a "generic" List e.g. ParamList. It has the method void " addParam(void* param, int tId); " registered as " void addParam(const?&in) ". Calling the Method works fine. The reference works okayish in the addParam method itself. My problem is, that i need to store the reference. Three ways to do so now, i guess: 1. [code] ParamList par(); string foo="345"; par.addParam(bleh); BaseEntity@ ent3=System.instanciate("AbstractNPC","AbstractNPC @AbstractNPC(int,string)",par); [/code] Works fine, as long as i use the Parameter list in the scope in that foo exists. But when i store the ParamList for caching issues or stuff it kinda breaks. 2. A Wrapper of a reference type like the any addon around the parameter itself ( Like addParam(any(param)); ) 3. Registering an interface of the ParameterList and writing a scriptClass implementing that Interface. It would automaticly increase the reference count then if i stored it there. But im not very comfortable with those solutions. I wonder if there is a better way, or if i can register the addParam method in a different way so the copy of the reference wont be destroyed when the lamba goes out of scope. Help me, oh grandmaster of angelic scripting, I need your wisdom and swear to obey. ._. Really, im crawling around the source for some time now and I am growing tired of this problem. I would simply copy the contents if it wasnt a void* i got there. Makes allocating memory hardly possible, ka?
https://www.gamedev.net/profile/196520-kurosei/
CC-MAIN-2018-05
refinedweb
2,030
73.37
I see some people who use std:: .Isn't it better to use namespace std so we don't have to write std:: all the time or I am wrong and std:: is used for another thing? This is a discussion on std:: or namespace std within the C++ Programming forums, part of the General Programming Boards category; I see some people who use std:: .Isn't it better to use namespace std so we don't have to write ... I see some people who use std:: .Isn't it better to use namespace std so we don't have to write std:: all the time or I am wrong and std:: is used for another thing? There are advantages and disadvantages to both approaches. Employing "using namespace std;" everywhere does a bit more than (effectively) putting std:: in front of symbol names that are within std. It formally tells the compiler to look within namespace std for candidate names, when trying to resolve a particular name. It can also introduce ambiguity in some circumstances, particularly if multiple namespaces and multiple using directives are employed. (Ambiguity refers to code that has multiple possible meanings, which the compiler cannot resolve, so the compiler must reject the code) -- in which case it is still necessary to use the namespace prefix even though the using directive is in effect. This is one reason for a common guideline that "using namespace" directives should not be employed in header files; once a "using namespace" directive is employed, its effects cannot be turned off within that translation unit (aka source file). The main disadvantage of using the std:: prefix is additional typing. Personally, I almost never employ "using namespace" directives, as I consider the additional typing a small price to pay for code without ambiguity. Last edited by grumpy; 12-03-2007 at 10:54 PM. It's better to use std:: because 'using namespace std' puts everything inside the namespace std in the global namespace; thus, ruining the whole point of namespaces. When you use std:: , you do not mess with the global namespace, which is what you want to do. It depends... If that's the only namespace you're using, then sure go ahead. If you're writing a large program with multiple namespaces, where conflicts could occur, then it's safer to be explicit. For example, if a .cpp file is using code from the std:: namespace and the blah:: namespace, and both have a class named vector<T> (I know, it's pretty stupid for the makers of blah to use such a standard name, but what can you do). Now if you try to create a vector<int>, the compiler won't know which namespace you're talking about. There's also even more evil situations where the compiler might mix & match classes from different namespaces without any errors, but then leads to a bug in your program. In any case, you should NEVER write a using statement in a header file since it will propagate to every file that includes that header. So you should ALWAYS explicitly qualify all namespaces in header files. Don't forget the global namespace. Putting the std namespace in the global namespace could create conflicts. You might want to know that a vector is not only an STL container but also a math entity thus it's very possible that someone calls a class vector. Putting the std namespace in the global namespace would cause a conflict between std::vector and ::vector. I use "using namespace std" in my own code, but never in headers - that's just rude In the code base I work in, there's little concern of std namespace ambiguity. gg Also, rather than importing the whole namespace, you could err on the side of caution by only importing the names you're actually going to use like this: But again, even those should never go in a header file.But again, even those should never go in a header file.Code:using std::string; using std::cout; using std::endl; I prefer to always use std::. I follow the don't put the using directive in a header advice, and I prefer to just be consistent so that the names look the same wherever I see them. I don't care that much about the extra typing. I also think the code is clearer when I explicitly qualify the std namespace. If (1) there was something named ::cout in scope, (2) <iostream> had been #include'd, and (3) a "using namespace std;" was in effect, then the line;.Code:cout << "Hello\n"; Note that if there is ambiguity in names because you are using a using directive, it will not necessarily be flagged as an error by the compiler. The compiler might just make the wrong choice and you won't know about it until your application works incorrectly or crashes (if you're lucky). Admittedly this will be rare, but I've seen a handful of problems on C++ forums that were the result of a using directive causing a name to be ambiguous. For contrived examples see this thread: including namespaces I guess the point of the examples (there are several in the thread) are that it is possible for using directives or declarations to have adverse effects that do not necessarily cause compiler errors. Notice the last example fails even if all proper headers are included. Thank you for telling guys. I think I will start using std:: but I don't know wich command or functions need std:: beacuse I always used: using namespace std; Any function in the STL. vector, map, string, cout, cin, etc. IF you find the compiler complaining about not finding a function and you've included a header, try putting std:: before. Also use some common sense, of course For information on how to enable C++11 on your compiler, look here. よく聞くがいい!私は天才だからね! ^_^
http://cboard.cprogramming.com/cplusplus-programming/96570-std-namespace-std.html
CC-MAIN-2014-10
refinedweb
995
67.89
Dear Friends With the grace of Allah Almighty, the Snow Cross 2010 & Extreme Off-road Conference was held at Nathiagali on February, 6-7. The event was organised by the Islamabad Jeep Club and attended by 4x4 Engaged ( Lahore ), Muzaffarabad Jeep Club ( Muzaffarabad ), Pakistan Land Rover Club & Offroaders Unlimited ( Abbotabad ). Over 46 offroaders from the 5 participating clubs attended the event on around 19 vehicles. Following heavy snowfall on Friday and Saturday the attendees had to overcome challenges posed by snow and ice on the steep climb to Greens Hotel Nathiagali, which not only tested their driving and recovery skills but also highlighted the spirit of team work and cooperation amoungst the seasoned offroaders. The roads leading upto Greens Hotel were covered by upto 2.5 feet of snow and attendees had to drive up the slippery slopes using snow chains, with some road clearing equipment showing up very late in the day. The Extreme 4x4 conference featured presentations from the Presidents of the 5 attending Clubs as well as a speech from the chief guest, Mr. Nazir Abbassi - Nazim Nathiagali. Participants at the conference highlighted offroading and extreme motorsport as a healthy alternative for the people and youth of Pakistan who are greatly affected by prevailing environment of terrorism which is stifling sports and entertainment activities. Participants also emphasized the need to recognize and promote offroading and extreme motorsport in Pakistan at the official level. Mr. Nazir Abbassi appreciated the efforts of the attendees to organise and attend such a challenging event and highlighted the need to promote tourist activities in Nathiagali and Galliat areas which are amongst the most scenic hillstations in the coutry with virtually no law and order risks. Thanx IJC , 4x4 Engaged , MJC , Pakistan Land Rover Club , Offroaders Unlimited awesome picturesmore pictures plz It was an awsome event !made the video ! hell of a time but it was all EXTREME CHILLED FUN !! Awesome pictures. Waiting to see some more.... Extreme excitement <?xml:namespace prefix = o<o:p></o:p>Waiting for more pic zabardast... thumbs up to the participants and the organizers. awsome ..wish i had my 4X4 ... Grand Cherokee is looking cool .... Wow !!!!Waiting for more pics KEEP THEM COMING DAMMIT! Nice shot SK. good good.... I never seen snow fall in my whole life Beautiful shot.. excellent coverage seems every one is tired
https://www.pakwheels.com/forums/t/snow-cross-2010-extreme-offroad-conference-on-6-th-7-th-feb-2010-in-nathiagali/115077
CC-MAIN-2017-30
refinedweb
390
54.63
Feel free to talk to her, by typing into the text box above. If you ask her questions, she'll answer you. The cool thing is that we have designed her for extensibility, so that the internet community can make her smarter. As an example, I built a small code snippet to interface with the Best Buy Remix API so that you can ask her questions about where Best Buy stores are. Here's a picture of a dialog that I had with Amy Iris earlier today: As you can see, I have asked Amy Iris a couple of different ways to tell me where various Best Buy stores are. This example could be extended for all of the Best Buy API calls (such as product lookups). Here's the code that's part of the Bot's logic. One little 14-line code snippet, submitted to Amy Iris' brain, and she now is that much smarter. # example of Best Buy Store Locator import amyiris, re from google.appengine.api import urlfetch if ("best buy" in str(textin)) and ((' in ' in str(textin)) or ('near' in str(textin))): fields = ['name','distance','address'] url = "(area(%s,50))?show=" url += ",".join(fields) + "&apiKey=amysremixkeygoeshere" r = "The nearest Best Buy store to %s is the %s store, "+ "which is %s miles away, at %s." vals = [re.search(r'\d'*5,str(textin)).group(0),] #grab the zip code page = urlfetch.fetch(url%vals[0]).content #look up results based on zipcode for tag in fields: #parse the xml vals.append(re.search('<'+tag+'>(.*?)</'+ tag+'>',page, re.DOTALL).group(1)) say(r%tuple(vals),confidence=21 ) A quick code review reveals the tricks (and limitations) of this conversational parser. I scan for the words "best buy", " in ", and "near", and rely on a 5-digit zip code in a regex search (that is, r'\d'*5). And if I find all these, then the snippet will retrieve the information from the Best Buy web site and present it to the user in conversation form. Imagine - it's now available on the web, on twitter, on your cell phone. And this is just one small look-up. Imagine what happens as people begin contributing similar code snippets over the years! Amy Iris will be brilliant!
http://blog.amyiris.com/2009/06/how-amy-iris-knows-where-best-buy-is.html
CC-MAIN-2014-15
refinedweb
380
71.55
Source CherryPy / sphinx / source / progguide / extending / customtools.rst Custom Tools CherryPy is an extremely capable platform for web application and framework development. One of the strengths of CherryPy is its modular design; CherryPy separates key-but-not-core functionality out into "tools". This provides two benefits: a slimmer, faster core system and a supported means of tying additional functionality into the framework. Tools can be enabled for any point of your CherryPy application: a certain path, a certain class, or even individual methods using the :ref:`_cp_config <cp_config>` dictionary. Tools can also be used as decorators which provide syntactic sugar for configuring a tool for a specific callable. See :doc:`/concepts/tools` for more information on how to use Tools. This document will show you how to make your own. Your First Custom Tool Let's look at a very simple authorization tool: import cherrypy def protect(users): if cherrypy.request.login not in users: raise cherrypy.HTTPError("401 Unauthorized") cherrypy.tools.protect = cherrypy.Tool('on_start_resource', protect) We can now enable it in the standard ways: a config file or dict passed to an application, a :ref:`_cp_config<cp_config>` dict on a particular class or callable or via use of the tool as a decorator. Here's how to turn it on in a config file: [/path/to/protected/resource] tools.protect.on = True tools.protect.users = ['me', 'myself', 'I'] Now let's look at the example tool a bit more closely. Working from the bottom up, the :class:`cherrypy.Tool<cherrypy._cptools.Tool>` constructor takes 2 required and 2 optional arguments. point First, we need to declare the point in the CherryPy request/response handling process where we want our tool to be triggered. Different request attributes are obtained and set at different points in the request process. In this example, we'll run at the first hook point, called "on_start_resource". Hooks Tools package up hooks. When we created a Tool instance above, the Tool class registered our protect function to run at the 'on_start_resource' hookpoint. You can write code that runs at hookpoints without using a Tool to help you, but you probably shouldn't. The Tool system allows your function to be turned on and configured both via the CherryPy config system and via the Tool itself as a decorator. You can also write a Tool that runs code at multiple hook points.. callable Second, we need to provide the function that will be called back at that hook point. Here, we provide our protect callable. The Tool class will find all config entries related to our tool and pass them as keyword arguments to our callback. Thus, if: 'tools.protect.on' = True 'tools.protect.users' = ['me', 'myself', 'I'] is set in the config, the users list will get passed to the Tool's callable. [The 'on' config entry is special; it's never passed as a keyword argument.] The tool can also be invoked as a decorator like this: @cherrypy.expose @cherrypy.tools.protect(users=['me', 'myself', 'I']) def resource(self): return "Hello, %s!" % cherrypy.request.login All of the builtin CherryPy tools are collected into a Toolbox called :attr:`cherrypy.tools`. It responds to config entries in the "tools" :ref:`namespace<namespaces>`.: # cpstuff/newauth.py from cpstuff import newauth class Root(object): def default(self): return "Hello" default.exposed = True conf = {'/demo': { 'newauth.check_access.on': True, 'newauth.check_access.default': True, }} app = cherrypy.tree.mount(Root(), config=conf) if hasattr(app, 'toolboxes'): # CherryPy 3.1+ app.toolboxes['newauth'] = newauth.newauthtools Just the Beginning Hopefully that information is enough to get you up and running and create some simple but useful CherryPy tools. Much more than what you have seen in this tutorial is possible. Also, remember to take advantage of the fact that CherryPy is open source! Check out :doc:`/progguide/builtintools` and the :doc:`libraries</refman/lib/index>`())
https://bitbucket.org/btubbs/cherrypy/src/bfdda6de07bf/sphinx/source/progguide/extending/customtools.rst
CC-MAIN-2015-32
refinedweb
643
58.69
A simple alternative to Python 3 enums. Project description Xenum offers a simple alternative to Python 3 enums that’s especially useful for (de)serialization. When you would like to model your enum values so that they survive jumps to and from JSON, databases, and other sources cleanly as strings, Xenum will allow you to do so extremely easily. Installation Installation is simple. With python3-pip, do the following: $ sudo pip install -e . Or, to install the latest version available on PyPI: $ sudo pip install xenum Usage Just create a basic class with attributes and annotate it with the @xenum attribute. This will convert all of the class attributes into Xenum instances and allow you to easily convert them to strings via str(MyEnum.A) and from strings via MyEnum.by_name("MyEnum.A"). Example from xenum import xenum, ctor @xenum class Actions: INSERT = ctor('insert') UPDATE = ctor('update') DELETE = ctor('delete') def __init__(self, name): self.name = name assert Actions.INSERT == Actions.by_name('Actions.INSERT') assert Actions.INSERT().name == 'insert' Change Log Version 1.4: September 5th, 2016 - Add ‘xenum.sref()’ allowing enum values to be instances of the @xenum annotated class, whos *args is prepended with the Xenum instance itself for self-reference. Version 1.3: September 4th, 2016 - Add ‘xenum.ctor()’ allowing enum values to be instances of the @xenum annotated class. - Made Xenum instances callable, returning the enum’s internal value. Version 1.2: September 4th, 2016 - Add ‘values()’ method to @xenum annotated classes for fetching an ordered sequence of the Xenum entities created. Version 1.1: August 31st, 2016 - Made Xenum instances hashable, removed value() as a function. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/xenum/
CC-MAIN-2018-30
refinedweb
295
58.58
Returns whether the selectable is currently 'highlighted' or not. Use this to check if the selectable UI element is currently highlighted. //Create a UI element. To do this go to Create>UI and select from the list. Attach this script to the UI GameObject to see this script working. The script also works with non-UI elements, but highlighting works better with UI. using UnityEngine; using UnityEngine.Events; using UnityEngine.EventSystems; using UnityEngine.UI; //Use the Selectable class as a base class to access the IsHighlighted method public class Example : Selectable { //Use this to check what Events are happening BaseEventData m_BaseEvent; void Update() { //Check if the GameObject is being highlighted if (IsHighlighted(m_BaseEvent) == true) { //Output that the GameObject was highlighted, or do something else Debug.Log("Selectable is Highlighted"); } } }
https://docs.unity3d.com/kr/2018.1/ScriptReference/UI.Selectable.IsHighlighted.html
CC-MAIN-2020-16
refinedweb
129
60.31
I got this script that displays a iframe when clicking on the show hide text.. only the script starts with show.. and whatever i do, i can't get it to start with hide.. So when i go to my .html file it shows the iframe.. but I would like to start with hidding iframe.. Could somewhone help me? Here is my code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns=""> <head> <meta http- <title>Untitled Document</title> </head> <body> <div> <script type="text/javascript" language="JavaScript"><!-- function DoViewIFRAME(tid1,tid2,tid3) { document.getElementById(tid1).style. Click to hide </a> <a id="viewiframe" onclick="DoViewIFRAME('viewiframe','hideiframe','iframe');" style="display: none; font-size: 14px; font-weight: bold; font-family: sans-serif; color: brown;"> Click to show </a> <br/><br/> <div id="iframe"> <iframe frameborder="1" height="200" name="frame1" scrolling="yes" src="" width="550"></iframe> </div> </div> </body> </html> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns=""> <head> <meta http- <title>Untitled Document</title> <style type="text/css"> #iframe { display:none; } #hideiframe { line-height:16px; font-size:14px; font-weight:bold; font-family:sans-serif; color:brown; text-decoration:none; } </style> <script type="text/javascript" language="JavaScript"> //send first parameter as the link object itself - use ( this ) in function call, second parameter as the id of the element you wish to show/hide function DoViewIFRAME(linkElem,elemId2) { //toggle the link elements innerHTML linkElem.innerHTML = linkElem.innerHTML === "Click to hide" ? "Click to show" : "Click to hide"; //get the current display property of the element to toggle var isitshown = document.getElementById(elemId2).style.display; //toggle the display property of the element // note the style tag giving the elemtent a display property initially document.getElementById(elemId2).style.display = isitshown === "block" ? "none" : "block"; } </script> </head> <body> <div> <a href="#" id="hideiframe" onclick="DoViewIFRAME(this,'iframe'); return false;"> Click to Show </a> <br/><br/> <div id="iframe"> <iframe frameborder="1" height="200" name="frame1" scrolling="yes" src="" width="550"></iframe> </div> </div> </body> </html> well it works perfect.. great thanks for the info between te lines, so i understand wat it meens! I was wondering the script would be perfect if it could hide or show after about 10 seconds? So first you don't see (hide) and then 10 seconds later it shows.. or turned around.. first you see and then after 10 seconds it's hide. Again thx for the first script i got to learn the javacodes better so i don't have to ask on and on and on.. <script type="text/javascript" language="JavaScript"> var t=setTimeout("DoViewIFRAME(this,'iframe'); return false;",10000); </script> It might work that way in Internet Explorer, but not in Firefox or others perhaps. You are talking about delaying it when the page first opens (not when the user clicks the link), correct? Also that will not change the links text, as in that context ( this ) does not point to the link. Remember that the way the function is, the link is normally sent as an object (although it would be easy for you to convert it to sending it as an id like we do with the iframe div - I just felt that using (this) was easiest at the time, as the text you will want to change will almost always be in the link or element you are clicking on), so to pass the link as an object - use document.getElementById('hideiframe'). And to make it work in other browsers you will need to tell the browser to actually execute that timeout. A couple ways you can go about it (I changed from just 't' to a longer name -less chance of polluting the global namespace): You could do this: <script type="text/javascript" language="JavaScript"> var gonnaShowIt = function () { setTimeout("DoViewIFRAME(document.getElementById('hideiframe'),'iframe');",10000); }; gonnaShowIt(); /***<--- this will tell the browser to execute the function with the setTimeout in it. Do not just do: gonnaShowIt = setTimeout( blah blah blah -as other browsers will not execute that, as they will not see gonnaShowIt as a function that way ***/ </script> or like this: <script type="text/javascript" language="JavaScript"> function gonnaShowIt() { setTimeout("DoViewIFRAME(document.getElementById('hideiframe'),'iframe');",10000); }; gonnaShowIt(); </script> or like this (preferred): <script type="text/javascript" language="JavaScript"> window.onload = function () { setTimeout("DoViewIFRAME(document.getElementById('hideiframe'),'iframe');",10000);}; </script> Note that those commands (whichever ONE you choose) can actually be in the same set of script tags as the DoViewIFRAME() function is in, no problem. <script type="text/javascript" language="JavaScript"> setTimeout("DoViewIFRAME(document.getElementById('hideiframe'),'iframe');",10000); </script> [retraction]I take that back, as I'm an idiot. Sorry. I am still learning things too, and even though I thought it would work the way you were doing it too ( var something = setTimeout( blah blah blah), the first time or two it did not work in FF - so that was why I said the above. But it does work now if I do that, I must have screwed up. So ignore everything except for what I said pertaining to the link and ( this )[/retraction] Does exactly what i want! I am a total novice with javascript and the way I learn is to watch what you guys do and then try it out myself. So please advise, can I put several show/hides with different content on the same page and link to a box and to a downloadable invitation. Regards KerryK Yes, absolutely. If you are referring to the code that I posted in the second post in this thread, it is re-usable. Merely need to send a different id as the second argument in the function call from the link or elemtent you are clicking on. You can call it multiple times from different elements to show/hide totally different elements. Excellent, that is how you will really learn. So please do so, and if you need specific advise about maybe modifying it (if necessary) to your needs, just ask (with some code perhaps).
http://www.webmasterworld.com/javascript/3794982.htm
CC-MAIN-2014-10
refinedweb
1,004
61.97
More Servlets and JavaServer Pages Marty Hall Publisher: Prentice Hall PTR First Edition December 01, 2001 ISBN: 0-13-067614-4, 752 pages More Servlets and JavaServer Pages shows developers how to use the latest advances in servlet and JSP technology. A companion to the worldwide bestseller Core Servlets and JavaServer Pages, it starts with a fast introduction to basic servlet and JSP development, including session tracking, custom JSP tag libraries, and the use of the MVC architecture. It then covers the use and deployment of Web applications, declarative and programmatic security, servlet and JSP filters, life-cycle event listeners, new features for writing tag libraries, the standard JSP tag library (JSPTL), and much more. Library of Congress Cataloging-in-Publication Data Hall, Marty More Servlets and JavaServer Pages / Marty Hall. p. cm. Includes index. ISBN 0-13-067614-4 1. Java (Computer programming language) 2. Servlets. 3. Active server pages. I. Title. QA76.73.J38 H3455 2001 005.2'762--dc21 2001056014 © 2002 Sun Microsystems, Inc. Printed in the United States of America. 901 San Antonio Road, Palo Alto, California 94303-4900 U.S.A. All rights reserved. This product and related documentation are protected by and decompilation. No part of this product or related documentation may be reproduced in any form. Credits Production Editor and Compositor: Vanessa Moore Copy Editor: Mary Lou Nohr Project Coordinator: Anne R. Garcia Acquisitions Editor: Gregory G. Doench Editorial Assistant: Brandt Kenna Cover Design Director: Jerry Votta Cover Designer: Design Source Art Director: Gail Cocker-Bogusz Manufacturing Manager: Alexis R. Heydt-Long Marketing Manager: Debby vanDijk Sun Microsystems Press Publisher: Michael Llwyd Alread 10 9 8 7 6 5 4 3 2 1 Sun Microsystems Press A Prentice Hall Title Acknowledgments About the Author Introduction Who Should Read This Book Book Distinctives How This Book Is Organized Conventions About the Web Site I: The Basics 1. Server Setup and Configuration 1.1 Download the Java Development Kit (JDK) 1.2 Download a Server for Your Desktop 1.3 Change the Port and Configure Other Server Settings 1.4 Test the Server 1.5 Try Some Simple HTML and JSP Pages 1.6 Set Up Your Development Environment 1.7 Compile and Test Some Simple Servlets 1.8 Establish a Simplified Deployment Method 1.9 Deployment Directories for Default Web Application: Summary 2. A Fast Introduction to Basic Servlet Programming 2.1 The Advantages of Servlets Over “Traditional” CGI 2.2 Basic Servlet Structure 2.3 The Servlet Life Cycle 2.4 The Client Request: Form Data 2.5 The Client Request: HTTP Request Headers 2.6 The Servlet Equivalent of the Standard CGI Variables 2.7 The Server Response: HTTP Status Codes 2.8 The Server Response: HTTP Response Headers 2.9 Cookies 2.10 Session Tracking 3. A Fast Introduction to Basic JSP Programming 3.1 JSP Overview 3.2 Advantages of JSP 3.3 Invoking Code with JSP Scripting Elements 3.4 Structuring Autogenerated Servlets: The JSP page Directive 3.5 Including Files and Applets in JSP Documents 3.6 Using JavaBeans with JSP 3.7 Defining Custom JSP Tag Libraries 3.8 Integrating Servlets and JSP: The MVC Architecture II: Web Applications 4. Using and Deploying Web Applications 4.1 Registering Web Applications 4.2 Structure of a Web Application 4.3 Deploying Web Applications in WAR Files 4.4 Recording Dependencies on Server Libraries 4.5 Handling Relative URLs in Web Applications 4.6 Sharing Data Among Web Applications 5. Controlling Web Application Behavior with web.xml 5.1 Defining the Header and Root Elements 5.2 The Order of Elements within the Deployment Descriptor 5.3 Assigning Names and Custom URLs 5.4 Disabling the Invoker Servlet 5.5 Initializing and Preloading Servlets and JSP Pages 5.6 Declaring Filters 5.7 Specifying Welcome Pages 5.8 Designating Pages to Handle Errors 5.9 Providing Security 5.10 Controlling Session Timeouts 5.11 Documenting Web Applications 5.12 Associating Files with MIME Types 5.13 Locating Tag Library Descriptors 5.14 Designating Application Event Listeners 5.15 J2EE Elements 6. A Sample Web Application: An Online Boat Shop 6.1 General Configuration Files 6.2 The Top-Level Page 6.3 The Second-Level Pages 6.4 The Item Display Servlet 6.5 The Purchase Display Page III: Web Application Security 7. Declarative Security 7.1 Form-Based Authentication 7.2 Example: Form-Based Authentication 7.3 BASIC Authentication 7.4 Example: BASIC Authentication 7.5 Configuring Tomcat to Use SSL 8. Programmatic Security 8.1 Combining Container-Managed and Programmatic Security 8.2 Example: Combining Container-Managed and Programmatic Security 8.3 Handling All Security Programmatically 8.4 Example: Handling All Security Programmatically 8.5 Using Programmatic Security with SSL 8.6 Example: Programmatic Security and SSL IV: Major New Servlet and JSP Capabilities 9. Servlet and JSP Filters 9.1 Creating Basic Filters 9.2 Example: A Reporting Filter 9.3 Accessing the Servlet Context from Filters 9.4 Example: A Logging Filter 9.5 Using Filter Initialization Parameters 9.6 Example: An Access Time Filter 9.7 Blocking the Response 9.8 Example: A Prohibited-Site Filter 9.9 Modifying the Response 9.10 Example: A Replacement Filter 9.11 Example: A Compression Filter 9.12 The Complete Filter Deployment Descriptor 10. The Application Events Framework 10.1 Monitoring Creation and Destruction of the Servlet Context 10.2 Example: Initializing Commonly Used Data 10.3 Detecting Changes in Servlet Context Attributes 10.4 Example: Monitoring Changes to Commonly Used Data 10.5 Packaging Listeners with Tag Libraries 10.6 Example: Packaging the Company Name Listeners 10.7 Recognizing Session Creation and Destruction 10.8 Example: A Listener That Counts Sessions 10.9 Watching for Changes in Session Attributes 10.10 Example: Monitoring Yacht Orders 10.11 Using Multiple Cooperating Listeners 10.12 The Complete Events Deployment Descriptor V: New Tag Library Capabilities 11. New Tag Library Features in JSP 1.2 11.1 Using the New Tag Library Descriptor Format 11.2 Bundling Listeners with Tag Libraries 11.3 Checking Syntax with TagLibraryValidator 11.4 Aside: Parsing XML with SAX 2.0 11.5 Handling Exceptions with the TryCatchFinally Interface 11.6 New Names for Return Values 11.7 Looping Without Generating BodyContent 11.8 Introducing Scripting Variables in the TLD File 12. The JSP Standard Tag Library 12.1 Using JSTL: An Overview 12.2 Installing and Configuring JSTL 12.3 Looping with the forEach Tag 12.4 Accessing the Loop Status 12.5 Looping with the forTokens Tag 12.6 Evaluating Items Conditionally 12.7 Using the Expression Language Server Organization and Structure Download Sites Starting and Stopping the Server Servlet JAR File Locations Locations for Files in the Default Web Application Locations for Files in Custom Web Applications Acknowledgments Many people helped me with this book. Without their assistance, I would still be on the third chapter. Larry Brown (U.S. Navy), John Guthrie (American Institutes for Research), Randal Hanford (Boeing, University of Washington), Bill Higgins (IBM), and Rich Slywczak (NASA) provided valuable technical feedback on many different chapters. Others providing useful suggestions or corrections include Nathan Abramson (ATG), Wayne Bethea (Johns Hopkins University Applied Physics Lab—JHU/ APL), Lien Duong (JHU/APL), Bob Evans (JHU/APL), Lis Immer (JHU/APL), Makato Ishii (Casa Real), Tyler Jewell (BEA), Jim Mayfield (JHU/APL), Matt McGinty (New Atlanta), Paul McNamee (JHU/APL), Karl Moss (Macromedia), and Jim Stafford (Capita). I hope I learned from their advice. Mary Lou “Eagle Eyes” Nohr spotted my errant commas, awkward sentences, typographical errors, and grammatical inconsistencies. She improved the result immensely. Vanessa Moore designed the book layout and produced the final version; she did a great job despite my many last-minute changes. Greg Doench of Prentice Hall believed in the concept from the beginning and encouraged me to write the book. Mike Alread persuaded Sun Microsystems Press to believe in it also. Thanks to all. Most of all, thanks to B.J., Lindsay, and Nathan for their patience and encouragement. God has blessed me with a great family. About the Author Marty Hall is president of coreservlets.com , a small company that provides training courses and consulting services related to server-side Java technology. He also teaches Java and Web programming in the Johns Hopkins University part-time graduate program in Computer Science, where he directs the Distributed Computing and Web Technology concentration areas. Marty is the author of Core Web Programming and Core Servlets and JavaServer Pages, both from Sun Microsystems Press and Prentice Hall. You can reach Marty at hall@coreservlets.com ; you can find out about his onsite training courses at . Introduction.OS,: • The official servlet and JSP reference implementation is no longer developed by Sun. Instead, it is Apache Tomcat, an open-source product developed by a team from many different organizations. • Use of Web applications to bundle groups of servlets and JSP pages has grown significantly. • Portable mechanisms for enforcing Web application security have started to displace the server-specific mechanisms that were formerly used. • Version 2.3 of the servlet specification was released (August 2001). New features in this specification include servlet and JSP filters, application life-cycle event handlers, and a number of smaller additions and changes to existing APIs and to the deployment descriptor (web.xml). • Version 1.2 of the JSP specification was released (also August 2001). This version lets you bundle event listeners with tag libraries, lets you designate XML-based programs to check the syntax of pages that use custom tags, and supplies interfaces that let your custom tags loop more efficiently and handle errors more easily. JSP 1.2 also makes a number of smaller changes and additions to existing APIs and to the TLD file format. • XML has become firmly entrenched as a data-interchange language. Servlet and JSP pages use it for configuration files. Tag library validators can use it to verify custom tag syntax. JSP pages can be represented entirely in XML. • Throughout 2000 and 2001, the JSR-052 expert group put together a standard tag library for JSP. In November of 2001 they released early access version 1.2 of this library, called JSTL (JSP Standard Tag Library). This library provides standard tags for simple looping, iterating over a variety of data structures, evaluating content conditionally, and accessing objects without using explicit scripting code.. Who Should Read . In fact, I put the entire text of Core Servlets and JavaServer Pages on the Web site (in PDF). Although this book is well suited for both experienced servlet and JSP programmers and newcomers to the technology, it assumes. Come back here after you are comfortable with at least the basics. Book Distinctives This book has four important characteristics that set it apart from many other similar-sounding books: • Integrated coverage of servlets and JSP. The two technologies are closely related; you should learn and use them together. • Real code. Complete, working, documented programs are essential to learning; I provide lots of them. • Step-by-step instructions. Complex tasks are broken down into simple steps that are illustrated with real examples. • Server configuration and usage details. I supply lots of concrete examples to get you going quickly. Integrated Coverage of Servlets and JSP One of the key philosophies behind Core Servlets and JavaServer Pages was that servlets and JSP should be learned (and used!) together, not separately. After all, they aren’t two entirely distinct technologies: JSP is just a different way of writing servlets. If you don’t know servlet programming, you can’t use servlets when they are a better choice than JSP, you can’t use the MVC architecture to integrate servlets and JSP, you can’t understand complex JSP constructs, and you can’t understand how JSP scripting elements work (since they are really just servlet code). If you don’t understand JSP development, you can’t use JSP when it is a better option than servlet technology, you can’t use the MVC architecture, and you are stuck using print statements even for pages that consist almost entirely of static HTML.! Real Code. I provide plenty of such programs, all of them documented and available for unrestricted use at . Step-by-Step Instructions? Server Configuration and Usage Details. How This Book Is Organized This book consists of five parts: • Part I: The Basics. Server setup and configuration. Basic servlet programming. Basic JSP programming. • Part II: Web Applications. Using and deploying Web applications. Controlling behavior with web.xml. A larger example. • Part III: Web Application Security. Declarative security. Programmatic security. SSL. • Part IV: Major New Servlet and JSP Capabilities. Servlet and JSP filters. Application life-cycle event listeners. • Part V: New Tag Library Capabilities. New tag library features in JSP 1.2. The JSP Standard Tag Library (JSTL). The Basics • Server setup and configuration. • Downloading the JDK. • Obtaining a development server. • Configuring and testing the server. • Deploying and accessing HTML and JSP pages. • Setting up your development environment. • Deploying and accessing servlets. • Simplifying servlet and JSP deployment. • Basic servlet programming.. • Basic JSP programming. • Understanding the benefits of JSP. • Invoking Java code with JSP expressions, scriptlets, and declarations. • Structuring the servlet that results from a JSP page. • Including files and applets in JSP documents. • Using JavaBeans with JSP. • Creating custom JSP tag libraries. • Combining servlets and JSP: the Model View Controller (Model 2) architecture. Web Applications • Using and deploying Web applications. • Registering Web applications with the server. • Organizing Web applications. • Deploying applications in WAR files. • Recording Web application dependencies on shared libraries. • Dealing with relative URLs. • Sharing data among Web applications. • Controlling Web application behavior with web.xml. • Customizing URLs. • Turning off default URLs. • Initializing servlets and JSP pages. • Preloading servlets and JSP pages. • Declaring filters for servlets and JSP pages. • Designating welcome pages and error pages. • Restricting access to Web resources. • Controlling session timeouts. • Documenting Web applications. • Specifying MIME types. • Locating tag library descriptors. • Declaring event listeners. • Accessing J2EE Resources. • Defining and using a larger Web application. • The interaction among components in a Web application. • Using sessions for per-user data. • Using the servlet context for multiuser data. • Managing information that is accessed by multiple servlets and JSP pages. • Eliminating dependencies on the Web application name. Web Application Security • Declarative security. • Understanding the major aspects of Web application security. • Authenticating users with HTML forms. • Using BASIC HTTP authentication. • Defining passwords in Tomcat, JRun, and ServletExec. • Designating protected resources with the security-constraint element. • Using login-config to specify the authentication method. • Mandating the use of SSL. • Configuring Tomcat to use SSL. • Programmatic security. • Combining container-managed and programmatic security. • Using the isUserInRole method. • Using the getRemoteUser method. • Using the getUserPrincipal method. • Programmatically controlling all aspects of security. • Using SSL with programmatic security. Major New Servlet and JSP Capabilities • Servlet and JSP filters. • Designing basic filters. • Reading request data. • Accessing the servlet context. • Initializing filters. • Blocking the servlet or JSP response. • Modifying the servlet or JSP response. • Using filters for debugging and logging. • Using filters to monitor site access. • Using filters to replace strings. • Using filters to compress the response. • Application life-cycle event listeners. • Understanding the general event-handling strategy. • Monitoring servlet context initialization and shutdown. • Setting application-wide values. • Detecting changes in attributes of the servlet context. • Recognizing creation and destruction of HTTP sessions. • Analyzing overall session usage. • Watching for changes in session attributes. • Tracking purchases at an e-commerce site. • Using multiple cooperating listeners. • Packaging listeners in JSP tag libraries. New Tag Library Capabilities • New tag library features in JSP 1.2. • Converting TLD files to the new format. • Bundling life-cycle event listeners with tag libraries. • Checking custom tag syntax with TagLibraryValidator. • Using the Simple API for XML (SAX) in validators. • Handling errors with the TryCatchFinally interface. • Changing names of method return values. • Looping without creating BodyContent. • Declaring scripting variables in the TLD file. • The JSP Standard Tag Library (JSTL). • Downloading and installing the standard JSP tag library. • Reading attributes without using Java syntax. • Accessing bean properties without using Java syntax. • Looping an explicit number of times. • Iterating over various data structures. • Checking iteration status. • Iterating with string-based tokens. • Evaluating expressions conditionally. • Using the JSTL expression language to set attributes, return values, and declare scripting variables. Conventions Throughout the book, concrete programming constructs or program output are presented in a monospaced font. For example, when abstractly discussing server-side programs that use HTTP, I might refer to “HTTP servlets” or just “servlets,” but when I say HttpServlet I am Some Output URLs, filenames, and directory names are presented with italics. So, for example, I would say “the StringTokenizer class” (monospaced because I’m talking about the class name) and “Listing such and such shows SomeFile.java” (italic because I’m talking about the filename). Paths use forward slashes as in URLs unless they are specific to the Windows operating system. So, for instance, I would use a forward slash when saying “look in install_dir/bin” (OS neutral) but use backslashes when saying “C:\Windows\Temp” (Windows specific). Important standard techniques are indicated by specially marked entries, as in the following example. Core Approach Pay particular attention to items in “Core Approach” sections. They indicate techniques that should always or almost always be used. Notes and warnings are called out in a similar manner. About the Web Site The book has a companion Web site at . This free site includes: • Documented source code for all examples shown in the book; this code can be downloaded for unrestricted use. • The complete text of Core Servlets and JavaServer Pages in PDF format. • Up-to-date download sites for servlet and JSP software. • Links to all URLs mentioned in the text of the book. • Information on book discounts. • Reports on servlet and JSP short courses. • Book additions, updates, and news. Part I: The Basics Part I The Basics Chapter 1 Server Setup and Configuration Chapter 2 A Fast Introduction to Basic Servlet Programming Chapter 3 A Fast Introduction to Basic JSP Programming Chapter 1. Server Setup and Configuration Topics in This Chapter • Downloading the JDK • Obtaining a development server • Configuring and testing the server • Deploying and accessing HTML and JSP pages • Setting up your development environment • Deploying and accessing servlets • Simplifying servlet and JSP deployment Before you can start learning specific servlet and JSP techniques, you need to have the right software and know how to use it. This introductory chapter explains how to obtain, configure, test, and use free versions of all the software needed to run servlets and JavaServer Pages. 1.1 Download the Java Development Kit (JDK) You probably already have the JDK installed, but if not, installing it should be your first step. Version 2.3 of the servlet API and version 1.2 of the JSP API require the Java 2 platform (standard or enterprise edition). If you aren’t using J2EE features like EJB or JNDI, I recommend that you use the standard edition, JDK 1.3 or 1.4. For Solaris, Windows, and Linux, obtain JDK 1.3 at and JDK 1.4 at . For other platforms, check first whether a Java 2 implementation comes preinstalled as it does with MacOS X. If not, see Sun’s list of third-party Java implementations at . 1.2 Download a Server for Your Desktop Your second step is to download a server that implements the Java Servlet 2.3 and JSP 1.2 specifications for use on your desktop. In fact, I typically keep two servers installed on my desktop (Apache’s free Tomcat server and one commercial server) and test my applications on both to keep myself from accidentally using nonportable constructs. Regardless of the server that you will use for final deployment, you will want at least one server on your desktop for development. Even if the deployment server is in the office next to you connected by a lightning-fast network connection, you still don’t want to use it for your development. Even a test server on your intranet that is inaccessible to customers is much less convenient for development purposes than a server right on your desktop. Running a development server on your desktop simplifies development in a number of ways, as compared to deploying to a remote server each and every time you want to test something. 1. It is faster to test. With a server on your desktop, there is no need to use FTP or another upload program. The harder it is for you to test changes, the less frequently you will test. Infrequent testing will let errors persist that will slow you down in the long run. 2. It is easier to debug. When running on your desktop, many servers display the standard output in a normal window. This is in contrast to deployment servers where the standard output is almost always either completely hidden or only available on the screen of the system administrator. So, with a desktop server, plain old System.out.println statements become useful tracing and debugging utilities. 3. It is simple to restart. During development, you will find that you need to restart the server frequently. For example, the server typically reads the web.xml file (see Chapter 4 , “ Using and Deploying Web Applications ”) only at startup. So, you normally have to restart the server each time you modify web.xml. Although some servers (e.g., ServletExec) have an interactive method of reloading web.xml, tasks such as clearing session data, resetting the ServletContext, or replacing modified class files used indirectly by servlets or JSP pages (e.g., beans or utility classes) may still necessitate restarting the server. Some older servers also need to be restarted because they implement servlet reloading unreliably. (Normally, servers instantiate the class that corresponds to a servlet only once and keep the instance in memory between requests. With servlet reloading, a server automatically replaces servlets that are in memory but whose class file has changed on the disk). Besides, some deployment servers recommend completely disabling servlet reloading in order to increase performance. So, it is much more productive to develop in an environment where you can restart the server with a click of the mouse without asking for permission from other developers who might be using the server. 4. It is more reliable to benchmark. Although it is difficult to collect accurate timing results for short-running programs even in the best of circumstances, running benchmarks on systems that have heavy and varying system loads is notoriously unreliable. 5. It is under your control. As a developer, you may not be the administrator of the system on which the test or deployment server runs. You might have to ask some system administrator every time you want the server restarted. Or, the remote system may be down for a system upgrade at the most critical juncture of your development cycle. Not fun. Now, if you can run on your desktop the same server you use for deployment, all the better. But one of the beauties of servlets and JSP is that you don’t have to; you can develop with one server and deploy with another. Following are some of the most popular free options for desktop development servers. In all cases, the free version runs as a standalone Web server; in most cases, you have to pay for the deployment version that can be integrated with a regular Web server like Microsoft IIS, iPlanet/Netscape, or the Apache Web Server. However, the performance difference between using one of the servers as a servlet and JSP engine within a regular Web server and using it as a complete standalone Web server is not significant enough to matter during development. See for a more complete list of servers. • Apache Tomcat. Tomcat 4 is the official reference implementation of the servlet 2.3 and JSP 1.2 specifications. Tomcat 3 is the official reference implementation for servlets 2.2 and JSP 1.1. Both versions can be used as a standalone server during development or can be plugged into a standard Web server for use during deployment. Like all Apache products, Tomcat is entirely free and has complete source code available. Of all the servers, it also tends to be the one that is most compliant with the latest servlet and JSP specifications. However, the commercial servers tend to be better documented, easier to configure, and a bit faster. To download Tomcat, see . • Allaire/Macromedia JRun. JRun is a servlet and JSP engine that can be used in standalone mode for development or plugged into most common commercial Web servers for deployment. It is free for development purposes, but you have to purchase a license before deploying with it. It is a popular choice among developers that are looking for easier administration than Tomcat. For details, see . • New Atlanta’s ServletExec. ServletExec is another popular servlet and JSP engine that can be used in standalone mode for development or, for deployment, plugged into the Microsoft IIS, Apache, and iPlanet/Netscape Web servers. Version 4.0 supports servlets 2.3 and JSP 1.2. You can download and use it for free, but some of the high-performance capabilities and administration utilities are disabled until you purchase a license. The ServletExec Debugger is the configuration you would use as a standalone desktop development server. For details, see . • Caucho’s Resin. Resin is a fast servlet and JSP engine with extensive XML support. It is free for development and noncommercial deployment purposes. For details, see . • LiteWebServer from Gefion Software. LWS is a small standalone Web server that supports servlets and JSP. It is free for both development and deployment purposes, but a license will entitle you to increased support and the complete server source code. See for details. 1.3 Change the Port and Configure Other Server Settings Most of the free servers listed in Section 1.2 use a nonstandard default port in order to avoid conflicts with other Web servers that may be using the standard port (80). However, if you are using the servers in standalone mode and have no other server running permanently on port 80, you will find it more convenient to use port 80. That way, you don’t have to use the port number in every URL you type in your browser. There are one or two other settings that you might want to modify as well. Changing the port or other configuration settings is a server-specific process, so you need to read your server’s documentation for definitive instructions. However, I’ll give a quick summary of the process for three of the most popular free servers here: Tomcat, JRun, and ServletExec. Apache Tomcat Tomcat Port Number With Tomcat 4, modifying the port number involves editing install_dir/conf/server.xml, changing the port attribute of the Connector element from 8080 to 80, and restarting the server. Remember that this section applies to the use of Tomcat. The original element will look something like the following: <Connector className="org.apache.catalina.connector.http.HttpConnector" port="8080"... ... /> It should change to something like the following: <Connector className="org.apache.catalina.connector.http.HttpConnector" port="80"... ... /> The easiest way to find the correct entry is to search for 8080 in server.xml; there should only be one noncomment occurrence. Be sure to make a backup of server.xml before you edit it, just in case you make a mistake that prevents the server from running. Also, remember that XML is case sensitive, so for instance, you cannot replace port with Port or Connector with connector. With Tomcat 3, you modify the same file (install_dir/conf/server.xml), but you need to use slightly different Connector elements for different minor releases of Tomcat. With version 3.2, you replace 8080 with 80 in the following Parameter element. <Connector ...> <Parameter name="port" value="8080"/> </Connector> Again, restart the server after making the change. Other Tomcat Settings Besides the port, three additional Tomcat settings are important: the JAVA_HOME variable, the DOS memory settings, and the CATALINA_HOME or TOMCAT_HOME variable. The most critical Tomcat setting is the JAVA_HOME environment variable—failing to set it properly prevents Tomcat from handling JSP pages. This variable should list the base JDK installation directory, not the bin subdirectory. For example, if you are on Windows 98/Me and installed the JDK in C:\JDK1.3, you might put the following line in your autoexec.bat file. set JAVA_HOME=C:\JDK1.3 On Windows NT/2000, you would go to the Start menu and select Settings, then Control Panel, then System, then Environment. Then, you would enter the JAVA_HOME value. On Unix/Linux, if the JDK is installed in /usr/j2sdk1_3_1 and you use the C shell, you would put the following into your.cshrc file. setenv JAVA_HOME /usr/j2sdk1_3_1 Rather than setting the JAVA_HOME environment variable globally in the operating system, some developers prefer to edit the startup script to set it there. If you prefer this strategy, edit install_dir/bin/catalina.bat (Tomcat 4; Windows) or install_dir/bin/tomcat.bat (Tomcat 3; Windows) and change the following: if not "%JAVA_HOME%" == "" goto gotJavaHome echo You must set JAVA_HOME to point at ... goto cleanup :gotJavaHome to: if not "%JAVA_HOME%" == "" goto gotJavaHome set JAVA_HOME=C:\JDK1.3 :gotJavaHome Be sure to make a backup copy of catalina.bat or tomcat.bat before making the changes. Unix/Linux users would make similar changes in catalina.sh or tomcat.sh. If you use Windows, 2816. Repeat the process for install_dir/bin/shutdown.bat. In some cases, it is also helpful to set the CATALINA_HOME (Tomcat 4) or TOMCAT_HOME (Tomcat 3) environment variables. This variable identifies the Tomcat installation directory to the server. However, if you are careful to avoid copying the server startup scripts and you use only shortcuts (called “symbolic links” on Unix/Linux) instead, you are not required to set this variable. See Section 1.6 for more information on using these shortcuts. Please note that this section describes the use of Tomcat as a standalone server for servlet and JSP development. It requires a totally different configuration to deploy Tomcat as a servlet and JSP container integrated within a regular Web server. For information on the use of Tomcat for deployment, please see . Allaire/Macromedia JRun When using JRun in standalone mode (vs. integrated with a standard Web server), there are several options that you probably want to change from their default values. All can be set from the graphical JRun Management Console and/or through the JRun installation wizard. JRun Port Number To change the JRun port, first start the JRun Admin Server by clicking on the appropriate icon (on Windows, go to the Start menu, then Programs, then JRun 3.x). Then, click on the JRun Management Console (JMC) button or enter the URL in a browser. Log in, using a username of admin and the password that you specified when you installed JRun, choose JRun Default Server, then select JRun Web Server. Figure 1-1 shows the result. Next, select Web Server Port, enter 80, and press Update. See Figure 1-2 . Finally, select JRun Default Server again and press the Restart Server button. Figure 1-1. JMC configuration screen for the JRun Default Server. Figure 1-2. JRun Default Server port configuration window. Other JRun Settings When you install JRun, the installation wizard will ask you three questions that are particularly relevant to using JRun in standalone mode for development purposes. First, it will ask for a serial number. You can leave that blank; it is only required for deployment servers. Second, it will ask if you want to start JRun as a service. You should deselect this option; starting JRun automatically is useful for deployment but inconvenient for development because the server icon does not appear in the taskbar, thus making it harder to restart the server. The wizard clearly states that using JRun as a service should be reserved for deployment, but since the service option is selected by default, you can easily miss it. Finally, you will be asked if you want to configure an external Web server. Decline this option; you need no separate Web server when using JRun in standalone mode. New Atlanta ServletExec The following settings apply to use of the ServletExec Debugger 4.0, the version of ServletExec that you would use for standalone desktop development (vs. integrated with a regular Web server for deployment). ServletExec Port Number To change the port number from 8080 to 80, edit install_dir/StartSED40.bat and add “-port 80” to the end of the line that starts the server, as below. %JAVA_HOME%\bin\java ... ServletExecDebuggerMain -port 80 Remember that this section applies to the use of ServletExec. Other ServletExec Settings ServletExec shares two settings with Tomcat. The one required setting is the JAVA_HOME environment variable. As with Tomcat, this variable refers to the base installation directory of the JDK (not the bin subdirectory). For example, if the JDK is installed in C:\JDK1.3, you should modify the JAVA_HOME entry in install_dir/StartSED40.bat to look like the following. set JAVA_HOME=C:\JDK1.3 Also as with Tomcat, if you use Windows, you may have to change the DOS memory settings for the startup script. If you get an “Out of Environment Space” error message when you start the server, you will need to right-click on install_dir/bin/StartSED40.bat, select Properties, select Memory, and change the Initial Environment entry from Auto to 2816. 1.4 Test the Server Before trying your own servlets or JSP pages, you should make sure that the server is installed and configured properly. For Tomcat, click on install_dir/bin/startup.bat (Windows) or execute install_dir/bin/startup.sh (Unix/Linux). For JRun, go to the Start menu and select Programs, JRun 3.1, and JRun Default Server. For ServletExec, click on install_dir/bin/StartSED40.bat. In all three cases, enter the URL in your browser and make sure you get a regular Web page, not an error message saying that the page cannot be displayed or that the server cannot be found. Figures 1-3 through 1-5 show typical results. If you chose not to change the port number to 80 (see Section 1.3 , “ Change the Port and Configure Other Server Settings ”), you will need to use a URL like that includes the port number. Figure 1-3. Initial home page for Tomcat 4.0. Figure 1-4. Initial home page for JRun 3.1. Figure 1-5. Initial home page for ServletExec 4.0. 1.5 Try Some Simple HTML and JSP Pages. Second, successfully accessing a new JSP page shows that the Java compiler (not just the Java virtual machine) is configured properly. Eventually, you will almost certainly want to create and use your own Web applications (see Chapter 4 , “ Using and Deploying Web Applications ”), but for initial testing I recommend that you use the default Web application. Although Web applications follow a common directory structure, the exact location of the default Web application is server specific. Check your server’s documentation for definitive instructions, but I summarize the locations for Tomcat, JRun, and ServletExec in the following list. Where I list SomeDirectory you can use any directory name you like. (But you are never allowed to use WEB-INF or META-INF as directory names. For the default Web application, you also have to avoid a directory name that matches the URL prefix of any other Web application.) • Tomcat Directory install_dir/webapps/ROOT (or install_dir/webapps/ROOT/SomeDirectory) • JRun Directory install_dir/servers/default/default-app (or install_dir/servers/default/default-app/SomeDirectory) • ServletExec Directory install_dir/public_html [1] [1] Note that the public_html directory is created automatically by ServletExec the first time you run the server. So, you will be unable to find public_html if you have not yet tested the server as described in Section 1.4 (Test the Server). (or install_dir/public_html/SomeDirectory) • Corresponding URLs (or) (or) For your first tests, I suggest you simply take Hello.html (Listing 1.1 , Figure 1-6 ) and Hello.jsp (Listing 1.2 , Figure 1-7 ) and drop them into the appropriate locations. The code for these files, as well as all the code from the book, is available online at . That Web site also contains updates, additions, information on short courses, and the full text of Core Servlets and JavaServer Pages (in PDF). If (e.g., with the JAVA_HOME variable). Figure 1-6. Result of Hello.html. Figure 1-7. Result of Hello.jsp. Listing 1.1 Hello.html <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML> <HEAD><TITLE>HTML Test</TITLE></HEAD> <BODY BGCOLOR="#FDF5E6"> <H1>HTML Test</H1> Hello. </BODY> </HTML> Listing 1.2 Hello.jsp <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <HTML> <HEAD><TITLE>JSP Test</TITLE></HEAD> <BODY BGCOLOR="#FDF5E6"> <H1>JSP Test</H1> Time: <%= new java.util.Date() %> </BODY> </HTML> 1.6 Set Up Your Development Environment The server startup script automatically sets the server’s CLASSPATH to include the standard servlet and JSP classes and the WEB-INF/classes directory (containing compiled servlets) of each Web application. But you need similar settings, or you will be unable to compile servlets in the first place. This section summarizes the configuration needed for servlet development. Create a Development Directory The first thing you should do is create a directory in which to place the servlets and JSP pages that you develop. This directory can be in your home directory (e.g., ~/ServletDevel on Unix) or in a convenient general location (e.g., C:\ServletDevel on Windows). It should not, however, be in the server’s installation directory. Eventually, you will organize this development directory into different Web applications (each with a common structure—see Chapter 4 ). (see Section 1.9 ). I strongly discourage this practice and instead recommend one of the approaches described in Section 1.8 (Establish a Simplified Deployment Method).. Core Warning Don’t use the server’s deployment directory as your development location. Instead, keep a separate development directory. Make Shortcuts to Start and Stop the Server Since I find myself frequently restarting the server, I find it convenient to place shortcuts to the server startup and shutdown icons inside my main development directory. You will likely find it convenient to do the same. For example, for Tomcat on Windows, go to install_dir/bin, right-click on startup.bat, and select Copy. Then go to your development directory, right-click in the window, and select Paste Shortcut (not just Paste). Repeat the process for install_dir/bin/shutdown.bat. On Unix, you would use ln -s to make a symbolic link to startup.sh, tomcat.sh (needed even though you don’t directly invoke this file), and shutdown.sh. For JRun on Windows, go to the Start menu, select Programs, select JRun 3.x, right-click on the JRun Default Server icon, and select Copy. Then go to your development directory, right-click in the window, and select Paste Shortcut. Repeat the process for the JRun Admin Server and JRun Management Console. For the ServletExec Debugger (i.e., standalone development server), go to install_dir, right-click on StartSED40.bat, and select Copy. Then go to your development directory, right-click in the window, and select Paste Shortcut (not just Paste). There is no separate shutdown file; to stop ServletExec, just go to (see Figure 1-5 ) and click on the Shutdown link in the General category on the left-hand side. Set Your API will fail with error messages about unknown classes. The exact location of the servlet JAR file varies from server to server. In most cases, you can hunt around for a file called servlet.jar. Or, read your server’s documentation to discover the location. Once you find the JAR file, add the location to your development CLASSPATH. Here are the locations for some common development servers: • Tomcat 4 Location. install_dir/common/lib/servlet.jar • Tomcat 3 Location. install_dir/lib/servlet.jar • JRun Location. install_dir/lib/ext/servlet.jar • ServletExec Location. install_dir/ServletExecDebugger.jar in the first subsection. Forgetting this setting is perhaps the most common mistake made by beginning servlet programmers. Core Approach Remember to add your development directory to your CLASSPATH . Otherwise, you will get “Unresolved symbol” error messages when you attempt to compile servlets that are in packages and that make use of other classes in the same package. Finally, you should include “.” (the current directory) in the CLASSPATH. Otherwise, you will only be able to compile packageless classes that are in the top-level development directory. Here are a few representative methods of setting the CLASSPATH. They assume that your development directory is C:\devel (Windows) or /usr/devel (Unix/Linux) and that you are using Tomcat 4. Replace install_dir with the actual base installation location of the server. Be sure to use the appropriate case for the filenames. Note that these examples represent only one approach for setting the CLASSPATH. Many Java integrated development environments have a global or project-specific setting that accomplishes the same result. But these settings are totally IDE-specific and won’t be discussed here. • Windows 98/Me. Put the following in your autoexec.bat. (Note that this all goes on one line with no spaces—it is broken here for readability.) • • set CLASSPATH=.; • C:\devel; install_dir\common\lib\servlet.jar • Windows NT/2000. Go to the Start menu and select Settings, then Control Panel, then System, then Environment. Then, enter the CLASSPATH value from the previous bullet. • Unix/Linux (C shell). Put the following in your .cshrc. (Again, in the real file it goes on a single line without spaces.) • • setenv CLASSPATH .: • /usr/devel: install_dir/common/lib/servlet.jar Bookmark or Install the Servlet and JSP API Documentation Just as no serious programmer should develop general-purpose Java applications without access to the JDK 1.3 or 1.4 API documentation (in Javadoc format), no serious programmer should develop servlets or JSP pages without access to the API for classes in the javax.servlet packages. Here is a summary of where to find the API: • This site lets you download the Javadoc files for either the servlet 2.3 and JSP 1.2 API or for the servlet 2.2 and JSP 1.1 API. You will probably find this API so useful that it will be worth having a local copy instead of browsing it online. However, some servers bundle this documentation, so check before downloading. • This site lets you browse the servlet 2.3 API online. • This site lets you browse the servlet 2.2 and JSP 1.1 API online. • This address lets you browse the complete API for the Java 2 Platform, Enterprise Edition (J2EE), which includes the servlet 2.2 and JSP 1.1 packages. 1.7 Compile and Test Some Simple Servlets OK, so your environment is all set. At least you think it is. It would be nice to confirm that hypothesis. Following are three tests that help verify this. Test 1: A Servlet That Does Not Use Packages The first servlet to try is a basic one: no packages, no utility (helper) classes, just simple HTML output. Rather than writing your own test servlet, you can just grab HelloServlet.java (Listing 1.3 ) from the book’s source code archive at . If you get compilation errors, go back and check your CLASSPATH settings (Section 1.6 )—you most likely erred in listing the location of the JAR file that contains the servlet classes (e.g., servlet.jar). Once you compile Hello-Servlet.java, put HelloServlet.class in the appropriate location (usually the WEB-INF/ classes directory of your server’s default Web application). Check your server’s documentation for this location, or see the following list for a summary of the locations used by Tomcat, JRun, and ServletExec. Then, access the servlet with the URL (or if you chose not to change the port number as described in Section 1.3 ). You should get something similar to Figure 1-8 . If this URL fails but the test of the server itself (Section 1.4 ) succeeded, you probably put the class file in the wrong directory. Figure 1-8. Result of HelloServlet . • Tomcat Directory. install_dir/webapps/ROOT/WEB-INF/classes • JRun Directory. install_dir/servers/default/default-app/WEB-INF/classes • ServletExec Directory. install_dir/Servlets • Corresponding URL. Listing 1.3 HelloServlet.java import java.io.*; import javax.servlet.*; import javax.servlet.http.*; /** Simple servlet used to test server. */ public class HelloServlet</TITLE></HEAD>\n" + "<BODY BGCOLOR=\"#FDF5E6\">\n" + "<H1>Hello</H1>\n" + "</BODY></HTML>"); } } Test 2: A Servlet That Uses Packages The second servlet to try is one that uses packages but no utility classes. Again, rather than writing your own test, you can grab HelloServlet2.java (Listing 1.4 ) from the book’s source code archive at . Since this servlet is in the moreservlets package, it should go in the moreservlets directory, both during development and when deployed to the server. If you get compilation errors, go back and check your CLASSPATH settings (Section 1.6 )—you most likely forgot to include “.” (the current directory). Once you compile HelloServlet2.java, put HelloServlet2 for Tomcat, JRun, and ServletExec. For now, you can simply copy the class file from the development directory to the deployment directory, but Section 1.8 (Establish a Simplified Deployment Method) will provide some options for simplifying the process. Once you have placed the servlet in the proper directory, access it with the URL . You should get something similar to Figure 1-9 . If this test fails, you probably either typed the URL wrong (e.g., used a slash instead of a dot after the package name) or put HelloServlet2.class in the wrong location (e.g., directly in the server’s WEB-INF/classes directory instead of in the moreservlets subdirectory). Figure 1-9. Result of HelloServlet2 . • Tomcat Directory. install_dir/webapps/ROOT/WEB-INF/classes/moreservlets • JRun Directory. install_dir/servers/default/default-app/WEB-INF/classes/moreservlets • ServletExec Directory. install_dir/Servlets/moreservlets • Corresponding URL. Listing 1.4 moreservlets/HelloServlet2.java package moreservlets; import java.io.*; import javax.servlet.*; import javax.servlet.http.*; /** Simple servlet used to test the use of packages. */ public class HelloServlet2 (2)</TITLE></HEAD>\n" + "<BODY BGCOLOR=\"#FDF5E6\">\n" + "<H1>Hello (2)</H1>\n" + "</BODY></HTML>"); } } 1.7.3 A Servlet That Uses Packages and Utilities The final servlet you should test to verify the configuration of your server and development environment is one that uses both packages and utility classes. Listing 1.5 presents HelloServlet3.java, a servlet that uses the ServletUtilities class (Listing 1.6 ). Again, the source code can be found at . Since both the servlet and the utility class are in the moreservlets package, they should go in the moreservlets directory. If you get compilation errors, go back and check your CLASSPATH settings (Section 1.6 )—you most likely forgot to include the top-level development directory. I’ve said it before, but I’ll say it again: your CLASSPATH must. Core Warning Your CLASSPATH must include your top-level development directory. Otherwise, you cannot compile servlets that are in packages and that also use classes from the same package. Once you compile HelloServlet3.java (which will automatically cause ServletUtilities.java to be compiled), put HelloServlet3.class and ServletUtilities used by Tomcat, JRun, and ServletExec. Then, access the servlet with the URL . You should get something similar to Figure 1-10 . Figure 1-10. Result of HelloServlet3 . • Tomcat Directory. install_dir/webapps/ROOT/WEB-INF/classes/moreservlets • JRun Directory. install_dir/servers/default/default-app/WEB-INF/classes/moreservlets • ServletExec Directory. install_dir/Servlets/moreservlets • Corresponding URL. Listing 1.5 moreservlets/HelloServlet3.java package moreservlets; import java.io.*; import javax.servlet.*; import javax.servlet.http.*; /** Simple servlet used to test the use of packages * and utilities from the same package. */ public class HelloServlet3 extends HttpServlet { public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { response.setContentType("text/html"); PrintWriter out = response.getWriter(); String title = "Hello (3)"; out.println(ServletUtilities.headWithTitle(title) + "<BODY BGCOLOR=\"#FDF5E6\">\n" + "<H1>" + title + "</H1>\n" + "</BODY></HTML>"); } } Listing 1.6 moreservlets/ServletUtilities.java package moreservlets; import javax.servlet.*; import javax.servlet.http.*; /** Some simple time savers. Note that most are static methods. */ public class ServletUtilities { public static final String DOCTYPE = "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0 " + "Transitional//EN\">"; public static String headWithTitle(String title) { return(DOCTYPE + "\n" + "<HTML>\n" + "<HEAD><TITLE>" + title + "</TITLE></HEAD>\n"); } ... } 1.8 Establish a Simplified Deployment Method OK, so you have a development directory. You can compile servlets with or without packages. You know which directory the servlet classes belong in. You know the URL that should be used to access them (at least the default URL; in Section 5.3 , “ Assigning Names and Custom URLs,” you’ll see how to customize that address). But how do you move the class files from the development directory to the deployment direc-tory? Copying each one by hand every time is tedious and error prone. Once you start using Web applications (see Chapter 4 ),. 1. Copy to a shortcut or symbolic link. 2. Use the -d option of javac. 3. Let your IDE take care of deployment. 4. Use ant or a similar tool. Details on these four options are given in the following subsections. Copy to a Shortcut or Symbolic Link On Windows, go to the server’s default Web application, right-click on the classes directory, and select Copy. Then go to your development directory, right-click, and select Paste Shortcut (not just Paste). Now, whenever you compile a packageless servlet, just drag the class files onto the shortcut. When you develop in packages, use the right mouse to drag the entire directory (e.g., the moreservlets directory) onto the shortcut, release the mouse, and select Copy. On Unix/Linux, you can use symbolic links (created with ln -s) in a manner similar to that for Windows shortcuts. An advantage of this approach is that it is simple. So, it is good for beginners who want to concentrate on learning servlets and JSP, not deployment tools. Another advantage is that a variation applies once you start using your own Web applications (see Chapter 4 ). Just make a shortcut to the main Web application directory (one level up from the top of the default Web application), and copy the entire Web application each time by using the right mouse to drag the directory that contains your Web application onto this shortcut and selecting Copy. One disadvantage of this approach is that it requires repeated copying if you use multiple servers. For example, I keep at least two different servers on my development system and regularly test my code with both servers. A second disadvantage is that this approach copies both the Java source code files and the class files to the server, whereas only the class files are needed. This may not matter much on your desktop server, but when you get to the “real” deployment server, you won’t want to include the source code files. Use the -d Option of javac By default, the Java compiler (javac) places class files in the same directory as the source code files that they came from., with Tomcat I could compile the HelloServlet2 servlet (Listing 1.4 , Section 1.7 ) as follows (line break added only for clarity; omit it in real life).. An advantage of this approach is that it requires no manual copying of class files. Furthermore, the exact same command can be used for classes in different packages since javac automatically puts the class files in a subdirectory matching the package. The main disadvantage is that this approach applies only to Java class files; it won’t work for deploying HTML and JSP pages, much less entire Web applications. Let Your IDE Take Care of Deployment Most servlet- and JSP-savvy development environments (e.g., IBM WebSphere Studio, Macromedia JRun Studio, Borland JBuilder) some IDEs, is that it can deploy HTML and JSP pages and even entire Web applications, not just Java class files. A disadvantage is that it is an IDE-specific technique and thus is not portable across systems. Use ant or a Similar Tool Developed by the Apache foundation, (see Chapter 4 ). For general information on using ant, see . See for specific guidance on using ant with Tomcat. The main advantage of this approach is flexibility: ant is powerful enough to handle everything from compiling the Java source code to copying files to producing WAR files (Section 4.3 ). The disadvantage of ant is the overhead of learning to use it; there is more of a learning curve with ant than with the other techniques in this section. 1.9 Deployment Directories for Default Web Application: Summary The following subsections summarize the way to deploy and access HTML files, JSP pages, servlets, and utility classes in Tomcat, JRun, and ServletExec. The summary assumes that you are deploying files in the default Web application, have changed the port number to 80 (see Section 1.3 ), and are accessing servlets through the default URL (i.e.,). Later chapters explain how to deploy user-defined Web applications and how to customize the URLs. But you’ll probably want to start with the defaults just to confirm that everything is working properly. The Appendix (Server Organization and Structure) gives a unified summary of the directories used by Tomcat, JRun, and ServletExec for both the default Web application and custom Web applications. If you are using a server on your desktop, you can use localhost for the host portion of each of the URLs in this section. Tomcat HTML and JSP Pages • Main Location. install_dir/webapps/ROOT • Corresponding URLs. • More Specific Location (Arbitrary Subdirectory). install_dir/webapps/ROOT/SomeDirectory • Corresponding URLs. Individual Servlet and Utility Class Files • Main Location (Classes without Package). install_dir/webapps/ROOT/WEB-INF/classes • Corresponding URL (Servlets). • More Specific Location (Classes in Packages). install_dir/webapps/ROOT/WEB-INF/classes/packageName • Corresponding URL (Servlets in Packages). Servlet and Utility Class Files Bundled in JAR Files • Location. install_dir/webapps/ROOT/WEB-INF/lib • Corresponding URLs (Servlets). JRun HTML and JSP Pages • Main Location. install_dir/servers/default/default-app • Corresponding URLs. • More Specific Location (Arbitrary Subdirectory). install_dir/servers/default/default-app/SomeDirectory • Corresponding URLs. Individual Servlet and Utility Class Files • Main Location (Classes without Package). install_dir/servers/default/default-app/WEB-INF/classes • Corresponding URL (Servlets). • More Specific Location (Classes in Packages). install_dir/servers/default/default-app/WEB-INF/classes/packageName • Corresponding URL (Servlets in Packages). Servlet and Utility Class Files Bundled in JAR Files • Location. install_dir/servers/default/default-app/WEB-INF/lib • Corresponding URLs (Servlets). ServletExec HTML and JSP Pages • Main Location. install_dir/public_html • Corresponding URLs. • More Specific Location (Arbitrary Subdirectory). install_dir/public_html/SomeDirectory • Corresponding URLs. Individual Servlet and Utility Class Files • Main Location (Classes without Package). install_dir/Servlets • Corresponding URL (Servlets). • More Specific Location (Classes in Packages). install_dir/Servlets/packageName • Corresponding URL (Servlets in Packages). Servlet and Utility Class Files Bundled in JAR Files • Location. install_dir/Servlets • Corresponding URLs (Servlets). Chapter 2. A Fast Introduction to Basic Servlet Programming Topics in This Chapter 2-1 . Figure 2-1. The role of Web middleware. 1. Read the explicit data sent by the client. The end user normally enters this data in an HTML form on a Web page. However, the data could also come from an applet or a custom HTTP client program. 2. Read the implicit HTTP request data sent by the browser. Figure 2-1 shows a single arrow going from the client to the Web server (the layer where servlets and JSP execute), but there are really two varieties of data: the explicit data the end user enters in a form and the behind-the-scenes HTTP information. Both varieties are critical to effective development. The HTTP information includes cookies, media types and compression schemes the browser understands, and so forth. 3.. 4. Send the explicit data (i.e., the document) to the client. This document can be sent in a variety of formats, including text (HTML), binary (GIF images), or even a compressed format like gzip that is layered on top of some other underlying format. 5. Send the implicit HTTP response data. Figure 2-1: •. • The Web page is derived from data that changes frequently. For example, a weather report or news headlines site might build the pages dynamically, perhaps returning a previously built page if that page is still up to date. • Enter the password to open this PDF file: File name: - File size: - Title: - Author: - Subject: - Keywords: - Creation Date: - Modification Date: - Creator: - PDF Producer: - PDF Version: - Page Count: - Preparing document for printing… 0% Log in to post a comment
https://www.techylib.com/en/view/milklivereddeep/more_servlets_and_javaserver_pages
CC-MAIN-2019-43
refinedweb
9,474
51.65
\input texinfo @c -*-texinfo-*- @c %**start of header @setfilename uucp.info @settitle Taylor UUCP @setchapternewpage odd @c %**end of header @iftex @finalout @end iftex @ifinfo @format START-INFO-DIR-ENTRY * UUCP: (uucp). Transfer mail and news across phone lines. END-INFO-DIR-ENTRY @end format This file documents Taylor UUCP, version 1.07. Copyright @copyright{} 1992, 1993, 1994, 1995, 2002 Ian Lance Taylor ifinfo @titlepage @title Taylor UUCP @subtitle Version 1.07 @author Ian Lance Taylor @email{ian@@airs.com} @page @vskip 0pt plus 1filll Copyright @copyright{} 1992, 1993, 1994, 1995, 2002 Ian Lance Taylor Published by Ian Lance Taylor @email{ian@@airs titlepage @node Top, Copying, (dir), (dir) @top Taylor UUCP 1.07 This is the documentation for the Taylor UUCP package, version 1.07. The programs were written by Ian Lance Taylor. The author can be reached at @email{ian@@airs.com}. There is a mailing list for discussion of the package. The list is hosted by Eric Schnoebelen at @email{cirr.com}. To join (or get off) the list, send mail to @email{taylor-uucp-request@@gnu.org}. Mail to this address is answered by the majordomo program. To join the list, send the message @samp{subscribe @var{address}} where @var{address} is your e-mail address. To send a message to the list, send it to @email{taylor-uucp@@gnu.org}. There is an archive of all messages sent to the mailing list at @url{}. @end menu @node Copying, Introduction, Top, Top @unnumbered Taylor UUCP Copying Conditions This package is covered by the GNU Public License. See the file @dfn. @node Introduction, Invoking the UUCP Programs, Copying, Top @chapter. @table @command @item uucp The @command{uucp} program is used to copy file between systems. It is similar to the standard Unix @command{cp} program, except that you can refer to a file on a remote system by using @samp{system!} before the file name. For example, to copy the file @file{notes.txt} to the system @samp{airs}, you would say @samp{uucp notes.txt airs!~/notes.txt}. In this example @samp{~} is used to name the UUCP public directory on @samp{airs}. For more details, see @ref{Invoking uucp, uucp}. @item uux The @command{uux} program is used to request the execution of a program on a remote system. This is how mail and news are transferred over UUCP. As with @command{uucp}, programs and files on remote systems may be named by using @samp{system!}. For example, to run the @command{rnews} program on @samp{airs}, passing it standard input, you would say @samp{uux - airs!rnews}. The @option{-} means to read standard input and set things up such that when @command{rnews} runs on @samp{airs} it will receive the same standard input. For more details, see @ref{Invoking uux, uux}. @end table Neither @command{uucp} nor @command. @table @command @item uustat The @command{uustat} program does many things. By default it will simply list all the jobs you have queued with @command{uucp} or @command{uux} that have not yet been processed. You can use @command @command{uustat} to automatically discard old jobs while sending mail to the user who requested them. For more details, see @ref{Invoking uustat, uustat}. @item uuname The @command{uuname} program by default lists all the remote systems your system knows about. You can also use it to get the name of your local system. It is mostly useful for shell scripts. For more details, see @ref{Invoking uuname, uuname}. @item uulog The @command @ref{Invoking uulog, uulog}. @item uuto @item uupick @command{uuto} is a simple shell script interface to @command{uucp}. It will transfer a file, or the contents of a directory, to a remote system, and notify a particular user on the remote system when it arrives. The remote user can then retrieve the file(s) with @command{uupick}. For more details, see @ref{Invoking uuto, uuto}, and see @ref{Invoking uupick, uupick}. @item cu The @command{cu} program can be used to call up another system and communicate with it as though you were directly connected. It can also do simple file transfers, though it does not provide any error checking. For more details, @ref{Invoking cu, cu}. @end table These eight programs just described, @command{uucp}, @command{uux}, @command{uuto}, @command{uupick}, @command{uustat}, @command{uuname}, @command{uulog}, and @command{cu} are the user programs provided by Taylor UUCP@. @command{uucp}, @command{uux}, and @command{uuto} add requests to the work queue, @command{uupick} extracts files from the UUCP public directory, @command{uustat} examines the work queue, @command{uuname} examines the configuration files, @command{uulog} examines the log files, and @command{cu} just uses the UUCP configuration files. The real work is actually done by two daemon processes, which are normally run automatically rather than by a user. @table @command @item uucico The @command{uucico} daemon is the program which actually calls the remote system and transfers files and requests. @command{uucico} is normally started automatically by @command{uucp} and @command{uux}. Most systems will also start it periodically to make sure that all work requests are handled. @command{uucico} checks the queue to see what work needs to be done, and then calls the appropriate systems. If the call fails, perhaps because the phone line is busy, @command{uucico} leaves the requests in the queue and goes on to the next system to call. It is also possible to force @command{uucico} to call a remote system even if there is no work to be done for it, so that it can pick up any work that may be queued up remotely. For more details, see @ref{Invoking uucico, uucico}. @need 1000 @item uuxqt The @command{uuxqt} daemon processes execution requests made by the @command{uux} program on remote systems. It also processes requests made on the local system which require files from a remote system. It is normally started by @command{uucico}. For more details, see @ref{Invoking uuxqt, uuxqt}. @end table Suppose you, on the system @samp{bantam}, want to copy a file to the system @samp{airs}. You would run the @command{uucp} command locally, with a command like @samp{uucp notes.txt airs!~/notes.txt}. This would queue up a request on @samp{bantam} for @samp{airs}, and would then start the @command{uucico} daemon. @command{uucico} would see that there was a request for @samp{airs} and attempt to call it. When the call succeeded, another copy of @command{uucico} would be started on @samp{airs}. The two copies of @command{uucico} would tell each other what they had to do and transfer the file from @samp{bantam} to @samp{airs}. When the file transfer was complete the @command{uucico} on @samp{airs} would move it into the UUCP public directory. UUCP is often used to transfer mail. This is normally done automatically by mailer programs. When @samp{bantam} has a mail message to send to @samp{ian} at @samp{airs}, it executes @samp{uux - airs!rmail ian} and writes the mail message to the @command{uux} process as standard input. The @command{uux} program, running on @samp{bantam}, will read the standard input and store it, as well as the @command{rmail} request itself, on the work queue for @samp{airs}. @command{uux} will then start the @command{uucico} daemon. The @command{uucico} daemon will call up @samp{airs}, just as in the @command{uucp} example, and transfer the work request and the mail message. The @command{uucico} daemon on @samp{airs} will put the files on a local work queue. When the communication session is over, the @command{uucico} daemon on @samp{airs} will start the @command{uuxqt} daemon. @command{uuxqt} will see the request on the work queue, and will run @samp{rmail ian} with the mail message as standard input. The @command{rmail} program, which is not part of the UUCP package, is then responsible for either putting the message in the right mailbox on @samp{airs} or forwarding the message on to another system. Taylor UUCP comes with a few other programs that are useful when installing and configuring UUCP. @table @command @item uuchk The @command @ref{Invoking uuchk, uuchk}. @item uuconv The @command @ref{Invoking uuconv, uuconv}. @item uusched The @command{uusched} script is provided for compatibility with older UUCP releases. It starts @command{uucico} to call, one at a time, all the systems for which work has been queued. For more details, see @ref{Invoking uusched, uusched}. @item tstuu The @command @ref{Testing the Compilation, tstuu}. @end table @node Invoking the UUCP Programs, Installing Taylor UUCP, Introduction, Top @chapter @end menu @node Standard Options, Invoking uucp, Invoking the UUCP Programs, Invoking the UUCP Programs @section Standard Options All of the UUCP programs support a few standard options. @table @option @item -x type @itemx --debug type Turn on particular debugging types. The following types are recognized: @samp{abnormal}, @samp{chat}, @samp{handshake}, @samp{uucp-proto}, @samp{proto}, @samp{port}, @samp{config}, @samp{spooldir}, @samp{execute}, @samp{incoming}, @samp{outgoing}. Not all types of debugging are effective for all programs. See the @code{debug} configuration command for details (@pxref{Debugging Levels}). Multiple types may be given, separated by commas, and the @option{--debug} option may appear multiple times. A number may also be given, which will turn on that many types from the foregoing list; for example, @samp{--debug 2} is equivalent to @samp{--debug abnormal,chat}. To turn on all types of debugging, use @samp{-x all}. The @command{uulog} program uses @option{-X} rather than @option{-x} to select the debugging type; for @command{uulog}, @option{-x} has a different meaning, for reasons of historical compatibility. @item -I file @itemx --config file Set the main configuration file to use. @xref{config File}. When this option is used, the programs will revoke any setuid privileges. @item -v @itemx --version Report version information and exit. @item --help Print a help message and exit. @end table @need 2000 @node Invoking uucp, Invoking uux, Standard Options, Invoking the UUCP Programs @section Invoking uucp @menu * uucp Description:: Description of uucp * uucp Options:: Options Supported by uucp @end menu @node uucp Description, uucp Options, Invoking uucp, Invoking uucp @subsection uucp Description @example uucp [options] @file{source-file} @file{destination-file} uucp [options] @file{source-file}... @file{destination-directory} @end example The @command{uucp} command copies files between systems. Each @file{file} argument is either a file name on the local machine or is of the form @samp{system!file}. The latter is interpreted as being on a remote system. When @command{uucp} is used with two non-option arguments, the contents of the first file are copied to the second. With more than two non-option arguments, each source file is copied into the destination directory. A file may be transferred to or from @samp{system2} via @samp{system1} by using @samp{system1!system2!file}. Any file name that does not begin with @samp{/} or @samp{~} will be prepended with the current directory (unless the @option{-W} or @option{--noexpand} options are used). For example, if you are in the directory @samp{/home/ian}, then @samp{uucp foo remote!bar} is equivalent to @samp{uucp /home/ian/foo remote!/home/ian/bar}. Note that the resulting file name may not be valid on a remote system. A file name beginning with a simple @samp{~} starts at the UUCP public directory; a file name beginning with @samp{~name} starts at the home directory of the named user. The @samp{~} is interpreted on the appropriate system. Note that some shells will interpret an initial @samp{~} before @command{uucp} sees it; to avoid this the @samp{~} must be quoted. The shell metacharacters @samp{?} @samp{*} @samp{[} and @samp{]} are interpreted on the appropriate system, assuming they are quoted to prevent the shell from interpreting them first. The file copy does not take place immediately, but is queued up for the @command{uucico} daemon; the daemon is started immediately unless the @option{-r} or @option{--nouucico} option is given. The next time the remote system is called, the file(s) will be copied. @xref{Invoking uucico}. The file mode is not preserved, except for the execute bit. The resulting file is owned by the uucp user. @node uucp Options, , uucp Description, Invoking uucp @subsection uucp Options The following options may be given to @command{uucp}. @table @option @item -c @itemx --nocopy Do not copy local source files to the spool directory. If they are removed before being processed by the @command{uucico} daemon, the copy will fail. The files must be readable by the @command{uucico} daemon, and by the invoking user. @item -C @itemx --copy Copy local source files to the spool directory. This is the default. @item -d @itemx --directories Create all necessary directories when doing the copy. This is the default. @item -f @itemx --nodirectories If any necessary directories do not exist for the destination file name, abort the copy. @item -R @itemx --recursive If any of the source file names are directories, copy their contents recursively to the destination (which must itself be a directory). -m @itemx --mail Report completion or failure of the file transfer by sending mail. @item -n user @itemx --notify user Report completion or failure of the file transfer by sending mail to the named user on the destination system. @item -r @itemx --nouucico Do not start the @command{uucico} daemon immediately; merely queue up the file transfer for later execution. @item -j @itemx --jobid Print the jobid on standard output. The job may be later cancelled by passing this jobid to the @option{-kill} switch of @command{uustat}. @xref{Invoking uustat}. It is possible for some complex operations to produce more than one jobid, in which case each will be printed on a separate line. For example @example uucp sys1!~user1/file1 sys2!~user2/file2 ~user3 @end example will generate two separate jobs, one for the system @samp{sys1} and one for the system @samp{sys2}. @item -W @itemx --noexpand Do not prepend remote relative file names with the current directory. @item -t @itemx --uuto This option is used by the @command{uuto} shell script; see @ref{Invoking uuto}. It causes @command{uucp} to interpret the final argument as @samp{system!user}. The file(s) are sent to @samp{~/receive/@var{user}/@var{local}} on the remote system, where @var{user} is from the final argument and @var{local} is the local UUCP system name. Also, @command{uucp} will act as though @option{--notify user} were specified. @item -x type @itemx --debug type @itemx -I file @itemx --config file @itemx -v @itemx --version @itemx --help @xref{Standard Options}. @end table @node Invoking uux, Invoking uustat, Invoking uucp, Invoking the UUCP Programs @section Invoking uux @menu * uux Description:: Description of uux * uux Options:: Options Supported by uux * uux Examples:: Examples of uux Usage @end menu @node uux Description, uux Options, Invoking uux, Invoking uux @subsection uux Description @example uux [options] command @end example The @command{uux} command is used to execute a command on a remote system, or to execute a command on the local system using files from remote systems. The command is not executed immediately; the request is queued until the @command{uucico} daemon calls the system and transfers the necessary files. The daemon is started automatically unless one of the @option{-r} or @option{--nouucico} options is given. The actual command execution is done by the @command @samp{~/}, in which case it is relative to the UUCP public directory on the appropriate system. A file name may begin with @samp{ @command{uux} rather than interpreted by the shell. Append redirection (@samp{>>}) does not work. All specified files are gathered together into a single directory before execution of the command begins. This means that each file must have a distinct name. For example, @example uux 'sys1!diff sys2!~user1/foo sys3!~user2/foo >!foo.diff' @end example will fail because both files will be copied to @samp{sys1} and stored under the name @file{foo}. Arguments may be quoted by parentheses to avoid interpretation of exclamation points. This is useful when executing the @command{uucp} command on a remote system. Most systems restrict the commands which may be executed using @samp{uux}. Many permit only the execution of @samp{rmail} and @samp{rnews}. A request to execute an empty command (e.g., @samp{uux sys!}) will create a poll file for the specified system; see @ref{Calling Other Systems} for an example of why this might be useful. The exit status of @command{uux} is one of the codes found in the header file @file{sysexits.h}. In particular, @samp{EX_OK} (@samp{0}) indicates success, and @samp{EX_TEMPFAIL} (@samp{75}) indicates a temporary failure. @node uux Options, uux Examples, uux Description, Invoking uux @subsection uux Options The following options may be given to @command{uux}. @table @option @item - @itemx -p @itemx --stdin Read standard input up to end of file, and use it as the standard input for the command to be executed. @item -c @itemx --nocopy Do not copy local files to the spool directory. This is the default. If they are removed before being processed by the @command{uucico} daemon, the copy will fail. The files must be readable by the @command{uucico} daemon, as well as the by the invoker of @command{uux}. @item -C @itemx --copy Copy local files to the spool directory. @item -l @itemx --link Link local files into the spool directory. If a file can not be linked because it is on a different device, it will be copied unless one of the @option{-c} or @option{--nocopy} options also appears (in other words, use of @option{--link} switches the default from @option{--nocopy} to @option{--copy}). If the files are changed before being processed by the @command{uucico} daemon, the changed versions will be used. The files must be readable by the @command{uucico} daemon, as well as by the invoker of @command{uux}. -n @itemx --notification=no Do not send mail about the status of the job, even if it fails. @item -z @itemx --notification=error Send mail about the status of the job if an error occurs. For many @command{uuxqt} daemons, including the Taylor UUCP @command{uuxqt}, this is the default action; for those, @option{--notification=error} will have no effect. However, some @command{uuxqt} daemons will send mail if the job succeeds, unless the @option{--notification=error} option is used. Some other @command{uuxqt} daemons will not send mail even if the job fails, unless the @option{--notification=error} option is used. @item -a address @itemx --requestor address Report job status, as controlled by the @option{--notification} option, to the specified mail address. @item -r @itemx --nouucico Do not start the @command{uucico} daemon immediately; merely queue up the execution request for later processing. @item -j @itemx --jobid Print the jobid on standard output. A jobid will be generated for each file copy operation required to execute the command. These file copies may be later cancelled by passing the jobid to the @option{-kill} switch of @command{uustat}. @xref{Invoking uustat}. Cancelling any file copies will make it impossible to complete execution of the job. @item -x type @itemx --debug type @itemx -v @itemx --version @itemx --help @xref{Standard Options}. @end table @node uux Examples, , uux Options, Invoking uux @subsection uux Examples Here are some examples of using @command{uux}. @example uux -z - sys1!rmail user1 @end example This will execute the command @samp{rmail user1} on the system @samp{sys1}, giving it as standard input whatever is given to @command{uux} as standard input. If a failure occurs, mail will be sent to the user who ran the command. @example uux 'diff -c sys1!~user1/file1 sys2!~user2/file2 >!file.diff' @end example This will fetch the two named files from system @samp{sys1} and system @samp{sys2} and execute @samp{diff}, putting the result in @file{file.diff} in the current directory on the local system. The current directory must be writable by the @command{uuxqt} daemon for this to work. @example uux 'sys1!uucp ~user1/file1 (sys2!~user2/file2)' @end example Execute @command{uucp} on the system @samp{sys1} copying @file{file1} (on system @samp{sys1}) to @samp{sys2}. This illustrates the use of parentheses for quoting. @node Invoking uustat, Invoking uuname, Invoking uux, Invoking the UUCP Programs @section Invoking uustat @menu * uustat Description:: Description of uustat * uustat Options:: Options Supported by uustat * uustat Examples:: Examples of uustat Usage @end menu @node uustat Description, uustat Options, Invoking uustat, Invoking uustat @subsection uustat Description @example @end example The @command{uustat} command can display various types of status information about the UUCP system. It can also be used to cancel or rejuvenate requests made by @command{uucp} or @command{uux}. With no options, @command{uustat} displays all jobs queued up for the invoking user, as if given the @option{--user} option with the appropriate argument. If any of the @option{-a}, @option{--all}, @option{-e}, @option{--executions}, @option{-s}, @option{--system}, @option{-S}, @option{--not-system}, @option{-u}, @option{--user}, @option{-U}, @option{--not-user}, @option{-c}, @option{--command}, @option{-C}, @option{--not-command}, @option{-o}, @option{--older-than}, @option{-y}, or @option{--younger-than} options are given, then all jobs which match the combined specifications are displayed. The @option{-K} or @option{--kill-all} option may be used to kill off a selected group of jobs, such as all jobs more than 7 days old. @node uustat Options, uustat Examples, uustat Description, Invoking uustat @subsection uustat Options The following options may be given to @command{uustat}. @table @option @item -a @itemx --all List all queued file transfer requests. @item -e @itemx --executions List queued execution requests rather than queued file transfer requests. Queued execution requests are processed by @command{uuxqt} rather than @command{uucico}. Queued execution requests may be waiting for some file to be transferred from a remote system. They are created by an invocation of @command{uux}. @item -s system @itemx --system system List all jobs queued up for the named system. These options may be specified multiple times, in which case all jobs for all the named systems will be listed. If used with @option{--list}, only the systems named will be listed. @item -S system @itemx --not-system system List all jobs queued for systems other than the one named. These options may be specified multiple times, in which case no jobs from any of the specified systems will be listed. If used with @option{--list}, only the systems not named will be listed. These options may not be used with @option{-s} or @option{--system}. @item -u user @itemx --user user List all jobs queued up for the named user. These options may be specified multiple times, in which case all jobs for all the named users will be listed. @item -U user @itemx --not-user user List all jobs queued up for users other than the one named. These options may be specified multiple times, in which case no jobs from any of the specified users will be listed. These options may not be used with @option{-u} or @option{--user}. @item -c command @itemx --command command List all jobs requesting the execution of the named command. If @samp{command} is @samp{ALL} this will list all jobs requesting the execution of some command (as opposed to simply requesting a file transfer). These options may be specified multiple times, in which case all jobs requesting any of the commands will be listed. @item -C command @itemx --not-command command List all jobs requesting execution of some command other than the named command, or, if @samp{command} is @samp{ALL}, list all jobs that simply request a file transfer (as opposed to requesting the execution of some command). These options may be specified multiple times, in which case no job requesting one of the specified commands will be listed. These options may not be used with @option{-c} or @option{--command}. @item -o hours @itemx --older-than hours List all queued jobs older than the given number of hours. If used with @option{--list}, only systems whose oldest job is older than the given number of hours will be listed. @item -y hours @itemx --younger-than hours List all queued jobs younger than the given number of hours. If used with @option{--list}, only systems whose oldest job is younger than the given number of hours will be listed. @item -k jobid @itemx --kill jobid Kill the named job. The job id is shown by the default output format, as well as by the @option{-j} or @option{--jobid} options to @command{uucp} or @command{uux}. A job may only be killed by the user who created the job, or by the UUCP administrator, or the superuser. The @option{-k} or @option{--kill} options may be used multiple times on the command line to kill several jobs. @item -r jobid @itemx --rejuvenate jobid Rejuvenate the named job. This will mark it as having been invoked at the current time, affecting the output of the @option{-o}, @option{--older-than}, @option{-y}, or @option{--younger-than} options, possibly preserving it from any automated cleanup daemon. The job id is shown by the default output format, as well as by the @option{-j} or @option{--jobid} options to @command{uucp} or @command{uux}. A job may only be rejuvenated by the user who created the job, or by the UUCP administrator, or the superuser. The @option{-r} or @option{--rejuvenate} options may be used multiple times on the command line to rejuvenate several jobs. @item -q @itemx --list Display the status of commands, executions and conversations for all remote systems for which commands or executions are queued. The @option{-s}, @option{--system}, @option{-S}, @option{--not-system}, @option{-o}, @option{--older-than}, @option{-y}, and @option{--younger-than} options may be used to restrict the systems which are listed. Systems for which no commands or executions are queued will never be listed. @item -m @itemx --status Display the status of conversations for all remote systems. @item -p @itemx --ps Display the status of all processes holding UUCP locks on systems or ports. @need 500 @item -i @itemx --prompt For each listed job, prompt whether to kill the job or not. If the first character of the input line is @kbd{y} or @kbd{Y}, the job will be killed. @item -K @itemx --kill-all Automatically kill each listed job. This can be useful for automatic cleanup scripts, in conjunction with the @option{--mail} and @option{--notify} options. @item -R @itemx --rejuvenate-all Automatically rejuvenate each listed job. This may not be used with @option{--kill-all}. @item -M @itemx --mail For each listed job, send mail to the UUCP administrator. If the job is killed (due to @option{--kill-all}, or @option{--prompt} with an affirmative response) the mail will indicate that. A comment specified by the @option{--comment} option may be included. If the job is an execution, the initial portion of its standard input will be included in the mail message; the number of lines to include may be set with the @option{--mail-lines} option (the default is 100). If the standard input contains null characters, it is assumed to be a binary file and is not included. @item -N @itemx --notify For each listed job, send mail to the user who requested the job. The mail is identical to that sent by the @option{-M} or @option{--mail} options. @item -W comment @itemx --comment comment Specify a comment to be included in mail sent with the @option{-M}, @option{--mail}, @option{-N}, or @option{--notify} options. @item -B lines @itemx --mail-lines lines When the @option{-M}, @option{--mail}, @option{-N}, or @option{--notify} options are used to send mail about an execution with standard input, this option controls the number of lines of standard input to include in the message. The default is 100. @item -Q @itemx --no-list Do not actually list the job, but only take any actions indicated by the @option{-i}, @option{--prompt}, @option{-K}, @option{--kill-all}, @option{-M}, @option{--mail}, @option{-N} or @option{--notify} options. @item -x type @itemx --debug type @itemx -I file @itemx --config file @itemx -v @itemx --version @itemx --help @xref{Standard Options}. @end table @node uustat Examples, , uustat Options, Invoking uustat @subsection uustat Examples @example uustat --all @end example Display status of all jobs. A sample output line is as follows: @smallexample bugsA027h bugs ian 04-01 13:50 Executing rmail ian@@airs.com (sending 12 bytes) @end smallexample The format is @example jobid system user queue-date command (size) @end example The jobid may be passed to the @option{--kill} or @option{--rejuvenate} options. The size indicates how much data is to be transferred to the remote system, and is absent for a file receive request. The @option{--system}, @option{--not-system}, @option{--user}, @option{--not-user}, @option{--command}, @option{--not-command}, @option{--older-than}, and @option{--younger-than} options may be used to control which jobs are listed. @example uustat --executions @end example Display status of queued up execution requests. A sample output line is as follows: @smallexample bugs bugs!ian 05-20 12:51 rmail ian @end smallexample The format is @example system requestor queue-date command @end example The @option{--system}, @option{--not-system}, @option{--user}, @option{--not-user}, @option{--command}, @option{--not-command}, @option{--older-than}, and @option{--younger-than} options may be used to control which requests are listed. @example uustat --list @end example Display status for all systems with queued up commands. A sample output line is as follows: @smallexample bugs 4C (1 hour) 0X (0 secs) 04-01 14:45 Dial failed @end smallexample This indicates the system, the number of queued commands, the age of the oldest queued command, the number of queued local executions, the age of the oldest queued execution, the date of the last conversation, and the status of that conversation. @example uustat --status @end example Display conversation status for all remote systems. A sample output line is as follows: @smallexample bugs 04-01 15:51 Conversation complete @end smallexample This indicates the system, the date of the last conversation, and the status of that conversation. If the last conversation failed, @command{uustat} will indicate how many attempts have been made to call the system. If the retry period is currently preventing calls to that system, @command{uustat} also displays the time when the next call will be permitted. @example uustat --ps @end example Display the status of all processes holding UUCP locks. The output format is system dependent, as @command{uustat} simply invokes @command{ps} on each process holding a lock. @example uustat -c rmail -o 168 -K -Q -M -N -W "Queued for over 1 week" @end example This will kill all @samp @option{-W} option. The @option{-Q} option prevents any of the jobs from being listed on the terminal, so any output from the program will be error messages. @node Invoking uuname, Invoking uulog, Invoking uustat, Invoking the UUCP Programs @section Invoking uuname @example uuname [-a] [--aliases] uuname -l uuname --local @end example By default, the @command{uuname} program simply lists the names of all the remote systems mentioned in the UUCP configuration files. The @command{uuname} program may also be used to print the UUCP name of the local system. The @command{uuname} program is mainly for use by shell scripts. The following options may be given to @command{uuname}. @table @option @item -a @itemx --aliases List all aliases for remote systems, as well as their canonical names. Aliases may be specified in the @file{sys} file (@pxref{Naming the System}). @item -l @itemx --local Print the UUCP name of the local system, rather than listing the names of all the remote systems. @item -x type @itemx --debug type @itemx -I file @itemx --config file @itemx -v @itemx --version @itemx --help @xref{Standard Options}. @end table @node Invoking uulog, Invoking uuto, Invoking uuname, Invoking the UUCP Programs @section Invoking uulog @example uulog [-#] [-n lines] [-sf system] [-u user] [-DSF] [--lines lines] [--system system] [--user user] [--debuglog] [--statslog] [--follow] [--follow=system] @end example The @command{uulog} program may be used to display the UUCP log file. Different options may be used to select which parts of the file to display. @table @option @item -# @itemx -n lines @itemx --lines lines Here @samp{#} is a number; e.g., @option{-10}. The specified number of lines is displayed from the end of the log file. The default is to display the entire log file, unless the @option{-f}, @option{-F}, or @option{--follow} options are used, in which case the default is to display 10 lines. @item -s system @itemx --system system Display only log entries pertaining to the specified system. @item -u user @itemx --user user Display only log entries pertaining to the specified user. @item -D @itemx --debuglog Display the debugging log file. @item -S @itemx --statslog Display the statistics log file. @item -F @itemx --follow Keep displaying the log file forever, printing new lines as they are appended to the log file. @item -f system @itemx --follow=system Keep displaying the log file forever, displaying only log entries pertaining to the specified system. @item -X type @itemx --debug type @itemx -I file @itemx --config file @itemx -v @itemx --version @itemx --help @xref{Standard Options}. Note that @command{uulog} specifies the debugging type using @option{-X} rather than the usual @option{-x}. @end table The operation of @command{uulog} depends to some degree upon the type of log files generated by the UUCP programs. This is a compile time option. If the UUCP programs have been compiled to use HDB style log files, @command{uulog} changes in the following ways: @itemize @bullet @item The new options @option{-x} and @option{--uuxqtlog} may be used to list the @command{uuxqt} log file. @item It is no longer possible to omit all arguments: one of @option{-s}, @option{--system}, @option{-f}, @option{--follow=system}, @option{-D}, @option{--debuglog}, @option{-S}, @option{--statslog}, @option{-x}, or @option{--uuxqtlog} must be used. @item The option @option{--system ANY} may be used to list log file entries which do not pertain to any particular system. @end itemize @node Invoking uuto, Invoking uupick, Invoking uulog, Invoking the UUCP Programs @section Invoking uuto @example uuto files... system!user @end example The @command{uuto} program may be used to conveniently send files to a particular user on a remote system. It will arrange for mail to be sent to the remote user when the files arrive on the remote system, and he or she may easily retrieve the files using the @command{uupick} program (@pxref{Invoking uupick}). Note that @command{uuto} does not provide any security---any user on the remote system can examine the files. The last argument specifies the system and user name to which to send the files. The other arguments are the files or directories to be sent. The @command{uuto} program is actually just a trivial shell script which invokes the @command{uucp} program with the appropriate arguments. Any option which may be given to @command{uucp} may also be given to @command{uuto}. @xref{Invoking uucp}. @need 2000 @node Invoking uupick, Invoking cu, Invoking uuto, Invoking the UUCP Programs @section Invoking uupick @example uupick [-s system] [--system system] @end example The @command{uupick} program is used to conveniently retrieve files transferred by the @command{uuto} program. For each file transferred by @command{uuto}, @command{uupick} will display the source system, the file name, and whether the name refers to a regular file or a directory. It will then wait for the user to specify an action to take. One of the following commands must be entered: @table @samp @item q Quit out of @command{uupick}. @item RETURN Skip the file. @item m [directory] Move the file or directory to the specified directory. If no directory is specified, the file is moved to the current directory. @item a [directory] Move all files from this system to the specified directory. If no directory is specified, the files are moved to the current directory. @item p List the file on standard output. @item d Delete the file. @item ! [command] Execute @samp{command} as a shell escape. @end table The @option{-s} or @option{--system} option may be used to restrict @command{uupick} to only present files transferred from a particular system. The @command{uupick} program also supports the standard UUCP program options; see @ref{Standard Options}. @need 2000 @node Invoking cu, Invoking uucico, Invoking uupick, Invoking the UUCP Programs @section Invoking cu @menu * cu Description:: Description of cu * cu Commands:: Commands Supported by cu * cu Variables:: Variables Supported by cu * cu Options:: Options Supported by cu @end menu @node cu Description, cu Commands, Invoking cu, Invoking cu @subsection cu Description @example cu [options] [system | phone | "dir"] @end example The @command{cu} program is used to call up another system and act as a dial in terminal. It can also do simple file transfers with no error checking. The @command{cu} program takes a single non-option argument. If the argument is the string @samp @option{-z} or @option{--system} options may be used to name a system beginning with a digit, and the @option{-c} or @option{--phone} options may be used to name a phone number that does not begin with a digit. The @command{cu} program locates a port to use in the UUCP configuration files. If a simple system name is given, it will select a port appropriate for that system. The @option{-p}, @option{--port}, @option{-l}, @option{--line}, @option{-s}, and @option{--speed} options may be used to control the port selection. When a connection is made to the remote system, @command{cu} forks into two processes. One reads from the port and writes to the terminal, while the other reads from the terminal and writes to the port. @node cu Commands, cu Variables, cu Description, Invoking cu @subsection cu Commands The @command{cu} program provides several commands that may be used during the conversation. The commands all begin with an escape character, which by default is @kbd{~} (tilde). The escape character is only recognized at the beginning of a line. To send an escape character to the remote system at the start of a line, it must be entered twice. All commands are either a single character or a word beginning with @kbd{%} (percent sign). The @command{cu} program recognizes the following commands. @table @samp @item ~. Terminate the conversation. @item ~! command Run command in a shell. If command is empty, starts up a shell. @item ~$ command Run command, sending the standard output to the remote system. @item ~| command Run command, taking the standard input from the remote system. @item ~+ command Run command, taking the standard input from the remote system and sending the standard output to the remote system. @item ~#, ~%break Send a break signal, if possible. @item ~c directory, ~%cd directory Change the local directory. @item ~> file Send a file to the remote system. This just dumps the file over the communication line. It is assumed that the remote system is expecting it. @item ~< Receive a file from the remote system. This prompts for the local file name and for the remote command to execute to begin the file transfer. It continues accepting data until the contents of the @samp{eofread} variable are seen. @item ~p from to @itemx ~%put from to Send a file to a remote Unix system. This runs the appropriate commands on the remote system. @item ~t from to @itemx ~%take from to Retrieve a file from a remote Unix system. This runs the appropriate commands on the remote system. @item ~s variable value Set a @command{cu} variable to the given value. If value is not given, the variable is set to @samp{true}. @item ~! variable Set a @command{cu} variable to @samp{false}. @item ~z Suspend the cu session. This is only supported on some systems. On systems for which @kbd{^Z} may be used to suspend a job, @samp{~^Z} will also suspend the session. @item ~%nostop Turn off XON/XOFF handling. @item ~%stop Turn on XON/XOFF handling. @item ~v List all the variables and their values. @item ~? List all commands. @end table @node cu Variables, cu Options, cu Commands, Invoking cu @subsection cu Variables The @command{cu} program also supports several variables. They may be listed with the @samp{~v} command, and set with the @samp{~s} or @samp{~!} commands. @table @samp @item escape The escape character. The default is @kbd{~} (tilde). @item delay If this variable is true, @command{cu} will delay for a second, after recognizing the escape character, before printing the name of the local system. The default is true. @item eol The list of characters which are considered to finish a line. The escape character is only recognized after one of these is seen. The default is @kbd{carriage return}, @kbd{^U}, @kbd{^C}, @kbd{^O}, @kbd{^D}, @kbd{^S}, @kbd{^Q}, @kbd{^R}. @item binary Whether to transfer binary data when sending a file. If this is false, then newlines in the file being sent are converted to carriage returns. The default is false. @item binary-prefix A string used before sending a binary character in a file transfer, if the @samp{binary} variable is true. The default is @samp{^V}. @item echo-check Whether to check file transfers by examining what the remote system echoes back. This probably doesn't work very well. The default is false. @item echonl The character to look for after sending each line in a file. The default is carriage return. @item timeout The timeout to use, in seconds, when looking for a character, either when doing echo checking or when looking for the @samp{echonl} character. The default is 30. @item kill The character to use delete a line if the echo check fails. The default is @kbd{^U}. @item resend The number of times to resend a line if the echo check continues to fail. The default is 10. @item eofwrite The string to write after sending a file with the @samp{~>} command. The default is @samp{^D}. @item eofread The string to look for when receiving a file with the @samp{ ~<} command. The default is @samp{$}, which is intended to be a typical shell prompt. @item verbose Whether to print accumulated information during a file transfer. The default is true. @end table @node cu Options, , cu Variables, Invoking cu @subsection cu Options The following options may be given to @command{cu}. @table @option @item -e @itemx --parity=even Use even parity. @item -o @itemx --parity=odd Use odd parity. @item --parity=none Use no parity. No parity is also used if both @option{-e} and @option{-o} are given. @item -h @itemx --halfduplex Echo characters locally (half-duplex mode). @item --nostop Turn off XON/XOFF handling (it is on by default). @item -E char @itemx --escape char Set the escape character. Initially @kbd{~} (tilde). To eliminate the escape character, use @samp{-E ''}. @item -z system @itemx --system system The system to call. @item -c phone-number @itemx --phone phone-number The phone number to call. @item -p port @itemx -a port @itemx --port port Name the port to use. @item -l line @itemx --line line Name the line to use by giving a device name. This may be used to dial out on ports that are not listed in the UUCP configuration files. Write access to the device is required. @item -s speed @itemx -# @itemx --speed speed The speed (baud rate) to use. Here, @option{-#} means an actual number; e.g., @option{-9600}. @item -n @itemx --prompt Prompt for the phone number to use. @item -d Enter debugging mode. Equivalent to @option{--debug all}. @item -x type @itemx --debug type @itemx -I file @itemx --config file @itemx -v @itemx --version @itemx --help @xref{Standard Options}. @end table @node Invoking uucico, Invoking uuxqt, Invoking cu, Invoking the UUCP Programs @section Invoking uucico @menu * uucico Description:: Description of uucico * uucico Options:: Options Supported by uucico @end menu @node uucico Description, uucico Options, Invoking uucico, Invoking uucico @subsection uucico Description @example uucico [options] @end example The @command{uucico} daemon processes file transfer requests queued by @command{uucp} and @command{uux}. It is started when @command{uucp} or @command{uux} is run (unless they are given the @option{-r} or @option{--nouucico} options). It is also typically started periodically using entries in the @file{crontab} table(s). When @command{uucico} is invoked with @option{-r1}, @option{--master}, @option{-s}, @option{--system}, or @option{-S}, the daemon will place a call to a remote system, running in master mode. Otherwise the daemon will start in slave mode, accepting a call from a remote system. Typically a special login name will be set up for UUCP which automatically invokes @command{uucico} when a remote system calls in and logs in under that name. When @command{uucico} terminates, it invokes the @command{uuxqt} daemon, unless the @option{-q} or @option{--nouuxqt} options were given; @command{uuxqt} executes any work orders created by @command{uux} on a remote system, and any work orders created locally which have received remote files for which they were waiting. If a call fails, @command{uucico} will normally refuse to retry the call until a certain (configurable) amount of time has passed. This may be overriden by the @option{-f}, @option{--force}, or @option{-S} options. The @option{-l}, @option{--prompt}, @option{-e}, or @option{--loop} options may be used to force @command{uucico} to produce its own prompts of @samp{login: } and @samp{Password:}. When another @command{uucico} daemon calls in, it will see these prompts and log in as usual. The login name and password will normally be checked against a separate list kept specially for @command{uucico}, rather than the @file{/etc/passwd} file (@pxref{Configuration File Names}). It is possible, on some systems, to configure @command{uucico} to use @file{/etc/passwd}. The @option{-l} or @option{--prompt} options will prompt once and then exit; in this mode the UUCP administrator, or the superuser, may use the @option{-u} or @option{--login} option to force a login name, in which case @command{uucico} will not prompt for one. The @option{-e} or @option{--loop} options will prompt again after the first session is over; in this mode @command{uucico} will permanently control a port. If @command{uucico} receives a @code{SIGQUIT}, @code{SIGTERM} or @code{SIGPIPE} signal, it will cleanly abort any current conversation with a remote system and exit. If it receives a @code{SIGHUP} signal it will abort any current conversation, but will continue to place calls to (if invoked with @option{-r1} or @option{--master}) and accept calls from (if invoked with @option{-e} or @option{--loop}) other systems. If it receives a @code{SIGINT} signal it will finish the current conversation, but will not place or accept any more calls. @node uucico Options, , uucico Description, Invoking uucico @subsection uucico Options The following options may be given to @command{uucico}. @table @option @item -r1 @itemx --master Start in master mode: call out to a remote system. Implied by @option{-s}, @option{--system}, or @option{-S}. If no system is specified, sequentially call every system for which work is waiting to be done. @item -r0 @itemx --slave Start in slave mode. This is the default. @item -s system @itemx --system system Call the specified system. @item -S system Call the specified system, ignoring any required wait. This is equivalent to @samp{-s system -f}. @item -f @itemx --force Ignore any required wait for any systems to be called. @item -l @itemx --prompt Prompt for login name and password using @samp{login: } and @samp{Password:}. This allows @command{uucico} to be easily run from @command{inetd}. The login name and password are checked against the UUCP password file, which need not be @file{/etc/passwd}. The @option{--login} option may be used to force a login name, in which cause @command{uucico} will only prompt for a password. @item -p port @itemx --port port Specify a port to call out on or to listen to. @item -e @itemx --loop Enter an endless loop of login/password prompts and slave mode daemon execution. The program will not stop by itself; you must use @command{kill} to shut it down. @item -w @itemx --wait After calling out (to a particular system when @option{-s}, @option{--system}, or @option{-S} is specifed, or to all systems which have work when just @option{-r1} or @option{--master} is specifed), begin an endless loop as with @option{--loop}. @item -q @itemx --nouuxqt Do not start the @command{uuxqt} daemon when finished. @item -c @itemx --quiet If no calls are permitted at this time, then don't make the call, but also do not put an error message in the log file and do not update the system status (as reported by @command. @item -C @itemx --ifwork Only call the system named by @option{-s}, @option{--system}, or @option{-S} if there is work for that system. @item -D @itemx --nodetach Do not detach from the controlling terminal. Normally @command{uucico} detaches from the terminal before each call out to another system and before invoking @command{uuxqt}. This option prevents this. @item -u name @itemx --login name Set the login name to use instead of that of the invoking user. This option may only be used by the UUCP administrator or the superuser. If used with @option{--prompt}, this will cause @command{uucico} to prompt only for the password, not the login name. @item -z @itemx --try-next If a call fails after the remote system is reached, try the next alternate rather than simply exiting. @item -i type @itemx --stdin type Set the type of port to use when using standard input. The only supported port type is TLI, and this is only available on machines which support the TLI networking interface. Specifying @samp{-i TLI} causes @command{uucico} to use TLI calls to perform I/O. @item -X type Same as the standard option @option{-x type}. Provided for historical compatibility. @item -x type @itemx --debug type @itemx -I file @itemx --config file @itemx -v @itemx --version @itemx --help @xref{Standard Options}. @end table @node Invoking uuxqt, Invoking uuchk, Invoking uucico, Invoking the UUCP Programs @section Invoking uuxqt @example uuxqt [-c command] [-s system] [--command command] [--system system] @end example The @command{uuxqt} daemon executes commands requested by @command{uux} from either the local system or from remote systems. It is started automatically by the @command{uucico} daemon (unless @command{uucico} is given the @option{-q} or @option{--nouuxqt} options). There is normally no need to run @command{uuxqt}, since it will be invoked by @command{uucico}. However, @command{uuxqt} can be invoked directly to provide greater control over the processing of the work queue. Multiple invocations of @command{uuxqt} may be run at once, as controlled by the @code{max-uuxqts} configuration command; see @ref{Miscellaneous (config)}. The following options may be given to @command{uuxqt}. @table @option @item -c command @itemx --command command Only execute requests for the specified command. For example, @samp{uuxqt --command rmail}. @item -s system @itemx --system system Only execute requests originating from the specified system. @item -x type @itemx --debug type @itemx -I file @itemx --config @itemx -v @itemx --version @itemx --help @xref{Standard Options}. @end table @node Invoking uuchk, Invoking uuconv, Invoking uuxqt, Invoking the UUCP Programs @section Invoking uuchk @example uuchk [-s system] [--system system] @end example The @command{uuchk} program displays information read from the UUCP configuration files. It should be used to ensure that UUCP has been configured correctly. The @option{-s} or @option{--system} options may be used to display the configuration for just the specified system, rather than for all systems. The @command{uuchk} program also supports the standard UUCP program options; see @ref{Standard Options}. @need 2000 @node Invoking uuconv, Invoking uusched, Invoking uuchk, Invoking the UUCP Programs @section Invoking uuconv @example uuconv -i type -o type [-p program] [--program program] uuconv --input type --output type [-p program] [--program program] @end example The @command{uuconv} program converts UUCP configuration files from one format to another. The type of configuration file to read is specified using the @option{-i} or @option{--input} options. The type of configuration file to write is specified using the @option{-o} or @option{--output} options. The supported configuration file types are @samp{taylor}, @samp{v2}, and @samp{hdb}. For a description of the @samp{taylor} configuration files, see @ref{Configuration Files}. The other types of configuration files are used by traditional UUCP packages, and are not described in this manual. An input configuration of type @samp{v2} or @samp{hdb} is read from a compiled in directory (specified by @samp{oldconfigdir} in @file{Makefile}). An input configuration of type @samp{taylor} is read from a compiled in directory by default, but may be overridden with the standard @option{-I} or @option{--config} options (@pxref{Standard Options}). The output configuration is written to files in the directory in which @command{uuconv} is run. Some information in the input files may not be representable in the desired output format, in which case @command{uuconv} will silently discard it. The output of @command{uuconv} should be carefully checked before it is used. The @command{uuchk} program may be used for this purpose; see @ref{Invoking uuchk}. The @option{-p} or @option{--program} option may be used to convert specific @command{cu} configuration information, rather than the default of only converting the @command{uucp} configuration information; see @ref{config File}. The @command{uuchk} program also supports the standard UUCP program options; see @ref{Standard Options}. @node Invoking uusched, , Invoking uuconv, Invoking the UUCP Programs @section Invoking uusched The @command{uusched} program is actually just a shell script which invokes the @command{uucico} daemon. It is provided for backward compatibility. It causes @command{uucico} to call all systems for which there is work. Any option which may be given to @command{uucico} may also be given to @command{uusched}. @xref{Invoking uucico}. @node Installing Taylor UUCP, Using Taylor UUCP, Invoking the UUCP Programs, Top @chapter @end menu @node Compilation, Testing the Compilation, Installing Taylor UUCP, Installing Taylor UUCP @section Compiling Taylor UUCP (@pxref{Configuration}). Follow these steps to compile the source code. @enumerate @item Take a look at the top of @file{Makefile.in} and set the appropriate values for your system. These control where the programs are installed and which user on the system owns them (normally they will be owned by a special user @command{uucp} rather than a real person; they should probably not be owned by @code{root}). @item Run the shell script @code{configure}. This script was generated using the @command{autoconf} program written by David MacKenzie of the Free Software Foundation. It takes a while to run. It will generate the file @file{config.h} based on @file{config.h.in}, and, for each source code directory, will generate @file{Makefile} based on @file{Makefile.in}. You can pass certain arguments to @code{configure} in the environment. Because @code{configure} will compile little test programs to see what is available on your system, you must tell it how to run your compiler. It recognizes the following environment variables: @table @samp @item CC The C compiler. If this is not set, then if @code{configure} can find @samp{gcc} it will use it, otherwise it will use @samp{cc}. @item CFLAGS Flags to pass to the C compiler when compiling the actual code. If this is not set, @code{configure} will use @option{-g}. @item LDFLAGS Flags to pass to the C compiler when only linking, not compiling. If this is not set, @code{configure} will use the empty string. @item LIBS Libraries to pass to the C compiler. If this is not set, @code{configure} will use the empty string. @item INSTALL The program to run to install UUCP in the binary directory. If this is not set, then if @code{configure} finds the BSD @command{install} program, it will set this to @samp{install -c}; otherwise, it will use @samp{cp}. @end table Suppose, for example, you want to set the environment variable @samp{CC} to @samp{rcc}. If you are using @command{sh}, @command{bash}, or @command{ksh}, invoke @code{configure} as @samp{CC=rcc configure}. If you are using @command{csh}, do @samp{setenv CC rcc; sh configure}. On some systems you will want to use @samp{LIBS=-lmalloc}. On Xenix derived versions of Unix do not use @samp{LIBS=-lx} because this will bring in the wrong versions of certain routines; if you want to use @option{-lx} you must specify @samp{LIBS=-lc -lx}. You can also pass other arguments to @code{configure} on the command line. Use @samp{configure --help} for a complete list. Of particular interest: @table @option @item --prefix=@var{dirname} The directory under which all files are installed. Default @file{/usr/local}. @item --with-newconfigdir=@var{dirname} The directory in which to find new style configuration files. Default @file{@var{prefix}/conf/uucp}. @item --with-oldconfigdir=@var{dirname} The directory in which to find old style configuration files. Default @file{/usr/lib/uucp}. @end table If @code{configure} fails for some reason, or if you have a very weird system, you may have to configure the package by hand. To do this, copy the file @file{config.h.in} to @file{config.h} and edit it for your system. Then for each source directory (the top directory, and the subdirectories @file{lib}, @file{unix}, and @file{uuconf}) copy @file{Makefile.in} to @file{Makefile}, find the words within @kbd{@@} characters, and set them correctly for your system. @item Igor V. Semenyuk provided this (lightly edited) note about ISC Unix 3.0. The @code{configure} script will default to passing @option{-posix} to @command{gcc}. However, using @option{-posix} changes the environment to POSIX, and on ISC 3.0, at least, the default for @code{POSIX_NO_TRUNC} is 1. This can lead to a problem when @command{uuxqt} executes @command{rmail}. @code{IDA sendmail} has dbm configuration files named @file{mailertable.@{dir,pag@}}. Notice these names are 15 characters long. When @command{uuxqt} compiled with the @option{-posix} executes @command{rmail}, which in turn executes @command{sendmail}, the later is run under the POSIX environment too. This leads to @command{sendmail} bombing out with @samp{'error opening 'M' database: name too long' (mailertable.dir)}. It's rather obscure behaviour, and it took me a day to find out the cause. I don't use the @option{-posix} switch; instead, I run @command{gcc} with @option{-D_POSIX_SOURCE}, and add @option{-lcposix} to @samp{LIBS}. @item On some versions of BSDI there is a bug in the shell which causes the default value for @samp{CFLAGS} to be set incorrectly. If @samp{echo $@{CFLAGS--g@}} echoes @samp{g} rather than @option{-g}, then you must set @samp{CFLAGS} in the environment before running configure. There is a patch available from BSDI for this bug. (Reported by David Vrona). @item On AIX 3.2.5, and possibly other versions, @samp{cc -E} does not work, reporting @samp{Option NOROCONST is not valid}. Test this before running configure by doing something like @samp{touch /tmp/foo.c; cc -E /tmp/foo.c}. This may give a warning about the file being empty, but it should not give the @samp{Option NOROCONST} warning. The workaround is to remove the @samp{,noroconst} entry from the @samp{options} clause in the @samp{cc} stanza in @file{/etc/xlc.cfg}. (Reported by Chris Lewis). @item You should verify that @code{configure} worked correctly by checking @file{config.h} and the instances of @file{Makefile}. @item Edit @file{policy.h} for your local system. The comments explain the various choices. The default values are intended to be reasonable, so you may not have to make any changes. You must decide what type of configuration files to use; for more information on the choices, see @ref{Configuration}. You must also decide what sort of spool directory you want to use. If this is a new installation, I recommend @samp{SPOOLDIR_TAYLOR}; otherwise, select the spool directory corresponding to your existing UUCP package. @item Type @samp{make} to compile everything. The @file{tstuu.c} file is not particularly portable; if you can't figure out how to compile it you can safely ignore it, as it is only used for testing. To use STREAMS pseudo-terminals, tstuu.c must be compiled with @option{-DHAVE_STREAMS_PTYS}; this is not determined by the configure script. If you have any other problems there is probably a bug in the @code{configure} script. @item Please report any problems you have. That is the only way they will get fixed for other people. Supply a patch if you can (@pxref{Patches}), or just ask for help. @end enumerate @node Testing the Compilation, Installing the Binaries, Compilation, Installing Taylor UUCP @section Testing the Compilation If your system supports pseudo-terminals, and you compiled the code to support the new style of configuration files (@code{HAVE_TAYLOR_CONFIG} was set to 1 in @file{policy.h}), you should be able to use the @command{tstuu} program to test the @command{uucico} daemon. If your system supports STREAMS based pseudo-terminals, you must compile tstuu.c with @option{-DHAVE_STREAMS_PTYS}. (The STREAMS based code was contributed by Marc Boucher). To run @command{tstuu}, just type @samp{tstuu} with no arguments. You must run it in the compilation directory, since it runs @file{./uucp}, @file{./uux} and @file{./uucico}. The @command{tstuu} program will run a lengthy series of tests (it takes over ten minutes on a slow VAX). You will need a fair amount of space available in @file{/usr/tmp}. You will probably want to put it in the background. Do not use @kbd{^Z}, because the program traps on @code{SIGCHLD} and winds up dying. The @command{tstuu} program will create a directory @file{/usr/tmp/tstuu} and fill it with configuration files, and create spool directories @file{/usr/tmp/tstuu/spool1} and @file{/usr/tmp/tstuu/spool2}. If your system does not support the @code{FIONREAD} call, the @samp{tstuu} program will run very slowly. This may or may not get fixed in a later version. The @command{tstuu} program will finish with an execute file named @file{X.@var{something}} and a data file named @file{D.@var{something}} in the directory @file{/usr/tmp/tstuu/spool1} (or, more likely, in subdirectories, depending on the choice of @code{SPOOLDIR} in @file{policy.h}). Two log files will be created in the directory @file{/usr/tmp/tstuu}. They will be named @file{Log1} and @file{Log2}, or, if you have selected @code{HAVE_HDB_LOGGING} in @file{policy.h}, @file{Log1/uucico/test2} and @file{Log2/uucico/test1}. There should be no errors in the log files. You can test @command{uuxqt} with @samp{./uuxqt -I /usr/tmp/tstuu/Config1}. This should leave a command file @file{C.@var{something}} and a data file @file{D.@var{something}} in @file{/usr/tmp/tstuu/spool1} or in subdirectories. Again, there should be no errors in the log file. Assuming you compiled the code with debugging enabled, the @option{-x} switch can be used to set debugging modes; see the @code{debug} command for details (@pxref{Debugging Levels}). Use @option{-x all} to turn on all debugging and generate far more output than you will ever want to see. The @command{uucico} daemons will put debugging output in the files @file{Debug1} and @file{Debug2} in the directory @file{/usr/tmp/tstuu}. After that, you're pretty much on your own. On some systems you can also use @command{tstuu} to test @command{uucico} against the system @command{uucico}, by using the @option{-u} switch. For this to work, change the definitions of @code{ZUUCICO_CMD} and @code{UUCICO_EXECL} at the top of @file{tstuu.c} to something appropriate for your system. The definitions in @file{tstuu.c} are what I used for Ultrix 4.0, on which @file{/usr/lib/uucp/uucico} is particularly obstinate about being run as a child; I was only able to run it by creating a login name with no password whose shell was @file{/usr/lib/uucp/uucico}. Calling login in this way will leave fake entries in @file{wtmp} and @file{utmp}; if you compile @file{tstout.c} (in the @file{contrib} directory) as a setuid @code{root} program, @command{tstuu} will run it to clear those entries out. On most systems, such hackery should not be necessary, although on SCO I had to su to @code{root} (@code{uucp} might also have worked) before I could run @file{/usr/lib/uucp/uucico}. You can test @command{uucp} and @command{uux} (give them the @option{-r} switch to keep them from starting @command{uucico}) to make sure they create the right sorts of files. Unfortunately, if you don't know what the right sorts of files are, I'm not going to tell you here. If you can not run @command{tstuu}, or if it fails inexplicably, don't worry about it too much. On some systems @command{tstuu} will fail because of problems using pseudo terminals, which will not matter in normal use. The real test of the package is talking to another system. @node Installing the Binaries, Configuration, Testing the Compilation, Installing Taylor UUCP @section Installing the Binaries You can install the executable files by becoming @code{root} and typing @samp{make install}. Or you can look at what @s @samp{make uninstall}. Note that by default the programs are compiled with debugging information, and they are not stripped when they are installed. You may want to strip the installed programs to save disk space. For more information, see your system documentation for the @command{strip} program. Of course, simply installing the executable files is not enough. You must also arrange for them to be used correctly. @node Configuration, Testing the Installation, Installing the Binaries, Installing Taylor UUCP @section Configuring Taylor UUCP You will have to decide what types of configuration files you want to use. This package supports a new sort of configuration file; see @ref{Configuration Files}. It also supports V2 configuration files (@file{L.sys}, @file{L-devices}, etc.) and HDB configuration files (@file{Systems}, @file @file{acucap} or @file{modemcap}); however, V2 configuration files can be used with a new style dial file (@pxref{dial File}), or with a HDB @file{Dialers} file. Use of HDB configuration files has two known bugs. A blank line in the middle of an entry in the @file{Permissions} file will not be ignored as it should be. Dialer programs, as found in some versions of HDB, are not recognized directly. If you must use a dialer program, rather than an entry in @file{Devices}, you must use the @code{chat-program} command in a new style dial file; see @ref{dial File}. You will have to invoke the dialer program via a shell script or another program, since an exit code of 0 is required to recognize success; the @code{dialHDB} program in the @file{contrib} directory may be used for this purpose. The @command{uuconv} (@pxref @ref{Configuration Files} for details on how to do this. After writing the configuration files, use the @command{uuchk} program to verify that they are what you expect; see @ref{Invoking uuchk}. @node Testing the Installation, , Configuration, Installing Taylor UUCP @section Testing the Installation After you have written the configuration files, and verified them with the @command{uuchk} program (@pxref{Invoking uuchk}), you must check that UUCP can correctly contact another system. Tell @command{uucico} to dial out to the system by using the @option{-s} system switch (e.g., @samp{uucico -s uunet}). The log file should tell you what happens. The exact location of the log file depends upon the settings in @file{policy.h} when you compiled the program, and on the use of the @code{logfile} command in the @file{config} file. Typical locations are @file{/usr/spool/uucp/Log} or a subdirectory under @file{/usr/spool/uucp/.Log}. If you compiled the code with debugging enabled, you can use debugging mode to get a great deal of information about what sort of data is flowing back and forth; the various possibilities are described with the @code{debug} command (@pxref{Debugging Levels}). When initially setting up a connection @samp{-x chat} is probably the most useful (e.g., @samp{uucico -s uunet -x chat}); you may also want to use @samp{-x handshake,incoming,outgoing}. You can use @option{-x} multiple times on one command line, or you can give it comma separated arguments as in the last example. Use @samp{-x all} to turn on all possible debugging information. The debugging information is written to a file, normally @file{/usr/spool/uucp/Debug}, although the default can be changed in @file{policy.h}, and the @file{config} file can override the default with the @code{debugfile} command. The debugging file may contain passwords and some file contents as they are transmitted over the line, so the debugging file is only readable by the @code{uucp} user. You can use the @option{-f} switch to force @command{uucico} to call out even if the last call failed recently; using @option{-S} when naming a system has the same effect. Otherwise the status file (in the @file{.Status} subdirectory of the main spool directory, normally @file{/usr/spool/uucp}) (@pxref{Status Directory}) will prevent too many attempts from occurring in rapid succession. On older System V based systems which do not have the @code{setreuid} system call, problems may arise if ordinary users can start an execution of @command{uuxqt}, perhaps indirectly via @command{uucp} or @command{uux}. UUCP jobs may wind up executing with a real user ID of the user who invoked @command{uuxqt}, which can cause problems if the UUCP job checks the real user ID for security purposes. On such systems, it is safest to put @samp{run-uuxqt never} (@pxref{Miscellaneous (config)}) in the @file{config} file, so that @command{uucico} never starts @command{uuxqt}, and invoke @command{uuxqt} directly from a @file @command{uucico} dial out'' are impossible to answer without much more information. @node Using Taylor UUCP, Configuration Files, Installing Taylor UUCP, Top @chapter @end menu @node Calling Other Systems, Accepting Calls, Using Taylor UUCP, Using Taylor UUCP @section Calling Other Systems @cindex calling out By default @command{uucp} and @command{uux} will automatically start up @command{uucico} to call another system whenever work is queued up. However, the call may fail, or you may have put in time restrictions which prevent the call at that time (perhaps because telephone rates are high) (@pxref @command{uucico}. These periodic invocations are normally triggered by entries in the @file{crontab} file. The exact format of @file{crontab} files, and how new entries are added, varies from system to system; check your local documentation (try @samp{man cron}). To attempt to call all systems with outstanding work, use the command @samp{uucico -r1}. To attempt to call a particular system, use the command @samp{uucico -s @var{system}}. To attempt to call a particular system, but only if there is work for it, use the command @samp{uucico -C -s @var{system}}. (@pxref{Invoking uucico}). A common case is to want to try to call a system at a certain time, with periodic retries if the call fails. A simple way to do this is to create an empty UUCP command file, known as a @dfn{poll file}. If a poll file exists for a system, then @samp{uucico -r1} will place a call to it. If the call succeeds, the poll file will be deleted. A poll file can be easily created using the @samp{uux} command, by requesting the execution of an empty command. To create a poll file for @var{system}, just do something like this: @example uux -r @var{system}! @end example The @option{-r} tells @samp{uux} to not start up @samp{uucico} immediately. Of course, if you do want @samp{uucico} to start up right away, omit the @option{-r}; if the call fails, the poll file will be left around to cause a later call. For example, I use the following crontab entries locally: @example 45 * * * * /bin/echo /usr/lib/uucp/uucico -r1 | /bin/su uucpa 40 4,10,15 * * * /usr/bin/uux -r uunet! @end example Every hour, at 45 minutes past, this will check if there is any work to be done, and, if there is, will call the appropriate system. Also, at 4:40am, 10:40am, and 3:40pm, this will create a poll file file for @samp{uunet}, forcing the next run of @command{uucico} to call @samp{uunet}. @node Accepting Calls, Mail and News, Calling Other Systems, Using Taylor UUCP @section Accepting Calls @cindex calling in @cindex accepting calls To accept calls from another system, you must arrange matters such that when that system calls in, it automatically invokes @command{uucico} on your system. The most common arrangement is to create a special user name and password for incoming UUCP calls. This user name typically uses the same user ID as the regular @code{uucp} user (Unix permits several user names to share the same user ID). The shell for this user name should be set to @command{uucico}. Here is a sample @file{/etc/passwd} line to accept calls from a remote system named airs: @example Uairs:@var{password}:4:8:airs UUCP:/usr/spool/uucp:/usr/lib/uucp/uucico @end example The details may vary on your system. You must use reasonable user and group ID's. You must use the correct file name for @command{uucico}. The @var{password} must appear in the UUCP configuration files on the remote system, but will otherwise never be seen or typed by a human. Note that @command{uucico} appears as the login shell, and that it will be run with no arguments. This means that it will start in slave mode and accept an incoming connection. @xref{Invoking uucico}. On some systems, creating an empty file named @file{.hushlogin} in the home directory will skip the printing of various bits of information when the remote @command{uucico} logs in, speeding up the UUCP connection process. For the greatest security, each system which calls in should use a different user name, each with a different password, and the @code{called-login} command should be used in the @file{sys} file to ensure that the correct login name is used. @xref{Accepting a Call}, and see @ref{Security}. If you never need to dial out from your system, but only accept incoming calls, you can arrange for @command{uucico} to handle logins itself, completely controlling the port, by using the @option{--endless} option. @xref{Invoking uucico}. @node Mail and News, The Spool Directory Layout, Accepting Calls, Using Taylor UUCP @section Using UUCP for Mail and News. @cindex mail @cindex news Taylor UUCP does not include a mail package. All Unix systems come with some sort of mail delivery agent, typically @command{sendmail} or @code{MMDF}. Source code is available for some alternative mail delivery agents, such as @code{IDA sendmail} and @command{smail}. Taylor UUCP also does not include a news package. The two major Unix news packages are @code{C-news} and @code. @menu * Sending mail or news:: Sending mail or news via UUCP * Receiving mail or news:: Receiving mail or news via UUCP @end menu @node Sending mail or news, Receiving mail or news, Mail and News, Mail and News @subsection Sending mail or news via UUCP When mail is to be sent from your machine to another machine via UUCP, the mail delivery agent will invoke @command{uux}. It will generally run a command such as @samp{uux - @var{system}!rmail @var{address}}, where @var{system} is the remote system to which the mail is being sent. It may pass other options to @command{uux}, such as @option{-r} or @option{-g} (@pxref{Invoking uux}). The news system also invokes @command{uux} in order to transfer articles to another system. The only difference is that news will use @command{uux} to invoke @command{rnews} on the remote system, rather than @command{rmail}. You should arrange for your mail and news systems to invoke the Taylor UUCP version of @command{uux}. If you only have Taylor UUCP, or if you simply replace any existing version of @command @command{uux} will probably work fine with the Taylor @command{uucico} (the reverse is not the case: the Taylor @command{uux} requires the Taylor @command{uucico}). However, data transfer will be somewhat more efficient if the Taylor @command{uux} is used. @node Receiving mail or news, , Sending mail or news, Mail and News @subsection Receiving mail or news via UUCP To receive mail, all that is necessary is for UUCP to invoke @command{rmail}. Any mail delivery agent will provide an appropriate version of @command{rmail}; you must simply make sure that it is in the command path used by UUCP (it almost certainly already is). The default command path is set in @file{policy.h}, and it may be overridden for a particular system by the @code{command-path} command (@pxref{Miscellaneous (sys)}). Similarly, for news UUCP must be able to invoke @command{rnews}. Any news system will provide a version of @command{rnews}, and you must ensure that is in a directory on the path that UUCP will search. @node The Spool Directory Layout, Spool Directory Cleaning, Mail and News, Using Taylor UUCP @section The Spool Directory Layout @cindex spool directory In general, the layout of the spool directory may be safely ignored. However, it is documented here for the curious. This description only covers the @code{SPOOLDIR_TAYLOR} layout. The ways in which the other spool directory layouts differ are described in the source file @end menu @node System Spool Directories, Status Directory, The Spool Directory Layout, The Spool Directory Layout @subsection System Spool Directories @cindex system spool directories @table @file @item @var{system} There is a subdirectory of the main spool directory for each remote system. @item @var{system}/C. This directory stores files describing file transfer commands to be sent to the @var{system}. Each file name starts with @file{C.@var{g}}, where @var{g} is the job grade. Each file contains one or more commands. For details of the commands, see @ref{UUCP Protocol Commands}. @item @var{system}/D. This directory stores data files. Files with names like @file{D.@var{g}@var{ssss}}, where @var{g} is the grade and @var{ssss} is a sequence number, are waiting to be transferred to the @var{system}, as directed by the files in the @file{@var{system}/C.} directory. Files with other names, typically @file{D.@var{system}@var{g}@var{ssss}}, have been received from @var{system} and are waiting to be processed by an execution file in the @file{@var{system}/X.} directory. @item @var{system}/D.X This directory stores data files which will become execution files on the remote system. In current practice, this directory rarely exists, because most simple executions, including typical uses of @command{rmail} and @command{rnews}, send an @samp{E} command rather than an execution file (@pxref{The E Command}). @item @var{system}/X. This directory stores execution files which have been received from @var{system}. This directory normally exists, even though the corresponding @file{D.X} directory does not, because @command{uucico} will create an execution file on the fly when it receives an @samp{E} command. @item @var{system}/SEQF This file holds the sequence number of the last job sent to @var{system}. The sequence number is used to ensure that file names are unique in the remote system spool directory. The file is four bytes long. Sequence numbers are composed of digits and the upper case letters. @end table @node Status Directory, Execution Subdirectories, System Spool Directories, The Spool Directory Layout @subsection Status Directory @table @file @item .Status @cindex .Status @cindex status files This directory holds status files for each remote system. The name of the status file is the name of the system which it describes. Each status file describes the last conversation with the system. Running @samp{uustat --status} basically just formats and prints the contents of the status files (@pxref{uustat Examples}). Each status file has a single text line with six fields. @table @asis @item code A code indicating the status of the last conversation. The following values are defined, though not all are actually used. @table @samp @item 0 Conversation completed normally. @item 1 @command{uucico} was unable to open the port. @item 2 The last call to the system failed while dailing. @item 3 The last call to the system failed while logging in. @item 4 The last call to the system failed during the initial UUCP protocol handshake (@pxref{The Initial Handshake}). @item 5 The last call to the system failed after the initial handshake. @item 6 @command{uucico} is currently talking to the system. @item 7 The last call to the system failed because it was the wrong time to call (this is not used if calling the system is never permitted). @end table @item retries The number of retries since the last successful call. @item time of last call The time of the last call, in seconds since the epoch (as returned by the @code{time} system call). @item wait If the last call failed, this is the number of seconds since the last call before @command{uucico} may attempt another call. This is set based on the retry time; see @ref{When to Call}. The @option{-f} or @option{-S} options to @command{uucico} direct it to ignore this wait time; see @ref{Invoking uucico}. @item description A text description of the status, corresponding to the code in the first field. This may contain spaces. @item system name The name of the remote system. @end table @end table @node Execution Subdirectories, Other Spool Subdirectories, Status Directory, The Spool Directory Layout @subsection Execution Subdirectories @table @file @item .Xqtdir @cindex .Xqtdir When @command{uuxqt} executes a job requested by @command{uux}, it first changes the working directory to the @file{.Xqtdir} subdirectory. This permits the job to create any sort of temporary file without worrying about overwriting other files in the spool directory. Any files left in the @file{.Xqtdir} subdirectory are removed after each execution is complete. @item .Xqtdir@var{nnnn} When several instances of @command{uuxqt} are executing simultaneously, each one executes jobs in a separate directory. The first uses @file{.Xqtdir}, the second uses @file{.Xqtdir0001}, the third uses @file{.Xqtdir0002}, and so forth. @item .Corrupt @cindex .Corrupt If @command{uuxqt} encounters an execution file which it is unable to parse, it saves it in the @file{.Corrupt} directory, and sends mail about it to the UUCP administrator. @item .Failed @cindex .Failed If @command{uuxqt} executes a job, and the job fails, and there is enough disk space to hold the command file and all the data files, then @command{uuxqt} saves the files in the @file{.Failed} directory, and sends mail about it to the UUCP administrator. @end table @node Other Spool Subdirectories, Spool Lock Files, Execution Subdirectories, The Spool Directory Layout @subsection Other Spool Subdirectories @table @file @item .Sequence @cindex .Sequence This directory holds conversation sequence number files. These are used if the @code{sequence} command is used for a system (@pxref{Miscellaneous (sys)}). The sequence number for the system @var{system} is stored in the file @file{.Sequence/@var{system}}. It is simply stored as a printable number. @item .Temp @cindex .Temp This directory holds data files as they are being received from a remote system, before they are moved to their final destination. For file send requests which use a valid temporary file name in the @var{temp} field of the @samp{S} or @samp{E} command (@pxref{The S Command}), @command{uucico} receives the file into @file{.Temp/@var{system}/@var{temp}}, where @var{system} is the name of the remote system, and @var{temp} is the temporary file name. If a conversation fails during a file transfer, these files are used to automatically restart the file transfer from the point of failure. If the @samp{S} or @samp{E} command does not include a temporary file name, automatic restart is not possible. In this case, the files are received into a randomly named file in the @file{.Temp} directory itself. @item .Preserve @cindex .Preserve This directory holds data files which could not be transferred to a remote system for some reason (for example, the data file might be large, and exceed size restrictions imposed by the remote system). When a locally requested file transfer fails, @command{uucico} will store the data file in the @file{.Preserve} directory, and send mail to the requestor describing the failure and naming the saved file. @item .Received @cindex .Received This directory records which files have been received. If a conversation fails just after @command{uucico} acknowledges receipt of a file, it is possible for the acknowledgement to be lost. If this happens, the remote system will resend the file. If the file were an execution request, and @command{uucico} did not keep track of which files it had already received, this could lead to the execution being performed twice. To avoid this problem, when a conversation fails, @command{uucico} records each file that has been received, but for which the remote system may not have received the acknowledgement. It records this information by creating an empty file with the name @file{.Received/@var{system}/@var{temp}}, where @var{system} is the name of the remote system, and @var{temp} is the @var{temp} field of the @samp{S} or @samp{E} command from the remote system (@pxref{The S Command}). Then, if the remote system offers the file again in the next conversation, @command{uucico} refuses the send request and deletes the record in the @file{.Received} directory. This approach only works for file sends which use a temporary file name, but this is true of all execution requests. @end table @node Spool Lock Files, , Other Spool Subdirectories, The Spool Directory Layout @subsection Lock Files in the Spool Directory @cindex lock files in spool directory Lock files for devices and systems are stored in the lock directory, which may or may not be the same as the spool directory. The lock directory is set at compilation time by @code{LOCKDIR} in @file{policy.h}, which may be overridden by the @code{lockdir} command in the @file{config} file (@pxref{Miscellaneous (config)}). For a description of the names used for device lock files, and the format of the contents of a lock file, see @ref{UUCP Lock Files}. @table @file @item LCK..@var{sys} @cindex LCK..@var{sys} @cindex system lock files A lock file for a system, where @var{sys} is the system name. As noted above, these lock files are kept in the lock directory, which may not be the spool directory. These lock files are created by @command{uucico} while. @item LCK.XQT.@var{NN} @cindex LCK.XQT.@var{NN} When @command{uuxqt} starts up, it uses lock files to determine how many other @command{uuxqt} daemons are currently running. It first tries to lock @file{LCK.XQT.0}, then @file{LCK.XQT.1}, and so forth. This is used to implement the @code{max-uuxqts} command (@pxref{Miscellaneous (config)}). It is also used to parcel out the @file{.Xqtdir} subdirectories (@pxref{Execution Subdirectories}). @item LXQ.@var{cmd} @cindex LXQ.@var{cmd} When @command{uuxqt} is invoked with the @option{-c} or @option{--command} option (@pxref{Invoking uuxqt}), it creates a lock file named after the command it is executing. For example, @samp{uuxqt -c rmail} will create the lock file @file{LXQ.rmail}. This prevents other @command{uuxqt} daemons from executing jobs of the specified type. @item @var{system}/X./L.@var{xxx} @cindex L.@var{xxx} While @command{uuxqt} is executing a particular job, it creates a lock file with the same name as the @file{X.} file describing the job, but replacing the initial @samp{X} with @samp{L}. This ensures that if multiple @command{uuxqt} daemons are running, they do not simultaneously execute the same job. @item LCK..SEQ This lock file is used to control access to the sequence files for each system (@pxref{System Spool Directories}). It is only used on systems which do not support POSIX file locking using the @code{fcntl} system call. @end table @node Spool Directory Cleaning, , The Spool Directory Layout, Using Taylor UUCP @section Cleaning the Spool Directory @cindex spool directory, cleaning @cindex cleaning the spool directory The spool directory may need to be cleaned up periodically. Under some circumstances, files may accumulate in various subdirectories, such as @file{.Preserve} (@pxref{Other Spool Subdirectories}) or @file{.Corrupt} (@pxref{Execution Subdirectories}). Also, if a remote system stops calling in, you may want to arrange for any queued up mail to be returned to the sender. This can be done using the @command{uustat} command (@pxref{Invoking uustat}). The @file{contrib} directory includes a simple @file{uuclean} script which may be used as an example of a clean up script. It can be run daily out of @file{crontab}. You should periodically trim the UUCP log files, as they will otherwise grow without limit. The names of the log files are set in @file{policy.h}, and may be overridden in the configuration file (@pxref{config File}). By default they are are @file{/usr/spool/uucp/Log} and @file{/usr/spool/uucp/Stats}. You may find the @code{savelog} program in the @file{contrib} directory to be of use. There is a manual page for it in @file{contrib} as well. @node Configuration Files, Protocols, Using Taylor UUCP, Top @chapter Taylor UUCP Configuration Files This chapter describes the configuration files accepted by the Taylor UUCP package if compiled with @code{HAVE_TAYLOR_CONFIG} set to 1 in @file{policy.h}. The configuration files are normally found in the directory @var{newconfigdir}, which is defined by the @code{configure} option @option{--with-newconfigdir}; by default @var{newconfigdir} is @file{/usr/local/conf/uucp}. However, the main configuration file, @file{config}, is the only one which must be in that directory, since it may specify a different location for any or all of the other files. You may run any of the UUCP programs with a different main configuration file by using the @option{-I} or @option{--config} option; this can be useful when testing a new configuration. When you use the @option{-I} option the programs will revoke any setuid privileges. @menu * @end menu @node Configuration Overview, Configuration File Format, Configuration Files, Configuration Files @section Configuration File Overview UUCP uses several different types of configuration files, each describing a different kind of information. The commands permitted in each file are described in detail below. This section is a brief description of some of the different types of files. The @file{config} file is the main configuration file. It describes general information not associated with a particular remote system, such as the location of various log files. There are reasonable defaults for everything that may be specified in the @file{config} file, so you may not actually need one on your system. There may be only one @file{config} file, but there may be one or more of each other type of file. The default is one file for each type, but more may be listed in the @file{config} file. The @file{sys} files are used to describe remote systems. Each remote system to which you connect must be listed in a @file{sys} file. A @file{sys} file will include information for a system, such as the speed (baud rate) to use, or when to place calls. For each system you wish to call, you must describe one or more ports; these ports may be defined directly in the @file{sys} file, or they may be defined in a @file{port} file. The @file{port} files are used to describe ports. A port is a particular hardware connection on your computer. You would normally define as many ports as there are modems attached to your computer. A TCP connection is also described using a port. The @file{dial} files are used to describe dialers. Dialer is essentially another word for modem. The @file. @node Configuration File Format, Configuration Examples, Configuration Overview, Configuration Files @section Configuration File Format All the configuration files follow a simple line-oriented @samp{@var{keyword} @var @kbd{#} character is used for comments. Everything from a @kbd{#} to the end of the line is ignored unless the @kbd{#} is preceded by a @kbd{\} (backslash); if the @kbd{#} is preceeded by a @kbd{\}, the @kbd{\} is removed but the @kbd{#} remains in the line. This can be useful for a phone number containing a @kbd{#}. To enter the sequence @samp{\#}, use @samp{\\#}. @var{boolean} may be specified as @kbd{y}, @kbd{Y}, @kbd{t}, or @kbd{T} for true and @kbd{n}, @kbd{N}, @kbd{f}, or @kbd{F} for false; any trailing characters are ignored, so @code{true}, @code{false}, etc., are also acceptable. @node Configuration Examples, Time Strings, Configuration File Format, Configuration Files @section Examples of Configuration Files This section provides few typical examples of configuration files. There are also sample configuration files in the @file{sample} subdirectory of the distribution. @menu * config File Examples:: Examples of the Main Configuration File * Leaf Example:: Call a Single Remote Site * Gateway Example:: The Gateway for Several Local Systems @end menu @node config File Examples, Leaf Example, Configuration Examples, Configuration Examples @subsection config File Examples @cindex config file examples To start with, here are some examples of uses of the main configuration file, @file{config}. For a complete description of the commands that are permitted in @file{config}, see @ref{config File}. In many cases you will not need to create a @file @file{airs.com} our mail/news gateway machine is named @file{elmer.airs.com} (it is one of several machines all named @file{@var{localname}.airs.com}). If we did not provide a @file{config} file, then our UUCP name would be @file{elmer}; however, we actually want it to be @file{airs}. Therefore, we use the following line in @file{config}: @example nodename airs @end example @cindex changing spool directory @cindex spool directory, changing The UUCP spool directory name is set in @file{policy.h} when the code is compiled. You might at some point decide that it is appropriate to move the spool directory, perhaps to put it on a different disk partition. You would use the following commands in @file{config} to change to directories on the partition @file{/uucp}: @example spool /uucp/spool pubdir /uucp/uucppublic logfile /uucp/spool/Log debugfile /uucp/spool/Debug @end example You would then move the contents of the current spool directory to @file{/uucp/spool}. If you do this, make sure that no UUCP processes are running while you change @file{config} and move the spool directory. @cindex anonymous UUCP Suppose you wanted to permit any system to call in to your system and request files. This is generally known as @dfn{anonymous UUCP}, since the systems which call in are effectively anonymous. By default, unknown systems are not permitted to call in. To permit this you must use the @code{unknown} command in @file{config}. The @code{unknown} command is followed by any command that may appear in the system file; for full details, see @ref{sys File}. I will show two possible anonymous UUCP configurations. The first will let any system call in and download files, but will not permit them to upload files to your system. @example # No files may be transferred to this system unknown receive-request no # The public directory is /usr/spool/anonymous unknown pubdir /usr/spool/anonymous # Only files in the public directory may be sent (the default anyhow) unknown remote-send ~ @end example @noindent Setting the public directory is convenient for the systems which call in. It permits to request a file by prefixing it with @file{~/}. For example, assuming your system is known as @samp{server}, then to retrieve the file @file{/usr/spool/anonymous/INDEX} a user on a remote site could just enter @samp{uucp server!~/INDEX ~}; this would transfer @file{INDEX} from @samp{server}'s public directory to the user's local public directory. Note that when using @samp{csh} or @samp{bash} the @kbd{!} and the second @kbd{~} must be quoted. The next example will permit remote systems to upload files to a special directory named @file{/usr/spool/anonymous/upload}. Permitting a remote system to upload files permits it to send work requests as well; this example is careful to prohibit commands from unknown systems. @example # @end example @node Leaf Example, Gateway Example, config File Examples, Configuration Examples @subsection Leaf Example @cindex leaf site @cindex sys file example (leaf) A relatively common simple case is a @dfn{leaf site}, a system which only calls or is called by a single remote site. Here is a typical @file{sys} file that might be used in such a case. For full details on what commands can appear in the @file{sys} file, see @ref{sys File}. This is the @file{sys} file that is used at @file{airs.com}. We use a single modem to dial out to @file{uunet}. This example shows how you can specify the port and dialer information directly in the @file{sys} file for simple cases. It also shows the use of the following: @table @code @item call-login Using @code{call-login} and @code{call-password} allows the default login chat script to be used. In this case, the login name is specified in the call-out login file (@pxref{Configuration File Names}). @item call-timegrade @file{uunet} is requested to not send us news during the daytime. @item chat-fail If the modem returns @samp{BUSY} or @samp{NO CARRIER} the call is immediately aborted. @item protocol-parameter Since @file{uunet} tends to be slow, the default timeout has been increased. @end table This @file{sys} file relies on certain defaults. It will allow @file{uunet} to queue up @samp{rmail} and @samp{rnews} commands. It will allow users to request files from @file{uunet} into the UUCP public directory. It will also allow @file{uunet} to request files from the UUCP public directory; in fact @file{uunet} never requests files, but for additional security we could add the line @samp{request false}. @example # @end example @node Gateway Example, , Leaf Example, Configuration Examples @subsection Gateway Example @cindex gateway @cindex sys file example (gateway) Many organizations have several local machines which are connected by UUCP, and a single machine which connects to the outside world. This single machine is often referred to as a @dfn{gateway} machine. For this example I will assume a fairly simple case. It should still provide a good general example. There are three machines, @file{elmer}, @file{comton} and @file{bugs}. @file{elmer} is the gateway machine for which I will show the configuration file. @file{elmer} calls out to @file{uupsi}. As an additional complication, @file{uupsi} knows @file{elmer} as @file{airs}; this will show how a machine can have one name on an internal network but a different name to the external world. @file{elmer} has two modems. It also has an TCP connection to @file{uupsi}, but since that is supposed to be reserved for interactive work (it is, perhaps, only a 9600 baud SLIP line) it will only use it if the modems are not available. A network this small would normally use a single @file{sys} file. However, for pedagogical purposes I will show two separate @file{sys} files, one for the local systems and one for @file{uupsi}. This is done with the @code{sysfile} command in the @file{config} file. Here is the @file{config} file. @example # This is config # The local sys file sysfile /usr/local/lib/uucp/sys.local # The remote sys file sysfile /usr/local/lib/uucp/sys.remote @end example Using the defaults feature of the @file{sys} file can greatly simplify the listing of local systems. Here is @file{sys.local}. Note that this assumes that the local systems are trusted; they are permited to request any world readable file and to write files into any world writable directory. @example # @end example The @file{sys.remote} file describes the @file{uupsi} connection. The @code{myname} command is used to change the UUCP name to @file{airs} when talking to @file{uupsi}. @example # @end example The ports are defined in the file @file{port} (@pxref{port File}). For this example they are both connected to the same type of 2400 baud Hayes-compatible modem. @example # This is port port port1 type modem device /dev/ttyd0 dialer hayes speed 2400 port port2 type modem device /dev/ttyd1 dialer hayes speed 2400 @end example Dialers are described in the @file{dial} file (@pxref{dial File}). @example # @end example @node Time Strings, Chat Scripts, Configuration Examples, Configuration Files @section Time Strings @cindex time strings Several commands use time strings to specify a range of times. This section describes how to write time strings. A time string may be a list of simple time strings separated with a vertical bar @samp{|} or a comma @samp{,}. Each simple time string must begin with @samp{Su}, @samp{Mo}, @samp{Tu}, @samp{We}, @samp{Th}, @samp{Fr}, or @samp{Sa}, or @samp{Wk} for any weekday, or @samp{Any} for any day. Following the day may be a range of hours separated with a hyphen using 24 hour time. The range of hours may cross 0; for example @samp{2300-0700} means any time except 7 AM to 11 PM. If no time is given, calls may be made at any time on the specified day(s). The time string may also be the single word @samp{Never}, which does not match any time. The time string may also be a single word with a name defined in a previous @code{timetable} command (@pxref{Miscellaneous (config)}). Here are a few sample time strings with an explanation of what they mean. @table @samp @item Wk2305-0855,Sa,Su2305-1655, @samp{2305} rather than @samp{2300}; this will ensure a cheap rate phone call even if the computer clock is running up to five minutes ahead of the real time. @item Wk0905-2255,Su1705-2255 This means weekdays from 9:05 AM to 10:55 PM, or Sunday from 5:05 PM to 10:55 PM. This is approximately the opposite of the previous example. @item Any This means any day. Since no time is specified, it means any time on any day. @end table @node Chat Scripts, config File, Time Strings, Configuration Files @section Chat Scripts @cindex chat scripts Chat scripts are used in several different places, such as dialing out on modems or logging in to remote systems. Chat scripts are made up of pairs of strings. The program waits until it sees the first string, known as the @dfn{expect} string, and then sends out the second string, the @dfn{send} string. Each chat script is defined using a set of commands. These commands always end in a string beginning with @code{chat}, but may start with different strings. For example, in the @file{sys} file there is one set of commands beginning with @code{chat} and another set beginning with @code{called-chat}. The prefixes are only used to disambiguate different types of chat scripts, and this section ignores the prefixes when describing the commands. @table @code @item chat @var{strings} @findex chat Specify a chat script. The arguments to the @code @code @samp{\055} instead. An expect string may simply be @samp{""}, meaning to skip the expect phase. Otherwise, the following escape characters may appear in expect strings: @table @samp @item \b a backspace character @item \n a newline or line feed character @item \N a null character (for HDB compatibility) @item \r a carriage return character @item \s a space character @item \t a tab character @item \\ a backslash character @item \@var{ddd} character @var{ddd}, where @var{ddd} are up to three octal digits @item \x@var{ddd} character @var{ddd}, where @var{ddd} are hexadecimal digits. @end table As in C, there may be up to three octal digits following a backslash, but the hexadecimal escape sequence continues as far as possible. To follow a hexadecimal escape sequence with a hex digit, interpose a send string of @samp{""}. A chat script expect string may also specify a timeout. This is done by using the escape sequence @samp{\W@var{seconds}}. This escape sequence may only appear at the very end of the expect string. It temporarily overrides the timeout set by @code{chat-timeout} (described below) only for the expect string to which it is attached. A send string may simply be @samp{""} to skip the send phase. Otherwise, all of the escape characters legal for expect strings may be used, and the following escape characters are also permitted: @table @samp @item EOT send an end of transmission character (@kbd{^D}) @item BREAK send a break character (may not work on all systems) @item \c suppress trailing carriage return at end of send string @item \d delay sending for 1 or 2 seconds @item \e disable echo checking @item \E enable echo checking @item \K same as @samp{BREAK} @item \p pause sending for a fraction of a second @item \M do not require carrier signal @item \m require carrier signal (fail if not present) @end table Some specific types of chat scripts also define additional escape sequences that may appear in the send string. For example, the login chat script defines @samp{\L} and @samp{\P} to send the login name and password, respectively. A carriage return will be sent at the end of each send string, unless the @kbd{\c} escape sequence appears in the string. Note that some UUCP packages use @kbd{ @kbd{\E} and turned off for characters following @kbd{\e}. When used with a port which does not support the carrier signal, as set by the @code{carrier} command in the port file, @kbd{\M} and @kbd{\m} are ignored. Similarly, when used in a dialer chat script with a dialer which does not support the carrier signal, as set by the @code{carrier} command in the dial file, @kbd{\M} and @kbd{\m} are ignored. @item chat-timeout @var{number} @findex chat-timeout The number of seconds to wait for an expect string in the chat script, before timing out and sending the next subsend, or failing the chat script entirely. The default value is 10 for a login chat or 60 for any other type of chat. @item chat-fail @var{string} @findex chat-fail If the @var{string} is seen at any time during a chat script, the chat script is aborted. The string may not contain any whitespace characters: escape sequences must be used for them. Multiple @code{chat-fail} commands may appear in a single chat script. The default is to have none. This permits a chat script to be quickly aborted if an error string is seen. For example, a script used to dial out on a modem might use the command @samp{chat-fail BUSY} to stop the chat script immediately if the string @samp{BUSY} was seen. The @code. @item chat-seven-bit @var{boolean} @findex chat-seven-bit @code{chat-program}, which must ignore parity by itself if necessary. @item chat-program @var{strings} @findex chat-program Specify a program to run before executing the chat script. This program could run its own version of a chat script, or it could do whatever it wants. If both @code{chat-program} and @code{chat} are specified, the program is executed first followed by the chat script. The first argument to the @code{chat-program} command is the program name to run. The remaining arguments are passed to the program. The following escape sequences are recognized in the arguments: @table @kbd @item \Y port device name @item \S port speed @item \\ backslash @end table Some specific uses of @code{chat-program} define additional escape sequences. Arguments other than escape sequences are passed exactly as they appear in the configuration file, except that sequences of whitespace are compressed to a single space character (this exception may be removed in the future). If the @code @code{dialHDB} program in the @file{contrib} directory. The program will be run as the @code{uucp} user, and the environment will be that of the process that started @command{uucico}, so care must be taken to maintain security. No search path is used to find the program; a full file name must be given. If the program is an executable shell script, it will be passed to @file{/bin/sh} even on systems which are unable to execute shell scripts. @end table Here is a simple example of a chat script that might be used to reset a Hayes compatible modem. @example chat "" ATZ OK-ATZ-OK @end example The first expect string is @samp{""}, so it is ignored. The chat script then sends @samp{ATZ}. If the modem responds with @samp{OK}, the chat script finishes. If 60 seconds (the default timeout) pass before seeing @samp{OK}, the chat script sends another @samp{ATZ}. If it then sees @samp{OK}, the chat script succeeds. Otherwise, the chat script fails. For a more complex chat script example, see @ref{Logging In}. @node config File, sys File, Chat Scripts, Configuration Files @section The Main Configuration File @cindex config file @cindex main configuration file @cindex configuration file (config) The main configuration file is named @file{config}. Since all the values that may be specified in the main configuration file also have defaults, there need not be a main configuration file at all. Each command in @file{config} may have a program prefix, which is a separate word appearing at the beginning of the line. The currently supported prefixes are @samp{uucp} and @samp{cu}. Any command prefixed by @samp{uucp} will not be read by the @command{cu} program. Any command prefixed by @samp{cu} will only be read by the @command{cu} program. For example, to use a list of systems known only to @command{cu}, list them in a separate file @file{@var{file}} and put @samp{cu sysfile @file{@var{file}}} in @file{config}. @menu * Miscellaneous (config):: Miscellaneous config File Commands * Configuration File Names:: Using Different Configuration Files * Log File Names:: Using Different Log Files * Debugging Levels:: Debugging Levels @end menu @node Miscellaneous (config), Configuration File Names, config File, config File @subsection Miscellaneous config File Commands @table @code @item nodename @var{string} @findex nodename @itemx hostname @var{string} @findex hostname @itemx uuname @var{string} @findex uuname @cindex UUCP system name @cindex system name These keywords are equivalent. They specify the UUCP name of the local host. If there is no configuration file, an appropriate system function will be used to get the host name, if possible. @item spool @var{string} @findex spool @cindex spool directory, setting @cindex /usr/spool/uucp Specify the spool directory. The default is from @file{policy.h}. This is where UUCP files are queued. Status files and various sorts of temporary files are also stored in this directory and subdirectories of it. @item pubdir @var{string} @findex pubdir in config file @cindex public directory @cindex uucppublic @cindex /usr/spool/uucppublic Specify the public directory. The default is from @file{policy.h}. When a file is named using a leading @kbd{~/}, it is taken from or to the public directory. Each system may use a separate public directory by using the @code{pubdir} command in the system configuration file; see @ref{Miscellaneous (sys)}. @item lockdir @var{string} @findex lockdir @cindex lock directory Specify the directory to place lock files in. The default is from @file{policy.h}; see the information in that file. Normally the lock directory should be set correctly in @file{policy.h}, and not changed here. However, changing the lock directory is sometimes useful for testing purposes. This only affects lock files for devices and systems; it does not affect certain internal lock files which are stored in the spool directory (@pxref{Spool Lock Files}). @item unknown @var{string} @dots{} @findex unknown @cindex unknown systems The @var{string} and subsequent arguments are treated as though they appeared in the system file (@pxref{sys File}). They are used to apply to any unknown systems that may call in, probably to set file transfer permissions and the like. If the @code{unknown} command is not used, unknown systems are not permitted to call in. @item strip-login @var{boolean} @findex strip-login @cindex parity in login names If the argument is true, then, when @command{uucico} is doing its own login prompting with the @option{-e}, @option{-l}, or @option{. @item strip-proto @var{boolean} @findex strip-proto If the argument is true, then @command. @item max-uuxqts @var{number} @findex max-uuxqts Specify the maximum number of @command{uuxqt} processes which may run at the same time. Having several @command{uuxqt} processes running at once can significantly slow down a system, but, since @command{uuxqt} is automatically started by @command{uucico}, it can happen quite easily. The default for @code{max-uuxqts} is 0, which means that there is no limit. If HDB configuration files are being read and the code was compiled without @code{HAVE_TAYLOR_CONFIG}, then, if the file @file{Maxuuxqts} in the configuration directory contains a readable number, it will be used as the value for @code{max-uuxqts}. @item run-uuxqt @var{string} or @var{number} @findex run-uuxqt Specify when @command{uuxqt} should be run by @command{uucico}. This may be a positive number, in which case @command{uucico} will start a @command{uuxqt} process whenever it receives the given number of execution files from the remote system, and, if necessary, at the end of the call. The argument may also be one of the strings @samp{once}, @samp{percall}, or @samp{never}. The string @samp{once} means that @command{uucico} will start @command{uuxqt} once at the end of execution. The string @samp{percall} means that @command{uucico} will start @command{uuxqt} once per call that it makes (this is only different from @code{once} when @command{uucico} is invoked in a way that causes it to make multiple calls, such as when the @option{-r1} option is used without the @option{-s} option). The string @samp{never} means that @command{uucico} will never start @command{uuxqt}, in which case @command{uuxqt} should be periodically run via some other mechanism. The default depends upon which type of configuration files are being used; if @code{HAVE_TAYLOR_CONFIG} is used the default is @samp{once}, otherwise if @code{HAVE_HDB_CONFIG} is used the default is @samp{percall}, and otherwise, for @code{HAVE_V2_CONFIG}, the default is @samp{10}. @item timetable @var{string} @var{string} @findex timetable The @code{timetable} defines a timetable that may be used in subsequently appearing time strings; see @ref{Time Strings}. The first string names the timetable entry; the second is a time string. The following @code{timetable} commands are predefined. The NonPeak timetable is included for compatibility. It originally described the offpeak hours of Tymnet and Telenet, but both have since changed their schedules. @example timetable Evening Wk1705-0755,Sa,Su timetable Night Wk2305-0755,Sa,Su2305-1655 timetable NonPeak Wk1805-0655,Sa,Su @end example If this command does not appear, then, obviously, no additional timetables will be defined. @item v2-files @var{boolean} @findex v2-files If the code was compiled to be able to read V2 configuration files, a false argument to this command will prevent them from being read. This can be useful while testing. The default is true. @item hdb-files @var{boolean} @findex hdb-files If the code was compiled to be able to read HDB configuration files, a false argument to this command will prevent them from being read. This can be useful while testing. The default is true. @end table @node Configuration File Names, Log File Names, Miscellaneous (config), config File @subsection Configuration File Names @table @code @item sysfile @var{strings} @findex sysfile Specify the system file(s). The default is the file @file{sys} in the directory @var{newconfigdir}. These files hold information about other systems with which this system communicates; see @ref{sys File}. Multiple system files may be given on the line, and the @code{sysfile} command may be repeated; each system file has its own set of defaults. @item portfile @var{strings} @findex portfile Specify the port file(s). The default is the file @file{port} in the directory @var{newconfigdir}. These files describe ports which are used to call other systems and accept calls from other systems; see @ref{port File}. No port files need be named at all. Multiple port files may be given on the line, and the @code{portfile} command may be repeated. @item dialfile @var{strings} @findex dialfile Specify the dial file(s). The default is the file @file{dial} in the directory @var{newconfigdir}. These files describe dialing devices (modems); see @ref{dial File}. No dial files need be named at all. Multiple dial files may be given on the line, and the @code{dialfile} command may be repeated. @item dialcodefile @var{strings} @findex dialcodefile @cindex configuration file (dialcode) @cindex dialcode file @cindex dialcode configuration file Specify the dialcode file(s). The default is the file @file{dialcode} in the directory @var @code{dialcodefile} command may be repeated; all the dialcode files will be read in turn until a dialcode is located. @item callfile @var{strings} @findex callfile @cindex call out file @cindex call configuration file @cindex call out login name @cindex call out password @cindex configuration file (call) Specify the call out login name and password file(s). The default is the file @file{call} in the directory @var{newconfigdir}. If the call out login name or password for a system are given as @kbd{*} (@pxref{Logging In}), these files are read to get the real login name or password. Each line in the file(s) has three words: the system name, the login name, and the password. The login name and password may contain escape sequences like those in a chat script expect string (@pxref{Chat Scripts}). This file is only used when placing calls to remote systems; the password file described under @code @code{callfile} command may be repeated; all the files will be read in turn until the system is found. @item passwdfile @var{strings} @findex passwdfile @cindex passwd file @cindex passwd configuration file @cindex configuration file (passwd) @cindex call in login name @cindex call in password Specify the password file(s) to use for login names when @command{uucico} is doing its own login prompting, which it does when given the @option{-e}, @option{-l} or @option{-w} switches. The default is the file @file{passwd} in the directory @var{newconfigdir}. Each line in the file(s) has two words: the login name and the password (e.g., @samp{Ufoo foopas}). They may contain escape sequences like those in a chat script expect string (@pxref{Chat Scripts}). The login name is accepted before the system name is known, so these are independent of which system is calling in; a particular login may be required for a system by using the @code{called-login} command in the system file (@pxref{Accepting a Call}). These password files are optional, although one must exist if @command @code{HAVE_ENCRYPTED_PASSWORDS} macro in @file{policy.h}, permits using a standard Unix @file{/etc/passwd} as a UUCP password file, providing the same set of login names and passwords for both @command{getty} and @command{uucico}. Multiple password files may be specified on the line, and the @code{passwdfile} command may be repeated; all the files will be read in turn until the login name is found. @end table @node Log File Names, Debugging Levels, Configuration File Names, config File @subsection Log File Names @table @code @item logfile @var{string} @findex logfile @cindex log file Name the log file. The default is from @file{policy.h}. Logging information is written to this file. If @code{HAVE_HDB_LOGGING} is defined in @file{policy.h}, then by default a separate log file is used for each system; using this command to name a log file will cause all the systems to use it. @item statfile @var{string} @findex statfile @cindex statistics file Name the statistics file. The default is from @file{policy.h}. Statistical information about file transfers is written to this file. @item debugfile @var{string} @findex debugfile @cindex debugging file Name the file to which all debugging information is written. The default is from @file{policy.h}. This command is only effective if the code has been compiled to include debugging (this is controlled by the @code{DEBUG} macro in @file. @end table @node Debugging Levels, , Log File Names, config File @subsection Debugging Levels @table @code @item debug @var{string} @dots{} @findex debug in config file Set the debugging level. This command is only effective if the code has been compiled to include debugging. The default is to have no debugging. The arguments are strings which name the types of debugging to be turned on. The following types of debugging are defined: @table @samp @item abnormal Output debugging messages for abnormal situations, such as recoverable errors. @item chat Output debugging messages for chat scripts. @item handshake Output debugging messages for the initial handshake. @item uucp-proto Output debugging messages for the UUCP session protocol. @item proto Output debugging messages for the individual link protocols. @item port Output debugging messages for actions on the communication port. @item config Output debugging messages while reading the configuration files. @item spooldir Output debugging messages for actions in the spool directory. @item execute Output debugging messages whenever another program is executed. @item incoming List all incoming data in the debugging file. @item outgoing List all outgoing data in the debugging file. @item all All of the above. @end table The debugging level may also be specified as a number. A 1 will set @samp{chat} debugging, a 2 will set both @samp{chat} and @samp{handshake} debugging, and so on down the possibilities. Currently an 11 will turn on all possible debugging, since there are 11 types of debugging messages listed above; more debugging types may be added in the future. The @code{debug} command may be used several times in the configuration file; every debugging type named will be turned on. When running any of the programs, the @option{-x} switch (actually, for @command{uulog} it's the @option{-X} switch) may be used to turn on debugging. The argument to the @option{-x} switch is one of the strings listed above, or a number as described above, or a comma separated list of strings (e.g., @option{-x chat,handshake}). The @option{-x} switch may also appear several times on the command line, in which case all named debugging types will be turned on. The @option{-x} debugging is in addition to any debugging specified by the @code{debug} command; there is no way to cancel debugging information. The debugging level may also be set specifically for calls to or from a specific system with the @code{debug} command in the system file (@pxref{Miscellaneous (sys)}). The debugging messages are somewhat idiosyncratic; it may be necessary to refer to the source code for additional information in some cases. @end table @node sys File, port File, config File, Configuration Files @section The System Configuration File @cindex sys file @cindex system configuration file @cindex configuration file (sys) By default there is a single system configuration, named @file{sys} in the directory @var{newconfigdir}. This may be overridden by the @code{sysfile} command in the main configuration file; see @ref @end menu @node Defaults and Alternates, Naming the System, sys File, sys File @subsection Defaults and Alternates The first set of commands in the file, up to the first @code{system} command, specify defaults to be used for all systems in that file. Each @file{sys} file uses a different set of defaults. Subsequently, each set of commands from @code{system} up to the next @code{system} command describe a particular system. Default values may be overridden for specific systems. Each system may then have a series of alternate choices to use when calling out or calling in. The first set of commands for a particular system, up to the first @code{alternate} command, provide the first choice. Subsequently, each set of commands from @code{alternate} up to the next @code{alternate} command describe an alternate choice for calling out or calling in. When a system is called, the commands before the first @code{alternate} are used to select a phone number, port, and so forth; if the call fails for some reason, the commands between the first @code @code{alternate} command, uses the @code{called-login} command. The list of alternates will be searched, and the first alternate with a matching @code{called-login} command will be used. If no alternates match, the call will be rejected. The @code{alternate} command may also be used in the file-wide defaults (the set of commands before the first @code{system} command). This might be used to specify a list of ports which are available for all systems (for an example of this, see @ref{Gateway Example}) or to specify permissions based on the login name used by the remote system when it calls in. The first alternate for each system will default to the first alternate for the file-wide defaults (as modified by the commands used before the first @code @code{default-alternates} command may be used to modify this behaviour. This can all get rather confusing, although it's easier to use than to describe concisely; the @command{uuchk} program may be used to ensure that you are getting what you want. @need 2000 @node Naming the System, Calling Out, Defaults and Alternates, sys File @subsection Naming the System @table @code @item system @var{string} @findex system Specify the remote system name. Subsequent commands up to the next @code{system} command refer to this system. @item alternate [@var{string}] @findex alternate Start an alternate set of commands (@pxref{Defaults and Alternates}). An optional argument may be used to name the alternate. This name will be recorded in the log file if the alternate is used to call the system. There is no way to name the first alternate (the commands before the first @code{alternate} command). @item default-alternates @var{boolean} @findex default-alternates If the argument is false, any remaining default alternates (from the defaults specified at the top of the current system file) will not be used. The default is true. @item alias @var{string} @findex alias Specify an alias for the current system. The alias may be used by local @command{uucp} and @command{uux} commands, as well as by the remote system (which can be convenient if a remote system changes its name). The default is to have no aliases. @item myname @var{string} @findex myname Specifies a different system name to use when calling the remote system. Also, if @code{called-login} is used and is not @samp{ANY}, then, when a system logs in with that login name, @var{string} is used as the local system name. Because the local system name must be determined before the remote system has identified itself, using @code{myname} and @code. @end table @node Calling Out, Accepting a Call, Naming the System, sys File @subsection Calling Out This section describes commands used when placing a call to another system. @menu * When to Call:: When to Call * Placing the Call:: Placing the Call * Logging In:: Logging In @end menu @need 2000 @node When to Call, Placing the Call, Calling Out, Calling Out @subsubsection When to Call @table @code @item time @var{string} [@var{number}] @findex time Specify when the system may be called. The first argument is a time string; see @ref @code{time} command is always a fixed amount of time. The @code{time} command may appear multiple times in a single alternate, in which case if any time string matches the system may be called. When the @code{time} command is used for a particular system, any @code{time} or @code{timegrade} commands that appeared in the system defaults are ignored. The default time string is @samp{Never}. @item timegrade @var{character} @var{string} [@var{number}] @findex timegrade @cindex grades The @var{character} specifies a grade. It must be a single letter or digit. The @var{string} is a time string (@pxref{Time Strings}). All jobs of grade @var{character} or higher (where @kbd{0} > @kbd{9} > @kbd{A} > @kbd{Z} > @kbd{a} > @kbd{z}) may be run at the specified time. An ordinary @code{time} command is equivalent to using @code{timegrade} with a grade of @kbd{z}, permitting all jobs. If there are no jobs of a sufficiently high grade according to the time string, the system will not be called. Giving the @option{-s} switch to @command{uucico} to force it to call a system causes it to assume there is a job of grade @kbd{0} waiting to be run. The optional third argument specifies a retry time in minutes. See the @code{time} command, above, for more details. Note that the @code{timegrade} command serves two purposes: 1) if there is no job of sufficiently high grade the system will not be called, and 2) if the system is called anyway (because the @option{-s} switch was given to @command{uucico}) only jobs of sufficiently high grade will be transferred. However, if the other system calls in, the @code{timegrade} commands are ignored, and jobs of any grade may be transferred (but see @code{call-timegrade} and @code{called-timegrade}, below). Also, the @code{timegrade} command will not prevent the other system from transferring any job it chooses, regardless of who placed the call. The @code{timegrade} command may appear multiple times without using @code{alternate}. When the @code{timegrade} command is used for a particular system, any @code{time} or @code{timegrade} commands that appeared in the system defaults are ignored. If this command does not appear, there are no restrictions on what grade of work may be done at what time. @item max-retries @var{number} @findex max-retries Gives the maximum number of times this system may be retried. If this many calls to the system fail, it will be called at most once a day whatever the retry time is. The default is 26. @item success-wait @var. @item call-timegrade @var{character} @var{string} @findex call placed to the other system during a time which matches the time string, the remote system will be requested to only run jobs of grade @var @code{call-timegrade} command may appear multiple times without using @code{alternate}. If this command does not appear, or if none of the time strings match, the remote system will be allowed to send whatever grades of work it chooses. @item called-timegrade @var{character} @var{string} @findex called received from the other system during a time which matches the time string, only jobs of grade @var @code{called-timegrade} command may appear multiple times. If this command does not appear, or if none of the time strings match, any grade may be sent to the remote system upon receiving a call. @end table @need 2000 @node Placing the Call, Logging In, When to Call, Calling Out @subsubsection Placing the Call @table @code @itemx speed @var{number} @findex speed in sys file @item baud @var{number} @findex baud in sys file Specify the speed (the term @dfn{baud} is technically incorrect, but widely understood) at which to call the system. This will try all available ports with that speed until an unlocked port is found. The ports are defined in the port file. If both @code{speed} and @code{port} commands appear, both are used when selecting a port. To allow calls at more than one speed, the @code{alternate} command must be used (@pxref. @item port @var{string} @findex port in sys file @code{speed} command or explicitly using the next version of @code{port}). There may be many ports with the same name; each will be tried in turn until an unlocked one is found which matches the desired speed. @item port @var{string} @dots{} If more than one string follows the @code{port} command, the strings are treated as a command that might appear in the port file (@pxref{port File}). If a port is named (by using a single string following @code{port}) these commands are ignored; their purpose is to permit defining the port completely in the system file rather than always requiring entries in two different files. In order to call out, a port must be specified using some version of the @code{port} command, or by using the @code{speed} command to select ports from the port file. @item phone @var{string} @findex phone @itemx address @var{string} @findex address Give a phone number to call (when using a modem port) or a remote host to contact (when using a TCP or TLI port). The commands @code{phone} and @code{address} are equivalent; the duplication is intended to provide a mnemonic choice depending on the type of port in use. When used with a modem port, an @kbd{=} character in the phone number means to wait for a secondary dial tone (although only some modems support this); a @kbd{-} character means to pause while dialing for 1 second (again, only some modems support this). If the system has more than one phone number, each one must appear in a different alternate. The @code (@pxref{Chat Scripts}). The @code{dialer-sequence} command in the port file may override this address (@pxref{port File}). When used with a port that not a modem or TCP or TLI, this command is ignored. @end table @node Logging In, , Placing the Call, Calling Out @subsubsection Logging In @table @code @item chat @var{strings} @findex chat in sys file @item chat-timeout @var{number} @findex chat-timeout in sys file @item chat-fail @var{string} @findex chat-fail in sys file @item chat-seven-bit @var{boolean} @findex chat-seven-bit in sys file @item chat-program @var{strings} @findex chat-program in sys file These commands describe a chat script to use when logging on to a remote system. This login chat script is run after any chat script defined in the @file{dial} file (@pxref{dial File}). Chat scripts are explained in @ref{Chat Scripts}. Two additional escape sequences may be used in send strings. @table @samp @item \L Send the login name, as set by the @code{call-login} command. @item \P Send the password, as set by the @code{call-password} command. @end table Three additional escape sequences may be used with the @code{chat-program} command. These are @samp{\L} and @samp{\P}, which become the login name and password, respectively, and @samp{\Z}, which becomes the name of the system of being called. The default chat script is: @example chat "" \r\c ogin:-BREAK-ogin:-BREAK-ogin: \L word: \P @end example This will send a carriage return (the @kbd{\c} suppresses the additional trailing carriage return that would otherwise be sent) and waits for the string @samp{ogin:} (which would be the last part of the @samp{login:} prompt supplied by a Unix system). If it doesn't see @samp{ogin:}, it sends a break and waits for @samp{ogin:} again. If it still doesn't see @samp{ogin:}, it sends another break and waits for @samp{ogin:} again. If it still doesn't see @samp{ogin:}, the chat script aborts and hangs up the phone. If it does see @samp{ogin:} at some point, it sends the login name (as specified by the @code{call-login} command) followed by a carriage return (since all send strings are followed by a carriage return unless @kbd{\c} is used) and waits for the string @samp{word:} (which would be the last part of the @samp{Password:} prompt supplied by a Unix system). If it sees @samp{word:}, it sends the password and a carriage return, completing the chat script. The program will then enter the handshake phase of the UUCP protocol. This chat script will work for most systems, so you will only be required to use the @code{call-login} and @code{call-password} commands. In fact, in the file-wide defaults you could set defaults of @samp{call-login *} and @samp{call-password *}; you would then just have to make an entry for each system in the call-out login file. Some systems seem to flush input after the @samp{login:} prompt, so they may need a version of this chat script with a @kbd{\d} before the @kbd{\L}. When using UUCP over TCP, some servers will not be handle the initial carriage return sent by this chat script; in this case you may have to specify the simple chat script @samp{ogin: \L word: \P}. @item call-login @var{string} @findex call-login Specify the login name to send with @kbd{\L} in the chat script. If the string is @samp{*} (e.g., @samp{call-login *}) the login name will be fetched from the call out login name and password file (@pxref{Configuration File Names}). The string may contain escape sequences as though it were an expect string in a chat script (@pxref{Chat Scripts}). There is no default. @item call-password @var{string} @findex call-password Specify the password to send with @kbd{\P} in the chat script. If the string is @samp{*} (e.g., @samp{call-password *}) the password will be fetched from the call-out login name and password file (@pxref{Configuration File Names}). The string may contain escape sequences as though it were an expect string in a chat script (@pxref{Chat Scripts}). There is no default. @end table @node Accepting a Call, Protocol Selection, Calling Out, sys File @subsection Accepting a Call @table @code @item called-login @var{strings} @findex called-login The first @var{string} specifies the login name that the system must use when calling in. If it is @samp{ANY} (e.g., @samp{called-login ANY}) any login name may be used; this is useful to override a file-wide default and to indicate that future alternates may have different login names. Case is significant. The default value is @samp{ANY}. Different alternates (@pxref{Defaults and Alternates}) may use different @code{called-login} commands, in which case the login name will be used to select which alternate is in effect; this will only work if the first alternate (before the first @code{alternate} command) uses the @code{called-login} command. Additional strings may be specified after the login name; they are a list of which systems are permitted to use this login name. If this feature is used, then normally the login name will only be given in a single @code{called-login} command. Only systems which appear on the list, or which use an explicit @code @code{called-login} command for every single system; you can achieve a similar effect by using a different system file for each permitted login name with an appropriate @code{called-login} command in the file-wide defaults. @item callback @var{boolean} @findex callback If @var{boolean} is true, then when the remote system calls @command{uucico} will hang up the connection and prepare to call it back. The default is false. @item called-chat @var{strings} @findex called-chat @item called-chat-timeout @var{number} @findex called-chat-timeout @item called-chat-fail @var{string} @findex called-chat-fail @item called-chat-seven-bit @var{boolean} @findex called-chat-seven-bit @item called-chat-program @var{strings} @findex called-chat-program These commands may be used to define a chat script (@pxref{Chat Scripts}) that is run whenever the local system is called by the system being defined. The chat script defined by the @code{chat} command (@pxref @ref{Logging In}. There is no default called chat script. If the called chat script fails, the incoming call will be aborted. @end table @node Protocol Selection, File Transfer Control, Accepting a Call, sys File @subsection Protocol Selection @table @code @item protocol @var{string} @findex protocol in sys file Specifies which protocols to use for the other system, and in which order to use them. This would not normally be used. For example, @samp{protocol tfg}. The default depends on the characteristics of the port and the dialer, as specified by the @code{seven-bit} and @code{reliable} commands. If neither the port nor the dialer use either of these commands, the default is to assume an eight-bit reliable connection. The commands @samp{seven-bit true} or @samp{reliable false} might be used in either the port or the dialer to change this. Each protocol has particular requirements that must be met before it will be considered during negotiation with the remote side. The @samp{t} and @samp{e} protocols are intended for use over TCP or some other communication path with end to end reliability, as they do no checking of the data at all. They will only be considered on a TCP port which is both reliable and eight bit. For technical details, see @ref{t Protocol}, and @ref{e Protocol}. The @samp{i} protocol is a bidirectional protocol. It requires an eight-bit connection. It will run over a half-duplex link, such as Telebit modems in PEP mode, but for efficient use of such a connection you must use the @code{half-duplex} command (@pxref{port File}). @xref{i Protocol}. The @samp{g} protocol is robust, but requires an eight-bit connection. @xref{g Protocol}. The @samp{G} protocol is the System V Release 4 version of the @samp{g} protocol. @xref{Big G Protocol}. The @samp{a} protocol is a Zmodem like protocol, contributed by Doug Evans. It requires an eight-bit connection, but unlike the @samp{g} or @samp{i} protocol it will work if certain control characters may not be transmitted. The @samp{j} protocol is a variant of the @samp. @xref{j Protocol}. The @samp. @xref{f Protocol}. The @samp{v} protocol is the @samp{g} protocol as used by the DOS program UUPC/Extended. It is provided only so that UUPC/Extended users can use it; there is no particular reason to select it. @xref{v Protocol}. The @samp{y} protocol is an efficient streaming protocol. It does error checking, but when it detects an error it immediately aborts the connection. This requires a reliable, flow controlled, eight-bit connection. In practice, it is only useful on a connection that is nearly always error-free. Unlike the @samp{t} and @samp{e} protocols, the connection need not be entirely error-free, so the @samp{y} protocol can be used on a serial port. @xref{y Protocol}. The protocols will be considered in the order shown above. This means that if neither the @code{seven-bit} nor the @code{reliable} command are used, the @samp{t} protocol will be used over a TCP connection and the @samp{i} protocol will be used over any other type of connection (subject, of course, to what is supported by the remote system; it may be assumed that all systems support the @samp{g} protocol). Note that currently specifying both @samp{seven-bit true} and @samp{reliable false} will not match any protocol. If this occurs through a combination of port and dialer specifications, you will have to use the @code{protocol} command for the system or no protocol will be selected at all (the only reasonable choice would be @samp{protocol f}). A protocol list may also be specified for a port (@pxref{port File}), but, if there is a list for the system, the list for the port is ignored. @item protocol-parameter @var{character} @var{string} @dots{} @findex protocol-parameter in sys file @var{character} is a single character specifying a protocol. The remaining strings are a command specific to that protocol which will be executed if that protocol is used. A typical command is something like @samp{window 7}. The particular commands are protocol specific. The @samp{i} protocol supports the following commands, all of which take numeric arguments: @table @code @item window The window size to request the remote system to use. This must be between 1 and 16 inclusive. The default is 16. @item packet-size The packet size to request the remote system to use. This must be between 1 and 4095 inclusive. The default is 1024. @item remote-packet-size If this is between 1 and 4095 inclusive, the packet size requested by the remote system is ignored, and this is used instead. The default is 0, which means that the remote system's request is honored. @item sync-timeout The length of time, in seconds, to wait for a SYNC packet from the remote system. SYNC packets are exchanged when the protocol is started. The default is 10. @item sync-retries The number of times to retry sending a SYNC packet before giving up. The default is 6. @item timeout The length of time, in seconds, to wait for an incoming packet before sending a negative acknowledgement. The default is 10. @item retries The number of times to retry sending a packet or a negative acknowledgement before giving up and closing the connection. The default is 6. @item errors The maximum number of errors to permit before closing ack-frequency The number of packets to receive before sending an acknowledgement. The default is half the requested window size, which should provide good performance in most cases. @end table The @samp{g}, @samp{G} and @samp{v} protocols support the following commands, all of which take numeric arguments, except @code{short-packets} which takes a boolean argument: @table @code @item window The window size to request the remote system to use. This must be between 1 and 7 inclusive. The default is 7. @item packet-size The packet size to request the remote system to use. This must be a power of 2 between 32 and 4096 inclusive. The default is 64 for the @samp{g} and @samp{G} protocols and 1024 for the @samp. @item startup-retries The number of times to retry the initialization sequence. The default is 8. @item init-retries The number of times to retry one phase of the initialization sequence (there are three phases). The default is 4. @item init-timeout The timeout in seconds for one phase of the initialization sequence. The default is 10. @item retries The number of times to retry sending either a data packet or a request for the next packet. The default is 6. @item timeout The timeout in seconds when waiting for either a data packet or an acknowledgement. The default is 10. @item garbage The number of unrecognized bytes to permit before dropping the connection. This must be larger than the packet size. The default is 10000. @item errors The number of errors (malformed packets, out of order packets, bad checksums, or packets rejected by the remote system) to permit before dropping. @item remote-packet-size If this is between 32 and 4096 inclusive the packet size requested by the remote system is ignored and this is used instead. There is probably no good reason to use this. The default is 0, which means that the remote system's request is honored. @item short-packets If this is true, then the code will optimize by sending shorter packets when there is less data to send. This confuses some UUCP packages, such as System V Release 4 (when using the @samp{G} protocol) and Waffle; when connecting to such a package, this parameter must be set to false. The default is true for the @samp{g} and @samp{v} protocols and false for the @samp{G} protocol. @end table The @samp{a} protocol is a Zmodem like protocol contributed by Doug Evans. It supports the following commands, all of which take numeric arguments except for @code{escape-control}, which takes a boolean argument: @table @code @item timeout Number of seconds to wait for a packet to arrive. The default is 10. @item retries The number of times to retry sending a packet. The default is 10. @item startup-retries The number of times to retry sending the initialization packet. The default is 4. @item garbage The number of garbage characters to accept before closing the connection. The default is 2400. @item send-window The number of characters that may be sent before waiting for an acknowledgement. The default is 1024. @item escape-control Whether to escape control characters. If this is true, the protocol may be used over a connection which does not transmit certain control characters, such as @code{XON} or @code{XOFF}. The connection must still transmit eight bit characters other than control characters. The default is false. @end table The @samp{j} protocol can be used over an eight bit connection that will not transmit certain control characters. It accepts the same protocol parameters that the @samp{i} protocol accepts, as well as one more: @table @code @item avoid A list of characters to avoid. This is a string which is interpreted as an escape sequence (@pxref{Chat Scripts}). The protocol does not have a way to avoid printable ASCII characters (byte values from 32 to 126, inclusive); only ASCII control characters and eight-bit characters may be avoided. The default value is @samp{\021\023}; these are the characters @code{XON} and @code{XOFF}, which many connections use for flow control. If the package is configured to use @code{HAVE_BSD_TTY}, then on some versions of Unix you may have to avoid @samp{\377} as well, due to the way some implementations of the BSD terminal driver handle signals. @end table The @samp{f} protocol is intended for use with error-correcting modems only; it checksums each file as a whole, so any error causes the entire file to be retransmitted. It supports the following commands, both of which take numeric arguments: @table @code @item timeout The timeout in seconds before giving up. The default is 120. @item retries How many times to retry sending a file. The default is 2. @end table The @samp{t} and @samp{e} protocols are intended for use over TCP or some other communication path with end to end reliability, as they do no checking of the data at all. They both support a single command, which takes a numeric argument: @table @code @item timeout The timeout in seconds before giving up. The default is 120. @end table The @samp{y} protocol is a streaming protocol contributed by Jorge Cwik. It supports the following commands, both of which take numeric arguments: @table @code @item timeout The timeout in seconds when waiting for a packet. The default is 60. @item packet-size The packet size to use. The default is 1024. @end table The protocol parameters are reset to their default values after each call. @end table @node File Transfer Control, Miscellaneous (sys), Protocol Selection, sys File @subsection File Transfer Control @table @code @item send-request @var{boolean} @findex send-request The @var{boolean} determines whether the remote system is permitted to request files from the local system. The default is yes. @item receive-request @var{boolean} @findex receive-request The @var{boolean} determines whether the remote system is permitted to send files to the local system. The default is yes. @item request @var{boolean} @findex request A shorthand command, equivalent to specifying both @samp{send-request @var{boolean}} and @samp{receive-request @var{boolean}}. @item call-transfer @var{boolean} @findex call-transfer The @var{boolean} is checked when the local system places the call. It determines whether the local system may do file transfers queued up for the remote system. The default is yes. @item called-transfer @var{boolean} @findex called-transfer The @var{boolean} is checked when the remote system calls in. It determines whether the local system may do file transfers queued up for the remote system. The default is yes. @item transfer @var{boolean} @findex transfer A shorthand command, equivalent to specifying both @samp{call-transfer @var{boolean}} and @samp{called-transfer @var{boolean}}. @item call-local-size @var{number} @var{string} @findex call-local-size The @var{string} is a time string (@pxref{Time Strings}). The @var. @item call-remote-size @var{number} @var{string} @findex call-remote-size Specify the size in bytes of the largest file that should be transferred at a given time by remote request, when the local system placed the call. This command may appear multiple times in a single alternate. If this command does not appear, there are no size restrictions. @item called-local-size @var{number} @var{string} @findex called-local-size Specify the size in bytes of the largest file that should be transferred at a given time by local request, when the remote system placed the call. This command may appear multiple times in a single alternate. If this command does not appear, there are no size restrictions. @item called-remote-size @var{number} @var{string} @findex called-remote-size Specify the size in bytes of the largest file that should be transferred at a given time by remote request, when the remote system placed the call. This command may appear multiple times in a single alternate. If this command does not appear, there are no size restrictions. @item local-send @var{strings} @findex local-send Specifies that files in the directories named by the @var{strings} may be sent to the remote system when requested locally (using @command{uucp} or @command{uux}). The directories in the list should be separated by whitespace. A @samp{~} may be used for the public directory. On a Unix system, this is typically @file{/usr/spool/uucppublic}; the public directory may be set with the @code{pubdir} command. Here is an example of @code{local-send}: @example local-send ~ /usr/spool/ftp/pub @end example Listing a directory allows all files within the directory and all subdirectories to be sent. Directories may be excluded by preceding them with an exclamation point. For example: @example local-send /usr/ftp !/usr/ftp/private ~ @end example @noindent means that all files in @file{/usr/ftp} or the public directory may be sent, except those files in @file{). @item remote-send @var{strings} @findex remote-send Specifies that files in the named directories may be sent to the remote system when requested by the remote system. The default is @samp{~}. @item local-receive @var{strings} @findex local-receive Specifies that files may be received into the named directories when requested by a local user. The default is @samp{~}. @item remote-receive @var{strings} @findex remote-receive Specifies that files may be received into the named directories when requested by the remote system. The default is @samp{~}. On Unix, the remote system may only request that files be received into directories that are writeable by the world, regardless of how this is set. @item forward-to @var{strings} @findex forward-to Specifies a list of systems to which files may be forwarded. The remote system may forward files through the local system on to any of the systems in this list. The string @samp{ANY} may be used to permit forwarding to any system. The default is to not permit forwarding to other systems. Note that if the remote system is permitted to execute the @command{uucp} command, it effectively has the ability to forward to any system. @item forward-from @var{strings} @findex forward-from Specifies a list of systems from which files may be forwarded. The remote system may request files via the local system from any of the systems in this list. The string @samp{ANY} may be used to permit forwarding to any system. The default is to not permit forwarding from other systems. Note that if a remote system is permitted to execute the @command{uucp} command, it effectively has the ability to request files from any system. @item forward @var{strings} @findex forward Equivalent to specifying both @samp{forward-to @var{strings}} and @samp{forward-from @var{strings}}. This would normally be used rather than either of the more specific commands. @item max-file-time @var{number} @findex max-file-time The maximum amount of time which will be sent sending any one file if there are other files to send. This will only be effective when using a protocol which permits interrupting one file send to send another file. This is true of the @samp{i} and @samp{j} protocols. The default is to have no maximum. @end table @node Miscellaneous (sys), Default sys File Values, File Transfer Control, sys File @subsection Miscellaneous sys File Commands @table @code @item sequence @var{boolean} @findex sequence If @var{boolean} is true, then conversation sequencing is automatically used for the remote system, so that if somebody manages to spoof as the remote system, it will be detected the next time the remote system actually calls. This is false by default. @item command-path @var{strings} @findex command-path Specifies the path (a list of whitespace separated directories) to be searched to locate commands to execute. This is only used for commands requested by @command{uux}, not for chat programs. The default is from @file{policy.h}. @item commands @var{strings} @findex commands The list of commands which the remote system is permitted to execute locally. For example: @samp{commands rnews rmail}. If the value is @samp{ALL} (case significant), all commands may be executed. The default is @samp{rnews rmail}. @item free-space @var{number} @findex free-space, @command{uucico} will periodically check the amount of free space. If it drops below the amount given by the @code{free-space} command, the file transfer will be aborted. The default amount of space to leave free is from @file{policy.h}. This file space checking may not work on all systems. @item pubdir @var{string} @findex pubdir in sys file Specifies the public directory that is used when @samp{~} is specifed in a file transfer or a list of directories. This essentially overrides the public directory specified in the main configuration file for this system only. The default is the public directory specified in the main configuration file (which defaults to a value from @file{policy.h}). @item debug @var{string} @dots{} @findex debug in sys file Set additional debugging for calls to or from the system. This may be used to debug a connection with a specific system. It is particularly useful when debugging incoming calls, since debugging information will be generated whenever the call comes in. See the @code{debug} command in the main configuration file (@pxref{Debugging Levels}) for more details. The debugging information specified here is in addition to that specified in the main configuration file or on the command line. @item max-remote-debug @var{string} @dots{} @findex max-remote-debug When the system calls in, it may request that the debugging level be set to a certain value. The @code{max-remote-debug} command may be used to put a limit on the debugging level which the system may request, to avoid filling up the disk with debugging information. Only the debugging types named in the @code{max-remote-debug} command may be turned on by the remote system. To prohibit any debugging, use @samp{max-remote-debug none}. @end table @node Default sys File Values, , Miscellaneous (sys), sys File @subsection Default sys File Values The following are used as default values for all systems; they can be considered as appearing before the start of the file. @example time Never chat "" \r\c ogin:-BREAK-ogin:-BREAK-ogin: \L word: \P chat-timeout 10 callback n sequence n request y transfer y local-send / remote-send ~ local-receive ~ remove-receive ~ command-path [ from @file{policy.h} ] commands rnews rmail max-remote-debug abnormal,chat,handshake @end example @node port File, dial File, sys File, Configuration Files @section The Port Configuration File @cindex port file @cindex port configuration file @cindex configuration file (port) The port files may be used to name and describe ports. By default there is a single port file, named @file{port} in the directory @var{newconfigdir}. This may be overridden by the @code{portfile} command in the main configuration file; see @ref{Configuration File Names}. Any commands in a port file before the first @code{port} command specify defaults for all ports in the file; however, since the @code{type} command must appear before all other commands for a port, the defaults are only useful if all ports in the file are of the same type (this restriction may be lifted in a later version). All commands after a @code{port} command up to the next @code. @table @code @item port @var{string} @findex port in port file Introduces and names a port. @item type @var{string} @findex type Define the type of port. The default is @samp{modem}. If this command appears, it must immediately follow the @code{port} command. The type defines what commands are subsequently allowed. Currently the types are: @table @samp @item modem For a modem hookup. @item stdin For a connection through standard input and standard output, as when @command{uucico} is run as a login shell. @item direct For a direct connection to another system. @item tcp For a connection using TCP. @item tli For a connection using TLI. @item pipe For a connection through a pipe running another program. @end table @item protocol @var{string} @findex protocol in port file Specify a list of protocols to use for this port. This is just like the corresponding command for a system (@pxref{Protocol Selection}). A protocol list for a system takes precedence over a list for a port. @item protocol-parameter @var{character} @var{strings} [ any type ] @findex protocol-parameter in port file The same command as the @code{protocol-parameter} command used for systems (@pxref{Protocol Selection}). This one takes precedence. @item seven-bit @var{boolean} [ any type ] @findex seven-bit in port file This is only used during protocol negotiation; if the argument is true, it forces the selection of a protocol which works across a seven-bit link. It does not prevent eight bit characters from being transmitted. The default is false. @item reliable @var{boolean} [ any type ] @findex reliable in port file This is only used during protocol negotiation; if the argument is false, it forces the selection of a protocol which works across an unreliable communication link. The default is true. It would be more common to specify this for a dialer rather than a port. @item half-duplex @var{boolean} [ any type ] @findex half-duplex in port file If the argument is true, it means that the port only supports half-duplex connections. This only affects bidirectional protocols, and causes them to not do bidirectional transfers. @item device @var{string} [ modem, direct and tli only ] @findex device Names the device associated with this port. If the device is not named, the port name is taken as the device. Device names are system dependent. On Unix, a modem or direct connection might be something like @file{/dev/ttyd0}; a TLI port might be @file{/dev/inet/tcp}. @itemx speed @var{number} [modem and direct only ] @findex speed in port file @item baud @var{number} [ modem and direct only ] @findex baud in port file. @itemx speed-range @var{number} @var{number} [ modem only ] @findex speed-range @item baud-range @var{number} @var{number} [ modem only ] @findex baud-range Specify a range of speeds this port can run at. The first number is the minimum speed, the second number is the maximum speed. These numbers will be used when matching a system which specifies a desired speed. The simple @code{speed} (or @code{baud}) command is still used to determine the speed to run at if the system does not specify a speed. For example, the command @samp{speed-range 300 19200} means that the port will match any system which uses a speed from 300 to 19200 baud (and will use the speed specified by the system); this could be combined with @samp{speed 2400}, which means that when this port is used with a system that does not specify a speed, the port will be used at 2400 baud. @item carrier @var{boolean} [ modem and direct only ] @findex carrier in port file. @item hardflow @var{boolean} [ modem and direct only ] @findex hardflow The argument indicates whether the port supports hardware flow control. If it does not, hardware flow control will not be turned on for this port. The default is true. Hardware flow control is only supported on some systems. @item dial-device @var{string} [ modem only ] @findex dial-device Dialing instructions should be output to the named device, rather than to the normal port device. The default is to output to the normal port device. @item dialer @var{string} [ modem only ] @findex dialer in port file Name a dialer to use. The information is looked up in the dial file. There is no default. Some sort of dialer information must be specified to call out on a modem. @item dialer @var{string} @dots{} [ modem only ] If more than one string follows the @code{dialer} command, the strings are treated as a command that might appear in the dial file (@pxref{dial. @item dialer-sequence @var{strings} [ modem or tcp or tli only ] @findex dialer-sequence @code{phone} command in the system file is used as the final token. The token is what is used for @kbd{\D} or @kbd{\T} in the dialer chat script. If the token in this string is @kbd{\D}, the system phone number will be used; if it is @kbd{\T}, the system phone number will be used after undergoing dialcodes translation. A missing final token is taken as @kbd{\D}. This command currently does not work if @code{dial-device} is specified; to handle this correctly will require a more systematic notion of chat scripts. Moreover, the @code{complete} and @code{abort} chat scripts, the protocol parameters, and the @code{carrier} and @code @samp{TLI} or @samp{TLIS} the first token is used as the address to connect to. If the first dialer is something else, or if there is no token, the address given by the @code{address} command is used (@pxref{Placing the Call}). Escape sequences in the address are expanded as they are for chat script expect strings (@pxref{Chat Scripts}). The different between @samp{TLI} and @samp{TLIS} is that the latter implies the command @samp{stream true}. These contortions are all for HDB compatibility. Any subsequent dialers are treated as they are for a TCP port. @item lockname @var{string} [ modem and direct only ] @findex lockname Give the name to use when locking this port. On Unix, this is the name of the file that will be created in the lock directory. It is used as is, so on Unix it should generally start with @samp{LCK..}. For example, if a single port were named both @file{/dev/ttycu0} and @file{/dev/tty0} (perhaps with different characteristics keyed on the minor device number), then the command @code{lockname LCK..ttycu0} could be used to force the latter to use the same lock file name as the former. @item service @var{string} [ tcp only ] @findex service Name the TCP port number to use. This may be a number. If not, it will be looked up in @file{/etc/services}. If this is not specified, the string @samp{uucp} is looked up in @file{/etc/services}. If it is not found, port number 540 (the standard UUCP-over-TCP port number) will be used. @item version @var{string} [ tcp only ] @findex version Specify the IP version number to use. The default is @samp{0}, which permits any version. The other possible choices are @samp{4}, which requires @samp{IPv4}, or @samp{6}, which requires @samp{IPv6}. Normally it is not necessary to use this command, but in some cases, as @samp{IPv6} is rolled out across the Internet, it may be necessary to require UUCP to use a particular type of connection. @item push @var{strings} [ tli only ] @findex push Give a list of modules to push on to the TLI stream. @item stream @var{boolean} [ tli only ] @findex stream If this is true, and the @code{push} command was not used, the @samp{tirdwr} module is pushed on to the TLI stream. @item server-address @var{string} [ tli only ] @findex server-address Give the address to use when running as a TLI server. Escape sequences in the address are expanded as they are for chat script expect strings (@pxref{Chat Scripts}). The string is passed directly to the TLI @code @samp{SSPPIIII}, where @samp{SS} is the service number (for TCP, this is 2), @samp{PP} is the TCP port number, and @samp{IIII} is the Internet address. For example, to accept a connection from on port 540 from any interface, use @samp). @item command @var{strings} [ pipe only ] @findex command Give the command, with arguments, to run when using a pipe port type. When a port of this type is used, the command is executed and @command{uucico} communicates with it over a pipe. This permits @command{uucico} or @command{cu} to communicate with another system which can only be reached through some unusual means. A sample use might be @samp{command /bin/rlogin -E -8 -l @var{login} @var{system}}. The command is run with the full privileges of UUCP; it is responsible for maintaining security. @end table @node dial File, UUCP Over TCP, port File, Configuration Files @section The Dialer Configuration File @cindex dial file @cindex dialer configuration file @cindex configuration file (dial) The dialer configuration files define dialers. By default there is a single dialer file, named @file{dial} in the directory @var{newconfigdir}. This may be overridden by the @code{dialfile} command in the main configuration file; see @ref{Configuration File Names}. Any commands in the file before the first @code{dialer} command specify defaults for all the dialers in the file. All commands after a @code{dialer} command up to the next @code{dialer} command are associated with the named dialer. @table @code @item dialer @var{string} @findex dialer in dial file Introduces and names a dialer. @item chat @var{strings} @findex chat in dial file @item chat-timeout @var{number} @findex chat-timeout in dial file @item chat-fail @var{string} @findex chat-fail in dial file @item chat-seven-bit @var{boolean} @findex chat-seven-bit in dial file @item chat-program @var{strings} @findex chat-program in dial file Specify a chat script to be used to dial the phone. This chat script is used before the login chat script in the @file{sys} file, if any (@pxref{Logging In}). For full details on chat scripts, see @ref{Chat Scripts}. The @command{uucico} daemon will sleep for one second between attempts to dial out on a modem. If your modem requires a longer wait period, you must start your chat script with delays (@samp{\d} in a send string). The chat script will be read from and sent to the port specified by the @code{dial-device} command for the port, if there is one. The following escape addition escape sequences may appear in send strings: @table @kbd @item \D send phone number without dialcode translation @item \T send phone number with dialcode translation @end table See the description of the dialcodes file (@pxref{Configuration File Names}) for a description of dialcode translation. If both the port and the dialer support carrier, as set by the @code{carrier} command in the port file and the @code{carrier} command in the dialer file, then every chat script implicitly begins with @kbd{\M} and ends with @kbd{\m}. There is no default chat script for dialers. The following additional escape sequences may be used in @code{chat-program}: @table @kbd @item \D phone number without dialcode translation @item \T phone number with dialcode translation @end table If the program changes the port in any way (e.g., sets parity) the changes will be preserved during protocol negotiation, but once the protocol is selected it will change the port settings. @item dialtone @var{string} @findex dialtone A string to output when dialing the phone number which causes the modem to wait for a secondary dial tone. This is used to translate the @kbd{=} character in a phone number. The default is a comma. @item pause @var{string} @findex pause A string to output when dialing the phone number which causes the modem to wait for 1 second. This is used to translate the @kbd{-} character in a phone number. The default is a comma. @item carrier @var{boolean} @findex carrier in dial file An argument of true means that the dialer supports the modem carrier signal. After the phone number is dialed, @command{uucico} will require that carrier be on. One some systems, it will be able to wait for it. If the argument is false, carrier will not be required. The default is true. @item carrier-wait @var{number} @findex carrier-wait If the port is supposed to wait for carrier, this may be used to indicate how many seconds to wait. The default is 60 seconds. Only some systems support waiting for carrier. @item dtr-toggle @var{boolean} @var{boolean} @findex dtr-toggle If the first argument is true, then DTR is toggled before using the modem. This is only supported on some systems and some ports. The second @var{boolean} need not be present; if it is, and it is true, the program will sleep for 1 second after toggling DTR. The default is to not toggle DTR. @need 500 @item complete-chat @var{strings} @findex complete-chat @item complete-chat-timeout @var{number} @findex complete-chat-timeout @item complete-chat-fail @var{string} @findex complete-chat-fail @item complete-chat-seven-bit @var{boolean} @findex complete-chat-seven-bit @item complete-chat-program @var{strings} @findex complete-chat-program These commands define a chat script (@pxref{Chat Scripts}) which is run when a call is finished normally. This allows the modem to be reset. There is no default. No additional escape sequences may be used. @item complete @var{string} @findex complete This is a simple use of @code{complete-chat}. It is equivalent to @code{complete-chat "" @var{string}}; this has the effect of sending @var{string} to the modem when a call finishes normally. @item abort-chat @var{strings} @findex abort-chat @item abort-chat-timeout @var{number} @findex abort-chat-timeout @item abort-chat-fail @var{string} @findex abort-chat-fail @item abort-chat-seven-bit @var{boolean} @findex abort-chat-seven-bit @item abort-chat-program @var{strings} @findex abort-chat-program These commands define a chat script (@pxref{Chat Scripts}) to be run when a call is aborted. They may be used to interrupt and reset the modem. There is no default. No additional escape sequences may be used. @item abort @var{string} @findex abort This is a simple use of @code{abort-chat}. It is equivalent to @code{abort-chat "" @var{string}}; this has the effect of sending @var{string} to the modem when a call is aborted. @item protocol-parameter @var{character} @var{strings} @findex protocol-parameter in dial file Set protocol parameters, just like the @code{protocol-parameter} command in the system configuration file or the port configuration file; see @ref{Protocol Selection}. These parameters take precedence, then those for the port, then those for the system. @item seven-bit @var{boolean} @findex seven-bit in dial file. @item reliable @var{boolean} @findex reliable in dial file This is only used during protocol negotiation; if it is false, it forces selection of a protocol which works across an unreliable communication link. The default is true. @item half-duplex @var{boolean} [ any type ] @findex half-duplex in dial file If the argument is true, it means that the dialer only supports half-duplex connections. This only affects bidirectional protocols, and causes them to not do bidirectional transfers. @end table @node UUCP Over TCP, Security, dial File, Configuration Files @section UUCP Over TCP If your system has a Berkeley style socket library, or a System V style TLI interface library, you can compile the code to permit making connections over TCP. Specifying that a system should be reached via TCP is easy, but nonobvious. @menu * TCP Client:: Connecting to Another System Over TCP * TCP Server:: Running a TCP Server @end menu @node TCP Client, TCP Server, UUCP Over TCP, UUCP Over TCP @subsection Connecting to Another System Over TCP If you are using the new style configuration files (@pxref{Configuration Files}), add the line @samp{port type tcp} to the entry in the @file{sys} file. By default UUCP will get the port number by looking up @samp{uucp} in @file{/etc/services}; if the @samp{uucp} service is not defined, port 540 will be used. You can set the port number to use with the command @samp{port service @var{xxx}}, where @var{xxx} can be either a number or a name to look up in @file{/etc/services}. You can specify the address of the remote host with @samp{address @var @file{L.sys}: @example @var{sys} Any TCP uucp @var{host}.@var{domain} chat-script @end example. If you are using HDB configuration files, add a line like this to Systems: @example @var{sys} Any TCP - @var{host}.@var{domain} chat-script @end example and a line like this to @file{Devices}: @example TCP uucp - - @end example You only need one line in @file{Devices} regardless of how many systems you contact over TCP.. @node TCP Server, , TCP Client, UUCP Over TCP @subsection Running a TCP Server The @command{uucico} daemon may be run as a TCP server. To use the default port number, which is a reserved port, @command{uucico} must be invoked by the superuser (or it must be set user ID to the superuser, but I don't recommend doing that). You must define a port, either using the port file (@pxref{port File}), if you are using the new configuration method, or with an entry in @file{Devices} if you are using HDB; there is no way to define a port using V2. If you are using HDB the port must be named @samp{TCP}; a line as shown above will suffice. You can then start @command{uucico} as @samp{uucico -p TCP} (after the @option{-p}, name the port; in HDB it must be @samp{TCP}). This will wait for incoming connections, and fork off a child for each one. Each connection will be prompted with @samp{login:} and @samp{Password:}; the results will be checked against the UUCP (not the system) password file (@pxref{Configuration File Names}). Another way to run a UUCP TCP server is to use the BSD @command{uucpd} program. Yet another way to run a UUCP TCP server is to use @command{inetd}. Arrange for @command{inetd} to start up @command{uucico} with the @option{-l} switch. This will cause @command{uucico} to prompt with @samp{login:} and @samp{Password:} and check the results against the UUCP (not the system) password file (you may want to also use the @option{-D} switch to avoid a fork, which in this case is unnecessary). @node Security, , UUCP Over TCP, Configuration Files @section Security @code{uucp}) into a separate group; the use of this is explained in the following paragraphs, which refer to this separate group as @code{uucp-group}. When the @command{uucp} program is invoked to copy a file to a remote system, it will, by default, copy the file into the UUCP spool directory. When the @command{uux} program is used, the @option{ @code{uucp-group} and set the mode to permit group write access. This will allow the file be requested without permitting it to be viewed by any other user. There is no provision for security for @command{uucp} requests (as opposed to @command (@pxref{Configuration File Names}) and let @command{uucico} do its own login prompting. For example, to let remote sites log in on a port named @samp{entry} in the port file (@pxref{port File}), you might invoke @samp{uucico -e -p entry}. This would cause @command{uucico} to enter an endless loop of login prompts and daemon executions. The advantage of this approach is that even if remote users break into the system by guessing or learning the password, they will only be able to do whatever @command @code{remote-send} and @code{remote-receive} commands to control the directories the remote UUCP can access. You can use the @code @command{rmail} and @command{rnews}, which will suffice for most systems. If different remote systems call in and they must be granted different privileges (perhaps some systems are within the same organization and some are not) then the @code{called-login} command should be used for each system to require that they use different login names. Otherwise, it would be simple for a remote system to use the @code{myname} command and pretend to be a different system. The @code{sequence} command can be used to detect when one system pretended to be another, but, since the sequence numbers must be reset manually after a failed handshake, this can sometimes be more trouble than it's worth. @c START-OF-FAQ @ignore This chapter is used to generate the comp.mail.uucp UUCP Internals FAQ, as well as being part of the Taylor UUCP manual. Text that should appear only in the manual is bracketed by ifclear faq. Text that should appear only in the FAQ is bracketed by ifset faq. @end ignore @ifset faq @paragraphindent asis @format Subject: UUCP Internals Frequently Asked Questions Newsgroups: comp.mail.uucp,comp.answers,news.answers Followup-To: comp.mail.uucp Reply-To: ian@@airs.com (Ian Lance Taylor) Keywords: UUCP, protocol, FAQ Approved: news-answers-request@@MIT.Edu Archive-name: uucp-internals Version: $Revision: 1.132 $ Last-modified: $Date: 2002/03/07 05:56:27 $ @end format @end ifset @node Protocols, Hacking, Configuration Files, Top @chapter UUCP Protocol Internals @ifclear faq @samp{comp.mail.uucp}, @samp{news.answers}, and @samp{comp.answers}. The posting is available from any @samp{news.answers} archive site, such as @samp{rtfm.mit.edu}. If you plan to use this information to write a UUCP program, please make sure you get the most recent version of the posting, in case there have been any corrections. @end ifclear @ifset faq Recent changes: @itemize @bullet @item Conversion to Texinfo format. @item Description of the @samp{E} command. @item Description of optional number following @samp{-N} and @samp{ROKN} in UUCP protocol startup. @item Detailed description of the @samp{y} protocol. @item Mention the name uuxqt uses for lock files. @end itemize This article was written by Ian Lance Taylor @samp{ @samp @samp{comp.mail.uucp}, please send mail to the poster referring her or him to this FAQ. There is no reason to post a followup, as most of us know the answer already. @end ifset @menu * @end menu @ifset faq @format UUCP Protocol Sources Alarm in Debugging Output UUCP Grades UUCP Lock Files Execution File Format UUCP Protocol UUCP @samp{g} Protocol UUCP @samp{f} Protocol UUCP @samp{t} Protocol UUCP @samp{e} Protocol UUCP @samp{G} Protocol UUCP @samp{i} Protocol UUCP @samp{j} Protocol UUCP @samp{x} Protocol UUCP @samp{y} Protocol UUCP @samp{d} Protocol UUCP @samp{h} Protocol UUCP @samp{v} Protocol Thanks ---------------------------------------------------------------------- From: UUCP Protocol Sources Subject: UUCP Protocol Sources @end format @end ifset @node UUCP Protocol Sources, UUCP Grades, Protocols, Protocols @section UUCP Protocol Sources @quotation ``Unix-to-Unix Copy Program,'' said PDP-1. ``You will never find a more wretched hive of bugs and flamers. We must be cautious.'' @flushright ---DECWars @end flushright @end quotation I took a lot of the information from Jamie E. Hanrahan's paper in the Fall 1990 DECUS Symposium, and from @cite{Managing UUCP and Usenet} by Tim O'Reilly and Grace Todino (with contributions by several other people). The latter includes most of the former, and is published by @example O'Reilly & Associates, Inc. 103 Morris Street, Suite A Sebastopol, CA 95472 @end example It is currently in its tenth edition. The ISBN number is @samp{0-937175-93-5}. Some information is originally due to a Usenet article by Chuck Wegrzyn. The information on execution files comes partially from Peter Honeyman. The information on the @samp{g} protocol comes partially from a paper by G.L.@: Chesson of Bell Laboratories, partially from Jamie E. Hanrahan's paper, and partially from source code by John Gilmore. The information on the @samp{f} protocol comes from the source code by Piet Berteema. The information on the @samp{t} protocol comes from the source code by Rick Adams. The information on the @samp{e} protocol comes from a Usenet article by Matthias Urlichs. The information on the @samp: @cite{The Whole Internet}, by Ed Krol, and @cite{Zen and the Art of the Internet}, by Brendan P. Kehoe. Good technical discussions of networking issues can be found in @cite{Internetworking with TCP/IP}, by Douglas E. Comer and David L. Stevens and in @cite{Design and Validation of Computer Protocols} by Gerard J. Holzmann. @ifset faq @c Note that this section is only in the FAQ, since it does not fit in @c here in the manual. @format ------------------------------ From: Alarm in Debugging Output Subject: Alarm in Debugging Output Alarm in Debugging Output ========================= @end format The debugging output of many versions of UUCP will include messages like @samp{alarm 1} or @samp{pkcget: alarm 1}. Taylor UUCP does not use the word @samp{alarm}, but will instead log messages like @s. @format ------------------------------ From: UUCP Grades Subject: UUCP Grades @end format @end ifset @node UUCP Grades, UUCP Lock Files, UUCP Protocol Sources, Protocols @section UUCP Grades @cindex grades implementation Modern UUCP packages support a priority grade for each command. The grades generally range from @kbd{A} (the highest) to @kbd{Z} followed by @kbd{a} to @kbd{z}. Some UUCP packages (including Taylor UUCP) also support @kbd{0} to @kbd{9} before @kbd{A}. Some UUCP packages may permit any ASCII character as a grade. On Unix, these grades are encoded in the name of the command file created by @command{uucp} or @command{uux}. A command file name generally has the form @file{C.nnnngssss} where @samp{nnnn} is the remote system name for which the command is queued, @samp{g} is a single character grade, and @samp{ssss} is a four character sequence number. For example, a command file created for the system @samp{airs} at grade @samp{Z} might be named @file. @ifclear faq Taylor UUCP uses any alphanumeric character. @end ifclear @samp{D}, and news to grade @samp{N}, except that when the grade of incoming mail can be determined, that grade is preserved if the mail is forwarded to another system. The default grades may be changed by editing the @file{LIB/MAILRC} file for mail, or the @file{UUPLUS.CFG} file for news. UUPC/extended for DOS, OS/2 and Windows NT handles mail at grade @samp{C}, news at grade @samp{d}, and file transfers at grade @samp{n}. The UUPC/extended @command{UUCP} and @command @file{Systems} (or @file{L.sys}) file like this: @example airs Any/Z,Any2305-0855 ... @end example This allows grades @samp @code{timegrade} and @code{call-timegrade} commands to achieve the same effect. @ifclear faq @xref{When to Call}. @end ifclear It supports the above format when reading @file{Systems} or @file{L.sys}. UUPC/extended provides the @code{symmetricgrades} option to announce the current grade in effect when calling the remote system. UUPlus allows specification of the highest grade accepted on a per-call basis with the @option{-g} option in @command{UUCICO}. This sort of grade restriction is most useful if you know what grades are being used at the remote site. The default grades used depend on the UUCP package. Generally @command{uucp} and @command{uux} have different defaults. A particular grade can be specified with the @option{-g} option to @command{uucp} or @command{uux}. For example, to request execution of @command{rnews} on @samp{airs} with grade @samp{d}, you might use something like @example uux -gd - airs!rnews < article @end example Uunet queues up mail at grade @samp{C}, but increases the grade based on the size. News is queued at grade @samp{d}, and file transfers at grade @samp{n}. The example above would allow mail (below some large size) to be received at any time, but would only permit news to be transferred at night. @ifset faq @format ------------------------------ From: UUCP Lock Files Subject: UUCP Lock Files @end format @end ifset @node UUCP Lock Files, Execution File Format, UUCP Grades, Protocols @section UUCP Lock Files @cindex lock files This discussion applies only to Unix. I have no idea how UUCP locks ports on other systems. UUCP creates files to lock serial ports and systems. On most, if not all, systems, these same lock files are also used by @command{cu} to coordinate access to serial ports. On some systems @command{getty} also uses these lock files, often under the name @command, @file{/usr/spool/uucp}. HDB UUCP generally puts the lock files in a directory of their own, usually @file{/usr/spool/locks} or @file{ (@command{uucp}, @command{cu}, or @command{getty/uugetty}). I have also seen a third type of UUCP lock file which does not contain the process ID at all. The name of the lock file is traditionally @file{LCK..} followed by the base name of the device. For example, to lock @file{/dev/ttyd0} the file @file{LCK..ttyd0} would be created. On SCO Unix, the last letter of the lock file name is always forced to lower case even if the device name ends with an upper case letter. System V Release 4 UUCP names the lock file using the major and minor device numbers rather than the device name. The file is named @file{LK.@var{XXX}.@var{YYY}.@var{ZZZ}}, where @var{XXX}, @var{YYY} and @var{ZZZ} are all three digit decimal numbers. @var{XXX} is the major device number of the device holding the directory holding the device file (e.g., @file{/dev}). @var{YYY} is the major device number of the device file itself. @var{ZZZ} is the minor device number of the device file itself. If @code{s} holds the result of passing the device to the stat system call (e.g., @code{stat ("/dev/ttyd0", &s)}), the following line of C code will print out the corresponding lock file name: @example printf ("LK.%03d.%03d.%03d", major (s.st_dev), major (s.st_rdev), minor (s.st_rdev)); @end example The advantage of this system is that even if there are several links to the same device, they will all use the same lock file name. When two or more instances of @command @file{X.*} file, except that the initial @samp{X} is changed to an @samp{L}. The lock file holds the process ID as described above. @ifset faq @format ------------------------------ From: Execution File Format Subject: Execution File Format @end format @end ifset @node Execution File Format, UUCP Protocol, UUCP Lock Files, Protocols @section Execution File Format @cindex execution file format @cindex @file{X.*} file format UUCP @file{X.*} files control program execution. They are created by @command{uux}. They are transferred between systems just like any other file. The @command{uuxqt} daemon reads them to figure out how to execute the job requested by @command{uux}. An @file{X.*} file is simply a text file. The first character of each line is a command, and the remainder of the line supplies arguments. The following commands are defined: @table @samp @item C command This gives the command to execute, including the program and all arguments. For example, @samp{rmail ian@@airs.com}. @item U user system This names the user who requested the command, and the system from which the request came. @item I standard-input This names the file from which standard input is taken. If no standard input file is given, the standard input will probably be attached to @file{/dev/null}. If the standard input file is not from the system on which the execution is to occur, it will also appear in an @samp{F} command. @item O standard-output [system] This names the standard output file. The optional second argument names the system to which the file should be sent. If there is no second argument, the file should be created on the executing system. @item F required-file [filename-to-use] The @samp{F} command can appear multiple times. Each @samp{F} command names a file which must exist before the execution can proceed. This will usually be a file which is transferred from the system on which @command @samp{F} command and an @samp{I} command. @item R requestor-address This is the address to which mail about the job should be sent. It is relative to the system named in the @samp{U} command. If the @samp{R} command does not appear, then mail is sent to the user named in the @samp{U} command. @item Z This command takes no arguments. It means that a mail message should be sent if the command failed. This is the default behaviour for most modern UUCP packages, and for them the @samp{Z} command does not actually do anything. @item N This command takes no arguments. It means that no mail message should be sent, even if the command failed. @item n This command takes no arguments. It means that a mail message should be sent if the command succeeded. Normally a message is sent only if the command failed. @item B This command takes no arguments. It means that the standard input should be returned with any error message. This can be useful in cases where the input would otherwise be lost. @item e This command takes no arguments. It means that the command should be processed with @file{/bin/sh}. For some packages this is the default anyhow. Most packages will refuse to execute complex commands or commands containing wildcards, because of the security holes this opens. @item E This command takes no arguments. It means that the command should be processed with the @code{execve} system call. For some packages this is the default anyhow. @item M status-file This command means that instead of mailing a message, the message should be copied to the named file on the system named by the @samp{U} command. @item Q This command takes no arguments. It means that the string arguments to all the other commands are backslash quoted. Any backslash in one of the strings should be followed by either a backslash or three octal digits. The backslash quoting is interpreted as in a C string. If the @samp{Q} command does not appear, backslashes in the strings are not treated specially. The @samp{Q} command was introduced in Taylor UUCP version 1.07. @item # comment This command is ignored, as is any other unrecognized command. @end table Here is an example. Given the following command executed on system test1 @example uux - test2!cat - test2!~ian/bar !qux '>~/gorp' @end example (this is only an example, as most UUCP systems will not permit the cat command to be executed) Taylor UUCP will produce something like the following @file{X.} file: @example U ian test1 F D.test1N003r qux O /usr/spool/uucppublic/gorp test1 F D.test1N003s I D.test1N003s C cat - ~ian/bar qux @end example The standard input will be read into a file and then transferred to the file @file{D.test1N003s} on system @samp{test2}. The file @file{qux} will be transferred to @file{D.test1N003r} on system @samp{test2}. When the command is executed, the latter file will be copied to the execution directory under the name @samp{qux}. Note that since the file @file{~ian/bar} is already on the execution system, no action need be taken for it. The standard output will be collected in a file, then copied to the file @file{/usr/spool/uucppublic/gorp} on the system @samp{test1}. @ifset faq @format ------------------------------ From: UUCP Protocol Subject: UUCP Protocol @end format @end ifset @node UUCP Protocol, g Protocol, Execution File Format, Protocols @section UUCP Protocol @cindex UUCP protocol @cindex protocol, UUCP The UUCP protocol is a conversation between two UUCP packages. A UUCP conversation consists of three parts: an initial handshake, a series of file transfer requests, and a final handshake. @menu * The Initial Handshake:: The Initial Handshake * UUCP Protocol Commands:: UUCP Protocol Commands * The Final Handshake:: The Final Handshake @end menu @node The Initial Handshake, UUCP Protocol Commands, UUCP Protocol, UUCP Protocol @subsection The Initial Handshake @cindex initial handshake Before the initial handshake, the caller will usually have logged in the called machine and somehow started the UUCP package there. On Unix this is normally done by setting the shell of the login name used to @file{/usr/lib/uucp/uucico}. All messages in the initial handshake begin with a @kbd{^P} (a byte with the octal value @samp{\020}) and end with a null byte (@samp{\000}). A few systems end these messages with a line feed character (@samp{. @table @asis @item called: @samp{\020Shere=hostname\000} The hostname is the UUCP name of the called machine. Older UUCP packages do not output it, and simply send @samp{\020Shere\000}. @item caller: @samp{\020Shostname options\000} The hostname is the UUCP name of the calling machine. The following options may appear (or there may be none): @table @samp @item . @item -xLEVEL Requests the called system to set its debugging level to the specified value. This is not supported by all systems. @item -pGRADE @itemx -vgrade=GRADE Requests the called system to only transfer files of the specified grade or higher. This is not supported by all systems. Some systems support @samp{-p}, some support @samp{-vgrade=}. UUPlus allows either @samp{-p} or @samp{-v} to be specified on a per-system basis in the @file{SYSTEMS} file (@samp{gradechar} option). @item -R Indicates that the calling UUCP understands how to restart failed file transmissions. Supported only by System V Release 4 UUCP, QFT, and Taylor UUCP. @item -ULIMIT Reports the ulimit value of the calling UUCP. The limit is specified as a base 16 number in C notation (e.g., @samp{ @file{UUPLUS.CFG}. Taylor UUCP understands this option, but does not generate it. @item . @end table @item called: @samp{\020ROK\000} There are actually several possible responses. @table @samp @item ROK The calling UUCP is acceptable, and the handshake proceeds to the protocol negotiation. Some options may also appear; see below. @item ROKN[NUMBER] The calling UUCP is acceptable, it specified @samp{-N}, and the called UUCP also understands the Taylor UUCP size limiting extensions. The optional number is a bitmask of features supported by the called UUCP, and is described below. @item RLCK The called UUCP already has a lock for the calling UUCP, which normally indicates the two machines are already communicating. @item RCB The called UUCP will call back. This may be used to avoid impostors (but only one machine out of each pair should call back, or no conversation will ever begin). @item RBADSEQ The call sequence number is wrong (see the @samp{-Q} discussion above). @item RLOGIN The calling UUCP is using the wrong login name. @item RYou are unknown to me The calling UUCP is not known to the called UUCP, and the called UUCP does not permit connections from unknown systems. Some versions of UUCP just drop the line rather than sending this message. @end table If the response is @samp{ROK}, the following options are supported by System V Release 4 UUCP and QFT. @table @samp @item -R The called UUCP knows how to restart failed file transmissions. @item . @item -xLEVEL I'm not sure just what this means. It may request the calling UUCP to set its debugging level to the specified value. @end table If the response is not @samp{ROK} (or @samp{ROKN}) both sides hang up the phone, abandoning the call. @item called: @samp{ @samp{\020Pgf\000}. @item caller: @samp{\020Uprotocol\000} The calling UUCP selects which protocol to use out of the protocols offered by the called UUCP. If there are no mutually supported protocols, the calling UUCP sends @samp{\020UN\000} and both sides hang up the phone. Otherwise the calling UUCP sends something like @samp{\020Ug\000}. @end table @file{Devices} file or the @file{Systems} file. For example, to select the @samp{e} protocol in @file{Systems}, @example airs Any ACU,e ... @end example or in Devices, @example ACU,e ttyXX ... @end example Taylor UUCP provides the @code{protocol} command which may be used either for a system @ifclear faq (@pxref{Protocol Selection}) @end ifclear or a @ifclear faq port (@pxref{port File}). @end ifclear @ifset faq port. @end ifset UUPlus allows specification of the protocol string on a per-system basis in the @file{SYSTEMS} file. The optional number following a @samp{-N} sent by the calling system, or an @samp @samp{011}. @table @samp @item 01 UUCP supports size negotiation. @item 02 UUCP supports file restart. @item 04 UUCP supports the @samp{E} command. @item 010 UUCP requires the file size in the @samp{S} and @samp{R} commands to be in base 10. This bit is used by default if no number appears, but should not be explicitly sent. @item 020 UUCP expects a dummy string between the notify field and the size field in an @samp{S} command. This is true of SVR4 UUCP. This bit should not be used. @item 040 UUCP supports the @samp{q} option in the @samp{S}, @samp{R}, @samp{X}, and @samp{E} commands. @end table After the protocol has been selected and the initial handshake has been completed, both sides turn on the selected protocol. For some protocols (notably @samp{g}) a further handshake is done at this point. @node UUCP Protocol Commands, The Final Handshake, The Initial Handshake, UUCP Protocol @subsectionamp{S}, @samp{R}, @samp{X}, @samp{E}, or @samp{H}. Any file name referred to below is either an absolute file name beginning with @file{/}, a public directory file name beginning with @file{~/}, a file name relative to a user's home directory beginning with @file{~@var{USER}/}, or a spool directory file name. File names in the spool directory are not absolute, but instead are converted to file names within the spool directory by UUCP. They always begin with @file{C.} (for a command file created by @command{uucp} or @command{uux}), @file{D.} (for a data file created by @command{uucp}, @command{uux} or by an execution, or received from another system for an execution), or @file{X.} (for an execution file created by @command{uux} or received from another system). All the commands other than the @samp{H} command support options. The @samp{q} option indicates that the command argument strings are backslash quoted. If the @samp{q} option appears, then any backslash in one of the arguments should be followed by either a backslash or three octal digits. The backslash quoting is interpreted as in a C string. If the @samp{q} option does not appear, backslashes in the strings are not treated specially. The @samp{q} option was introduced in Taylor UUCP version 1.07. @menu * The S Command:: The S Command * The R Command:: The R Command * The X Command:: The X Command * The E Command:: The E Command * The H Command:: The H Command @end menu @node The S Command, The R Command, UUCP Protocol Commands, UUCP Protocol Commands @subsubsection The S Command @cindex S UUCP protocol command @cindex UUCP protocol S command @table @asis @item master: @samp{S @var{from} @var{to} @var{user} -@var{options} @var{temp} @var{mode} @var{notify} @var{size}} The @samp{S} and the @samp{-} are literal characters. This is a request by the master to send a file to the slave. @table @var @item from The name of the file to send. If the @samp{C} option does not appear in @var{options}, the master will actually open and send this file. Otherwise the file has been copied to the spool directory, where it is named @var{temp}. The slave ignores this field unless @var{to} is a directory, in which case the basename of @var{from} will be used as the file name. If @var{from} is a spool directory filename, it must be a data file created for or by an execution, and must begin with @file{D.}. @item to The name to give the file on the slave. If this field names a directory the file is placed within that directory with the basename of @var{from}. A name ending in @samp{/} is taken to be a directory even if one does not already exist with that name. If @var{to} begins with @file{X.}, an execution file will be created on the slave. Otherwise, if @var{to} begins with @file{D.} it names a data file to be used by some execution file. Otherwise, @var{to} should not be in the spool directory. @item user The name of the user who requested the transfer. d The slave should create directories as necessary (this is the default). @item f The slave should not create directories if necessary, but should fail the transfer instead. @item m The master should send mail to @var{user} when the transfer is complete. @item n The slave should send mail to @var{notify} when the transfer is complete. @item q Backslash quoting is applied to the @var{from}, @var{to}, @var{user}, and @var{notify} arguments. @xref{UUCP Protocol Commands}. This option was introduced in Taylor UUCP version 1.07. @end table @item temp If the @samp{C} option appears in @var{options}, this names the file to be sent. Otherwise if @var{from} is in the spool directory, @var{temp} is the same as @var{from}. Otherwise @var{temp} may be a dummy string, such as @file{D.0}. After the transfer has been succesfully completed, the master will delete the file @var{temp}. @item mode This is an octal number giving the mode of the file on the master. If the file is not in the spool directory, the slave will always create it with mode 0666, except that if (@var. @item notify This field may not be present, and in any case is only meaningful if the @samp{n} option appears in @var{options}. If the @samp{n} option appears, then, when the transfer is successfully completed, the slave will send mail to @var{notify}, which must be a legal mailing address on the slave. If a @var{size} field will appear but the @samp{n} option does not appear, @var{notify} will always be present, typically as the string @samp{dummy} or simply a pair of double quotes. @item. @end table The slave then responds with an @samp{S} command response. @table @samp @item SY @var{start} The slave is willing to accept the file, and file transfer begins. The @var{start} field will only be present when using file restart. It specifies the byte offset into the file at which to start sending. If this is a new file, @var{start} will be 0x0. @item SN2 The slave denies permission to transfer the file. This can mean that the destination directory may not be accessed, or that no requests are permitted. It implies that the file transfer will never succeed. @item SN4 The slave is unable to create the necessary temporary file. This implies that the file transfer might succeed later. @item SN6 This is only used by Taylor UUCP size negotiation. It means that the slave considers the file too large to transfer at the moment, but it may be possible to transfer it at some other time. @item SN7 This is only used by Taylor UUCP size negotiation. It means that the slave considers the file too large to ever transfer. @item SN8 This is only used by Taylor UUCP. It means that the file was already received in a previous conversation. This can happen if the receive acknowledgement was lost after it was sent by the receiver but before it was received by the sender. @item SN9 This is only used by Taylor UUCP (versions 1.05 and up) and UUPlus (versions 2.0 and up). It means that the remote system was unable to open another channel (see the discussion of the @samp{i} protocol for more information about channels). This implies that the file transfer might succeed later. @item SN10 This is reportedly used by SVR4 UUCP to mean that the file size is too large. @end table If the slave responds with @samp{SY}, a file transfer begins. When the file transfer is complete, the slave sends a @samp{C} command response. @table @samp @item CY The file transfer was successful. @item CYM The file transfer was successful, and the slave wishes to become the master; the master should send an @samp{H} command, described below. @item CN5 The temporary file could not be moved into the final location. This implies that the file transfer will never succeed. @end table @end table After the @samp{C} command response has been received (in the @samp{SY} case) or immediately (in an @samp{SN} case) the master will send another command. @node The R Command, The X Command, The S Command, UUCP Protocol Commands @subsubsection The R Command @cindex R UUCP protocol command @cindex UUCP protocol R command @table @asis @item master: @samp{R @var{from} @var{to} @var{user} -@var{options} @var{size}} The @samp{R} and the @samp{-} are literal characters. This is a request by the master to receive a file from the slave. I do not know how SVR4 UUCP or QFT implement file transfer restart in this case. @table @var @item from This is the name of the file on the slave which the master wishes to receive. It must not be in the spool directory, and it may not contain any wildcards. @item. @item user The name of the user who requested the transfer. @item options A list of options to control the transfer. The following options are defined (all options are single characters): @table @samp @item d The master should create directories as necessary (this is the default). @item f The master should not create directories if necessary, but should fail the transfer instead. @item m The master should send mail to @var{user} when the transfer is complete. @item q Backslash quoting is applied to the @var{from}, @var{to}, and @var{user} arguments. @xref{UUCP Protocol Commands}. This option was introduced in Taylor UUCP version 1.07. @end table @item size This only appears if Taylor UUCP size negotiation is being used. It specifies the largest file which the master is prepared to accept (when using SVR4 UUCP or QFT, this was specified in the @samp{-U} option during the initial handshake). @end table The slave then responds with an @samp{R} command response. UUPlus does not support @samp{R} requests, and always responds with @samp{RN2}. @table @samp @item RY @var{mode} [@var{size}] The slave is willing to send the file, and file transfer begins. The @var{mode} argument is the octal mode of the file on the slave. The master treats this just as the slave does the @var{mode} argument in the send command, q.v. I am told that SVR4 UUCP sends a trailing @var{size} argument. For some versions of BSD UUCP, the @var{mode} argument may have a trailing @samp{M} character (e.g., @samp{RY 0666M}). This means that the slave wishes to become the master. @item RN2 The slave is not willing to send the file, either because it is not permitted or because the file does not exist. This implies that the file request will never succeed. @item). @item RN9 This is only used by Taylor UUCP (versions 1.05 and up) and FSUUCP (versions 1.5 and up). It means that the remote system was unable to open another channel (see the discussion of the @samp{i} protocol for more information about channels). This implies that the file transfer might succeed later. @end table If the slave responds with @samp{RY}, a file transfer begins. When the file transfer is complete, the master sends a @samp{C} command. The slave pretty much ignores this, although it may log it. @table @samp @item CY The file transfer was successful. @item CN5 The temporary file could not be moved into the final location. @end table After the @samp{C} command response has been sent (in the @samp{RY} case) or immediately (in an @samp{RN} case) the master will send another command. @end table @node The X Command, The E Command, The R Command, UUCP Protocol Commands @subsubsection The X Command @cindex X UUCP protocol command @cindex UUCP protocol X command @table @asis @item master: @samp{X @var{from} @var{to} @var{user} -@var{options}} The @samp{X} and the @samp{-} are literal characters. This is a request by the master to, in essence, execute uucp on the slave. The slave should execute @samp{uucp @var{from} @var{to}}. @table @var @item @samp{R} command would suffice. The master can also use this command to request that the slave transfer files to a third system. @item to This is the name of the file or directory to which the files should be transferred. This will normally use a UUCP name. For example, if the master wishes to receive the files itself, it would use @samp{master!path}. @item user The name of the user who requested the transfer. @item options A list of options to control the transfer. As far as I know, only one option is defined: @table @samp @item q Backslash quoting is applied to the @var{from}, @var{to}, and @var{user} arguments. @xref{UUCP Protocol Commands}. This option was introduced in Taylor UUCP version 1.07. @end table @end table The slave then responds with an @samp{X} command response. FSUUCP does not support @samp{X} requests, and always responds with @samp{XN}. @table @samp @item XY The request was accepted, and the appropriate file transfer commands have been queued up for later processing. @item XN The request was denied. No particular reason is given. @end table In either case, the master will then send another command. @end table @node The E Command, The H Command, The X Command, UUCP Protocol Commands @subsubsection The E Command @cindex E UUCP protocol command @cindex UUCP protocol E command @table @asis @item master: @samp{E @var{from} @var{to} @var{user} -@var{options} @var{temp} @var{mode} @var{notify} @var{size} @var{command}} The @samp{E} command is only supported by Taylor UUCP 1.04 and up. It is used to make an execution request without requiring a separate @file{X.*} file. @ifclear faq @xref{Execution File Format}. @end ifclear It is only used when the command to be executed requires a single input file which is passed to it as standard input. All the fields have the same meaning as they do for an @samp{S} command, except for @var{options} and @var{command}. @table @var N No mail message should be sent, even if the command fails. This is the equivalent of the @samp{N} command in an @file{X.*} file. @item Z A mail message should be sent if the command fails (this is generally the default in any case). This is the equivalent of the @samp{Z} command in an @file{X.*} file. @item R Mail messages about the execution should be sent to the address in the @var{notify} field. This is the equivalent of the @samp{R} command in an @file{X.*} file. @item e The execution should be done with @file{/bin/sh}. This is the equivalent of the @samp{e} command in an @file{X.*} file. @item q Backslash quoting is applied to the @var{from}, @var{to}, @var{user}, and @var{notify} arguments. @xref{UUCP Protocol Commands}. This option was introduced in Taylor UUCP version 1.07. Note that the @var{command} argument is not backslash quoted---that argument is defined as the remainder of the line, and so is already permitted to contain any character. @end table @item command The command which should be executed. This is the equivalent of the @samp{C} command in an @file{X.*} file. @end table The slave then responds with an @samp{E} command response. These are the same as the @samp{S} command responses, but the initial character is @samp{E} rather than @samp{S}. If the slave responds with @samp{EY}, the file transfer begins. When the file transfer is complete, the slave sends a @samp{C} command response, just as for the @samp{S} command. After a successful file transfer, the slave is responsible for arranging for the command to be executed. The transferred file is passed as standard input, as though it were named in the @samp{I} and @samp{F} commands of an @file{X.*} file. After the @samp{C} command response has been received (in the @samp{EY} case) or immediately (in an @samp{EN} case) the master will send another command. @end table @node The H Command, , The E Command, UUCP Protocol Commands @subsubsection The H Command @cindex H UUCP protocol command @cindex UUCP protocol H command @table @asis @item master: @samp{H} This is used by the master to hang up the connection. The slave will respond with an @samp{H} command response. @table @samp @item HY The slave agrees to hang up the connection. In this case the master sends another @samp{HY} command. In some UUCP packages the slave will then send a third @samp{HY} command. At this point the protocol is shut down, and the final handshake is begun. @item HN The slave does not agree to hang up. In this case the master and the slave exchange roles. The next command will be sent by the former slave, which is the new master. The roles may be reversed several times during a single connection. @end table @end table @node The Final Handshake, , UUCP Protocol Commands, UUCP Protocol @subsection The Final Handshake @cindex). @table @asis @item caller: @samp{\020OOOOOO\000} @item called: @samp{\020OOOOOOO\000} @end table That is, the calling UUCP sends six @samp{O} characters and the called UUCP replies with seven @samp{O} characters. Some UUCP packages always send six @samp{O} characters. @ifset faq @format ------------------------------ From: UUCP @samp{g} Protocol Subject: UUCP @samp{g} Protocol @end format @end ifset @node g Protocol, f Protocol, UUCP Protocol, Protocols @section UUCP @samp{g} Protocol @cindex @samp{g} protocol @cindex protocol @samp{g} The @samp @samp{g} protocol generally refer to specific implementations, rather than to the correctly implemented protocol. The @samp{g} protocol was originally designed for general packet drivers, and thus contains some features that are not used by UUCP, including an alternate data channel and the ability to renegotiate packet and window sizes during the communication session. The @samp{g} protocol is spoofed by many Telebit modems. When spoofing is in effect, each Telebit modem uses the @samp @samp{g} protocol very well at all. When a Telebit is spoofing the @samp{g} protocol, it forces the packet size to be 64 bytes and the window size to be 3. This discussion of the @samp{g} protocol explains how it works, but does not discuss useful error handling techniques. Some discussion of this can be found in Jamie E. Hanrahan's paper, cited @ifclear faq above (@pxref{UUCP Protocol Sources}). @end ifclear @ifset faq above. @end ifset All @samp{g} protocol communication is done with packets. Each packet begins with a six byte header. Control packets consist only of the header. Data packets contain additional data. The header is as follows: @table @asis @item @samp{\020} Every packet begins with a @kbd{^P}. @item @var{k} (1 <= @var{k} <= 9) The @var{k} value is always 9 for a control packet. For a data packet, the @var{k} value indicates how much data follows the six byte header. The amount of data is @ifinfo 2 ** (@var{k} + 4), where ** indicates exponentiation. @end ifinfo @iftex @tex $2^{k + 4}$. @end tex @end iftex Thus a @var{k} value of 1 means 32 data bytes and a @var{k} value of 8 means 4096 data bytes. The @var{k} value for a data packet must be between 1 and 8 inclusive. @item checksum low byte @itemx checksum high byte The checksum value is described below. @item control byte The control byte indicates the type of packet, and is described below. @item xor byte This byte is the xor of @var{k}, the checksum low byte, the checksum high byte and the control byte (i.e., the second, third, fourth and fifth header bytes). It is used to ensure that the header data is valid. @end table The control byte in the header is composed of three bit fields, referred to here as @var{tt} (two bits), @var{xxx} (three bits) and @var{yyy} (three bits). The control is @var{tt}@var{xxx}@var{yyy}, or @code{(@var{tt} << 6) + (@var{xxx} << 3) + @var{yyy}}. The @var{TT} field takes on the following values: @table @samp @item 0 This is a control packet. In this case the @var{k} byte in the header must be 9. The @var{xxx} field indicates the type of control packet; these types are described below. @item 1 This is an alternate data channel packet. This is not used by UUCP. @item 2 This is a data packet, and the entire contents of the attached data field (whose length is given by the @var{k} byte in the header) are valid. The @var{xxx} and @var{yyy} fields are described below. @item 3 This is a short data packet. Let the length of the data field (as given by the @var{k} byte in the header) be @var{l}. Let the first byte in the data field be @var{b1}. If @var{b1} is less than 128 (if the most significant bit of @var{b1} is 0), then there are @code{@var{l} - @var{b1}} valid bytes of data in the data field, beginning with the second byte. If @code{@var{b1} >= 128}, let @var{b2} be the second byte in the data field. Then there are @code{@var{l} - ((@var{b1} & 0x7f) + (@var{b2} << 7))} valid bytes of data in the data field, beginning with the third byte. In all cases @var{l} bytes of data are sent (and all data bytes participate in the checksum calculation) but some of the trailing bytes may be dropped by the receiver. The @var{xxx} and @var{yyy} fields are described below. @end table In a data packet (short or not) the @var{xxx} field gives the sequence number of the packet. Thus sequence numbers can range from 0 to 7, inclusive. The @var @var{yyy} field of a data packet, or as the @var{yyy} field of a @samp{RJ} or @samp{RR} control packet (described below). Each packet must be transmitted in order (the sender may not skip sequence numbers). Each packet must be acknowledged, and each packet must be acknowledged in order. In a control packet, the @var{xxx} field takes on the following values: @table @asis @item 1 @samp{CLOSE} The connection should be closed immediately. This is typically sent when one side has seen too many errors and wants to give up. It is also sent when shutting down the protocol. If an unexpected @samp{CLOSE} packet is received, a @samp{CLOSE} packet should be sent in reply and the @samp{g} protocol should halt, causing UUCP to enter the final handshake. @item 2 @samp{RJ} or @samp{NAK} The last packet was not received correctly. The @var{yyy} field contains the sequence number of the last correctly received packet. @item 3 @samp{SRJ} Selective reject. The @var{yyy} field contains the sequence number of a packet that was not received correctly, and should be retransmitted. This is not used by UUCP, and most implementations will not recognize it. @item 4 @samp{RR} or @samp{ACK} Packet acknowledgement. The @var{yyy} field contains the sequence number of the last correctly received packet. @item 5 @samp{INITC} Third initialization packet. The @var{yyy} field contains the maximum window size to use. @item 6 @samp{INITB} Second initialization packet. The @var{yyy} field contains the packet size to use. It requests a size of @ifinfo 2 ** (@var{yyy} + 5). @end ifinfo @iftex @tex $2^{yyy + 5}$. @end tex @end iftex Note that this is not the same coding used for the @var{k} byte in the packet header (it is 1 less). Most UUCP implementations that request a packet size larger than 64 bytes can handle any packet size up to that specified. @item 7 @samp{INITA} First initialization packet. The @var{yyy} field contains the maximum window size to use. @end table To compute the checksum, call the control byte (the fifth byte in the header) @var{c}. The checksum of a control packet is simply @code{0xaaaa - @var{c}}. The checksum of a data packet is @code{0xaaaa - (@var{check} ^ @var{c})}, where @code{^} denotes exclusive or, and @var @code{z} argument points to the data and the @code{c} argument indicates how much data there is. @example; @} @end example When the @samp{g} protocol is started, the calling UUCP sends an @samp{INITA} control packet with the window size it wishes the called UUCP to use. The called UUCP responds with an @samp{INITA} packet with the window size it wishes the calling UUCP to use. Pairs of @samp{INITB} and @samp @samp{CLOSE} control packet. @ifset faq @format ------------------------------ From: UUCP @samp{f} Protocol Subject: UUCP @samp{f} Protocol @end format @end ifset @node f Protocol, t Protocol, g Protocol, Protocols @section UUCP @samp{f} Protocol @cindex @samp{f} protocol @cindex protocol @samp{f} The @samp{f} protocol is a seven bit protocol which checksums an entire file at a time. It only uses the characters between @samp{\040} and @samp{\176} (ASCII @kbd{space} and @kbd{~}) @samp{f} protocol originated in BSD versions of UUCP. It was originally intended for transmission over X.25 PAD links. The @samp @var{b} of the file is translated according to the following table: @example 0 <= @var{b} <= 037: 0172, @var{b} + 0100 (0100 to 0137) 040 <= @var{b} <= 0171: @var{b} ( 040 to 0171) 0172 <= @var{b} <= 0177: 0173, @var{b} - 0100 ( 072 to 077) 0200 <= @var{b} <= 0237: 0174, @var{b} - 0100 (0100 to 0137) 0240 <= @var{b} <= 0371: 0175, @var{b} - 0200 ( 040 to 0171) 0372 <= @var{b} <= 0377: 0176, @var{b} - 0300 ( 072 to 077) @end example That is, a byte between @samp{\040} and @samp{\171} inclusive is transmitted as is, and all other bytes are prefixed and modified as shown. When all the file data is sent, a seven byte sequence is sent: two bytes of @samp{\176} followed by four ASCII bytes of the checksum as printed in base 16 followed by a carriage return. For example, if the checksum was 0x1234, this would be sent: @samp{\176\1761234\r}. The checksum is initialized to 0xffff. For each byte that is sent When the receiving UUCP sees the checksum, it compares it against its own calculated checksum and replies with a single character followed by a carriage return. @table @samp @item G The file was received correctly. @item R The checksum did not match, and the file should be resent from the beginning. @item Q The checksum did not match, but too many retries have occurred and the communication session should be abandoned. @end table The sending UUCP checks the returned character and acts accordingly. @ifset faq @format ------------------------------ From: UUCP @samp{t} Protocol Subject: UUCP @samp{t} Protocol @end format @end ifset @node t Protocol, e Protocol, f Protocol, Protocols @section UUCP @samp{t} Protocol @cindex @samp{t} protocol @cindex protocol @samp{t} The @samp{t} protocol is intended for use on links which provide reliable end-to-end connections, such as TCP. It does no error checking or flow control, and requires an eight bit clear channel. I believe the @samp{t} protocol originated in BSD versions of UUCP. When a UUCP package transmits a command, it first gets the length of the command string, @var{c}. It then sends @code{((@var{c} / 512) + 1) * 512} bytes (the smallest multiple of 512 which can hold @var @code{htonl}) followed by that amount of data. The end of the file is signalled by a block containing zero bytes of data. @ifset faq @format ------------------------------ From: UUCP @samp{e} Protocol Subject: UUCP @samp{e} Protocol @end format @end ifset @node e Protocol, Big G Protocol, t Protocol, Protocols @section UUCP @samp{e} Protocol @cindex @samp{e} protocol @cindex protocol @samp{e} The @samp{e} protocol is similar to the @samp{t} protocol. It does no flow control or error checking and is intended for use over networks providing reliable end-to-end connections, such as TCP. The @samp @samp{1000\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0}). It then sends the entire file. @ifset faq @format ------------------------------ From: UUCP @samp{G} Protocol Subject: UUCP @samp{G} Protocol @end format @end ifset @node Big G Protocol, i Protocol, e Protocol, Protocols @section UUCP @samp{G} Protocol @cindex @samp{G} protocol @cindex protocol @samp{G} The @samp{G} protocol is used by SVR4 UUCP. It is identical to the @samp{g} protocol, except that it is possible to modify the window and packet sizes. The SVR4 implementation of the @samp{g} protocol reportedly is fixed at a packet size of 64 and a window size of 7. Supposedly SVR4 chose to implement a new protocol using a new letter to avoid any potential incompatibilities when using different packet or window sizes. Most implementations of the @samp{g} protocol that accept packets larger than 64 bytes will also accept packets smaller than whatever they requested in the @samp{INITB} packet. The SVR4 @samp{G} implementation is an exception; it will only accept packets of precisely the size it requests in the INITB packet. @ifset faq @format ------------------------------ From: UUCP @samp{i} Protocol Subject: UUCP @samp{i} Protocol @end format @end ifset @node i Protocol, j Protocol, Big G Protocol, Protocols @section UUCP @samp{i} Protocol @cindex @samp{i} protocol @cindex protocol @samp{i} The @samp{i} protocol was written by Ian Lance Taylor (who also wrote this @ifclear faq manual). @end ifclear @ifset faq FAQ). @end ifset It was first used by Taylor UUCP version 1.04. It is a sliding window packet protocol, like the @samp{g} protocol, but it supports bidirectional transfers (i.e., file transfers in both directions simultaneously). It requires an eight bit clear connection. Several ideas for the protocol were taken from the paper @cite{A High-Throughput Message Transport System} by P.@: Lauder. I don't know where the paper was published, but the author's e-mail address is @email{piers@@cs.su.oz.au}. The @samp (@samp{DATA}, @samp{SYNC}, @samp{ACK}, @samp{NAK}, @samp{SPOS}, @samp{CLOSE}) which are described below. Although any packet type may include data, any data provided with an @samp{ACK}, @samp{NAK} or @samp{CLOSE} packet is ignored. Every @samp{DATA}, @samp{SPOS} and @samp @samp{SPOS}, which is used to change the file position. The reason for keeping track of the file position is described below. The header is as follows: @table @asis @item @samp{\007} Every packet begins with @kbd{^G}. @item @code{(@var{packet} << 3) + @var{locchan}} The five bit packet number combined with the three bit local channel number. @samp{DATA}, @samp{SPOS} and @samp{CLOSE} packets use the packet sequence number for the @var{packet} field. @samp{NAK} packet types use the @var{packet} field for the sequence number to be resent. @samp{ACK} and @samp{SYNC} do not use the @var{packet} field, and generally leave it set to 0. Packets which are not associated with a UUCP command from the local system use a local channel number of 0. @item @code{(@var{ack} << 3) + @var. @item @code{(@var{type} << 5) + (@var{caller} << 4) + @var. @item @var{len2} The lower eight bits of the data length. The twelve bits of data length permit packets ranging in size from 0 to 4095 bytes. @item @var{check} The exclusive or of the second through fifth bytes of the header. This provides an additional check that the header is valid. @end table: @table @asis @item 0 @samp{DATA} This is a plain data packet. @item 1 @samp{SYNC} @samp{SYNC} packets are exchanged when the protocol is initialized, and are described further below. @samp{SYNC} packets do not carry sequence numbers (that is, the @var{packet} field is ignored). @item 2 @samp{ACK} This is an acknowledgement packet. Since @samp{DATA} packets also carry packet acknowledgements, @samp{ACK} packets are only used when one side has no data to send. @samp{ACK} packets do not carry sequence numbers. @item 3 @samp{NAK} This is a negative acknowledgement. This is sent when a packet is received incorrectly, and means that the packet number appearing in the @var{packet} field must be resent. @samp{NAK} packets do not carry sequence numbers (the @var{packet} field is already used). @item 4 @samp{SPOS} This packet changes the file position. The packet contains four bytes of data holding the file position, most significant byte first. The next packet received will be considered to be at the named file position. @item 5 @samp{CLOSE} When the protocol is shut down, each side sends a @samp{CLOSE} packet. This packet does have a sequence number, which could be used to ensure that all packets were correctly received (this is not needed by UUCP, however, which uses the higher level @samp{H} command with an @samp{HY} response). @end table When the protocol starts up, both systems send a @samp{SYNC} packet. The @samp @samp{ACK} when half the window is received. Note that the @samp{NAK} packet corresponds to the unused @samp{g} protocol @samp{SRJ} packet type, rather than to the @samp{RJ} packet type. When a @samp @samp{SY} response before sending the file, the file data is sent beginning immediately after the @samp{S} command is sent. If an @samp{SN} response is received, the file send is aborted, and a final data packet of length zero is sent to indicate that the channel number may be reused. If an @samp. @ifset faq @format ------------------------------ From: UUCP @samp{j} Protocol Subject: UUCP @samp{j} Protocol @end format @end ifset @node j Protocol, x Protocol, i Protocol, Protocols @section UUCP @samp{j} Protocol @cindex @samp{j} protocol @cindex protocol @samp{j} The @samp{j} protocol is a variant of the @samp{i} protocol. It was also written by Ian Lance Taylor, and first appeared in Taylor UUCP version 1.04. The @samp{j} protocol is a version of the @samp{i} protocol designed for communication links which intercept a few characters, such as XON or XOFF. It is not efficient to use it on a link which intercepts many characters, such as a seven bit link. The @samp{j} protocol performs no error correction or detection; that is presumed to be the responsibility of the @samp{i} protocol. When the @samp{j} protocol starts up, each system sends a printable ASCII string indicating which characters it wants to avoid using. The string begins with the ASCII character @kbd{^} (octal 136) and ends with the ASCII character @kbd{~} (octal 176). After sending this string, each system looks for the corresponding string from the remote system. The strings are composed of escape sequences: @samp{\ooo}, where @samp{o} is an octal digit. For example, sending the string @samp{^ @samp{i} protocol start up is done, and the rest of the conversation uses the normal @samp{i} protocol. However, each @samp{i} protocol packet is wrapped to become a @samp{j} protocol packet. Each @samp{j} protocol packet consists of a seven byte header, followed by data bytes, followed by index bytes, followed by a one byte trailer. The packet header looks like this: @table @asis @item @kbd{^} Every packet begins with the ASCII character @kbd{^}, octal 136. @item @var{high} @itemx @var{low} These two characters give the total number of bytes in the packet. Both @var{high} and @var{low} are printable ASCII characters. The length of the packet is @code{(@var{high} - 040) * 0100 + (@var{low} - 040)}, where @code{040 <= @var{high} < 0177} and @code{040 <= @var{low} < 0140}. This permits a length of 6079 bytes, but there is a further restriction on packet size described below. @item @kbd{=} The ASCII character @kbd{=}, octal 075. @item @var{data-high} @itemx @var{data-low} These two characters give the total number of data bytes in the packet. The encoding is as described for @var{high} and @var{low}. The number of data bytes is the size of the @samp{i} protocol packet wrapped inside this @samp{j} protocol packet. @item @kbd{@@} The ASCII character @kbd{@@}, octal 100. @end table The header is followed by the number of data bytes given in @var{data-high} and @var{data-low}. These data bytes are the @samp{i} protocol packet which is being wrapped in the @samp{j} protocol packet. However, each character in the @samp{i} protocol packet which the @samp @samp @samp{i} protocol packet is determined. This index is turned into a two byte printable ASCII index, @var{index-high} and @var{index-low}, such that the index is @code{(@var{index-high} - 040) * 040 + (@var{index-low} - 040)}. @var{index-low} is restricted such that @code{040 <= @var{index-low} < 0100}. @var{index-high} is not permitted to be 0176, so @code{040 <= @var{index-high} < 0176}. @var{index-low} is then modified to encode the transformation: @itemize @bullet @item If the character transformation only had to subtract 0200, then @var{index-low} is used as is. @item If the character transformation only had to xor by 020, then 040 is added to @var{index-low}. @item If both operations had to be performed, then 0100 is added to @var{index-low}. However, if the value of @var{index-low} was initially 077, then adding 0100 would result in 0177, which is not a printable ASCII character. For that special case, @var{index-high} is set to 0176, and @var{index-low} is set to the original value of @var{index-high}. @end itemize The receiver decodes the index bytes as follows (this is the reverse of the operations performed by the sender, presented here for additional clarity): @itemize @bullet @item The first byte in the index is @var{index-high}, and the second is @var{index-low}. @item If @code{040 <= @var{index-high} < 0176}, the index refers to the data byte at position @code{(@var{index-high} - 040) * 040 + @var{index-low} % 040}. @item If @code{040 <= @var{index-low} < 0100}, then 0200 must be added to indexed byte. @item If @code{0100 <= @var{index-low} < 0140}, then 020 must be xor'ed to the indexed byte. @item If @code{0140 <= @var{index-low} < 0177}, then 0200 must be added to the indexed byte, and 020 must be xor'ed to the indexed byte. @item If @code{@var{index-high} == 0176}, the index refers to the data byte at position @code{(@var{index-low} - 040) * 040 + 037}. 0200 must be added to the indexed byte, and 020 must be xor'ed to the indexed byte. @end itemize This means the largest @samp{i} protocol packet which may be wrapped inside a @samp{j} protocol packet is @code{(0175 - 040) * 040 + (077 - 040) == 3007} bytes. The final character in a @samp{j} protocol packet, following the index bytes, is the ASCII character @kbd{~} (octal 176). The motivation behind using an indexing scheme, rather than escape characters, is to avoid data movement. The sender may simply add a header and a trailer to the @samp{i} protocol packet. Once the receiver has loaded the @samp{j} protocol packet, it may scan the index bytes, transforming the data bytes, and then pass the data bytes directly on to the @samp{i} protocol routine. @ifset faq @format ------------------------------ From: UUCP @samp{x} Protocol Subject: UUCP @samp{x} Protocol @end format @end ifset @node x Protocol, y Protocol, j Protocol, Protocols @section UUCP @samp{x} Protocol @cindex @samp{x} protocol @cindex protocol @samp{x} The @samp. @ifset faq @format ------------------------------ From: UUCP @samp{y} Protocol Subject: UUCP @samp{y} Protocol @end format @end ifset @node y Protocol, d Protocol, x Protocol, Protocols @section UUCP @samp{y} Protocol @cindex @samp{y} protocol @cindex protocol @samp{y} The @samp @samp{f} protocol; there are no packet acknowledgements, so the protocol is efficient over a half-duplex communication line such as PEP. Every packet contains a six byte header: @table @asis @item sequence low byte @itemx sequence high byte A two byte sequence number, in little endian order. The first sequence number is 0. Since the first packet is always a sync packet (described below) the sequence number of the first data packet is always 1. Each system counts sequence numbers independently. @item length low byte @itemx. @item checksum low byte @itemx checksum high byte A two byte checksum, in little endian order. The checksum is computed over the data bytes. The checksum algorithm is described below. If there are no data bytes, the checksum is sent as 0. @end table: @table @asis @item version The version number of the protocol. Currently this must be 1. Larger numbers should be ignored; it is the responsibility of the newer version to accommodate the older one. @item packet size The maximum data length to use divided by 256. This is sent as a single byte. The maximum data length permitted is 32768, which would be sent as 128. Customarily both systems will use the same maximum data length, the lower of the two requested. @item flags low byte @itemx flags high byte Two bytes of flags. None are currently defined. These bytes should be sent as 0, and ignored by the receiver. @end table A length field with the high bit set is a control packet. The following control packet types are defined: @table @asis @item 0xfffe @samp{YPKT_ACK} Acknowledges correct receipt of a file. @item 0xfffd @samp{YPKT_ERR} Indicates an incorrect checksum. @item 0xfffc @samp{YPKT_BAD} Indicates a bad sequence number, an invalid length, or some other error. @end table If a control packet other than @samp{YPKT_ACK} is received, the connection is dropped. If a checksum error is detected for a received packet, a @samp{YPKT_ERR} control packet is sent, and the connection is dropped. If a packet is received out of sequence, a @samp{YPKT_BAD} control packet is sent, and the connection is dropped. The checksum is initialized to 0xffff. For each data byte in a packet This is the same algorithm as that used by the @samp @samp{YPKT_ACK} control packet. The sending system waits for the @samp{YPKT_ACK} control packet before continuing; this wait should be done with a large timeout, since there may be a considerable amount of data buffered on the communication path. @ifset faq @format ------------------------------ From: UUCP @samp{d} Protocol Subject: UUCP @samp{d} Protocol @end format @end ifset @node d Protocol, h Protocol, y Protocol, Protocols @section UUCP @samp{d} Protocol @cindex @samp{d} protocol @cindex protocol @samp{d} The @samp{d} protocol is apparently used for DataKit muxhost (not RS-232) connections. No file size is sent. When a file has been completely transferred, a write of zero bytes is done; this must be read as zero bytes on the other end. @ifset faq @format ------------------------------ From: UUCP @samp{h} Protocol Subject: UUCP @samp{h} Protocol @end format @end ifset @node h Protocol, v Protocol, d Protocol, Protocols @section UUCP @samp{h} Protocol @cindex @samp{h} protocol @cindex protocol @samp{h} The @samp{h} protocol is apparently used in some places with HST modems. It does no error checking, and is not that different from the @samp{t} protocol. I don't know the details. @ifset faq @format ------------------------------ From: UUCP @samp{v} Protocol Subject: UUCP @samp{v} Protocol @end format @end ifset @node v Protocol, , h Protocol, Protocols @section UUCP @samp{v} Protocol @cindex @samp{v} protocol @cindex protocol @samp{v} The @samp{v} protocol is used by UUPC/extended, a PC UUCP program. It is simply a version of the @samp{g} protocol which supports packets of any size, and also supports sending packets of different sizes during the same conversation. There are many @samp{g} protocol implementations which support both, but there are also many which do not. Using @samp{v} ensures that everything is supported. @ifset faq @format ------------------------------ From: Thanks Subject: Thanks @end format Besides the papers and information acknowledged at the top of this article, the following people have contributed help, advice, suggestions and information: @format ****************************** @end format @end ifset @c END-OF-FAQ @node Hacking, Acknowledgements, Protocols, Top @chapter Hacking Taylor UUCP This chapter provides the briefest of guides to the Taylor UUCP source code itself. @menu * System Dependence:: System Dependence * Naming Conventions:: Naming Conventions * Patches:: Patches @end menu @node System Dependence, Naming Conventions, Hacking, Hacking @section System Dependence The code is carefully segregated into a system independent portion and a system dependent portion. The system dependent code is in the @file{unix} subdirectory, and also in the file @file{sysh.unx} (also known as @file{sysdep.h}). With the right configuration parameters, the system independent code calls only ANSI C functions. Some of the less common ANSI C functions are also provided in the @file{lib} directory. The replacement function @code{strtol} in @file{lib/strtol.c} assumes that the characters @kbd{A} to @kbd{F} and @kbd{a} to @kbd{f} appear in strictly sequential order. The function @code{igradecmp} in @file. @node Naming Conventions, Patches, System Dependence, Hacking @section: @table @samp @item a array; the next character is the type of an element @item b byte or character @item c count of something @item e stdio FILE * @item f boolean @item i generic integer @item l double @item o file descriptor (as returned by open, creat, etc.) @item p generic pointer @item q pointer to structure @item s structure @item u void (function return values only) @item z character string @end table A generic pointer (@code{p}) is sometimes a @code{void *}, sometimes a function pointer in which case the prefix is pf, and sometimes a pointer to another type, in which case the next character is the type to which it points (pf is overloaded). An array of strings (@code{char *[]}) would be named @code{az} (array of string). If this array were passed to a function, the function parameter would be named @code{paz} (pointer to array of string). Note that the variable name prefixes do not necessarily indicate the type of the variable. For example, a variable prefixed with @kbd{i} may be int, long or short. Similarly, a variable prefixed with @kbd{b} may be a char or an int; for example, the return value of @code{getchar} would be caught in an int variable prefixed with @kbd @file{protg.c}, the @samp{g} protocol source code, use a module prefix of @samp{g}. This isn't too useful, as a number of modules use a module prefix of @samp{s}. @node Patches, , Naming Conventions, Hacking @section @command{diff} program invoked with the @option{-c} option (if you have the GNU version of @command{diff}, use the @option{-p} option). Always invoke @command{diff} with the original file first and the modified file second. If your @command{diff} does not support @option{-c} (or you don't have @command{diff}), send a complete copy of the modified file (if you have just changed a single function, you can just send the new version of the function). In particular, please do not send @command{diff} output without the @option{. @node Acknowledgements, Index (concepts), Hacking, Top @chapter @file{uunet} access. I would also like to thank Richard Stallman @email{rms@@gnu.org} for founding the Free Software Foundation, and John Gilmore @email{gnu@@toad.com} for writing the initial version of gnuucp (based on uuslave) which was a direct inspiration for this somewhat larger project. Chip Salzenberg @email{chip@@tct.com} has contributed many patches. @ifinfo Franc,ois @end ifinfo @iftex @tex Fran\c cois @end tex @end iftex Pinard @email{pinard@@iro.umontreal.ca} tirelessly tested the code and suggested many improvements. He also put together the initial version of this manual. Doug Evans contributed the zmodem protocol. Marc Boucher @email{marc@@CAM.ORG} contributed the code supporting the pipe port type. Jorge Cwik @email{jorge@@laser.satlink.net} contributed the @samp{y} protocol code. Finally, Verbus M. Counts @email{verbus@@westmark.com} and Centel Federal Systems, Inc., deserve special thanks, since they actually paid me money to port this code to System III. In alphabetical order: @example Meno Abels @email{Meno.Abels@@Technical.Adviser.com} "Earle F. Ake - SAIC" @email{ake@@Dayton.SAIC.COM} @email{mra@@searchtech.com} (Michael Almond) @email{cambler@@zeus.calpoly.edu} (Christopher J. Ambler) Brian W. Antoine @email{briana@@tau-ceti.isc-br.com} @email{jantypas@@soft21.s21.com} (John Antypas) @email{james@@bigtex.cactus.org} (James Van Artsdalen) @email{jima@@netcom.com} (Jim Avera) @email{nba@@sysware.DK} (Niels Baggesen) @email{uunet!hotmomma!sdb} (Scott Ballantyne) Zacharias Beckman @email{zac@@dolphin.com} @email{mike@@mbsun.ann-arbor.mi.us} (Mike Bernson) @email{bob@@usixth.sublink.org} (Roberto Biancardi) @email{statsci!scott@@coco.ms.washington.edu} (Scott Blachowicz) @email{bag%wood2.cs.kiev.ua@@relay.ussr.eu.net} (Andrey G Blochintsev) @email{spider@@Orb.Nashua.NH.US} (Spider Boardman) Gregory Bond @email{gnb@@bby.com.au} Marc Boucher @email{marc@@CAM.ORG} Ard van Breemen @email{ard@@cstmel.hobby.nl} @email{dean@@coplex.com} (Dean Brooks) @email{jbrow@@radical.com} (Jim Brownfield) @email{dave@@dlb.com} (Dave Buck) @email{gordon@@sneaky.lonestar.org} (Gordon Burditt) @email{dburr@@sbphy.physics.ucsb.edu} (Donald Burr) @email{mib@@gnu.ai.mit.edu} (Michael I Bushnell) Brian Campbell @email{brianc@@quantum.on.ca} Andrew A. Chernov @email{ache@@astral.msk.su} @email{jhc@@iscp.bellcore.com} (Jonathan Clark) @email{mafc!frank@@bach.helios.de} (Frank Conrad) Ed Carp @email{erc@@apple.com} @email{mpc@@mbs.linet.org} (Mark Clements) @email{verbus@@westmark.westmark.com} (Verbus M. Counts) @email{cbmvax!snark.thyrsus.com!cowan} (John Cowan) Bob Cunningham @email{bob@@soest.hawaii.edu} @email{jorge@@laser.satlink.net} (Jorge Cwik) @email{kdburg@@incoahe.hanse.de} (Klaus Dahlenburg) Damon @email{d@@exnet.co.uk} @email{celit!billd@@UCSD.EDU} (Bill Davidson) @email{hubert@@arakis.fdn.org} (Hubert Delahaye) @email{markd@@bushwire.apana.org.au} (Mark Delany) Allen Delaney @email{allen@@brc.ubc.ca} Gerriet M. Denkmann @email{gerriet@@hazel.north.de} @email{denny@@dakota.alisa.com} (Bob Denny) Drew Derbyshire @email{ahd@@kew.com} @email{ssd@@nevets.oau.org} (Steven S. Dick) @email{gert@@greenie.gold.sub.org} (Gert Doering) @email{gemini@@geminix.in-berlin.de} (Uwe Doering) Hans-Dieter Doll @email{hd2@@Insel.DE} @email{deane@@deane.teleride.on.ca} (Dean Edmonds) Mark W. Eichin @email{eichin@@cygnus.com} @email{erik@@pdnfido.fidonet.org} Andrew Evans @email{andrew@@airs.com} @email{dje@@cygnus.com} (Doug Evans) Marc Evans @email{marc@@synergytics.com} Dan Everhart @email{dan@@dyndata.com} @email{kksys!kegworks!lfahnoe@@cs.umn.edu} (Larry Fahnoe) Matthew Farwell @email{dylan@@ibmpcug.co.uk} @email{fenner@@jazz.psu.edu} (Bill Fenner) @email{jaf@@inference.com} (Jose A. Fernandez) "David J. Fiander" @email{golem!david@@news.lsuc.on.ca} Thomas Fischer @email{batman@@olorin.dark.sub.org} Mister Flash @email{flash@@sam.imash.ras.ru} @email{louis@@marco.de} (Ju"rgen Fluk) @email{erik@@eab.retix.com} (Erik Forsberg) @email{andy@@scp.caltech.edu} (Andy Fyfe) Lele Gaifax @email{piggy@@idea.sublink.org} @email{Peter.Galbavy@@micromuse.co.uk} @email{hunter@@phoenix.pub.uu.oz.au} (James Gardiner [hunter]) Terry Gardner @email{cphpcom!tjg01} @email{dgilbert@@gamiga.guelphnet.dweomer.org} (David Gilbert) @email{ol@@infopro.spb.su} (Oleg Girko) @email{jimmy@@tokyo07.info.com} (Jim Gottlieb) Benoit Grange @email{ben@@fizz.fdn.org} @email{elg@@elgamy.jpunix.com} (Eric Lee Green) @email{ryan@@cs.umb.edu} (Daniel R. Guilderson) @email{greg@@gagme.chi.il.us} (Gregory Gulik) Richard H. Gumpertz @email{rhg@@cps.com} Scott Guthridge @email{scooter@@cube.rain.com} Michael Haberler @email{mah@@parrot.prv.univie.ac.at} Daniel Hagerty @email{hag@@eddie.mit.edu} @email{jh@@moon.nbn.com} (John Harkin) @email{guy@@auspex.auspex.com} (Guy Harris) @email{hsw1@@papa.attmail.com} (Stephen Harris) Tom Ivar Helbekkmo @email{tih@@Norway.EU.net} Petri Helenius @email{pete@@fidata.fi} @email{gabe@@edi.com} (B. Gabriel Helou) Bob Hemedinger @email{bob@@dalek.mwc.com} Andrew Herbert @email{andrew@@werple.pub.uu.oz.au} @email{kherron@@ms.uky.edu} (Kenneth Herron) Peter Honeyman @email{honey@@citi.umich.edu} @email{jhood@@smoke.marlboro.vt.us} (John Hood) Mark Horsburgh @email{markh@@kcbbs.gen.nz} John Hughes @email{john@@Calva.COM} Mike Ipatow @email{mip@@fido.itc.e-burg.su} Bill Irwin @email{bill@@twg.bc.ca} @email{pmcgw!personal-media.co.jp!ishikawa} (Chiaki Ishikawa) @email{ai@@easy.in-chemnitz.de} (Andreas Israel) @email{iverson@@lionheart.com} (Tim Iverson) @email{bei@@dogface.austin.tx.us} (Bob Izenberg) @email{djamiga!djjames@@fsd.com} (D.J.James) Rob Janssen @email{cmgit!rob@@relay.nluug.nl} @email{harvee!esj} (Eric S Johansson) Kevin Johnson @email{kjj@@pondscum.phx.mcd.mot.com} @email{rj@@rainbow.in-berlin.de} (Robert Joop) Alan Judge @email{aj@@dec4ie.IEunet.ie} @email{chris@@cj_net.in-berlin.de} (Christof Junge) Romain Kang @email{romain@@pyramid.com} @email{tron@@Veritas.COM} (Ronald S. Karr) Brendan Kehoe @email{brendan@@cs.widener.edu} @email{warlock@@csuchico.edu} (John Kennedy) @email{kersing@@nlmug.nl.mugnet.org} (Jac Kersing) @email{ok@@daveg.PFM-Mainz.de} (Olaf Kirch) Gabor Kiss @email{kissg@@sztaki.hu} @email{gero@@gkminix.han.de} (Gero Kuhlmann) @email{rob@@pact.nl} (Rob Kurver) "C.A. Lademann" @email{cal@@zls.gtn.com} @email{kent@@sparky.IMD.Sterling.COM} (Kent Landfield) Tin Le @email{tin@@saigon.com} @email{lebaron@@inrs-telecom.uquebec.ca} (Gregory LeBaron) @email{karl@@sugar.NeoSoft.Com} (Karl Lehenbauer) @email{alex@@hal.rhein-main.de} (Alexander Lehmann) @email{merlyn@@digibd.com} (Merlyn LeRoy) @email{clewis@@ferret.ocunix.on.ca} (Chris Lewis) @email{gdonl@@ssi1.com} (Don Lewis) @email{libove@@libove.det.dec.com} (Jay Vassos-Libove) @email{bruce%blilly@@Broadcast.Sony.COM} (Bruce Lilly) Godfrey van der Linden @email{Godfrey_van_der_Linden@@NeXT.COM} Ted Lindgreen @email{tlindgreen@@encore.nl} @email{andrew@@cubetech.com} (Andrew Loewenstern) "Arne Ludwig" @email{arne@@rrzbu.hanse.de} Matthew Lyle @email{matt@@mips.mitek.com} @email{djm@@eng.umd.edu} (David J. MacKenzie) John R MacMillan @email{chance!john@@sq.sq.com} @email{jum@@helios.de} (Jens-Uwe Mager) Giles D Malet @email{shrdlu!gdm@@provar.kwnet.on.ca} @email{mem@@mv.MV.COM} (Mark E. Mallett) @email{pepe@@dit.upm.es} (Jose A. Manas) @email{peter@@xpoint.ruessel.sub.org} (Peter Mandrella) @email{martelli@@cadlab.sublink.org} (Alex Martelli) W Christopher Martin @email{wcm@@geek.ca.geac.com} Yanek Martinson @email{yanek@@mthvax.cs.miami.edu} @email{thomasm@@mechti.wupper.de} (Thomas Mechtersheimer) @email{jm@@aristote.univ-paris8.fr} (Jean Mehat) @email{me@@halfab.freiburg.sub.org} (Udo Meyer) @email{les@@chinet.chi.il.us} (Leslie Mikesell) @email{bug@@cyberdex.cuug.ab.ca} (Trever Miller) @email{mmitchel@@digi.lonestar.org} (Mitch Mitchell) Emmanuel Mogenet @email{mgix@@krainte.jpn.thomson-di.fr} @email{rmohr@@infoac.rmi.de} (Rupert Mohr) Jason Molenda @email{molenda@@sequent.com} @email{ianm@@icsbelf.co.uk} (Ian Moran) @email{jmorriso@@bogomips.ee.ubc.ca} (John Paul Morrison) @email{brian@@ilinx.wimsey.bc.ca} (Brian J. Murrell) @email{service@@infohh.rmi.de} (Dirk Musstopf) @email{lyndon@@cs.athabascau.ca} (Lyndon Nerenberg) @email{rolf@@saans.north.de} (Rolf Nerstheimer) @email{tom@@smart.bo.open.de} (Thomas Neumann) @email{mnichols@@pacesetter.com} Richard E. Nickle @email{trystro!rick@@Think.COM} @email{stephan@@sunlab.ka.sub.org} (Stephan Niemz) @email{raymond@@es.ele.tue.nl} (Raymond Nijssen) @email{nolan@@helios.unl.edu} (Michael Nolan) david nugent @email{david@@csource.oz.au} Jim O'Connor @email{jim@@bahamut.fsc.com} @email{kevin%kosman.uucp@@nrc.com} (Kevin O'Gorman) Petri Ojala @email{ojala@@funet.fi} @email{oneill@@cs.ulowell.edu} (Brian 'Doc' O'Neill) @email{Stephen.Page@@prg.oxford.ac.uk} Peter Palfrader @email{peter@@palfrader.org} @email{abekas!dragoman!mikep@@decwrl.dec.com} (Mike Park) Tim Peiffer @email{peiffer@@cs.umn.edu} @email{don@@blkhole.resun.com} (Don Phillips) "Mark Pizzolato 415-369-9366" @email{mark@@infocomm.com} John Plate @email{plate@@infotek.dk} @email{dplatt@@ntg.com} (Dave Platt) @email{eldorado@@tharr.UUCP} (Mark Powell) Mark Powell @email{mark@@inet-uk.co.uk} @email{pozar@@kumr.lns.com} (Tim Pozar) @email{joey@@tessi.UUCP} (Joey Pruett) Paul Pryor @email{ptp@@fallschurch-acirs2.army.mil} @email{putsch@@uicc.com} (Jeff Putsch) @email{ar@@nvmr.robin.de} (Andreas Raab) Vadim Radionov @email{rvp@@zfs.lg.ua} Jarmo Raiha @email{jarmo@@ksvltd.FI} James Revell @email{revell@@uunet.uu.net} Scott Reynolds @email{scott@@clmqt.marquette.Mi.US} @email{mcr@@Sandelman.OCUnix.On.Ca} (Michael Richardson) Kenji Rikitake @email{kenji@@rcac.astem.or.jp} @email{arnold@@cc.gatech.edu} (Arnold Robbins) @email{steve@@Nyongwa.cam.org} (Steve M. Robbins) Ollivier Robert @email{Ollivier.Robert@@keltia.frmug.fr.net} Serge Robyns @email{sr@@denkart.be} Lawrence E. Rosenman @email{ler@@lerami.lerctr.org} Jeff Ross @email{jeff@@wisdom.bubble.org} Aleksey P. Rudnev @email{alex@@kiae.su} "Heiko W.Rupp" @email{hwr@@pilhuhn.ka.sub.org} @email{wolfgang@@wsrcc.com} (Wolfgang S. Rupprecht) @email{tbr@@tfic.bc.ca} (Tom Rushworth) Peter Rye @email{prye@@picu-sgh.demon.co.uk} @email{jsacco@@ssl.com} (Joseph E. Sacco) @email{rsalz@@bbn.com} (Rich Salz) Curt Sampson @email{curt@@portal.ca} @email{sojurn!mike@@hobbes.cert.sei.cmu.edu} (Mike Sangrey) Nickolay Saukh @email{nms@@ussr.EU.net} Ignatios Souvatzis @email{is@@jocelyn.rhein.de} @email{heiko@@lotte.sax.de} (Heiko Schlittermann) Eric Schnoebelen @email{eric@@cirr.com} @email{russell@@alpha3.ersys.edmonton.ab.ca} (Russell Schulz) @email{scott@@geom.umn.edu} Igor V. Semenyuk @email{iga@@argrd0.argonaut.su} Christopher Sawtell @email{chris@@gerty.equinox.gen.nz} @email{schuler@@bds.sub.org} (Bernd Schuler) @email{uunet!gold.sub.org!root} (Christian Seyb) Marcus Shang @email{marcus.shang@@canada.cdev.com} @email{s4mjs!mjs@@nirvo.nirvonics.com} (M. J. Shannon Jr.) @email{shields@@tembel.org} (Michael Shields) @email{peter@@ficc.ferranti.com} (Peter da Silva) @email{vince@@victrola.sea.wa.us} (Vince Skahan) @email{frumious!pat} (Patrick Smith) @email{roscom!monty@@bu.edu} (Monty Solomon) @email{sommerfeld@@orchard.medford.ma.us} (Bill Sommerfeld) Julian Stacey @email{stacey@@guug.de} @email{evesg@@etlrips.etl.go.jp} (Gjoen Stein) Harlan Stenn @email{harlan@@mumps.pfcs.com} Ralf Stephan @email{ralf@@ark.abg.sub.org} @email{johannes@@titan.westfalen.de} (Johannes Stille) @email{chs@@antic.apu.fi} (Hannu Strang) @email{ralf@@reswi.ruhr.de} (Ralf E. Stranzenbach) @email{sullivan@@Mathcom.com} (S. Sullivan) Shigeya Suzuki @email{shigeya@@dink.foretune.co.jp} @email{kls@@ditka.Chicago.COM} (Karl Swartz) @email{swiers@@plains.NoDak.edu} Oleg Tabarovsky @email{olg@@olghome.pccentre.msk.su} @email{ikeda@@honey.misystems.co.jp} (Takatoshi Ikeda) John Theus @email{john@@theus.rain.com} @email{rd@@aii.com} (Bob Thrush) ppKarsten Thygesen @email{karthy@@dannug.dk} Graham Toal @email{gtoal@@pizzabox.demon.co.uk} @email{rmtodd@@servalan.servalan.com} (Richard Todd) Michael Ju. Tokarev @email{mjt@@tls.msk.ru} Martin Tomes @email{mt00@@controls.eurotherm.co.uk} Len Tower @email{tower-prep@@ai.mit.edu} Mark Towfiq @email{justice!towfiq@@Eingedi.Newton.MA.US} @email{mju@@mudos.ann-arbor.mi.us} (Marc Unangst) Matthias Urlichs @email{urlichs@@smurf.noris.de} Tomi Vainio @email{tomppa@@fidata.fi} @email{a3@@a3.xs4all.nl} (Adri Verhoef) Andrew Vignaux @email{ajv@@ferrari.datamark.co.nz} @email{vogel@@omega.ssw.de} (Andreas Vogel) Dima Volodin @email{dvv@@hq.demos.su} @email{jos@@bull.nl} (Jos Vos) @email{jv@@nl.net} (Johan Vromans) David Vrona @email{dave@@sashimi.wwa.com} @email{Marcel.Waldvogel@@nice.usergroup.ethz.ch} (Marcel Waldvogel) @email{steve@@nshore.org} (Stephen J. Walick) @email{syd@@dsinc.dsi.com} (Syd Weinstein) @email{gerben@@rna.indiv.nluug.nl} (Gerben Wierda) @email{jbw@@cs.bu.edu} (Joe Wells) @email{frnkmth!twwells.com!bill} (T. William Wells) Peter Wemm @email{Peter_Wemm@@zeus.dialix.oz.au} @email{mauxci!eci386!woods@@apple.com} (Greg A. Woods) @email{John.Woods@@proteon.com} (John Woods) Michael Yu.Yaroslavtsev @email{mike@@yaranga.ipmce.su} Alexei K. Yushin @email{root@@july.elis.crimea.ua} @email{jon@@console.ais.org} (Jon Zeeff) Matthias Zepf @email{agnus@@amylnd.stgt.sub.org} Eric Ziegast @email{uunet!ziegast} @end example @node Index (concepts), Index (configuration file), Acknowledgements, Top @unnumbered Concept Index @printindex cp @node Index (configuration file), , Index (concepts), Top @unnumbered Configuration File Index @printindex fn @contents @bye
http://opensource.apple.com//source/uucp/uucp-11/uucp/uucp.texi
CC-MAIN-2016-40
refinedweb
43,826
55.64
Driver Entry Points probe(9E) NAME probe - determine if a non-self-identifying device is present SYNOPSIS #include <sys/conf.h> #include <sys/ddi.h> #include <sys/sunddi.h> static intprefixprobe(dev_info_t *dip); INTERFACE LEVEL. ARGUMENTS dip Pointer to the device's dev_info structure. DESCRIPTION). probe() should only probe the device. It should not create or change any software state. Device initialization should be done in attach(9E). For a self-identifying device, this entry point is not necessary. However, if a device exists in both self- identifying and non-self-identifying forms, a probe() rou- tine can be provided to simplify the driver. ddi_dev_is_sid(9F) can then be used to determine whether probe() needs to do any work. See ddi_dev_is_sid(9F) for an example. RETURN VALUES DDI_PROBE_SUCCESS If the probe was successful. DDI_PROBE_FAILURE If the probe failed. DDI_PROBE_DONTCARE If the probe was unsuccessful, yet attach(9E) should SunOS 5.8 Last change: 18 Nov 1992 1 Driver Entry Points probe(9E) still be called. DDI_PROBE_PARTIAL If the instance is not present now, but may be present in the future. SEE ALSO attach(9E), identify(9E), ddi_dev_is_sid(9F), ddi_map_regs(9F), ddi_peek(9F), ddi_poke(9F), nulldev(9F), dev_ops(9S) Writing Device Drivers SunOS 5.8 Last change: 18 Nov 1992 2
http://www.manpages.info/sunos/probe.9.html
crawl-003
refinedweb
210
61.83
A lightweight introduction to Recursion Schemes in Scala by Diego Alonso - • - February 28, 2018 - • - functional programming• scala• matryoshka• recursion schemes Last December, I attended a talk by Zainab Ali entitled Topiary and the art of origami. This talk focused how to write a decision-tree-learning program in Scala. Zainab showed how, from some non-recursive functions, one can raise a decision tree data type, a learning algorithm, and a predicting algorithm, each in one line. She could do this by using Matryoshka, a library that implements recursion schemes in Scala. Recursion schemes are relevant in functional programming. Contrary to what the jargon suggests, they are not too difficult to understand. Here is a small-steps introduction to basic recursion schemes in Scala. Lists A simple and commonplace recursive data type is the List. Scala provides a List[A] type, which is generic on the type A of elements. Since recursion schemes work just as well with non-generic collections, for convenience I am using a numbers-only list type instead: sealed trait List case object Nil extends List case class Cons(head: Int, tail: List) extends List This List type is recursive because the Cons class has a value parameter which is a List itself. Step 1: list fold How do you add up all the numbers in a list? Since most functional languages have no while loops, one has to use recursion: def sum(list: List): Int = list match { case Nil => 0 case Cons(head, tail) => head + sum(tail) } However, recursion should not be abused: it is too powerful and confusing, and can make programs difficult to read. This is why functional programming aims at hiding it behind higher-order functions like map or fold. Here is a fold for our List type: def fold( zero: Int, op: (Int, Int) => Int)(list: List): Int = list match { case Nil => zero case Cons(head, tail) => op( head, fold(zero, op)(tail) ) } Apart from the list, fold takes as parameters the result for Nil, and a function op used to combine the head of a Cons with the result of the recursive fold on the tail. We can now write the sum as a special case of fold, like this: def add(a: Int, b: Int) = a + b def sum(list) = fold(0, add)(list) Now there is no recursion in the code of sum: it has been moved out of sum and into fold. Step 2: list unfold We can write a digits function, to get the list of digits ( [0-9]) of a number, also using recursion: def digits(seed: Int): List = if (seed == 0) Nil else Cons(seed % 10, digits(seed / 10) ) Unlike in sum, in digits we use recursion to build a list. Nevertheless, like sum with fold, we can also take recursion out of digits and put into a higher-order function called unfold: def unfold( isEnd: Int => Boolean, op: Int => (Int, Int) )( seed: Int ): List = if (isEnd(seed)) Nil else { val (head, next) = op(seed) Cons(head, unfold(isEnd, op)(next) ) } def isZero(x: Int): Boolean = x == 0 def div10(x: Int): (Int, Int) = (x % 10, x / 10) def digits(seed) = unfold(isZero, div10)(seed) Apart from the seed, unfold takes as parameter a predicate to mark the end, and a function Int => (Int, Int) to split a seed into the value at the head, and the seed from which to build the tail. Step 3: The optional cons type fold and unfold are opposed: this grows a list from a seed, that reduces it to a result. However, there is a sort of symmetry between their definitions. To see this symmetry, we use an OCons type alias: type OCons = Option[(Int, Int)] This OCons alias can describe an optional Cons, intuitively, if there is a recursive case ( Some(Int, Int)), or a base case ( None). We can write fold and unfold to work around this type: def fold(out: OCons => Int)(list: List): Int = list match { case Nil => out( None) case Cons(head, tail) => out( Some( head, fold(out)(tail) )) } def unfold( into: Int => OCons)(seed: Int): List = into(seed) match { case None => Nil case Some((head, next)) => Cons(head, unfold(into)(next) ) } Both fold and unfold now take as parameter a function on OCons, but with reversed types: in fold the function gets a number out of an OCons, whereas in unfold it splits a number into an OCons. The new types of fold and unfold show another similarity: fold: (OCons => Int) => (List => Int) unfold: (Int => OCons) => (Int => List) So, we can see fold and unfold each as lifting a single-step functions, out or in, into a loop that performs that step many times. Step 4: Tree Folds Another recursive data type is the type of binary trees of numbers: sealed trait Tree case object Leaf extends Tree case class Node(left: Tree, top: Int, right: Tree) Unlike lists, the Tree type appears recursively twice in the Node subclass, for the left and right subtrees. As we did for lists, we can write a sum function for trees, using recursion: def sum(tree: Tree): Int = tree match { case Leaf => 0 case Node(ll,top,rr) => sum(ll) + top + sum(rr) } Unlike the sum for lists, here we have two recursive calls, one per subtree. Despite this, we can also take the recursion out of sum and into a tree- fold function: def fold( zero: Int, op: (Int, Int, Int) => Int)(tree: Tree): Int = tree match { case Leaf => zero case Node(ll,top,rr) => op( fold(zero, op)(ll), top, fold(zero, op)(rr) ) } def add3(a: Int, b: Int, c: Int) = a + b + c def sum(tree: Tree): Int = fold(0, add3)(tree) Step 5: Tree unfolds We can write a function digits to lay the digits of a large number out in a binary tree, much like the digits function from Step 2. def digits(seed: Int): Tree = if (seed ==0) Leaf else { val (pref, mid, suff) = splitNumber(seed) Node( digits(pref), mid, digits(suff) ) } // splitNumbers: split a number's digits in the middle, // for example, splitNumber(56784197) = (567, 8, 4197) def splitNumber(seed: Int): (Int, Int, Int) = /*---*/ As in the case of a list, we can extract recursion from the digits function into an unfold functions for trees: def unfold( isEnd: Int => Boolean, op: Int => (Int, Int, Int) )(seed: Int ): Tree = if (isEnd(seed)) Leaf else { val (ll, top, rr) = op(seed) Node( unfold(isEnd, op)(ll), top, unfold(isEnd, op)(rr) ) } def digits(seed: Int): Tree = unfold(isEnd, splitNumber)(seed) Step 6: The optional node type for Like the fold and unfold for lists, the fold and unfold for trees are opposed but similar. As we did with OCons, we can show the symmetry between the fold and unfold functions for trees using an ONode type: type ONode = Option[(Int, Int, Int)] def fold(out: ONode => Int)(tree: Tree): Int = tree match { case Leaf => fun( None ) case Node(ll, top, rr) => fun( Some( fold(out)(ll), top, fold(out)(rr) ) ) } def unfold(in: Int => ONode)(seed: Int): Tree = fun(seed) match { case None => Leaf case Some( (ll, top, rr) ) => Node( unfold(out)(ll), top, unfold(out)(rr) ) } Here, ONode intuition is if there is recursion or not. It has three instead of two numbers, because in a tree there are two recursive occurrences, not one. Step 7: Using maps Let us compare the fold- unfold functions for lists based on the OCons alias, with those for trees that use the ONode alias. We can see some similarities between them: - The foldfunctions for lists and trees both take a function to get the number outof the OConsor ONode, respectively. - The unfoldfunctions both take a function to split a number intoan OConsor ONode. - There is a one-to-one translation between the cases of the data structure, and those of the non-recursive type alias. - A same recursive call, fold(out)or unfold(into), is applied to each recursive occurrence of the data type. To highlight this similarity, we make OCons and ONode generic on a data type R, to mark the recursive positions. type OCons[R] = Option[(Int, R)] type ONode[R] = Option[(R, Int, R)] At each type OCons or ONode, we can “apply a same recursive call to each recursive position” by using a map function: def mapOC[A,B]( fun: A => B, ocons: OCons[A]): OCons[B] = ocons match { case None => None case Some(head, tail) => Some(head, fun(tail)) } def mapON(fun: A => B, onode: ONode[A]): ONode[B] = onode match { case None => None case Some(ll, top, rr) => (fun(ll), top, fun(rr)) } These are very similar to the map function for the Option type. Step 8: Aligning Folds The idea of a one-to-one translation from the cases of each data type to the cases of the Option alias, we can write with an open function for each type: def open(tree: List): ONode[List] = tree match { case Nil => None case Cons(head, tail) => Some(head, tail) } def open(tree: Tree): ONode[Tree] = tree match { case Leaf => None case Node(ll, top, rr) => Some(ll, top, rr) } Using the functions open and map functions, we can now express the fold and unfold functions into a single line each: def fold(out: OCons[Int] => Int)(List: Tree): Int = out( mapOC(fold(out), open(tree)) ) def fold(out: ONode[Int] => Int)(tree: Tree): Int = out( mapON(fold(out), open(tree)) ) These definitions of fold use a same map function to recursively apply fold or unfold to each appearance of the R type parameter. Step 9: Align Unfolds For its part, the 1:1 translation from the cases of the alias type to the cases of the data type we can write with a function close, for lists and trees. def close(ocons: OCons[List]): List = ocons match { case None => Nil case Some(head, tail) => Cons(head, tail) } def close(onode: ONode[Tree]): Tree = onode match { case None => Leaf case Some(ll, top, rr) => Node(ll, top, rr) } Using the functions open, close, and map, we can write fold/ unfold with a single line each: def unfold(into: Int => OCons[Int])(seed: Int): List = close( mapOC( unfold(into), into(seed) ) ) def unfold(into: Int => ONode[Int])(seed: Int): Tree = close( mapON( unfold(into), into(seed) ) ) Again, we use map to apply the recursive call to unfold at each appearance of the R type parameter. Step 10: Indirect Recursion for each Data Type So far, we have only focused on hiding direct recursion from functions. Can we do the same with data types? Yes, if we use a form of indirection. In fold and unfold, the functions in and out functions represent one recursive step. Likewise, we can use the types OCons[R] and ONode[R] to represent one link in a list or tree. The type parameter R gives us the indirection that we need to cut recursion in each data type: case class List_Ind(opt: OCons[List_Ind]) case class Tree_Ind(opt: ONode[Tree_Ind]) The List_Ind (resp. Tree_Ind) data type now consists of a single case class with a single parameter opt. The type of opt is the type operator OCons (or ONode) applied to the data type List_Ind (or Tree_Ind) itself. Thus, we have shifted recursion from values to types. For the List_Ind and Tree_Ind classes, we can write the following fold and unfold) )) The two fold functions (resp. unfold) are almost the same, and they only differ in the type operator ( OCons vs ONode), and in the map function ( mapOC vs mapON) for it. Note that, since List_Ind and Tree_Ind already use OCons and ONode, we no longer need the auxiliary functions open and Step 11: Indirect recursion for all data types Now we can unify List_Ind and Tree_Ind into a single Ind case class, which generalizes indirect recursion in the data types. The classes List_Ind and Tree_Ind above only differ in the type operator ( OCons or ONode) used in the type of the opt field. To unify them, we need to extract that type operator as a type parameter, which we call as Rec. case class Ind[ Rec[_] ]( opt: Rec[ Ind[Rec] ] ) The Rec[_] symbol means that Rec generalizes type operators, like OCons[R] or ONode[R], which themselves take a type parameter R. We can now turn List_Ind and Tree_Ind each into a special case of Ind: type List_Ind = Ind[OCons] type Tree_Ind = Ind[ONode] Step 12: Unified fold and unfold. Now, let us unify the fold and unfold functions for List_Ind and Tree_Ind, from Step 10, into a single pair of fold- unfold functions for the Ind[Rec[_]] type. Let us start from the code of this) )) To join the two fold (or unfold) functions into a single one, we have to 1) add the generic parameter Rec[_] to the function, replace each appearance of OCons and ONode by Rec; and replace each appearance of List_Ind and Tree_Ind by Ind[Rec]. This yields the following: def fold[ Rec[_] ](out: Rec[Int] => Int)(ind: Ind[Rec]): Int = out( map( fold(out), ind.opt) ) def unfold[ Rec[_] ](into: Int => Rec[Int])(seed: Int): Ind[Rec] = Ind( map(unfold(into), into(seed)) ) We also have to unify mapOC and mapON into a function map for Rec[_]. However, since Rec[_] is generic, the map must be provided as another parameter of the function. A common way to do so is to wrap the map inside a trait, usually called a Functor, which is also generic on Rec. trait Functor[F[_]] { def map[A, B](fun: A => B, from: F[A]): F[B] } def fold[Rec[_]]( ff: Functor[Rec], out: Rec[Int] => Int)(ind: Ind[F] ): Int = out( ff.map( fold(ff, out), ind.opt) ) def unfold[Rec[_]]( ff: Functor[Rec], into: Int => Rec[Int])(seed: Int ): Ind = Ind( ff.map( unfold(into), into(seed)) ) Now we can use these functions as a fold and unfold for lists, trees, or any data structure we need. End After all of the previous steps, what we have is the following: - We joined two recursive data types, lists and trees, into a single the case class Ind[Rec[_]]for indirect recursion. - We joined two foldfunctions, for lists and trees, into a single foldfunction for the Indtypeclass. - We joined two unfoldfunctions, for lists and trees, into a single unfoldfunction for the Indtypeclass. The class Ind[Rec[_]] is generic on the Rec[_] type constructor, which means that it can represent not just lists or binary trees, but also any directly recursive data type. Jargon The notions and intuitions we used above have some special names. - A generic type like Ind[Rec[_]], that takes another generic type Rec[_]as a type parameter, is called a higher-kinded type. - The trait Functor, used to provide a mapfunction, is a fundamental type-class in cats. - The Ind[Rec[_]]case class, that captures the recursive application of Rec[_]type constructor, is called the fixpoint data type. In Matryoshka, it is called Fix. - The outfunction, of the form Rec[A] => A, is called an algebra. - The intofunction, of the form A => Rec[A], is called a coalgebra. - The foldfunction, which reiterates the algebra Rec[A] => Ainto a function Ind[Rec] => A, is called a catamorphism. - A function like unfold, which reiterates the coalgebra A => Rec[A]into a function A => Ind[Rec], is called anamorphism. Some of this jargon can be traced back to the academic research on recursion schemes, and by convention it is used in libraries, like Matryoshka, that implement recursion schemes. Additional reading - Gibbon’s Origami Programming introduces the main recursion schemes, apart from folds and unfolds, using diverse examples such as sorting algorithms. - Zainab Ali’s talk, which inspired this post, applies the techniques we used above to another data type, decision trees, which are like binary trees but with more detail in each node or leaf. - Rob Norris presentation, about how to store academic genealogy trees (another recursive data type) in a relational database (Sep. 2016).
https://www.47deg.com/blog/basic-recursion-schemes-in-scala/
CC-MAIN-2019-04
refinedweb
2,683
57.84
I confused. There is RootManageSharedAccessKey for the ServiceBus itself: the Endpoint=sb://analyticssample.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;... There can be also the "Shared Access Policies" for each EventHub, which have names. Seems we have to use the RootManageSharedAccessKey for the create/delete/.. eventhubs. What we have to use to send/receive(listen) messages from eventhubs? Can we use both keys (RootManageSharedAccessKey and Shared Access Policy Keys? Or we have to use only the Shared Access Policy Keys? Leonid Ganeline [BizTalk MVP] You are right - My understanding is this... We may use any for runtime operations on specific EventHub, however, it's always good to use keys that have minimal but sufficient access. Management operations should be using SAS tokens of entire ServiceBus namespace. I believe, SAS tokens exclusively for each EventHub is for selective auth per EventHub. Abin
https://social.msdn.microsoft.com/Forums/en-US/ba1c7742-6483-4062-944f-cd4e991a7450/eventhub-connection-string-for-servicebus-or-for-eventhub-policy?forum=servbus
CC-MAIN-2019-51
refinedweb
139
50.02
All - O'Reilly Media 2016-09-24T22:33:07. Reducing Risk in the Petroleum Industry 2016-09-23T11:00:00Z tag: <p><img src=''/></p><p><em>Learn the challenges oil and gas companies face when collecting data and how they mitigate short-term operational risk and optimize long-term reservoir management.</em></p> <h2>Reducing Risk in the Petroleum Industry: Machine Data and Human Intelligence</h2> <h2>Introduction</h2> <p>To the buzzword-weary, <em>Big Data</em> has become the latest in the infinite series of technologies that "change the world as we know it." But amidst the hype, there is an epochal shift: the current exponential growth in data is unprecedented and is not showing any signs of slowing down.</p> <p>Compared to the short timelines of technology startups, the long history of the petroleum industry provides stark examples to illustrate this change. Seismic research happens early in the exploration and extraction stages. In 1990, one square kilometer yielded 300 megabytes of seismic data. In 2015, this was 10 petabytes—33 million times more, according to Satyam Priyadarshy, chief data scientist at Halliburton. First principles, intuition, and manual arts are overwhelmed by this volume and variety of data. Data-driven models, however, can derive immense value from this data flood. This report gathers highlights from Strata+Hadoop World conferences that showcase the use of data science to minimize risk in the petroleum industry.</p><p>Continue reading <a href=''>Reducing Risk in the Petroleum Industry.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Naveen Viswanath 2 major reasons why modern C++ is a performance beast 2016-09-23T10:00:00Z tag: <p><img src=''/></p><p><em>Use smart pointers and move semantics to supercharge your C++ code base.</em></p><p.</p> <p>One only needs to do a bit of Googling to see that there are a lot of new features in modern C++. In this article, I’ll focus on two key features that represent major milestones in C++’s performance evolution: smart pointers and move semantics.</p> <h2>Smart pointers</h2> <p."</p> <p.</p> <p>The trouble with raw pointers is that there are too many ways to misuse them, including: forgetting to initialize them, forgetting to release dynamic memory, and releasing dynamic memory <em>too many</em> times. Many such problems can be mitigated or even completely eliminated through the use of <em>smart</em> pointers—class templates designed to encapsulate raw pointers and greatly improve their overall reliability. C++98 provided the <code>auto_ptr</code> template that did part of the job, but there just wasn't enough language support to do that tricky job completely.</p> <p>As of C++11, that language support is there, and as of C++14 not only is there no remaining need for the use of raw pointers in the language, but there's rarely even any need for the use of the raw <code>new</code> and <code>delete</code>:</p> <pre data- <code>// ---------------------------- // </code></pre> <h2>Move semantics</h2> <p>Pre C++11, there was still one fundamental area where performance was throttled: where C++’s value-based semantics incurred costs for the unnecessary copying of resource-intensive objects. In C++98, a function declared something like this:</p> <pre data- <code>vector<Widget> makeWidgetVec(creation-parameters); </code></pre> <p>struck fear into any cycle-counter’s heart, due to the potential expense of returning a vector of <code>Widget</code>s by value (let’s assume that <code>Widget</code> is some sort of resource-hungry type). The rules of the C++98 language require that a vector be constructed within the function and then copied upon return or, at the very least, that the program behave <em>as if</em> that were the case. When individual <code>Widget</code>s <em>optimization</em> and circumstances do not always allow compilers to apply it. Hence, the cost of such code could not be reliably predicted.</p> <p>The introduction of <em>move semantics</em> in modern C++ completely removes that uncertainty. Even if the <code>Widget</code>s are not “move-enabled," returning a temporary container of <code>Widget</code>s from a function by value becomes a very efficient operation because the vector template itself is move-enabled.</p> <p>Additionally, if the <code>Widget</code> class <em>is</em> <code>Widget</code>, a "conventional" version as displayed below:</p> <pre data- <code>#include <cstring> class Widget { public: Widget() : size(10000), ptr(new char[size]) {} ~Widget() { delete ptr; } // Copy constructor: Widget(const Widget &rhs) : size(rhs.size), ptr(new char[rhs.size]) { std::memcpy(ptr, rhs.ptr, size); } // Copy assignment operator: Widget& operator=(const Widget &rhs) { char *p = new char[rhs.size]; delete [] ptr; ptr = p; size = rhs.size; std::memcpy(ptr, rhs.ptr, size); return *this; } private: size_t size; char *ptr; }; // Output of test program: // // Size of vw: 500000 // Time for one push_back on full vector: 57.861 </code></pre> <p>And one enhanced to support move semantics:</p> <pre><code>#include <cstring> class Widget { public: Widget() : size(10000), ptr(new char[size]) {} ~Widget() { delete ptr; } // Copy constructor: Widget(const Widget &rhs) : size(rhs.size), ptr(new char[rhs.size]) { std::memcpy(ptr, rhs.ptr, size); } // Move constructor: Widget(Widget &&rhs) noexcept : size(rhs.size), ptr(rhs.ptr) { rhs.size = 0; rhs.ptr = nullptr; } // Copy assignment operator: Widget& operator=(const Widget &rhs) { size = rhs.size; char *p = new char[rhs.size]; delete []ptr; ptr = p; std::memcpy(ptr, rhs.ptr, size); return *this; } // Move assignment operator Widget &operator=(Widget &&rhs) noexcept { delete [] ptr; size = rhs.size; ptr = rhs.ptr; rhs.size = 0; rhs.ptr = nullptr; return *this; } private: size_t size; char *ptr; }; // Output: // // Size of vw: 500000 // Time for one push_back on full vector: 0.031 </code></pre> <p>Using a simple <code>timer</code> class, the test program populates a vector with half a million instances of a <code>Widget</code> and, making sure the vector is at its capacity, reports the time for a single additional <code>push_back</code> of a Widget onto the vector. In C++98, the <code>push_back</code> operation takes almost a minute (on my ancient Dell Latitude E6500). In modern C++ and using the move-enabled version of <code>Widget</code>, the same <code>push_back</code> operation takes .031 seconds. Here's a simple <code>timer</code> class:</p> <pre data- <code>#include <ctime> class Timer { public: Timer(): start(std::clock()) {} operator double() const { return (std::clock() - start) / static_cast<double>(CLOCKS_PER_SEC); } void reset() { start = std::clock(); } private: std::clock_t start; }; </code></pre> <p>And here's a test program to time one <code>push_back</code> call on a large, full vector of <code>Widget</code>s (a memory-hogging class):</p> <pre data- <code>; } </code></pre> <p.</p> <p>It's difficult to say much more about these new features without drilling down into implementation techniques. If you’d like to learn more, register for my in-person training course on October 26-28, <a href="">Transitioning to Modern C++</a>, where I’ll teach you these features and much more will be explored in greater detail.</p> <p>Continue reading <a href=''>2 major reasons why modern C++ is a performance beast.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Leor Zolman The challenges of holiday performance readiness 2016-09-23T09:30:00Z tag: <p><img src=''/></p><p><em>Five important things you can do to survive the holiday rush.</em></p><p>Preparing for holidays, especially if you’re engaging in commerce, can be a stressful time if haven’t done your homework. Black Friday and other holidays can cause traffic to skyrocket, often exceeding 100x non-holiday peak. Failure during Black Friday or other peak times could put you out of a job, generate bad press for your company and even impact your company’s stock price.</p> <p>To survive, you need to first understand that holiday readiness is primarily a management activity. While technology is involved, it is not the focus. Let’s review some of the top ways to ensure you have a stress-free holiday or special event.</p> <h2>Remove the “Human” element</h2> <p>Humans are bad at repeatedly performing even basic tasks. From driving a car, to eating soup without spilling it on your shirt—humans make mistakes. In our industry, one mistyped character can mean the difference between success and failure. The airline industry has coped with this by making extensive use of automation (autopilot) and checklists. These two principles can be directly applied to holiday readiness.</p> <p>Automation should be extensively employed to avoid downtime. Auto-scaling should be employed at all levels to avoid mistakes while provisioning hardware and software. The installation and configuration of all software should be scripted. Health checking should be automated. A human’s job should be to automate—not to manually perform tasks.</p> <p>Where automation cannot be employed, checklists should be used—just like pilots use in the cockpit. For example, each team should develop an extensive checklist verifying that their part of the overall system is functioning. Check for file permission issues, web server configuration, iptables rules, etc. In addition to verification checklists, have checklists for what happens in an outage. Whose role is it to communicate with executives? Whose role is it to communicate with each vendor? There shouldn’t be any “guessing.” You should have checklists for everything.</p> <h2>Manage change properly</h2> <p>To avoid unexpected downtime, you should aim to minimize as many changes as possible. Changes include deploying new custom code, changing or upgrading supporting software (application servers, databases, firmware, etc.), manually adding new network gear, upgrading hardware, etc. Every change is a possible cause of an outage. Many retailers freeze their production systems for the months of October and November in preparation for Black Friday, with any change requiring the CIO to sign off. While this puts forward business and technology progress on hold, it does wonders for stability. Changes that need to go into production close to your special event should go through an extensive change management process. Remember, many have lost their jobs due to outages.</p> <h2>Cache everything</h2> <p>Most of the traffic for a holiday is likely to be for a handful of pages, like the home page, category overview pages, and product detail pages. A 10x, 100x, or even 1,000x spike in traffic can often be served if those common handful of pages are served directly by your Content Delivery Network (CDN). Rather than pass the requests back to your platform, the CDN can directly serve up a cached copy of the most common pages. Once cached, the pages are just static files served from a web server. Any CDN could serve millions of copies of those pages per second. In addition to caching entire pages, you can also cache page fragments, images, objects within your custom application and objects within your datastore. Caching is well understood and should be done liberally.</p> <h2>Test, test, test. And then test again.</h2> <p>Testing can occur in three locations: the developer’s local environment, an integration environment, and production. Let’s explore each.</p> <p>Locally, your developers should always be running unit tests that verify code’s functionality. Ideally, at least 80% of all code written should be covered by one or more unit tests. Successful unit testing should always be a prerequisite before code is checked in to source control. Developers should also be executing white box security scans, which are scans that look inside the source code for vulnerabilities.</p> <p>Next, once code is checked in, there should be at least one environment where your developers’ code is tested in together. Functional testing should be at the application layer (e.g. direct API calls) and through the various user interfaces (e.g. web, mobile, IoT, etc). At the application layer, you’ll want to make sure that each component produces the correct output for a given input. For example, the REST API for pricing should return something like <code>{"price": 19.95}</code> when invoked. There should be hundreds of tests for each component, ensuring that every conceivable input will be gracefully tolerated. You’ll also want to test your application’s functionality through the various user interfaces using some form of synthetic testing. Synthetic testing simulates real end-user behavior and interacts with your application through a web browser, mobile, etc. It’s more comprehensive than component testing, which tests each component in isolation.</p> <p>You’ll also want to test the performance of each component and various transactions, both without and with load. Find out how much load your application can take before performance rises to an unacceptable level. Work with your CDN vendor to throttle traffic at the edge, before your application’s breaking point is hit.</p> <p>Security should also be tested, but in this environment it should be black box testing rather than white box. Black box testing is from the outside, with no access to the source code. This type of testing acts like an outside hacker would and includes port scans, testing for cross-site scripting vulnerabilities, etc.</p> <p>Testing in production is largely the same as integration, but you’ll want your synthetic testing to be from multiple endpoints around the world in order to more accurately test the user experience. A new generation of cloud-based load generators can quickly generate load from around the world, simulating a variety of different devices (various web browsers, various mobile devices, etc).</p> <h2>Intelligently monitor health</h2> <p>The health of each instance of your application should be continually monitored in production. Health checks should be automated and as in-depth as possible. Define a single URL (e.g. /healthcheck/) for checking an application’s health. That endpoint should test common application functions, like retrieving a product and placing an order in a commerce application. Once these tests are performed, a simple message should be returned, like <code>{"healthy": true}</code>. If the endpoint doesn’t respond or responds with <code>{"healthy": false}</code> too many times, it should be automatically pulled from the load balancer and a new instance should be spawned. Everything should be automated, including the checking of health and re-provisioning of failed instances.</p> <h2>Conclusion</h2> <p>Holiday readiness is both a technical and human problem. Start by automating as much as possible to remove the human factor and reinforce all manual work with appropriate change management. Then, focus on technology. Optimize your stack by caching at all possible layers. Then, test everything. Finally, ensure you are properly monitoring your entire stack.</p> <p>A good first step is to get your teams on the same page, with a shared set of metrics for what success looks like. An example might be handling 25,000 HTTP requests per second and 99.9999% uptime through the month of November. Once the different teams are unified in achieving a shared measurable goal, you can then begin to implement the topics discussed earlier. Good luck!</p> <hr> <p><em>This post is a collaboration between O'Reilly and HPE. <a href="">See our statement of editorial independence</a>.</em></p> <p>Continue reading <a href=''>The challenges of holiday performance readiness.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Kelly Goetsch Four short links: 23 September 2016 2016-09-23T09:00:00Z tag: <p><em>On Reproducibility, Robot Monkey Startup, Stealing Predictive Models, and GPU Equivalence</em></p><ol> <li> <a href="">The Winds Have Changed</a> -- wonderfully constructed rebuttal to a self-serving "those nasty people saying they can't reproduce our media-packaged research findings are just terrible stone-throwers, ignore them" editorial, which builds and builds and should have you reaching for a can of petrol and a lighter by the end.</li> <li> <a href="">Kindred AI</a> -- <i>using artificial intelligence and high-tech exoskeleton suits to allow humans—and, at least according to one description of the technology, monkeys, too—to control and train an army of intelligent robots.</i> Planet of the Apes inches its way closer to being.</li> <li> <a href="">Stealing Machine Learning Models via Prediction APIs</a> -- <i.</i> </li> <li> <a href="">GPU Equivalence for Deep Learning</a> -- <i>In our own testing, we've found that one GPU server is about as fast as 400 CPU cores for running the algorithms we're using</i>. The article itself is an unremarkable overview, but this anecdatum leapt out at me.</li> </ol> <p>Continue reading <a href=''>Four short links: 23 September 2016.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Nat Torkington Keynotes from the O'Reilly Velocity Conference in New York 2016 2016-09-22T19:05:00Z tag: <p><img src=''/></p><p><em>Watch full keynotes covering DevOps, performance, infrastructure, and more. From the O'Reilly Velocity Conference in New York 2016.</em></p><p>Experts from across the web operations and performance worlds came together in New York for the <a href="">O’Reilly Velocity Conference in New York 2016</a>. Below you'll find links to keynotes from the event.</p> <h2>Don't gamble when it comes to reliability</h2> <p>How do you stay reliable when you can’t keep the whole system in your head? Tom Croucher discusses Uber's approach to reliability.</p> <ul> <li>Watch "<a href="">Don't gamble when it comes to reliability</a>."</li> </ul> <h2>DevOps, collaboration, and globally distributed teams</h2> <p>Ashish Kuthiala presents research-based findings on the factors that play the most important roles in accelerating DevOps adoption.</p> <ul> <li>Watch "<a href="">DevOps, collaboration, and globally distributed teams</a>."</li> </ul> <h2>Serverless is other people</h2> <p>Rachel Chalmers explores about what serverless means for security, networking, support, and culture.</p> <ul> <li>Watch "<a href="">Serverless is other people</a>."</li> </ul> <h2>We need a bigger goal than collecting data</h2> <p>Mehdi Daoudi challenges business leaders and IT ops professionals to consider the ROI of analyses. How quickly can we get real insights from our data?</p> <ul> <li>Watch "<a href="">We need a bigger goal than collecting data</a>."</li> </ul> <h2>Situation normal: All fouled up</h2> <p>Richard Cook and David Woods examine the problems and potential in Internet-facing business incident response.</p> <ul> <li>Watch "<a href="">Situation normal: All fouled up</a>."</li> </ul> <h2>Data science: Next-gen performance analytics</h2> <p>Ken Gardner looks at the latest innovations in performance analytics and how data science can be used in surprising ways to visualize and prioritize improvements.</p> <ul> <li>Watch "<a href="">Data science: Next-gen performance analytics</a>."</li> </ul> <h2>Two years of the U.S. Digital Service</h2> <p>Now two years old and including about 150 people spanning a network of federal agencies, the U.S. Digital Service has taken on immigration, education, veterans benefits, and health data interoperability.</p> <ul> <li>Watch "<a href="">Two years of the U.S. Digital Service</a>."</li> </ul> <h2>Building bridges with DevOps</h2> <p>Katherine Daniels explains how the principles of effective DevOps environments can be used to create sustainable participation from a wide range of people.</p> <ul> <li>Watch "<a href="">Building bridges with DevOps</a>."</li> </ul> <h2>Make performance data—and beyond—accessible</h2> <p>Alois Reitbauer discusses the conversational interface Dynatrace has built to make performance data accessible through natural language questions.</p> <ul> <li>Watch "<a href="">Make performance data—and beyond—accessible</a>."</li> </ul> <h2>Turning data into leverage</h2> <p>Ozan Turgut discusses how to use visualization and analytics to apply data to decision making.</p> <ul> <li>Watch "<a href="">Turning data into leverage</a>."</li> </ul> <h2>Is ad blocking good for advertisers?</h2> <p>Tony Ralph explains why the rise of ad blocking could incite progress in online advertising.</p> <ul> <li>Watch "<a href="">Is ad blocking good for advertisers?</a>."</li> </ul> <h2>Transforming how the world operates software</h2> <p>The problems around software aren’t all solved and the story isn’t over. What role will you play?</p> <ul> <li>Watch "<a href="">Transforming how the world operates software</a>."</li> </ul> <h2>Security at the speed of innovation: Defensive development for a fast-paced world</h2> <p>Kelly Lum shares her experiences maintaining a break-neck pace while still producing hacker-resilient code.</p> <ul> <li>Watch "<a href="">Security at the speed of innovation: Defensive development for a fast-paced world</a>."</li> </ul> <p>Continue reading <a href=''>Keynotes from the O'Reilly Velocity Conference in New York 2016.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Mac Slocum Is ad blocking good for advertisers? 2016-09-22T19:00:00Z tag: <p><img src=''/></p><p><em>Tony Ralph explains why the rise of ad blocking could incite progress in online advertising.</em></p><p>Continue reading <a href=''>Is ad blocking good for advertisers?.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Tony Ralph Building bridges with DevOps 2016-09-22T19:00:00Z tag: <p><img src=''/></p><p><em>Katherine Daniels explains how the principles of effective DevOps environments can be used to create sustainable participation from a wide range of people.</em></p><p>Continue reading <a href=''>Building bridges with DevOps.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Katherine Daniels Make performance data–and beyond–accessible 2016-09-22T19:00:00Z tag: <p><img src=''/></p><p><em>Alois Reitbauer discusses the conversational interface Dynatrace has built to make performance data accessible through natural language questions.</em></p><p>Continue reading <a href=''>Make performance data–and beyond–accessible.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Alois Reitbauer Turning data into leverage 2016-09-22T19:00:00Z tag: <p><img src=''/></p><p><em>Ozan Turgut discusses how to use visualization and analytics to apply data to decision making.</em></p><p>Continue reading <a href=''>Turning data into leverage.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Ozan Turgut Transforming how the world operates software 2016-09-22T19:00:00Z tag: <p><img src=''/></p><p><em>The problems around software aren’t all solved and the story isn’t over. What role will you play?</em></p><p>Continue reading <a href=''>Transforming how the world operates software.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Andrew Shafer Security at the speed of innovation: Defensive development for a fast-paced world 2016-09-22T19:00:00Z tag: <p><img src=''/></p><p><em>Kelly Lum shares her experiences maintaining a break-neck pace while still producing hacker-resilient code.</em></p><p>Continue reading <a href=''>Security at the speed of innovation: Defensive development for a fast-paced world.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Kelly Lum What is hardcore data science—in practice? 2016-09-22T11:40:00Z tag: <p><img src=''/></p><p><em>The anatomy of an architecture to bring data science into production.</em></p><p>Data science has become widely accepted across a broad range of industries in the past few years. Originally more of a research topic, data science has early roots in scientists efforts to understand human intelligence and create artificial intelligence; it has since proven that it can add real business value.</p> <p>As an example, we can look at the company I work for: <a href="">Zalando</a>, one of Europe’s biggest fashion retailers, where data science is heavily used to provide data-driven recommendations, among other things. Recommendations are provided as a back-end service in many places, including product pages, catalogue pages, newsletters, and for retargeting.</p><p>Continue reading <a href=''>What is hardcore data science—in practice?.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Mikio Braun Haakon Faste on designing for a "post-human" world 2016-09-22T11:34:00Z tag: <p><img src=''/></p><p><em>The O'Reilly Radar Podcast: perceptual robotics, post-evolutionary humans, and designing our future with intent.</em></p><p>In this <a href="">Radar Podcast</a> episode, I chat with <a href="">Haakon Faste</a>, a design educator and innovation consultant. We talk about his interesting career path, including his perceptual robotics work, his teaching approaches, and his mission with the <a href="">Ralf A. Faste Foundation</a>. We also talk about navigating our way to a "post-human" world and the importance of designing to make the world a more human-centered place.</p><p>Continue reading <a href=''>Haakon Faste on designing for a "post-human" world.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Jenn Webb Data architectures for streaming applications 2016-09-22T11:30:00Z tag: <p><img src=''/></p><p><em>The O’Reilly Data Show Podcast: Dean Wampler on streaming data applications, Scala and Spark, and cloud computing.</em></p><p>In this episode of the <a href="">O’Reilly Data Show</a> I sat down with <a href="">O’Reilly author Dean Wampler</a>, big data architect at <a href="">Lightbend</a>. We talked about new architectures for stream processing, Scala, and cloud computing.</p><p>Continue reading <a href=''>Data architectures for streaming applications.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Ben Lorica Andy Mauro on bot platforms and tools 2016-09-22T11:15:00Z tag: <p><img src=''/></p><p><em>The O’Reilly Bots Podcast: A look at some of the technologies behind the chatbot boom.</em></p><p>In this episode of the <a href="">O’Reilly Bots Podcast</a>, Pete Skomoroch and I speak with Andy Mauro, co-founder and CEO of<a href=""> Automat</a>, a startup whose tools make it easy to build AI-powered bots. (Disclosure: Automat is a portfolio company of O’Reilly AlphaTech Ventures, a VC firm affiliated with O’Reilly Media.) Mauro will be speaking at <a href="">O’Reilly Bot Day</a> on October 19, 2016, in San Francisco.</p><p>Continue reading <a href=''>Andy Mauro on bot platforms and tools.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Jon Bruner Trends shaping the London tech scene 2016-09-22T10:00:00Z tag: <p><img src=''/></p><p><em>London's tech scene has not only pervaded all of its world-leading activities, it’s also created a vibrant, independent business environment of its own.</em></p> <figure id="id-1VikFN"><img alt="" class="iimagestarsierpng" src=""> <figcaption>Figure 1-1. <em>London does have unicorns</em></figcaption> </figure> .</p> <p>This report aims to be a comprehensive view of the computer technology scene in London: where it stands, some of its origins, who's participating in it, and what feeds its strengths.</p><p>Continue reading <a href=''>Trends shaping the London tech scene.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Andy Oram Evolving architectures of FinTech 2016-09-22T10:00:00Z tag: <p><img src=''/></p><p><em>Learn how new FinTech architectures and startups are creating novel types of business models in Africa and Asia, where there are far fewer traditional banks, and in Europe and the US, where financial institutions generally avoid the market for small business loans.</em></p> <p>Fintech, or financial technology, is often reduced to breathless sound bites, such as “It’s like having a bank in your smartphone!” or “By this time next year, no one will be carrying cash or writing checks!”</p> <p>But the fintech phenomenon is broadly misunderstood, mainly because <em>disruption</em> is a sexier headline word than <em>integration</em>. In the vast majority of cases, fintech solutions will be integrated with existing systems of hardware and software. From the perspective of fintech developers, the challenge is integrating new software with old systems. From the perspective of financial services institutions, the challenge is providing operating platforms that are friendly to developers.</p><p>Continue reading <a href=''>Evolving architectures of FinTech.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Mike Barlow Four short links: 22 September 2016 2016-09-22T09:00:00Z tag: <p><em>Ops Papers, Moral Tests, Self-Powered Computing Materials, and Self-Driving Regulation</em></p><ol> <li> <a href="">Operability</a> (Morning Paper) -- text of a talk that was a high-speed run past a lot of papers that cover ops issues. Great read, which will swell your reading list.</li> <li> <a href="">Moral Machine</a> (MIT) -- <i>We show you moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you decide which outcome you think is more acceptable. You can then see how your responses compare with those of other people.</i> </li> <li> <a href="">Self-Powered "Materials That Compute" and Recognize Simple Patterns</a> -- <i>“By combining these attributes into a ‘BZ-PZ’ unit and then connecting the units by electrical wires, we designed a device that senses, actuates, and communicates without an external electrical power source,” the researchers explain in the paper.</i> </li> <li> <a href="">NHTSA Guidance on Autonomous Vehicles</a> -- requires companies developing self-driving cars to share a lot of data with the regulator.</li> </ol> <p>Continue reading <a href=''>Four short links: 22 September 2016.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Nat Torkington Data science: Next-gen performance analytics 2016-09-21T20:00:00Z tag: <p><img src=''/></p><p><em>Ken Gardner looks at the latest innovations in performance analytics and how data science can be used in surprising ways to visualize and prioritize improvements.</em></p><p>Continue reading <a href=''>Data science: Next-gen performance analytics.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Ken Gardner Serverless is other people 2016-09-21T20:00:00Z tag: <p><img src=''/></p><p><em>Rachel Chalmers explores about what serverless means for security, networking, support, and culture.</em></p><p>Continue reading <a href=''>Serverless is other people.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Rachel Chalmers We need a bigger goal than collecting data 2016-09-21T20:00:00Z tag: <p><img src=''/></p><p><em>Mehdi Daoudi challenges business leaders and IT ops professionals to consider the ROI of analyses. How quickly can we get real insights from our data?</em></p><p>Continue reading <a href=''>We need a bigger goal than collecting data.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Mehdi Daoudi Situation normal: All fouled up 2016-09-21T20:00:00Z tag: <p><img src=''/></p><p><em>Richard Cook and David Woods examine the problems and potential in Internet-facing business incident response.</em></p><p>Continue reading <a href=''>Situation normal: All fouled up.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Richard CookDavid Woods Two years of the U.S. Digital Service 2016-09-21T20:00:00Z tag: <p><img src=''/></p><p><em>Now two years old and including about 150 people spanning a network of federal agencies, the U.S. Digital Service has taken on immigration, education, veterans benefits, and health data interoperability. </em></p><p>Continue reading <a href=''>Two years of the U.S. Digital Service.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Mikey Dickerson DevOps, collaboration, and globally distributed teams 2016-09-21T20:00:00Z tag: <p><img src=''/></p><p><em>Ashish Kuthiala presents research-based findings on the factors that play the most important roles in accelerating DevOps adoption.</em></p><p>Continue reading <a href=''>DevOps, collaboration, and globally distributed teams.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Ashish Kuthiala Don't gamble when it comes to reliability 2016-09-21T20:00:00Z tag: <p><img src=';_representing_the_phrenolo_wellcome_v0009472_crop-a91503df9f903fcf4b2038cdce083ed1.jpg'/></p><p><em>How do you stay reliable when you can’t keep the whole system in your head? Tom Croucher discusses Uber's approach to reliability. </em></p><p>Continue reading <a href=''>Don't gamble when it comes to reliability.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Tom Hughes-Croucher O'Reilly Velocity Conference in New York 2016 livestream 2016-09-21T12:45:00Z tag: <p><img src=''/></p><p><em>Watch keynotes from the O'Reilly Velocity Conference in New York City.</em></p><p>Continue reading <a href=''>O'Reilly Velocity Conference in New York 2016 livestream.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Ask the CTO: Completing projects is difficult with a changing business roadmap 2016-09-21T12:19:00Z tag: <p><img src=''/></p><p><em>Anticipate change, but keep an eye on technical debt.</em></p><h2>The problem: It’s hard to direct a team when our goals as a company keep changing</h2> <p><em?</em></p> <h2>The solution: Accept the challenge and be flexible</h2> <p>Change is inevitable, especially in a startup. In fact, change is probably what drew us to working for a startup in the first place! We want to be able to move fast, and switch direction as needed. So, why do we fight against the changes when they actually happen?</p> <p.</p> <p.</p> <p.</p> <h2>Practical advice: Shift work to respond to company’s goals and create a plan to reduce technical debt</h2> <p>How do you handle this kind of change and uncertainty?</p> <p><strong>Be realistic about the likelihood of changing plans given the size and stage of the company you work for</strong>..</p> <p><strong>Think about how to break down big projects into a series of smaller deliverables so that you can achieve some of the results, even if you don’t necessarily complete the grand vision</strong>..</p> <p><strong>Don’t over-promise a future of technical projects. </strong.</p> <p><strong>Dedicate 20% of your team’s schedule to “sustaining engineering.”</strong>.</p> <p><strong>Understand how important various engineering projects really are.</strong>:</p> <ul> <li>How big is that project?</li> <li>How important is it?</li> <li>Can you articulate the value of that project to anyone who asks?</li> <li>What would successful completion of the project mean for the team?</li> </ul> <p>The value of these questions is that you start to <em>treat big technical projects the same way as product initiatives</em>..</p> <p.</p> <p>When you are faced with waves, you can let them pull you under, or you can learn how to surf. Hang 10.</p> <p>Continue reading <a href=''>Ask the CTO: Completing projects is difficult with a changing business roadmap.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Camille Fournier How to build a robot that “sees” with $100 and TensorFlow 2016-09-21T11:15:00Z tag: <p><img src=''/></p><p><em>Adventures in deep learning, cheap hardware, and object recognition.</em></p><p.</p> <p>Deep learning and a large public training data set called <a href="">ImageNet</a> has made an impressive amount of progress toward object recognition. <a href="">TensorFlow</a> is a well-known framework that makes it very easy to implement deep learning algorithms on a variety of architectures. TensorFlow is especially good at taking advantage of GPUs, which in turn are also very good at running deep learning algorithms.</p><p>Continue reading <a href=''>How to build a robot that “sees” with $100 and TensorFlow.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Lukas Biewald Ethics in data project design: It’s about planning 2016-09-21T11:00:00Z tag: <p><img src=''/></p><p><em>The destination and rules of the road are clear; the route you choose to get there makes a huge difference. </em></p><p>When I explain the value of ethics to students and professionals alike, I refer it as an “orientation.” As any good designer, scientist, or researcher knows, how you orient yourself toward a problem can have a big impact on the sort of solution you develop—and how you get there. As <a href="">Ralph Waldo Emerson once wrote</a>, “perception is not whimsical, but fatal.” Your particular perspective, knowledge of, and approach to a problem shapes your solution, opening up certain paths forward and forestalling others.</p> <p>Data-driven approaches to business help optimize measurable outcomes—but the early planning of a project needs to account for the ethical (and in many cases, the literal) landscape to avoid ethically treacherous territory. Several recent cases in the news illustrate this point and show the type of preparation that enables a way to move forward in both a data-driven and ethical fashion: Princeton Review’s ZIP-code-based pricing scheme, which turned out to <a href="">unfairly target Asian-American families</a>, and <a href="">Amazon’s same-day-delivery areas</a>, which neglect majority-Black neighborhoods.</p> <p>You can approach a new project using a road trip analogy. The destination is straightforward—profit, revenue, or another measurable KPI. But the path you take to get there will need to be determined. If my wife and I, for example, want to drive from our apartment in Oakland (Point A) to visit my wife’s sister in Los Angeles (Point B), we have to figure out how we’d like to approach the trip. If we’re concerned primarily with efficiency, certain questions immediately come to the fore, namely: what’s the fastest route to LA? Determining the fastest route requires us to pay attention to certain features of the possible trip, such as traffic speeds, easily accessible gas stations, and traffic conditions.</p> <p>On the other hand, if my wife and I are interested in taking the most scenic route from Oakland to LA, a whole different set of concerns become salient. Gas stations are likely still relevant, but speed is less of a factor. We’ll also want to take into account things like notable landmarks and towns (and my tendency toward car sickness) along the way.</p> <p>The destination is the same; the laws we have to abide by are the same, but how we get from Point A to Point B, then, is heavily determined by how we orient ourselves toward the trip in the first place.</p> <p>The same goes for research or design projects: how you orient yourself or your team toward solving certain problems or achieving certain goals will fundamentally shape the journey you take. If you’re interested in reaching a goal as quickly as possible—if your only concern is speed or turnaround time—a particular set of concerns are going to be salient. But if you’re interested in reaching a goal not only efficiently but ethically, then a different set of concerns will pop up.</p> <p>Moreover, understanding ethics as a way of orienting yourself toward a problem helps differentiate ethical behavior from mere legal compliance; just as the laws governing driving constrain my possible actions but don’t fully determine the contours of my road trip, nor do the laws or regulations governing data collection, storage, and use exhaust the possible decisions you and your team may face.</p> <p>My students lovingly refer to this as "Anna's red flags” approach—teaching folks to see and address red flags they’d otherwise miss along the way. Of course, orienting yourself toward your work with an eye to ethics is only a starting point. Simply intending to do ethical data science isn’t enough; once you’ve established an orientation, you need information and specialized training.</p> <p>In the case of The Princeton Review, it’s clear that the program designers had a suspicion that they could charge higher prices to customers in wealthier areas. Similarly, the city maps showing Amazon’s same-day delivery areas are immediately recognizable to residents of those cities as showing where people of color live. An awareness of the <a href="">distribution of wealth and race</a> in the U.S. would have set off alarm bells in either case—but this requires asking at the beginning of a research query “Are we about to build a proxy for race and class with this model?” </p> <p>To return to the road trip analogy, since both my wife and I are Midwestern transplants, we don’t have a deep background and knowledge of California to help guide our journey. Instead, we have to ask for help from folks with more extensive expertise in the space between the Bay Area and Southern California.</p> <p>Similarly, you may have some ideas about data ethics already. You and your team may have even discussed the topic, but you likely don’t have deep knowledge in applied ethics of data or technology, it’s not enough to rely only on your own (necessarily limited) perspective to guide you. Instead, you need to be proactive in reaching out to consult experts in the field, taking advantage of training opportunities when available, and diversifying the range of voices informing your work.</p> <p>Continue reading <a href=''>Ethics in data project design: It’s about planning.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Anna Lauren Hoffmann Cracking security misconceptions 2016-09-21T10:00:00Z tag: <p><img src=''/></p><p><em>Untangling common myths about modern information security.</em></p> <h2>Introduction</h2> <p.</p> <p>So there’s nothing we can do, right?</p><p>Continue reading <a href=''>Cracking security misconceptions.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Four short links: 21 September 2016 2016-09-21T09:50:00Z tag: <p><em>Simple Text Processing, Future Work Strategies, Chatbot Errors, and Formal Verification</em></p><ol> <li> <a href="">textblob</a> -- simple Python library for text processing, which plays well with NLTK and pattern.</li> <li> <a href="">10 Strategies For a Workable Future</a> (PDF, IFTF) -- <i>These 10 strategies invite us to consider both the technical details and the broad policy questions that will help us build a workable future.</i> </li> <li> <a href="">11 Mess-Ups While Making a Chatbot</a> -- <i>When people fully tune into a bot, they hand over complete control of their thinking. And common sense can go out of the window; it means your bot needs to cover all of the bases, however small. Following instructions one after another seems to put people into a passive state—and if you don’t tell them to do something, they just don’t do it.</i> </li> <li> <a href="">On Formal Verification</a> (Quanta) -- introductory article to the idea that formal verification methods will make systems more secure.</li> </ol> <p>Continue reading <a href=''>Four short links: 21 September 2016.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Nat Torkington How do I undo my last comment in Git? 2016-09-21T08:00:00Z tag: <p><img src=''/></p><p><em>Learn how to use the “git reset” command and reset the HEAD pointer to undo your last commit.</em></p><p>Continue reading <a href=''>How do I undo my last comment in Git?.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Lorna Jane Mitchell How do I delete a branch locally and remotely in Git? 2016-09-21T07:00:00Z tag: <p><img src=''/></p><p><em>Learn how to view and delete branches on both local and remote repositories so you can keep your project tidy and manageable.</em></p><p>Continue reading <a href=''>How do I delete a branch locally and remotely in Git?.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Lorna Jane Mitchell What is the difference between String and string in C#? 2016-09-21T04:00:00Z tag: <p><img src=''/></p><p><em>Should you be concerned about the difference a capital letter makes with “string” in C#?</em></p><p>Continue reading <a href=''>What is the difference between String and string in C#?.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Jeremy McPeak Four short links: 20 September 2016 2016-09-20T10:25:00Z tag: <p><em>Aligning Incentives, Git Recovery, Google's Public Service, and Quadruped Robots</em></p><ol> <li> <a href="">Lessons on Comp Structures from Wells-Fargo</a> -- <i>employees were evaluated for continuing employment by supervisors on cross-selling. Yet, they did not receive the same financial incentives to make such cross-selling. Branch managers and supervisors could receive bonuses of up to $10,000 per month for meeting cross-selling quotas when employees who hit their monthly quotas received, in addition to continued employment, $25 gift cards.</i> </li> <li> <a href="">Oh Shit, Git!</a> -- recovering from common mistakes made with git. Caution: contains even more swearing than you've already read.</li> <li> <a href="">Inside Jigsaw</a> (Wired) -- Jigsaw is Google's moonshot <i>not to advance the best possibilities of the Internet but to fix the worst of it: surveillance, extremist indoctrination, censorship.</i> </li> <li> <a href="">Minitaur: An Affordable Quadruped Robot</a> (IEEE) -- ok, not affordable YET, but the $10K current price could be as low as $1,500 if manufactured at scale.</li> </ol> <p>Continue reading <a href=''>Four short links: 20 September 2016.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Nat Torkington Understanding Etsy's 411 alerting framework 2016-09-20T10:00:00Z tag: <p><img src=''/></p><p><em>Five questions for Ken Lee and Kai Zhong: Insights on building Etsy's alerting framework and best practices for monitoring and alerting.</em></p><p>I recently sat down with Kenneth Lee and Kai Zhong, security engineers at Etsy, to discuss their alerting framework 411, and best practices for monitoring and alerting. Here are some highlights from our talk.</p> <h2>Etsy has created its own open source, real-time alerting framework. Can you describe it briefly?</h2> <p>Kai: 411 is alert management in a box. It provides the framework for querying data sources and managing the alerts it generates. We primarily use it for Elasticsearch-based alerts at Etsy, but it supports other alert types as well.</p> <p>Kenneth: We’ve also generalized much of the code to make it as painless as possible for developers to extend on the functionality we’ve provided so they can create alerts from other search sources.</p> <h2>What prompted your move to Elasticsearch?</h2> <p>Kenneth: This was primarily a decision driven by the operations team and one that the security team had very little say over. The creation of 411 happened during this transition process because <a href="">ELK</a> <span id="docs-internal-guid-c8175b53-2478-8de6-2a0e-4ce94b05e754">(Elasticsearch-Logstash-Kibana)</span> at the time lacked functionality that the security team needed when we first started the transition from Splunk.</p> <h2>How does 411 differ from other alerting and anomaly detection tools?</h2> <p>Kai: 411 focuses on providing a framework for alerting. You can use 411 with <span id="docs-internal-guid-c8175b53-2478-c94c-2b95-72dd187d8467">Elastic Stack (ES)</span>, or you can go an entirely different direction. The important takeaway is that you can easily add additional data sources to 411 to alert on the data <em>you</em> care about.</p> <h2>How should people decide what to log when designing their own alerting?</h2> <p>Kenneth: Log everything! Provided your ELK cluster is able to handle the volume, prioritizing adding logging functionality to base classes, or certain sensitive classes such as login or password changing, is a great place to begin. For people starting out in alerting, a good first pass is to add logging to calls that you want to know about that should usually not happen (non-technical users sshing into production boxes, number of successful site logins dips to zero, etc). The alerting functionality of 411 can definitely be put to good use for more nuanced (but actionable) alerts like attackers who are attempting to scrape your website. For developers, having a standard logger class that you can seamlessly utilize in your application that logs a bunch of information by default makes it easy for them to incorporate into their code, and also provides the secondary benefit of allowing you to specify one grok pattern to index those logs.</p> <p>Kai: It’s better to have too many logs than too little, especially when you’re trying to do incident response.</p> <h2>You're speaking at the <a href="">Security Conference in New York</a> this November. What presentations are you looking forward to attending while there?</h2> <p>Kai: The “<a href="">Future UX of security software</a>” talk looks interesting (and relevant to us as developers of security software). I’m also looking forward to “<a href="">AppSec programs for the rest of us</a>” as it fits in well with the security culture at Etsy!</p> <p>Kenneth: There are a bunch of great presentations lined up that I’m looking forward to. Among others, I’m planning on checking out “<a href="">Classifiers under attack</a>,” “<a href="">Hacker quantified security</a>,” and Jessica Frazelle’s talk, “<a href="">Benefits of isolation provided by containers</a>.”</p> <p>Continue reading <a href=''>Understanding Etsy's 411 alerting framework.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Courtney Nash,Kenneth LeeKai Zhong Stateless processes in cloud-native apps 2016-09-20T10:00:00Z tag: <p><img src=''/></p><p><em>Understanding statelessness, the share-nothing pattern, and data caching in modern>Factor 6, <em>processes</em>, discusses the stateless nature of the processes supporting cloud-native applications.</p> <p>Applications should execute as a <em>single, stateless</em> process. I have a strong opinion about the use of administrative and secondary processes, and modern cloud-native applications should each consist of a <em>single</em>,<a href="#_ftn1"><sup>[1]</sup></a> stateless process.</p> <p>This slightly contradicts the original 12 factor discussion of stateless processes, which is more relaxed in its requirement, allowing for applications to consist of multiple processes.</p> <h2>A practical definition of stateless</h2> <p>One question that I field on a regular basis stems from confusion around the concept of statelessness. People wonder how they can build a process that maintains no state. After all, every application needs <em>some</em> kind of state, right? Even the simplest of application leaves some bit of data floating around, so how can you ever have a truly stateless process?</p> <p>A stateless application makes no assumptions about the contents of memory prior to handling a request, nor does it make assumptions about memory contents after handling that request. The application can create and consume transient state in the middle of handling a request or processing a transaction, but that data should all be gone by the time the client has been given a response.</p> <p>To put it as simply as possible, <em>all long-lasting state must be external to the application, provided by backing services</em>. So the concept isn’t that state cannot exist; it is that it cannot be maintained within your application.</p> <p>As an example, a microservice that exposes functionality for user management must be stateless, so the list of all users is maintained in a backing service (an Oracle or MongoDB database, for instance). For obvious reasons, it would make no sense for a database to be stateless.</p> <h2>The share-nothing pattern</h2> <p>Processes often communicate with each other by sharing common resources. Even without considering the move to the cloud, there are a number of benefits to be gained from adopting the <em>share-nothing</em> pattern.</p> <p>Firstly, anything shared among processes is a liability that makes all of those processes more brittle. In many high-availability patterns, processes will share data through a wide variety of techniques to elect cluster leaders, to decide on whether a process is a primary or backup, and so on.</p> <p>All of these options need to be avoided when running in the cloud. Your processes can vanish at a moment’s notice with no warning, and <em>that’s a good thing</em>. Processes come and go, scale horizontally and vertically, and are highly disposable. This means that anything shared among processes could also vanish, potentially causing a cascading failure.</p> <p>It should go without saying, but <em>the filesystem is not a backing service</em>. This means that you cannot consider files a means by which applications can share data. Disks in the cloud are ephemeral and, in some cases, even read-only.</p> <p>If processes need to share data, like session state for a group of processes forming a web farm, then that session state should be externalized and made available through a true backing service.</p> <h2>Data caching</h2> <p>A common pattern, especially among long-running, container-based web applications, is to cache frequently used data during process startup. Processes need to start and stop quickly, and taking a long time to fill an in-memory cache violates this principle.</p> <p>Worse, storing an in-memory cache that your application thinks is always available can bloat your application, making each of your instances (which should be elastically scalable) take up far more RAM than is necessary.</p> <p>There are dozens of third-party caching products, including Gemfire and Redis, and all of them are designed to act as a backing service cache for your applications. They can be used for session state, but they can also be used to cache data your processes may need during startup and to avoid tightly coupled data sharing among processes.</p> <hr> <aside data- <div id="_ftn1"> <p><sup>[1]</sup> “Single” in this case refers to a single conceptual process. Some servers and frameworks might actually require more than one process to support your application.</p> </div> </aside> <p>Continue reading <a href=''>Stateless processes in cloud-native apps.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Kevin Hoffman Extend structured streaming for Spark ML 2016-09-19T12:40:00Z tag: <p><img src=''/></p><p><em>Early methods to integrate machine learning using Naive Bayes and custom sinks.</em></p><p>Spark’s new <a href="">ALPHA</a><a href=""> Structured Streaming API</a> has caused a lot of excitement because it brings the Data set/DataFrame/SQL APIs into a streaming context. In this initial version of Structured Streaming, the machine learning APIs have not yet been integrated. However, this doesn’t stop us from having fun exploring how to get machine learning to work with Structured Streaming. (Simply keep in mind this is exploratory, and things will change in future versions.)</p> <p>For our <a href="">Spark Structured Streaming for machine learning</a> talk on at Strata + Hadoop World New York 2016, we’ve started early proof-of-concept work to integrate structured streaming and machine learning available in the <a href="">spark-structured-streaming-ml</a> repo. If you are interested in following along with the progress toward Spark's ML pipelines supporting structured streaming, I encourage you to follow <a href="">SPARK-16424</a> and give us your feedback on our early <a href="">draft design document</a>.</p> <p>One of the simplest streaming machine learning algorithms you can implement on top of structured streaming is Naive Bayes, since much of the computation can be simplified to grouping and aggregating. The challenge is how to collect the aggregate data in such a way that you can use it to make predictions. The approach taken in the current streaming Naive Bayes won’t directly work, as the ForeachSink available in Spark Structured Streaming executes the actions on the workers, so you can’t update a local data structure with the latest counts.</p> <p>Instead, Spark's Structured Streaming has an in-memory table output format you can use to store the aggregate counts.</p> <pre> // Compute the counts using a Dataset transformation <strong>val</strong> counts <strong>=</strong> ds.flatMap{ <strong>case</strong> <strong>LabeledPoint</strong>(label, vec) <strong>=></strong> vec.toArray.zip(<strong>Stream</strong> from <strong>1</strong>).map(value <strong>=></strong> <strong>LabeledToken</strong>(label, value)) }.groupBy($"label", $"value").agg(count($"value").alias("count")) .as[<strong>LabeledTokenCounts</strong>] // Create a table name to store the output in <strong>val</strong> tblName <strong>=</strong> "qbsnb" + java.util.<strong>UUID</strong>.randomUUID.toString.filter(<strong>_</strong> != '-').toString // Write out the aggregate result in complete form to the in memory table <strong>val</strong> query <strong>=</strong> counts.writeStream.outputMode(<strong>OutputMode</strong>.<strong>Complete</strong>()) .format("memory").queryName(tblName).start() <strong>val</strong> tbl <strong>=</strong> ds.sparkSession.table(tblName).as[<strong>LabeledTokenCounts</strong>] </pre> <p>The initial approach taken with Naive Bayes is not easily generalizable to other algorithms, which cannot as easily be represented by aggregate operations on a <code>Dataset</code>. Looking back at how the early DStream-based Spark Streaming API implemented machine learning can provide some hints on one possible solution. Provided you can come up with an <code>update</code> mechanism on how to merge new data into your existing model, the <code>DStream</code> <code>foreachRDD</code> solution allows you to access the underlying micro-batch view of the data. Sadly, <code>foreachRDD</code> doesn't have a direct equivalent in Structured Streaming, but by using a custom sink, you can get similar behavior in Structured Streaming.</p> <p>The sink API is defined by <a href="">StreamSinkProvider</a>, which is used to create an instance of the Sink given a SQLContext and settings about the sink, and Sink trait, which is used to process the actual data on a batch basis.</p> <pre> <strong>abstract</strong> <strong>class</strong> <strong>ForeachDatasetSinkProvider</strong> <strong>extends</strong> <strong>StreamSinkProvider</strong> { <strong>def</strong> func(df<strong>:</strong> <strong>DataFrame</strong>)<strong>:</strong> <strong>Unit</strong> <strong>def</strong> createSink( sqlContext<strong>:</strong> <strong>SQLContext</strong>, parameters<strong>:</strong> <strong>Map</strong>[<strong>String</strong>, <strong>String</strong>], partitionColumns<strong>:</strong> <strong>Seq</strong>[<strong>String</strong>], outputMode<strong>:</strong> <strong>OutputMode</strong>)<strong>:</strong> <strong>ForeachDatasetSink</strong> = { <strong>new</strong> <strong>ForeachDatasetSink</strong>(func) } } <strong>case</strong> <strong>class</strong> <strong>ForeachDatasetSink</strong>(func<strong>:</strong> <strong>DataFrame</strong> => <strong>Unit</strong>) <strong>extends</strong> <strong>Sink</strong> { <strong>override</strong> <strong>def</strong> addBatch(batchId<strong>:</strong> <strong>Long</strong>, data<strong>:</strong> <strong>DataFrame</strong>)<strong>:</strong> <strong>Unit</strong> = { func(data) } } </pre> <p>As with writing DataFrames to customs formats, to use a third-party sink, you can specify the full class name of the sink. Since you need to specify the full class name of the format, you need to ensure that any instance of the SinkProvider can update the model—and since you can’t get access to the sink object that gets constructed—you need to make the model outside of the sink.</p> <pre> <strong>object</strong> <strong>SimpleStreamingNaiveBayes</strong> { <strong>val</strong> model <strong>=</strong> <strong>new</strong> <strong>StreamingNaiveBayes</strong>() } <strong>class</strong> <strong>StreamingNaiveBayesSinkProvider</strong> <strong>extends</strong> <strong>ForeachDatasetSinkProvider</strong> { <strong>override</strong> <strong>def</strong> func(df<strong>:</strong> <strong>DataFrame</strong>) { <strong>val</strong> spark <strong>=</strong> df.sparkSession <strong>SimpleStreamingNaiveBayes</strong>.model.update(df) } } </pre> <p>You can use the custom sink shown above to integrate machine learning into Structured Streaming while you are waiting for Spark ML to be updated with Structured Streaming.</p> <pre> // Train using the model inside SimpleStreamingNaiveBayes object // - if called on multiple streams all streams will update the same model :( // or would except if not for the hard coded query name preventing multiple // of the same running. <strong>def</strong> train(ds<strong>:</strong> <strong>Dataset</strong>[<strong>_</strong>]) <strong>=</strong> { ds.writeStream.format( "com.highperformancespark.examples.structuredstreaming." + "StreamingNaiveBayesSinkProvider") .queryName("trainingnaiveBayes") .start() } </pre> <p>If you are willing to throw caution to the wind, you can access some <a href="">Spark internals to construct a sink that behaves more like the original <code>foreachRDD</code></a>. If you are interested in custom sink support, you can follow <a href="">SPARK-16407</a> or <a href="">this PR</a>.</p> <p>The cool part is, regardless of whether you want to access the internal Spark APIs, you can now handle batch updates in the same way Spark’s earlier streaming machine learning is implemented.</p> <p>While this certainly isn't ready for production usage, you can see that the Structured Streaming API offers a number of different ways it can be extended to support machine learning.</p> <p><em>You can learn more in <a href="">High Performance Spark: Best Practices for Scaling and Optimizing Apache Spark</a>.</em></p> <p>Continue reading <a href=''>Extend structured streaming for Spark ML.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Holden Karau Four short links: 19 September 2016 2016-09-19T11:15:00Z tag: <p><em>Visualizing Circuits, Working on Climate Change, Cashless Challenges, Architecture & Politics</em></p><ol> <li> <a href="">OmniBlox</a> -- visualize your BRD circuit layouts in the web browser. So purty! And open source!</li> <li> <a href="">What Can a Technologist Do About Climate Change?</a> (Bret Victor) -- <i>software isn’t just for drawing pixels and manipulating databases—it underlies a lot of the innovation even in physical technology.</i> </li> <li> <a href="">The War on Cash</a> -- the cashless society assumes you can get a bank account, and there are many for whom that is very difficult indeed. <i>'Cashless society' is a euphemism for the "ask-your-banks-for-permission-to-pay society."</i> </li> <li> <a href="">Architecture's Impact on Politics</a> (Wired) -- <i>All 193 assembly halls fall into one of five organizational layouts: “semicircle,” “horseshoe,” “opposing benches,” “circle,” and “classroom.” And these layouts make a difference.</i> </li> </ol> <p>Continue reading <a href=''>Four short links: 19 September 2016.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Nat Torkington Run strikingly fast parallel file searches in Go with sync.ErrGroup 2016-09-19T10:00:00Z tag: <p><img src=''/></p><p><em>Go’s new sync.ErrGroup package significantly improves developer productivity with goroutines.</em></p><p.</p> <p:</p> <pre data-() </pre> <p>WaitGroups made it significantly easier to deal with concurrency in Go because they reduced the amount of accounting you had to do when launching goroutines. Every time you launch a goroutine you increment the WaitGroup by calling <code>Add()</code>. When one finishes, you call <code>wg.Done()</code>. To wait for all of them to complete, you call <code>wg.Wait()</code> which blocks until they’ve all finished. The only issue is that if a problem happened in one of your goroutines it was difficult to figure out what the error was.</p> <h2>Extending sync.WaitGroup’s functionality</h2> <p>Recently the Go team added a new package in the experimental repository called <a href="">sync.ErrGroup</a>. sync.ErrGroup extends sync.WaitGroup by adding error propagation and the ability to cancel an entire set of goroutines when an unrecoverable error occurs, or a timeout is reached. Here’s the same example rewritten to use an ErrGroup:</p> <pre data-.") } </pre> <p>The <code>g.Go()</code> function above is a wrapper that allows you to launch an anonymous function but still capture the errors that it may return without all the verbose plumbing that would otherwise be required. It’s a significant improvement in developer productivity when using goroutines.</p> <p.</p> <p>When applied against the directory for my sample application it produces these results:</p> <pre data- $ gogrep -timeout 1000ms . fmt gogrep.go 1 hits </pre> <p>If you call it without the right number of parameters it prints the correct usage:</p> <pre data- gogrep by Brian Ketelsen Flags: -timeout duration timeout in milliseconds (default 500ms) Usage: gogrep [flags] path pattern </pre> <h2>How sync.ErrGroup makes application building easier</h2> <p>Let’s take a look at the code and see how sync.ErrGroup makes this application so easy to build. We’ll start with <code>main()</code> because I like to read code like a story, and every code story starts with <code>main()</code>.</p> <pre data-") } </pre> <p>The first 15 lines set up the flags and arguments that are expected and print a nice default error message when the program is called without the right number of arguments. The first line of interest is line 16:</p> <pre data- ctx, _ := context.WithTimeout(context.Background(), *duration) </pre> <p>Here, I’ve created a new <code>context.Context</code> with a timeout attached to it. The timeout duration is set to the duration flag variable. When the timeout is reached, “ctx” and all contexts that inherit from it will receive a message on a channel alerting them to the timeout. <code>WithTimeout</code> also returns a cancel function which we won’t need, so I’ve discarded it by assigning it to “_”.</p> <p>The next line calls the <code>search()</code> function passing in the context, search path, and search patterns. Finally the results are printed to the terminal followed by a count of search hits.</p> <h2>Breaking down the search() function</h2> <p>The <code>search()</code> function is a little longer than <code>main()</code> so I’ll break it up as I explain what’s happening.</p> <p>The first thing that happens in the search function is the creation of a new errgroup. This structure contains the context and does all the process accounting for the concurrency that will follow.</p> <pre data- func search(ctx context.Context, root string, pattern string) ([]string, error) { g, ctx := errgroup.WithContext(ctx) </pre> <p.</p> <pre data- paths := make(chan string, 100) </pre> <p>The errgroup type has two methods: <code>Wait()</code> and <code>Go()</code>. <code>Go()</code> launches tasks and <code>Wait()</code> blocks until they’ve all completed. Here, we call <code>Go()</code> with an anonymous function that returns an error.</p> <pre data- g.Go(func() error { </pre> <p>Next, we defer closing the “paths” channel to signal that all of the directory searching has completed. This allows us to use Go’s “range” statement later to process all the candidate files in more goroutines.</p> <pre data- defer close(paths) </pre> <p>Finally we use the filepath package’s <code>Walk()<.</p> <pre data- return filepath.Walk(root, func(path string, info os.FileInfo, err error) error { if err != nil { return err } if !info.Mode().IsRegular() { return nil } if !info.IsDir() && !strings.HasSuffix(info.Name(), ".go") { return nil } </pre> <h2>Spotting the real power of sync.Errgroup</h2> <p.</p> <pre data- select { case paths <- path: case <-ctx.Done(): return ctx.Err() } return nil }) }) </pre> <p>Next I created a channel to handle all the files that matched the search pattern.</p> <pre data- c := make(chan string,100) </pre> <p>Now we can iterate over the files in paths channel and search their contents.</p> <pre data- for path := range paths { </pre> <p> <pre data- p := path </pre> <p>Now we’ll fire off another anonymous function for every candidate file. This function reads the contents of the Go source file and checks to see if it contains the supplied search pattern.</p> <pre data- g.Go(func() error { data, err := ioutil.ReadFile(p) if err != nil { return err } if !bytes.Contains(data, []byte(pattern)) { return nil } </pre> <p>Once again we’ll use a select statement to make watch for the timeout firing before our processing has completed.</p> <pre data- select { case c <- p: case <-ctx.Done(): return ctx.Err() } return nil }) } </pre> <p>This function will wait for all of the errgroup’s goroutines to complete then close the results channel, signalling that all processing is complete and terminating the range statement above.</p> <pre data- go func() { g.Wait() close(c) }() </pre> <p>Now we collect the results from the channel and put them in a slice to return back to <code>main()</code>.</p> <pre data- var m []string for r := range c { m = append(m, r) } </pre> <p>Finally we’ll wrap up by checking for errors in the errgroup. If any of the goroutines above returned an error, we’ll return it back to <code>main()</code> with an empty resultset.</p> <pre data- return m, g.Wait() } </pre> <p.</p> <p>The full source code is available on Github <a href="">here</a>.</p> <p>If you have Go installed, you can also run “go get github.com/bketelsen/gogrep” from your command line to download the application.</p> <p <a href="">Go Beyond the Basics in-person training in Boston</a>, October 3 & 4, and also my <a href="">online training October 25 & 26.</a></p> <p>Continue reading <a href=''>Run strikingly fast parallel file searches in Go with sync.ErrGroup.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Brian Ketelsen How do you learn? 2016-09-19T04:00:00Z tag: <p><img src=''/></p><p><em>Shared learning: It's what we do at O'Reilly, and it's what we’d like to share with you.</em></p><p>Learning isn’t a one-shot process: take the course, pass an exam, and get out. You learn by interacting with instructors and students, by assessing your progress, and using that to plan your next steps. It’s an ongoing feedback loop that involves everyone in the classroom (whether the classroom is virtual or physical).</p> <p>At O’Reilly, we’ve thought a lot about how people learn. And we’ve built live training programs (both online and in-person) that allow people to learn in whatever way is best for them. We have built learning programs that <em>work</em>.</p> <h2>Learning is social</h2> <p>O’Reilly Media has always been a learning company. When we began to publish books in the mid-'80s, our editorial guidance for authors was to write as if they were “a friend looking over the reader’s shoulder, providing wise and experienced advice.”</p> <p>In 2016, we’re about much more than books. We’ve long had instructional video and conferences. In the past year, we’ve introduced live online training, in addition to live in-person training at conferences and other locations. But the same standard applies: we want the people who learn with us to feel like they have an expert looking over their shoulders and providing seasoned advice.</p> <p>How do we do that? It starts with realizing that learning isn’t just a one-way experience. When we analyzed conference attendance and early online courses, we saw that people participate as groups. <em>More than 50% attend our online courses as teams.</em> People don’t attend courses as teams just so they can hang out together during breaks; they attend as teams so they can learn from each other, apply their new knowledge to their particular situation, and bring that knowledge back to their co-workers. When a team attends training, the group learns much more than sum of the individual experiences.</p> <p>Furthermore, we discovered that teams attend to hear <em>what other teams ask</em>. For example, another group attending may be further along their journey toward streaming analytics; they may have already encountered issues your team won’t hit for another six months. So learning isn’t just about what you need to know now, or what your teacher thinks you need to know now: it’s about interaction with other learners and their problems. It’s about taking advantage of other learnings to discover where you’ll be in six months or a year.</p> <p>O’Reilly’s live training events—whether online or in person—offer instructor-led, hands-on courses, with an emphasis on the <a href="">social aspects</a> of learning. That’s in stark contrast to typical training products based on pre-recorded classes. Each course is led by an <em>expert practitioner</em> in the subject, someone working in industry. They’ve seen what works and what doesn’t—instead of just memorizing a syllabus and slide deck. These instructors guide you through hands-on course materials and they’re available to help answer your questions at any point. And you’re able to interact with other students, learning from their experience.</p> <h2>Learning is a feedback loop</h2> <p>You and your team have just finished a course. How do you know that you’ve learned? Assessment is one of the trickiest areas in education. Most “assessment” in the industry devolves into memory quizzes. But simply quizzing students on whether they’ve memorized the appropriate answers doesn’t tell you anything about learning.</p> <p>O’Reilly integrates assessment into almost all aspects of our learning experiences. We focus on <a href="">formative assessment</a>—in other words, providing useful feedback—rather than “quantifying students.” Our work on <em><a href="">computable content</a></em> opens the door for live coding as part of assessment. We don’t give quizzes that test whether you can remember the right answer: we can give actual coding assignments, and evaluate learners’ solutions.</p> <p>This kind of assessment goes hand-in-hand with instructor feedback: our instructors can see students’ weaknesses and can help them through tough spots, rather than just assign a grade. And we can recommend content for future learning, with full knowledge of the students’ strengths and weaknesses. If a student is weak in some areas, additional background may be helpful. If a student is particularly strong, she may be able to skip ahead. All this is part of our <a href="">subscription-based learning platform</a>.</p> <h2>Offerings: What do you want to learn?</h2> <p>You can have the best teachers and teaching methods in the world; it isn’t worth anything if they aren’t teaching courses that are relevant to your needs. In the next two months, <a href="">our live training calendar</a> includes courses on data science, design, operations, security, software architecture, programming, and business. Topics range from design for the Internet of Things to programming in Python and distributed services with Spark. We have courses for beginners, to help them build the broad <a href="">structural literacy</a> that will help them to learn more; we have courses for experts, to help them drill down into specialized topics. And that’s just the start: we’re still building out the calendar.</p> <p>All of our courses feature a <em>learning promise</em>. That’s a “Roles to Goals” description of the intended audience, what prerequisites are required, what you’ll learn, why it matters, and how you’ll apply it.</p> <figure class="center" id="id-5bMiM"><img alt="" class="iimageslearning-promise-examplepng" src=""> <figcaption><span class="label">Figure 1. </span><a href="">An example of an O'Reilly learning promise</a>.</figcaption> </figure> <p>That’s how we approach learning at O’Reilly. We recognize that learning is social, and we’ve built a platform that allows instructor and students to interact with each other. We’ve engaged instructors who are expert practitioners in their fields, and who have hard-earned wisdom to share with their students: they’re familiar with the best (and worst) practices, the trade-offs, the gotchas that can drive a student crazy. We’ve built tools that make assessment part of a feedback loop, so instructors can “<a href="">invert the classroom</a>” and focus on providing help where it’s most needed. And we’re building a rich program of in-person learning events: you can attend online, at conferences, at our <a href="">Boston training center</a>, or at other venues.</p> <p>Shared learning: that’s what we do, and we’d like to share that with you.</p> <p>Continue reading <a href=''>How do you learn?.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Paco NathanMike Loukides Infographic: The bot platform ecosystem 2016-09-17T12:00:00Z tag: <p><img src=''/></p><p><em>A look at the artificial intelligence and messaging platforms behind the fast-growing chatbot community</em></p><p>Behind the recent bot boom are big improvements in artificial intelligence and the rise of ubiquitous messaging services..</p> <figure id="id-31Vik"><img alt="bots landscape infographic" class="iimagesbots-landscape-1400png" src=""> </figure><p>Continue reading <a href=''>Infographic: The bot platform ecosystem.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Jon Bruner Configuring a continuous delivery pipeline in Jenkins 2016-09-16T10:00:00Z tag: <p><img src=''/></p><p><em>Learn the basics for setting up a continuous delivery pipeline in Jenkins, from modeling the pipeline to integrating the software.</em></p><p>If you read many websites talking about assembling a pipeline with Jenkins and other technologies, you may get the feeling that doing this requires a mixture of deep research, extensive trial-and-error, cryptic command-line invocations and argument passing, black magic, and wishful thinking.</p> <p>While not quite that bad, it can be difficult to navigate through the process without a guide and some examples of what to do (and what not to do). This is especially true given the different dimensions involved in putting together a Continuous Delivery pipeline. In this article, you'll learn about the basics for setting up a Continuous Delivery pipeline in Jenkins, from modeling the pipeline to integrating the software.</p><p>Continue reading <a href=''>Configuring a continuous delivery pipeline in Jenkins.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Brent Laster Security architecture 2016-09-16T10:00:00Z tag: <p><img src=''/></p><p><em>Become familiar with various ways to design technical methods that minimize the risk of having a class of users who must be trusted—of their own volition—to behave within a set of rules in order to safeguard privacy.</em></p> <h2>Overview</h2> <p><a data-</a><a data-</a>Privacy <a data-</a.</p> <h2>Separating Roles, Separating Powers</h2> <p><a data-</a><a data-</a><a data-</a><a data-</a><a data-</a>Privacy controls serve to limit the behavior of users <em>inside</em> the system. However, to protect data from access occurring beyond the confines of the privacy-protected application (but rather at some lower system level), it’s important to strictly separate the roles of individuals interacting with the system.<a data-1</a> It’s then possible to establish a clear separation of powers between these different roles.</p><p>Continue reading <a href=''>Security architecture.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Courtney Bowman,Ari Gesher,John GrantDaniel Slate Let’s use Keystone! 2016-09-16T10:00:00Z tag: <p><img src=''/></p><p><em>Use this instructional tutorial to interact with Keystone and invoke its core operational capabilities.</em></p> <p><a contenteditable="false" data- </a><a contenteditable="false" data- </a>In this chapter we will explain how to use Keystone in a development environment. This involves a few steps. First, we deploy OpenStack with DevStack, then try basic Keystone operations with OpenStackClient (a command line interface), then we perform the same Keystone operations with Horizon (a Web interface). We will also be providing cURL alternatives to the OpenStackClient commands to illustrate the fact that the CLI is simply just a wrapper for a REST call.</p> <h2>2.1 Getting DevStack</h2> <p><a contenteditable="false" data- </a><a contenteditable="false" data- </a><a contenteditable="false" data- </a>The examples shown below were performed on a new Ubuntu 64-bit virtual machine (VM). There are several options available for quickly getting an Ubuntu VM up and running, such as VMWare Fusion or Oracle’s VirtualBox.</p><p>Continue reading <a href=''>Let’s use Keystone!.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Steve MartinelliBrad Topol RxJava for Android app development 2016-09-16T10:00:00Z tag: <p><img src=''/></p><p><em>RxJava is a powerful library, but it has a steep learning curve. Learn RxJava by seeing how it can make asynchronous data handling in Android apps much cleaner and more flexible.</em></p> <h2>An Introduction to RxJava</h2> <h2>Sharp Learning Curve, Big Rewards</h2> <p>I was pretty much dragged into RxJava by my coworkers...[RxJava] was a lot like git...when I first learned git, I didn't really learn it. I just spent three weeks being mad at it...and then something clicked and I was like 'Oh! I get it! And this is amazing and I love it!' The same thing happened with RxJava. </p> <p data-Dan Lew<a data-1</a></p><p>Continue reading <a href=''>RxJava for Android app development.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> K. Matt Dupree Analyzing and visualizing data with F# 2016-09-16T10:00:00Z tag: <p><img src=''/></p><p><em>Learn how to get more out of your data with real-world examples using the powerful F# programming language.</em></p> <h2>Accessing Data with Type Providers</h2> <p <em>pantograph punch</em> was used to punch the data on punch cards, which were then fed to the <em>tabulator</em> that counted cards with certain properties, or to the <em>sorter</em> for filtering. The census still required a large amount of clerical work, but Hollerith’s machines sped up the process eight times to just one year.<a data-1</a></p> <em>tabulator</em> and <em>sorter</em> have become standard library functions in many programming languages and data analytics libraries.</p><p>Continue reading <a href=''> Analyzing and visualizing data with F#.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Tomas Petricek Four short links: 16 September 2016 2016-09-16T09:00:00Z tag: <p><em>I Rock Moments, Trusting Black Boxes, Design Heuristics, and Chart Components</em></p><ol> <li> <a href="">Building Activities</a> -- framed around creating math activities that don't suck, but the approach to crafting meaningful "I rock!" moments apply to more than kids and math. <i>Give students opportunities to be right and wrong in different, interesting ways.</i> </li> <li> <a href="">Whose Black Box Do You Trust?</a> -- <i>Here are my four rules for evaluating whether you can trust an algorithm:?</i> </li> <li> <a href="">Hints for Computer System Design</a> (Paper a Day) -- worth it for the sweet diagram breaking down design heuristics into Why/Where.</li> <li> <a href="">plottablejs</a> -- Palantir open-sourced <i>set of flexible, premade components that you can combine and rearrange to build charts.</i> </li> </ol> <p>Continue reading <a href=''>Four short links: 16 September 2016.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Nat Torkington How do I convert an InputStream to a string in Java? 2016-09-16T08:00:00Z tag: <p><img src=''/></p><p><em>Learn how to load text in a binary file to an InputStream and convert it to a string using ByteArrayOutputStream with a ByteBuffer. </em></p><p>Continue reading <a href=''>How do I convert an InputStream to a string in Java?.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Brian L. Gorman How do I iterate a hash map in Java? 2016-09-16T07:00:00Z tag: <p><img src=''/></p><p><em>Learn to iterate HashMaps using forEach and Java 8’s new lambda syntax.</em></p><p>Continue reading <a href=''>How do I iterate a hash map in Java?.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Brian L. Gorman 5 mistakes to avoid when deploying an analytical database 2016-09-15T11:40:00Z tag: <p><img src=''/></p><p><em>Leading data-driven organizations point out five common pitfalls.</em></p><p>Analytical databases are an increasingly critical part of businesses’ big data infrastructure. Specifically designed to offer performance and scalability advantages over conventional relational databases, analytical databases enable business users as well as data analysts and data scientists to easily extract meaning from large and complex data stores.</p> <p>But to wring the most knowledge and meaning from the data your business is collecting every minute—if not every second—it’s important to keep some best practices in mind when you deploy your big data analytical database. Leading businesses that have deployed such analytical databases share five pitfalls you should avoid to keep you on track as your big data initiatives mature.</p> <h2>1. Don’t ignore your users when choosing your analytical database tools</h2> <p>Business users, analysts, and data scientists are very different people, says Chris Bohn, “CB,” a senior database engineer with Etsy, a marketplace where millions of people around the world connect, both online and offline, to make, sell, and buy unique goods. For the most part, data scientists are going to be comfortable working with Hadoop, MapReduce, Scalding, and Spark, whereas data analysts live in an SQL world. “If you put tools in place that your users don’t have experience with, they won’t use those tools. It’s that simple,” says Bohn.</p> <p>Etsy made sure to consider the end users of the analytics database before choosing an analytical database—and those end users, it turned out, were mainly analysts. So, Etsy made sure to pick a database based on the same SQL as PostgreSQL, which offered familiarity for end users and increased their productivity.</p> <h2>2. Don’t think too big when starting your big data initiative</h2> <p>Big data has generated a lot of interest lately. CEOs are reading about it in the business press and expressing their desire to leverage enterprise data to do everything from customizing product offerings, to improving worker productivity, to ensuring better product quality. But too many companies begin their big data journeys with big budgets and even bigger expectations. They attempt to tackle too much. Then, 18 months down the road, they have very little to show.</p> <p>It’s more realistic to think small. Focus on one particular business problem—preferably one with high visibility—that could be solved by leveraging data more effectively. Address that problem with basic data analytics tools—even Excel can work. Create a hypothesis and perform an exercise that analyzes the data to test that hypothesis. Even if you get a different result than you expected, you’ve learned something. Rinse and repeat. Do more and more projects using that methodology “and you’ll find you’ll never stop—the use cases will keep coming,” affirms HPE’s Colin Mahony, senior vice president and general manager for HPE Software Big Data.</p> <p>Larry Lancaster, the former chief data scientist at a company offering hardware and software solutions for data storage and backup, agrees. “Just find a problem your business is having,” advises Lancaster. “Look for a hot button. Instead of hiring a new executive to solve that problem, hire a data scientist.”</p> <h2>3. Don’t underestimate data volume growth</h2> <p>Virtually all big data veterans warn about unanticipated data volumes. Cerner, a company working at the intersection of health care and information technology, was no exception. Based in Kansas City, Cerner’s health information technology (HIT) solutions connect people and systems at more than 20,000 facilities worldwide.</p> <p>Even though Cerner estimated quite substantial data volume growth at the time of the proof of concept in 2012, the growth has accelerated beyond Cerner’s wildest expectations.</p> <p>“At the time we predicted a great deal of growth, and it certainly wasn’t linear,” says Dan Woicke, director of enterprise system management at Cerner. “Even so, we never would have predicted how fast we would grow. We’re probably at double or triple the data we expected.”</p> <p>The moral: choose a database that can scale to meet unanticipated data volumes.</p> <h2>4. Don’t throw away any of your data</h2> <p>One mistake that many businesses make is not saving all of their data. They think once data gets old, it is stale and irrelevant. Or they can’t think of a specific use for a data point, and so they discard it. This is a serious error. Further down the road, that data might turn out to be essential for a key business decision.</p> <p>“You never know what might come in handy,” says Etsy’s Bohn.</p> <p>Today’s storage and database technologies make it quite inexpensive to store data for the long term. Why not save it all? Look for analytical databases that can scale to accommodate as much data as you generate. “As long as you have a secure way to lock data down, you should keep it,” says Bohn. You may later find there’s gold in it.”</p> <h2>5. Don’t lock yourself into rigid, engineered-system data warehouses</h2> <p>According to Bohn, one of the lessons he’s learned in his big data journey is this: your data is your star, and this drives your database purchasing decisions.</p> <p>“Do you use the cloud or bare iron in a colocation facility?” asks Bohn. “This will matter, because to get data into the cloud you have to send it over the Internet—which will be not as fast as if your big data analytical system is located right next to your production system.”</p> <p>Bohn adds, “It’s also important that you don’t go down any proprietary technological dead ends.” Bohn says to be careful of some of the newer technologies that may not stand the test of time. “It’s better to be on the leading than the bleeding edge,” says Bohn. For example, message queuing has become an important part of infrastructure for distributing data. Many such systems have been brought to market in the past decade, with a lot of hype and promises. More than a few companies made investments in those technologies, only to find that they didn't perform as advertised.</p> <p>“Companies that made those investments then found they had to extricate themselves—at considerable cost,” Bohn notes. Etsy is currently using Kafka as an event and data pipeline, and may soon use it for <a href="">HPE Vertica</a> data ingestion. “Kafka has been gaining a lot of traction, and we think it will also be around for a while. We like its model, and so far it has proven to be robust. Vertica has developed a good Kafka connector, and it may well become a major path for data to get into Vertica.”</p> <p><em>This post is a collaboration between O’Reilly and HPE Vertica. </em><a href=""><em>See our statement of editorial independence</em></a><em>.</em></p> <p>Continue reading <a href=''>5 mistakes to avoid when deploying an analytical database.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Alice LaPlante Paul Adams on Intercom’s mission to re-humanize customer service 2016-09-15T11:35:00Z tag: <p><img src=''/></p><p><em>The O’Reilly Design Podcast: Connecting humans at scale, bot philosophy, and failed attempts to defy the laws of physics.</em></p><p>In this week’s Design Podcast, I sit down with <a href="">Paul Adams</a>, VP of product at Intercom. Before joining Intercom, Adams had stints at Dyson, Google, and Facebook. We talk about his career path, building design teams, and Intercom’s goal to connect humans at scale.</p><p>Continue reading <a href=''>Paul Adams on Intercom’s mission to re-humanize customer service.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Mary Treseler Data science for humans and data science for machines 2016-09-15T11:25:00Z tag: <p><img src=''/></p><p><em>The O’Reilly Data Show Podcast: Michael Li on the state of data engineering and data science training programs.</em></p><p>In this episode of the<a href=""> O’Reilly Data Show</a>, I spoke with <a href="">Michael Li</a>, cofounder and CEO of <a href="">the Data Incubator</a>. We discussed the current state of data science and data engineering training programs, Apache Spark, quantitative finance, and the misunderstanding around the term “data science.”</p><p>Continue reading <a href=''>Data science for humans and data science for machines.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Ben Lorica Joshua Browder on bots that fight bureaucracy 2016-09-15T11:20:00Z tag: <p><img src=''/></p><p><em>The O’Reilly Bots Podcast: Can bots replace lawyers?</em></p><p>In episode five of the <a href="">O’Reilly Bots podcast</a>, Pete Skomoroch and I speak with <a href="">Joshua Browder</a>, the 19-year old founder and CEO of <a href="">DoNotPay</a>, a series of bots that help people with legal issues, including challenging parking tickets, challenging bank charges, and claiming government assistance for homelessness. Dubbed “the world’s first robot lawyer,” his bots have attracted 260,000 users and provided 175,000 successful parking-ticket appeals.</p><p>Continue reading <a href=''>Joshua Browder on bots that fight bureaucracy.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Jon Bruner The great question of the 21st century: Whose black box do you trust? 2016-09-15T11:05:00Z tag: <p><img src=''/></p><p><em>Algorithms shape choice not just for consumers but for businesses.</em></p><p>Some years ago, <a href="">John Mattison</a>, <a href="">black box</a>, by definition, is a system whose inputs and outputs are known, but the system by which one is transformed to the other is unknown.)</p> <p>A lot of attention has been paid to the role of algorithms in shaping the experience of consumers. Much less attention has been paid to the role of algorithms in shaping the incentives for business decision-making.</p> <p>For example, there has been hand-wringing for years about how algorithms shape the news we see from Google or Facebook. Eli Pariser warned of a "<a href="">filter bubble</a>,".</p> <p>But there's a deeper, more pervasive risk that came out in a conversation I had recently with <a href="">Chris O'Brien</a>?</p> <p>The need to get attention from search engines and social media is arguably one factor in <a href="">the dumbing down of news media</a> and a style of reporting that leads even great publications to a culture of hype, <a href="">fake controversies</a>, and other techniques to drive traffic. <a href="">The race to the bottom</a> in coverage of the U.S. presidential election is a casualty of the primary shift of news industry revenue from subscription to advertising and from a secure base of local readers to chasing readers via social media. You must please the algorithms if you want your business to thrive.</p> <p?</p> <p "<a href="">content farms</a>," vast collections of cross-linked low-quality content (often scraped from other sites) that fooled the algorithms into thinking they were highly regarded. <a href="">In 2011, when Google rejiggered their algorithm</a> to downgrade content farms, many companies who'd been following this practice were badly hurt. <a href="">Many went out of business</a> (as well they should), and others had to improve their business practices to survive.</p> <p>Publishers targeting Facebook recently went through a similar experience, when Facebook announced last month that it was <a href="">updating its News Feed algorithm to de-emphasize stories with "clickbait" headlines</a> ."</p> <p>As Warren Buffet <a href="">is reputed to have said</a>, .)</p> <p>Here we get to the black box bit. According to Facebook's VP of product management on News Feed, Adam Mosseri, <a href="">as reported by TechCrunch</a>, .'"</p> <p>Because many of the algorithms that shape our society are black boxes—either for reasons like those cited by Facebook, or because they are, in the world of deep learning, inscrutable even to their creators—that question of trust is key.</p> <p>Understanding how to evaluate algorithms without knowing the exact rules they follow is a key discipline in today's world. And it is possible. Here are my four rules for evaluating whether you can trust an algorithm:</p> <ol> <li>Its creators have made clear what outcome they are seeking, and it is possible for external observers to verify that outcome.</li> <li>Success is measurable.</li> <li>The goals of the algorithm's creators are aligned with the goals of the algorithm's consumers.</li> <li>Does the algorithm lead its creators and its users to make better longer term decisions?</li> </ol> <p>Let's consider a couple of examples.</p> <h2>Google Search and Facebook News Feed</h2> <p>Continuing the discussion above, you can see the application of my four principles to Google Search and the Facebook News Feed:</p> <ol> <li.</li> <li.)</li> <li <a href="">listicles</a>. It becomes the job of the algorithmic manager to adjust the algorithms to deal with these opposing forces, just as the designer of an airplane autopilot must design its algorithms to deal with changing weather conditions.</li> <li>Long-term decisions. The alignment between the goals of the platform and the goals of its users holds for the short term. But does it hold for the long term?</li> </ol> <h2>Autonomous vehicles</h2> <p.</p> <p>If you're like me until a few months ago, you probably assume that the autopilot is kind of like cruise control—it flies the plane on long boring stretches, while the pilots do the hard stuff like takeoff and landing. Not so. <a href="">On my way to StartupFest in Montreal</a>, I had an extended conversation with the pilot of a jet (and even got to sit in the co-pilot's seat and feel the minute adjustments to the controls that the autopilot made to keep the plane constantly on course.)</p> <figure id="id-5AEiz"><img alt="" class="iimagestim-plane-photojpg" src=""> <figcaption><span class="label">Figure 1. </span>Image courtesy of Tim O'Reilly.</figcaption> </figure> ."</p> <p>Let's subject an airplane autopilot to my four tests:</p> <ol> <li>Clarity of intended outcome. Get the plane from point A to point B following a predefined route. Respond correctly to wind and weather, in accordance with known principles of aeronautics. Optimize for congestion at busy airports. Do not crash.</li> <li <a href="">National Transportation Safety Board</a> does a deep dive to analyze the causes and improve processes to reduce the chance that the same accident will recur.</li> <li.</li> <li <a href="">Airline Pilot's Association</a> to defend the jobs of its members.</li> </ol> <p, <a href="">remarked on stage at my Next:Economy Summit last year</a>, self-driving vehicles learn faster than any human, because whenever one of them makes a mistake, both the mistake and the way to avoid it can be passed along to every other vehicle.</p> <p> </p> <div class="responsive-video"><iframe allowfullscreen="true" frameborder="0" src=""></iframe></div> <p> </p> <p."</p> <p.</p> <h2>Regulating new technologies</h2> <p>If you think about it, government regulations are also a kind of algorithm, a set of rules and procedures for achieving what should be a determinate outcome. Unfortunately, too often government regulations fail my four tests for whether you can trust an algorithm.</p> <ol> <li>Clarity of intended outcome. When regulations are promulgated, their intended result is typically stated. Only rarely is it done in a form that can be easily understood. New agencies such as U.K. Government Digital Service and the U.S. Consumer Finance Protection Bureau have <a href="">made plain language a priority</a>, and have demonstrated that it is possible to create regulations whose goals and implementations are as clear as the goals and implementations of Google Search Quality or Adwords quality. But this clarity is rare.</li> <li>Success is measurable. Regulations rarely include any provision for measuring or determining their effect. Measurement, if done at all, occurs only years later.</li> <li>Goal alignment. The goals of regulators and of consumers are often aligned—think, for example, of fire codes, which were instituted after the Triangle Shirtwaist Fire of 1911. (<a href="">Carl Malamud</a> gave a <a href="">brilliant speech</a> about the role of this NYC sweatshop fire in the development of safety codes at my <a href="">Gov 2.0 Summit in 2009</a>. It is rare for a conference speech to get a standing ovation. This is one of those speeches. The <a href="">video is here</a>.) <a href="">Stop Online Piracy Act</a>. I made my case for the arguments against it as bad public policy, but her response told me what the real decision criteria were: "We have to balance the interests of the tech industry with the interests of Hollywood."</li> <li>Long-term decisions. Over time, regulations get out of step with the needs of society. When regulations do not have their intended effects, they normally continue unabated. And new regulations are often simply piled on top of them.</li> </ol> <p>Let's start with a good example. In <a href="">a regulatory proposal from the CFPB</a> on Payday, Vehicle Title, and Certain High-Cost Installment Loans, we see a clear rationale for the regulation:</p> <blockquote> <p>."</p> </blockquote> <p>The proposal goes on to specify the rules designed to address this deficit. The CFPB has also put in place mechanisms for measurement and enforcement.</p> <p>By contrast, check out the <a href="">New York City rules for taxi and limousine drivers</a>. They are vague in their statement of purpose and mind-numbing in scope. I challenge anyone to come up with a methodology by which the rules can be evaluated as to whether or not they are achieving an intended outcome.</p> <p.</p> <p>Think about this for a moment. What are possible goals for licensing Uber, Lyft, and taxi drivers? Passenger safety. To protect passengers from price gouging. To reducing congestion. (The latter two goals were the reason for the first taxi regulations, <a href="">promulgated in London by King Charles I in 1637.</a>).</p> <p.</p> <p.</p> <h2>Long-term trust and the master algorithm</h2> <p.</p> <p>There is a master algorithm that rules our society, and, with apologies to <a href="">Pedro Domingos</a>, it is not some powerful new approach to machine learning. Nor is it the regulations of government. It is <a href="">a rule that was encoded into modern business decades ago</a>, and has largely gone unchallenged since. That is the notion that the only obligation of a business is to its shareholders.</p> <p>It is the algorithm that led <a href="">CBS chairman Leslie Moonves to say back in March</a> that [Trump's campaign] "may not be good for America, but it's damn good for CBS."</p> <p?</p> <p>Continue reading <a href=''>The great question of the 21st century: Whose black box do you trust?.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Tim O'Reilly Four short links: 15 September 2016 2016-09-15T10:55:00Z tag: <p><em>Slowly Self-Driving, Effective Deep Learning, Andreessen's Bookshelves, and Robotaxi Financial Models</em></p><ol> <li> <a href="">Ford Charts Cautious Path Toward Self-driving, Shared Vehicles</a> (Reuters) -- see also <a href="">Google's car project losing leaders and advantage</a>. Both talk about this being a long game, not a short one, where this is a genuinely hard problem.</li> <li> <a href="">The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe</a> (MIT TR) -- <i>“We have shown that the success of deep and cheap learning depends not only on mathematics, but also on physics, which favors certain classes of exceptionally simple probability distributions that deep learning is uniquely suited to model,” conclude Lin and Tegmark.</i> </li> <li> <a href="">Marc Andreessen's Book Collection</a> -- book shelves tell you about their owners.</li> <li> <a href="">Robotaxi Economics</a> (Brad Templeton) -- financial models for the prices of robotaxis and the operating costs. </li> </ol> <p>Continue reading <a href=''>Four short links: 15 September 2016.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Nat Torkington Starting an Angular 2 RC.6 project 2016-09-15T10:00:00Z tag: <p><img src=''/></p><p><em>Yakov Fain shows how to get started with the latest from the Angular 2 team.</em></p><p>Angular 2 is about to be officially released (most likely it’ll happen at the AngularConnect conference on September 25, 2016). The current version of of Angular is Release Candidate 6 and I’ll describe how to create your first project with this release.</p> <p>During the last couple of months Angular 2 was substantially redesigned/improved in several areas.</p> <p:</p> <pre data- {} </pre> <p>Only the bootable module needs BrowserModule. If your app uses several modules (e.g. ShipmentModule, BillingModule, FormsModule, HttpModule), those modules are called feature modules and are based on the CommonModule instead of the BrowserModule. Each of the feature modules can be loaded either eagerly when the app starts, or lazily, e.g. when the user clicks on the "shipping" link.</p> <p <a href="">Angular compiler-cli (ngc)</a>, and not just transpiling from TypeScript to JavaScript here. A half of the size of a small app is the Angular compiler itself, and just by using the AoT your app becomes slimmer because the compiler won't be downloaded to the browser.</p> <p>In this post we’ll use the JIT compilation, and I'll show you how to start a new Angular 2 RC.6 project (managed by npm) from scratch without using the scaffolding tool Angular-CLI.</p> <p>To start a new project, create a new directory (e.g. angular-seed) and open it in the command window. Then run the command <code>npm init -y</code>, which will create the initial version of the package.json configuration file. Normally <code>npm init</code> asks several questions while creating the file, but the <code>-y</code> flag makes it accept the default values for all options. The following example shows the log of this command running in the empty angular-seed directory.</p> <pre data- $ npm init -y Wrote to /Users/username/angular-seed/package.json: { "name": "angular-seed", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "keywords": [], "author": "", "license": "ISC" } </pre> <p>Most of the generated configuration is needed either for publishing the project into the npm registry or while installing the package as a dependency for another project.</p> <p>Because we’re not going to publish our app into the npm registry, you should remove all of the properties except name, description, and scripts. The configuration of any npm-based project is located in the file package.json, which can look like this:</p> <pre data- { "name": "angular-seed", "description": "A simple npm-managed project", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" } } </pre> <p>The script's configuration allows you to specify commands that you can run in the command window. By default, <code>npm init</code> creates the <code>test</code> command, which can be run like this: <code>npm test</code>. Let’s replace it with the <code>start</code> command that we’ll be using for launching a web server that will feed our app to the browser. Several simple web servers are available and we'll be using the one called live-server that we’ll add to the generated package.json a bit later. Here’s the configuration of the scripts property:</p> <pre data- { ... "scripts": { "start": "live-server" } } </pre> <p>The <code>start</code> command is one of the pre-defined commands in npm scripts, and you can run it from the command window by entering <code>npm start</code>. Actually, you can define and run any other command that would serve as a shortcut for any command you could run manually, but in this case you'd need to run such command as follows: <code>npm run mycommand</code>.</p> <p.</p> <p>So let’s add the dependencies and devDependencies sections to the package.json file so it'll include everything that a typical Angular 2 app needs:</p> <pre data- { "name": "angular-seed", "description": "My simple project", "private": true, "scripts": { "start": "live-server" }, "dependencies": { "@angular/common": "2.0.0-rc.6", "@angular/compiler": "2.0.0-rc.6", "@angular/core": "2.0.0-rc.6", "@angular/forms": "2.0.0-rc.6", "@angular/http": "2.0.0-rc.6", "@angular/platform-browser": "2.0.0-rc.6", "@angular/platform-browser-dynamic": "2.0.0-rc.6", "@angular/router": "^3.0.0-rc.2", "core-js": "^2.4.0", "rxjs": "5.0.0-beta.11", "systemjs": "0.19.37", "zone.js": "0.6.17" }, "devDependencies": { "live-server": "0.8.2", "typescript": "^2.0.0" } } </pre> <p>Now run the command <code>npm install</code> on the command line from the directory where your package.json is located, and npm will download the preceding packages and their dependencies into the node_modules folder. After this process is complete, you’ll see a couple of hundred (sigh) directories and in the node_modules dir, including @angular, systemjs, live-server, and typescript.</p> <p>angular-seed<br> ├── index.html<br> ├── package.json<br> └── app<br> └── app.ts<br> ├──node_modules<br> ├── @angular<br> ├── systemjs<br> ├── typescript<br> ├── live-server<br> └── …</p> <p>In the angular-seed folder, let’s create a slightly modified version of index.html with the following content:</p> <pre data- <!DOCTYPE html> <html> <head> <title>Angular seed project</title> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <script src="node_modules/typescript/lib/typescript.js"></script> <script src="node_modules/core-js/client/shim.min.js"></script> <script src="node_modules/zone.js/dist/zone.js"></script> <script src="node_modules/systemjs/dist/system.src.js"></script> <script src="systemjs.config.js"></script> <script> System.import('app').catch(function(err){ console.error(err); }); </script> </head> <body> <app>Loading...</app> </body> </html> </pre> <p>The script tags load the required dependencies of Angular from the local directory node_modules. Angular modules will be loaded according to the SystemJS configuration file systemjs.config.jsm which can look as follows:</p> <pre data-'} } }); </pre> <p>The app code will consist of three files:</p> <ul> <li>app.component.ts – the one and only component of our app</li> <li>app.module.ts – The declaration of the module that will include our component</li> <li>main.ts – the bootstrap of the module</li> </ul> <p>Let’s create the file app.component.ts with the following content:</p> <pre data- ---- import {Component} from '@angular/core'; @Component({ selector: 'app', template: `<h1>Hello !</h1>` }) export class AppComponent { name: string; constructor() { this.Starting an Angular 2 RC.6 project.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Yakov Fain Capturing the performance of real users 2016-09-15T10:00:00Z tag: <p><img src=''/></p><p><em>Five Questions for Philip Tellis: Insights on the organizational benefits of RUM, and new techniques for measuring user performance and emotion.</em></p><p>I recently sat down with Philip Tellis, Chief Architect and RUM Distiller at SOASTA, to discuss the nuts and bolts of measuring real user performance and reaction. Here are some highlights from our talk.</p> <h2>What is RUM (and how is it different from synthetic monitoring)?</h2> <p>RUM stands for real user measurement. It is the practice of measuring the experience of real users while they browse through a site. Humans have many differences from robots, and these differences are what makes RUM stand out from synthetic monitoring.</p> <p>Unlike synthetic monitors, real users have emotions, are impatient, can be delighted or frustrated by an experience, make buying decisions that include emotional intuition, and have a high monetary value on their time. The performance experienced by a real user during their browsing can directly affect how they react to that experience. Real user measurement aims to measure not just the experience, but also the user's reaction to that experience.</p> <p>While RUM on its own goes beyond performance, at Velocity our focus is primarily on performance.</p> <h2>How do you measure real user performance?</h2> <p>Most modern browsers have APIs that help us measure the time it takes to carry out various actions in the browser. The NavigationTiming API tells us how long it took a page to load, the ResourceTiming API tells us how long it took individual resources to load, and the UserTiming API can be unleashed to determine how long different execution points in the page took. These APIs are great to provide us with operational metrics about the page's structure and the network infrastructure that it was loaded over. Where they are found lacking is in their ability to tell us user reaction, or to measure things beyond a full page load, like Single Page Applications (SPAs), Accelerated Mobile Pages (AMP), offline apps, and post load user interactions.</p> <p>To measure the rest of these, we need to write some creative JavaScript that hooks into and proxies several other browser APIs like MutationObserver, PerformanceObserver, XMLHttpRequest and event handlers. Hooking these things up is simple, but doing it right is hard. We've seen many libraries hook into these APIs but get a small detail wrong, which ends up breaking the site in some edge cases.</p> <p>Beyond performance, we need to measure how much time a user spends on a site, how many pages they visit, and the kinds of actions they take—for example, clicking a Like button, adding something to their shopping cart, clicking an ad, going through a checkout process, signing up for a newsletter.</p> <p>We use the boomerang JavaScript library to encapsulate all these APIs and measure real user performance and reaction.</p> <h2>What does measuring real user performance allow an organization to do?</h2> <p>With RUM data, an organization can quickly determine if performance is affecting business metrics and revenue. They can put a dollar value on every second of load time that a user experiences, and segment users based on how they might react to different load times. They can tell whether adding a new feature that may slow down the site is worth it based on how much revenue that change in load time is worth, or they can budget for performance improvements based on what those improvements will bring in.</p> <p>Organizations can also determine which pages are the most important to optimize and, with real-time data, can quickly tell if a newly launched campaign has the right results (and if not, why).</p> <p>Real-time data when combined with anomaly detection algorithms can also tell an organization if things are beyond the norm, which allows for dynamic alerting, saving an operations team from either having too many false positives or risking missing a single false negative. This can be scary at first, but after a few nights of uninterrupted sleep it starts to feel pretty good.</p> <h2>What are some exciting new techniques for measuring user performance?</h2> <p>The availability of NavigationTiming2 and ResourceTiming2 are definitely exciting, but beyond that the ability to garner some idea of user emotion by measuring rage clicks, mouse movements, eyebrow tracking, and mind reading is where real groundbreaking research is happening.</p> <h2>You're speaking at the <a href="">Velocity Conference in New York</a> this September. What presentations are you looking forward to attending while there?</h2> <p>It's actually quite hard to choose. I looked at my personal schedule for Velocity, and there are at least seven slots when I want to attend three talks in parallel. The talks about AMP, low powered devices, data science and all the case studies are what I look forward to the most.</p> <p>Continue reading <a href=''>Capturing the performance of real users.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Brian AndersonPhilip Tellis Evaluating cybersecurity product claims with a critical eye 2016-09-15T10:00:00Z tag: <p><img src=''/></p><p><em>Questions to help you weigh the true value of “scientifically proven” security solutions.</em></p><p>You probably encounter new cybersecurity products and services every day, whether from peers or online or at conferences. RSA Conference 2016, for example, had more than 475 exhibitors! With so many options, how do you see through the tactics and claims appealing for your attention?</p> <p>In my upcoming class, <a href="">Getting Started with Cybersecurity Science</a>, I will guide you through the steps, principles, and tools for how and why to do experiments with your code, your systems, and your cybersecurity solutions. The appendix to my book <a href=""><em>Essential Cybersecurity Science</em></a> has a discussion about bad science, scientific claims, and marketing hype–basically some approaches to evaluate the validity and value of cybersecurity product claims.</p> <p>Here is an excerpt from the appendix that provides some example questions you can use when presented with a cybersecurity product or solution. These questions can help you bring a healthy skepticism to your own evaluation of other people’s claims.</p> <h2>Clarifying questions for salespeople, researchers, and developers</h2> <p>Your experience and expertise are valuable when learning and evaluating new technology. The first time you read about a new cybersecurity development or see a new product, chances are that your intuition will give you a sense of the value and utility of that result for you. Vendors, marketers, even researchers are trying to convince you of something. It can be helpful for you to have ready some clarifying questions that probe deeper through the sales pitch. Whether you’re chatting with colleagues, reading an academic paper, or talking with an exhibitor at a conference, these questions might help you decide for yourself whether the product or experimental results are valid.</p> <ul> <li>Who did the work? Are there any conflicts of interest?</li> <li>Who paid for the work and why was it done?</li> <li>Did the experimentation or research follow the scientific method? Is it repeatable?</li> <li>How were the experimental or evaluation dataset or test subjects chosen?</li> <li>How large was the sample size? Was it truly representative?</li> <li>What is the precision associated with the results, and does it support the implied degree of accuracy?</li> <li>What are the factually supported conclusions, and what are the speculations?</li> <li>What is the sampling error?</li> <li>What was the developer or researcher looking for when the result was found? Was he or she biased by expectations?</li> <li>What other studies have been done on this topic? Do they say the same thing? If they are different, why are they different?</li> <li>Do the graphics and visualizations help convey meaningful information without manipulating the viewer?</li> <li>Are adverbs like “significantly” and “substantially” describing the product or research sufficiently supported by evidence?</li> <li>The product seems to be supported primarily by anecdotes and testimonials. What is the supporting evidence?</li> <li>How did you arrive at causation for the correlated data/event?</li> <li>Who are the authors of the study or literature? Are they credible experts in their field?</li> <li>Do the results hinge on rare or extreme data that could be attributed to anomalies or non-normal conditions?</li> <li>What is the confidence interval of the result?</li> <li>Are the conclusions based on predictions extrapolated from different data than the actual data?</li> <li>Are the results based on rare occurrences? What is the likelihood of the condition occurring?</li> <li>Has the result been confirmed or replicated by multiple, independent sources?</li> <li>Was there <em>no effect</em>, no effect detected, or a nonsignificant effect?</li> <li>Even if the results are statistically significant, is the effect size so small that the result is unimportant?</li> </ul> <p>For more red flags of bad science, see the <a href=""><em href="">Science or Not</em></a> blog.</p> <p>Continue reading <a href=''>Evaluating cybersecurity product claims with a critical eye.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Josiah Dykstra Advancing Procurement Analytics 2016-09-14T12:05:00Z tag: <p><img src=''/></p><p><em>Learn how your company can significantly improve procurement analytics to solve business questions quickly and effectively.</em></p> <h2>Introduction</h2> <p.</p> <p>An important area where this transformation has a huge business impact is the optimization of<em> <em>procurement processes</em></em>. During the procurement process, some companies may spend more than <em>two thirds of revenue</em> buying goods and services, which means that even a modest reduction in purchasing costs can have a significant effect on profit. From this perspective, procurement—<em>out of all business activities</em>—is the key element in achieving cost reduction.</p><p>Continue reading <a href=''>Advancing Procurement Analytics.</a></p><img src="" height="1" width="1" alt=""/><img src="" height="1" width="1" alt=""/> Federico Castanedo
http://feeds.feedburner.com/oreilly/radar/rss10?m=167
CC-MAIN-2016-40
refinedweb
20,880
53.71
Opened 10 years ago Closed 9 years ago Last modified 9 years ago #61 closed enhancement (fixed) [patch] auth.User admin form shouldn't require people to edit hashes Description People shouldn't have to enter MD5 hashes in the password field. World Online never used the user form to create users or edit passwords, but now there's a demand for a better form. We can solve it with JavaScript. Attachments (4) Change History (67) comment:1 Changed 10 years ago by pb@… comment:2 Changed 10 years ago by anonymous - milestone set to Version 1.0 comment:3 Changed 10 years ago by GomoX <gomo AT datafull DOT com>). comment:4 Changed 10 years ago by jacob - milestone changed from Version 1.0 to Version 1.1 comment:5 Changed 10 years ago by ubernostrum: - It makes password behavior more consistent. Currently, adding a user or changing a user's password behaves differently in different cases: - If the user is being added/modified by the AddManipulator or ChangeManipulator for the User class, the password is expected to be a hash. - If the user's password is being modified by set_password, it's expected to be the raw password. - If the user is being created by 'django-admin.py createsuperuser', the password is expected to be the raw password. -... :) comment:6 Changed 10 years ago by Tom Tobin <korpios@…> Hashing before sending offers protection against cleartext password sniffing when the connection is not encrypted (through SSL, typically). I believe LiveJournal is among the sites that use this technique. comment:7 Changed 10 years ago by eugene@…. comment:8 Changed 10 years ago by hugo ;) comment:9 Changed 10 years ago by rjwittams. comment:10 Changed 10 years ago by eug. comment:11 Changed 10 years ago by rjwittams Eugene: eh? I am suggesting that we have this built in to the admin. How does that make things repetitive etc? comment:12 Changed 10 years ago by eugene@… I am sorry. It looks like I misunderstood you. comment:13 Changed 10 years ago by ian@…. comment:14 Changed 10 years ago by afternoon@…. comment:15 Changed 10 years ago by eugene@… Isn't it the same as sending password in clear text to the server? comment:16 Changed 10 years ago by django@…. comment:17 Changed 10 years ago by Tom Tobin <korpios@…>. comment:18 Changed 10 years ago by eugene@… django@…: I wanted to confirm that I understood afternoon@…... comment:19 Changed 10 years ago by anonymous: - the admin has to enter the password twice, it is compared and if it is given and twice the same, the password of the user is set. This sends the password over in clear - the admin has to enter the salted hash, but some JavaScript is used to encrypt it on the fly. This makes JS a requirement for password setting ... comment:20 Changed 10 years ago by Amit Upadhyay <upadhyay@…> - Cc upadhyay@… added. So here are the options we have: - sotring password in clear on server, and sending it clear from client is the worst option. Assessment: -S -N - storing hash of password on server adn sending hash of password using javascript is indistinguishable from the above, hash of the password becomes the hash. Assessment: -S -N - storing salted hash on server, and sending password in clear from client protects us when server is compromised. Assessment: +S -N. - Store salted hash on server. Send random string from server to client. Send (salted?) hash of random_string+salted hash of password to the server from client for auth. Assessment: -S . comment:21 Changed 10 years ago by Amit Upadhyay <upadhyay@…>. comment:22 Changed 10 years ago by Amit Upadhyay <upadhyay@…>. comment:23 Changed 10 years ago by hugo Ok, even though I am not a fan of the JS-idea for login, this would be a way how you could do it: - when loading the login form, a onload javascript deactivates the login button - the onblur event on the username field will fetch the salt for the given user via XmlHttpRequest and store it somewhere in the current page. The salt is the part in the middle of the stored password between the $ chars. - the callback for the salt-fetching XHR will enable the login button (because only now we can login securely) - the onclick handler of the login button uses SHA1 to calculate the correct hash and then concatenates 'sha1', the salt and the hash delimited by '$' and puts that into the password field. Then it submits the form. - the form handler checks the password for the sha1$<salt>$<hash> syntax and if it receives that, doesn't do it's own hash calculation but just uses the provided value and does a direct compare.). comment:24 Changed 10 years ago by hugo My test page is at Please keep in mind that if you don't have JS active, it will send your password in the clear, as it's just a form with submit button. Please don't try the form with real passwords ;-) comment:25 Changed 10 years ago by GomoX <gomo@…> - Cc gomo@… added comment:26 Changed 10 years ago by hugo? ;-) comment:27 Changed 10 years ago by hugo. comment:28 Changed 10 years ago by eugene@… Obviously there are Blowfish implementations written in js. Example:. I know for a fact that Dojo adds one soon (not in SVN at this moment). comment:29 Changed 10 years ago by GomoX <gomo@…>. comment:30 Changed 10 years ago by. comment:31 Changed 10 years ago by eugene@… discusses one possible solution using AJAX and salted hash. comment:32 Changed 10 years ago by eugene@… While the above solution is secure for logging in existing users, it doesn't solve the problem of password creation nor password change. Nevertheless it is useful by itself --- usually logging in is more frequent operation. comment:33 Changed 10 years ago by mattipatti... comment:34 Changed 10 years ago by mattipatti and ofcourse there is a closing parenthesis missing in the code -block, so it should be like: def _pre_save(self): import re if not re.match(r"(^[a-z0-9]+\$[a-z0-9]+\$[a-z0-9]+$)", self.password): self.set_password(self.password) comment:35 Changed 10 years ago by akaihola. comment:36 Changed 10 years ago by boxed@…. comment:37 Changed 10 years ago by anonymous comment:38 Changed 10 years ago by lalo.martins@…... comment:39 Changed 10 years ago by akaihola A Firefox Greasemonkey user script which does client-side SHA-1 generating is available in the Django Cookbook. No changes needed in Django and it works instantly on all Django sites. comment:40 Changed 9 years ago by joaoma@…. comment:41 Changed 9 years ago by Home - Type defect deleted comment:42 Changed 9 years ago by anonymous - Component changed from Admin interface to Database wrapper - milestone changed from Version 1.1 to Version 0.93 - priority changed from normal to high - Severity changed from normal to major - Type set to defect - Version set to magic-removal comment:43 Changed 9 years ago by adrian comment:44 Changed 9 years ago by Tom Tobin <korpios@…> - Component changed from Database wrapper to Admin interface - milestone changed from Version 0.93 to Version 1.1 - priority changed from high to normal - Severity changed from major to normal - Version magic-removal deleted Reverted properties. comment:45 Changed 9 years ago by Carl - Cc Carl added; upadhyay@… gomo@… removed - Component changed from Admin interface to 1 - Keywords Carl added - milestone changed from Version 1.1 to 1 - priority changed from normal to 1 - Severity changed from normal to 1 - Summary changed from auth.User admin form shouldn't require people to edit MD5 hashes to Carl - Type changed from defect to 1 - Version set to 1 comment:46 Changed 9 years ago by adrian - Cc upadhyay@… gomo@… added; Carl removed - Component changed from 1 to Admin interface - Keywords Carl removed - milestone changed from 1 to Version 1.1 - priority changed from 1 to normal - Severity changed from 1 to normal - Summary changed from Carl to auth.User admin form shouldn't require people to edit MD5 hashes - Version changed from 1 to SVN Reverted spam. Changed 9 years ago by SmileyChris Changed 9 years ago by SmileyChris Oops, spaces instead of tabs in documentation this time :( comment:47 Changed 9 years ago by SmileyChris - Summary changed from auth.User admin form shouldn't require people to edit MD5 hashes to [patch] auth.User admin form shouldn't require people to edit hashes - Type changed from 1 to enhancement I got sick of waiting so here you go: finally, a complete working PasswordField patch! comment:48 Changed 9 years ago by mtredinnick Changed 9 years ago by SmileyChris All db.backends creation data types updated to work with PasswordField comment:49 Changed 9 years ago by adrian - Resolution set to fixed - Status changed from new to closed comment:50 Changed 9 years ago by dummy@… - Resolution fixed deleted - Status changed from closed to reopened The latest changeset seems to have something strange error: Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/django/core/handlers/base.py" in get_response - response = callback(request, *callback_args, callback_kwargs) File "/usr/local/lib/python2.4/site-packages/django/contrib/admin/views/auth.py" in user_add_stage - }, context_instance=template.RequestContext(request)) File "/usr/local/lib/python2.4/site-packages/django/shortcuts/init.py" in render_to_response - return HttpResponse(loader.render_to_string(*args, kwargs)) File "/usr/local/lib/python2.4/site-packages/django/template/loader.py" in render_to_string - t = get_template(template_name) File "/usr/local/lib/python2.4/site-packages/django/template/loader.py" in get_template - return get_template_from_string(*find_template_source(template_name)) File "/usr/local/lib/python2.4/site-packages/django/template/loader.py" in find_template_source - raise TemplateDoesNotExist, name TemplateDoesNotExist at /admin/auth/user/add/ admin/auth/user/add_form.html comment:51 Changed 9 years ago by jacob - Resolution set to fixed - Status changed from reopened to closed That template's definitally there (); you probably need to update the rest of your tree. Changed 9 years ago by dummy@… this fix the template packing for admin/auth/user/*.html comment:52 Changed 9 years ago by dummy@… - Resolution fixed deleted - Status changed from closed to reopened yes, the template is there, but it won't be packed by setup.py comment:53 Changed 9 years ago by adrian - Resolution set to fixed - Status changed from reopened to closed comment:54 Changed 9 years ago by erob@… - Cc erob@… added - Resolution fixed deleted - Status changed from closed to reopened. comment:55 Changed 9 years ago by ubernostrum If you're on current SVN, or any rev later than 3524, you shouldn't have to do any patching; a special-case "create user" view was added which doesn't require you to enter a hash. comment:56 Changed 9 years ago by ubernostrum - Resolution set to fixed - Status changed from reopened to closed comment:57 Changed 9 years ago by erob@… - Resolution fixed deleted - Status changed from closed to reopened ? comment:58 Changed 9 years ago by adrian - Resolution set to fixed - Status changed from reopened to closed erob@…: The PasswordField patch in this ticket is not supported and probably won't be added to Django. The *real* issue of this ticket (improving the auth.User admin form not to use hashes) has been fixed. comment:59 Changed 9 years ago by anonymous - milestone changed from Version 1.1 to Version 1.0 - Resolution fixed deleted - Status changed from closed to reopened Curious as to why this was marked as fixed because when you edit a user you still have to enter encrypted text. Nice job on the add user fix but it needs to be taken a step further and support editing users without having to enter encrypted text. comment:60 Changed 9 years ago by joaoma@… Agreed, the fix is incomplete, I was really surprised when I realized I still had to enter the encrypted text when changing a user's password. Why not have a "change password" button available for all users when the logged user is an admin? comment:61 Changed 9 years ago by SmileyChris This ticket has gone through the wars, maybe a new more specific one should be opened. :) comment:62 Changed 9 years ago by adrian - Resolution set to fixed - Status changed from reopened to closed comment:63 Changed 9 years ago by anonymous - milestone Version 1.0 deleted Milestone Version 1.0 deleted Here's a decent-looking JS MD5 implementation to use in whatever widget you create.
https://code.djangoproject.com/ticket/61
CC-MAIN-2015-48
refinedweb
2,102
62.07
On Mon, 2005-08-01 at 09:13 +0000, Mikhael Goikhman wrote: > Filtering by category and branch will always be useful. Otherwise why you > would want to use a/c-b-v namespace in the first place? Because it looked like a good idea at the time. Matching "project--main--" and "project--maintenance--" with "project--ma" might not be useful, but matching "project-doc--main" and "project--main" with "project" is useful. > It is only the > question whether such filtering should be done in individual frontends or > in baz. I think the second, like it is done now. In my perspective, the question is whether explicit support in baz is: * necessary: no, since the desired effect can me achieved by matching with "category--" * or desirable: no, since explicit support for c-b-v causes UI clutter and confusing terminology (what's a branch? what's a version?) > Anyway, I think abrowse should not be obsoleted for several months if you > remove the concept of category and branch in the new browse command. Since you seem to feel strongly about it, I think abrowse could stay around for a little while, as a "compatibility" command. Notwithstanding renaming of rbrowse to browse, and the others changes that have been discussed here. There might also be issues of additional maintenance burden (abrowse is reportedly an unmaintainable mess), but that's up to you and Matthieu to discuss. -- -- ddaa signature.asc Description: This is a digitally signed message part
http://lists.gnu.org/archive/html/gnu-arch-users/2005-08/msg00006.html
CC-MAIN-2015-06
refinedweb
245
62.17
451 [details] Dropdown activated - shows 5 versions of same AppIcon resource Hi guys, There appears to be a slight problem within Info.plist -> iPad Icons -> App, Spotlight and Settings Icons -> Asset Catalog dropdown. Please see the pic below. There is only one Asset Catalog in the Solution Explorer however I am unable to actually open to view the icons. Moreover, on build a "AppIcons.xcassets" is created within the /Resources tree. The properties of this generate namespace indicate that all images are there with Build Action set to "ImageAsset" and "Copy always". Is this correct behavior? Hi John, It seems something got wrongly saved in you project and that's causing all the issues you've described. Since we couldn't reproduce this bug, I'll need to ask you for some information: - Could you attach your project or a sample project in which we can reproduce this issue? - Which Visual Studio, Xamarin for Visual Studio, Xamarin iOS versions are you using? (Help -> About Microsoft Visual Studio) - Could you reproduce this issue with a new project? Are you hitting it right after creating the project or after making some changes to the project/Asset catalog? Thanks for your help! John, please let us know if you still need some help with this issue. Thanks, Adrian
https://xamarin.github.io/bugzilla-archives/42/42102/bug.html
CC-MAIN-2019-39
refinedweb
215
65.83
Sorting Topics - Introduction - Performance Criteria - Selection Sort - Insertion Sort - Shell Sort - Quicksort - Choosing a Sorting Algorithm Interesting Links Animated demonstrations of the algorithms can be viewed at the following sites: - Java® Sorting Programs - More Java® Sorting Animations 1. Introduction Sorting techniques have a wide variety of applications. Computer-Aided Engineering systems often use sorting algorithms to help reason about geometric objects, process numerical data, rearrange lists, etc. In general, therefore, we will be interested in sorting a set of records containing keys, so that the keys are ordered according to some well defined ordering rule, such as numerical or alphabetical order. Often, the keys will form only a small part of the record. In such cases, it will usually be more efficient to sort a list of keys without physically rearranging the records. 2. Performance Criteria There are several criteria to be used in evaluating a sorting algorithm: - Running time. Typically, an elementary sorting algorithm requires O(N2) steps to sort N randomly arranged items. More sophisticated sorting algorithms require O(N log N) steps on average. Algorithms differ in the constant that appears in front of the N2 or N log N. Furthermore, some sorting algorithms are more sensitive to the nature of the input than others. Quicksort, for example, requires O(N log N) time in the average case, but requires O(N2) time in the worst case. - Memory requirements. The amount of extra memory required by a sorting algorithm is also an important consideration. In place sorting algorithms are the most memory efficient, since they require practically no additional memory. Linked list representations require an additional N words of memory for a list of pointers. Still other algorithms require sufficent memory for another copy of the input array. These are the most inefficient in terms of memory usage. - Stability. This is the ability of a sorting algorithm to preserve the relative order of equal keys in a file. Examples of elementary sorting algorithms are: selection sort, insertion sort, shell sort and bubble sort. Examples of sophisticated sorting algorithms are quicksort, radix sort, heapsort and mergesort. We will consider a selection of these algorithms which have widespread use. In the algorithms given below, we assume that the array to be sorted is stored in the memory locations a[1],a[2],...,a[N]. The memory location a[0] is reserved for special keys called sentinels, which are described below. 3. Selection Sort This "brute force'' method is one of the simplest sorting algorithms. Approach - Find the smallest element in the array and exchange it with the element in the first position. - Find the second smallest element in the array and exchange it with the element in the second position. - Continue this process until done. Here is the code for selection sort: Selection.cpp #include "Selection.h" // Typedefs ItemType. inline void swap(ItemType a[], int i, int j) { ItemType t = a[i]; a[i] = a[j]; a[j] = t; } void selection(ItemType a[], int N) { int i, j, min; for (i = 1; i < N; i++) { min = i; for (j = i+1; j <= N; j++) if (a[j] < a[min]) min = j; swap(a,min,i); } } Selection sort is easy to implement; there is little that can go wrong with it. However, the method requires O(N2) comparisons and so it should only be used on small files. There is an important exception to this rule. When sorting files with large records and small keys, the cost of exchanging records controls the running time. In such cases, selection sort requires O(N) time since the number of exchanges is at most N. 4. Insertion Sort This is another simple sorting algorithm, which is based on the principle used by card players to sort their cards. Approach - Choose the second element in the array and place it in order with respect to the first element. - Choose the third element in the array and place it in order with respect to the first two elements. - Continue this process until done. Insertion of an element among those previously considered consists of moving larger elements one position to the right and then inserting the element into the vacated position. Here is the code for insertion sort: Insertion.cpp #include "Insertion.h" // Typedefs ItemType. void insertion(ItemType a[], int N) { int i, j; ItemType v; for (i = 2; i <= N; i++) { v = a[i]; j = i; while (a[j-1] > v) { a[j] = a[j-1]; j--; } a[j] = v; } } It is important to note that there is no test in the while loop to prevent the index j from running out of bounds. This could happen if v is smaller than a[1],a[2],...,a[i-1]. To remedy this situation, we place a sentinel key in a[0], making it at least as small as the smallest element in the array. The use of a sentinel is more efficient than performing a test of the form while (j > 1 && a[j-1] > v). Insertion sort is an O(N2) method both in the average case and in the worst case. For this reason, it is most effectively used on files with roughly N < 20. However, in the special case of an almost sorted file, insertion sort requires only linear time. 5. Shell Sort This is a simple, but powerful, extension of insertion sort, which gains speed by allowing exchanges of non-adjacent elements. Definition An h-sorted file is one with the property that taking every hth element (starting anywhere) yields a sorted file. Approach - Choose an initial large step size, hK, and use insertion sort to produce an hK-sorted file. - Choose a smaller step size, hK-1, and use insertion sort to produce an hK-1-sorted file, using the hK-sorted file as input. - Continue this process until done. The last stage uses insertion sort, with a step size h1 = 1, to produce a sorted file. Each stage in the sorting process brings the elements closer to their final positions. The method derives its efficiency from the fact that insertion sort is able to exploit the order present in a partially sorted input file; input files with more order to them require a smaller number of exchanges. It is important to choose a good sequence of increments. A commonly used sequence is (3K-1)/2,...,121,40,13,4,1, which is obtained from the recurrence hk = 3 hk+1+1. Note that the sequence obtained by taking powers of 2 leads to bad performance because elements in odd positions are not compared with elements in even positions until the end. Here is the complete code for shell sort: Shell.cpp #include "Shell.h" // Typedefs ItemType. void shell(ItemType a[], int N) { int i, j, h; ItemType v; for (h = 1; h < = N/9; h = 3*h+1); for (; h > 0; h /= 3) for (i = h+1; i <= N; i++) { v = a[i]; j = i; while (j > h && a[j-h] > v) { a[j] = a[j-h]; j -= h; } a[j] = v; } } Shell sort requires O(N3/2) operations in the worst case, which means that it can be quite effectively used even for moderately large files (say N < 5000). 6. Quicksort This divide and conquer algorithm is, in the average case, the fastest known sorting algorithm for large values of N. Quicksort is a good general purpose method in that it can be used in a variety of situations. However, some care is required in its implementation. Since the algorithm is based on recursion, we assume that the array (or subarray) to be sorted is stored in the memory locations a[left],a[left+1],...,a[right]. In order to sort the full array, we simply initialize the algorithm with left = 1 and right = N. Approach - Partition the subarray a[left],a[left+1],...,a[right] into two parts, such that - element a[i] is in its final place in the array for some i in the interval [left,right]. - none of the elements in a[left],a[left+1],...,a[i-1] are greater than a[i]. - none of the elements in a[i+1],a[i+2],...,a[right] are less than a[i]. - Recursively partition the two subarrays, a[left],a[left+1],...,a[i-1] and a[i+1],a[i+2],...,a[right], until the entire array is sorted. - Choose a[right] to be the element that will go into its final position. - Scan from the left end of the subarray until an element greater than a[right] is found. - Scan from the right end of the subarray until an element less than a[right] is found. - Exchange the two elements which stopped the scans. - Continue the scans in this way. Thus, all the elements to the left of the left scan pointer will be less than a[right] and all the elements to the right of the right scan pointer will be greater than a[right]. - When the scan pointers cross we will have two new subarrays, one with elements less than a[right] and the other with elements greater than a[right]. We may now put a[right] in its final place by exchanging it with the leftmost element in the right subarray. Quicksort.cpp // inline void swap() is the same as for selection sort. void quicksort(ItemType a[], int left, int right) { int i, j; ItemType v; if (right > left) { v = a[right]; i = left - 1; j = right; for (;;) { while (a[++i] < v); while (a[--j] > v); if (i >= j) break; swap(a,i,j); } swap(a,i,right); quicksort(a,left,i-1); quicksort(a,i+1,right); } } Note that this code requires a sentinel key in a[0] to stop the right-to-left scan in case the partitioning element is the smallest element in the file. Quicksort requires O(N log N) operations in the average case. However, its worst case performance is O(N2), which occurs in the case of an already sorted file! There are a number of improvements which can be made to the basic quicksort algorithm. - Using the median of three partitioning method makes the worst case far less probable, and it eliminates the need for sentinels. The basic idea is as follows. Choose three elements, a[left], a[middle] and a[right], from the left, middle and right of the array. Sort them (by direct comparison) so that the median of the three is in a[middle] and the largest is in a[right]. Now exchange a[middle] with a[right-1]. Finally, we run the partitioning algorithm on the subarray a[left+1],a[left+2],...,a[right-2] with a[right-1] as the partitioning element. - Another improvement is to remove recursion from the algorithm by using an explicit stack. The basic idea is as follows. After partitioning, push the larger subfile onto the stack. The smaller subfile is processed immediately by simply resetting the parameters left and right (this is known as end-recursion removal). With the explicit stack implementation, the maximum stack size is about log2 N. On the other hand, with the recursive implementation, the underlying stack could be as large as N. - A third improvement is to use a cutoff to insertion sort whenever small subarrays are encountered. This is because insertion sort, albeit an O(N2) algorithm, has a sufficiently small constant in front of the N2 to be more efficient than quicksort for small N. A suitable value for the cutoff subarray size would be approximately in the range 5 ~ 25. 7. Choosing a Sorting Algorithm Table 1 summarizes the performance characteristics of some common sorting algorithms. Shell sort is usually a good starting choice for moderately large files N < 5000, since it is easily implemented. Bubble sort, which is included in Table 1 for comparison purposes only, is generally best avoided. Insertion sort requires linear time for almost sorted files, while selection sort requires linear time for files with large records and small keys. Insertion sort and selection sort should otherwise be limited to small files. Quicksort is the method to use for very large sorting problems. However, its performance may be significantly affected by subtle implementation errors. Furthermore, quicksort performs badly if the file is already sorted. Another possible disadvantage is that quicksort is not stable i.e. it does not preserve the relative order of equal keys. All of the above sorting algorithms are in-place methods. Quicksort requires a small amount of additional memory for the auxiliary stack. There are a few other sorting methods which we have not considered. Heapsort requires O(N log N) steps both in the average case and the worst case, but it is about twice as slow as quicksort on average. Mergesort is another O(N log N) algorithm in the average and worst cases. Mergesort is the method of choice for sorting linked lists, where sequential access is required. Table 1: Approximate running times for various sorting algorithms
http://ocw.mit.edu/courses/civil-and-environmental-engineering/1-124j-foundations-of-software-engineering-fall-2000/lecture-notes/sorting/
crawl-003
refinedweb
2,165
63.9
I was working on an article comparing C++ containers to builtin arrays, which quickly grew out of hand due to the required prerequisite information. Instead I will be spending some time covering the std::vector,std::array, and std::string containers. After an initial introduction, I will compare these containers to the builtin C-style arrays and show how they can improve your programs and reduce Today I will start the series by introducing you to std::vector, the C++ container type used for dynamic arrays (those that can grow or shrink during runtime). std::vector is worth adding to your toolbox because it provides many extra features over builtin arrays. std::vector Overview First thing's first: In order to utilize std::vector containers, you will need to include the vector header: #include <vector> std::vector is a header-only implementation, which means that once you have a C++ runtime set up for your target system you will get this feature for free. As mentioned above, std::vector is a templated class that represents dynamic arrays. std::vector typically allocates memory on the heap (unless you override this behavior with your own allocator). The std::vector class mostly abstracts memory management, as it grows and shrinks automatically if elements are added or removed. However, std::vector is not magic, as the class still suffers from the runtime costs associated with dynamic memory allocation. The std::vector class simplifies operations which are cumbersome with builtin arrays. For example: - Current array size is tracked - With C-style arrays, you would need to manually track this - Current amount of allocated memory is tracked - With C-style arrays, you would need to manually track this - Growing and shrinking the array is a simple function call - Inserting elements into the middle of an array is simplified into a function call. - C-style arrays require manually moving memory around when inserting elements into the middle of the array Unlike C-style arrays, you can use strategies such as SBRM (scope-bound resource management). If you declare a std::vector in a specific scope, when that scope is no longer valid the std::vector memory will be destructed and freed automatically. Strategies like SBRM reduce the chance of introducing memory leaks. In summary: std::vector simplifies much of the overhead code required to manage dynamic arrays. This eliminates boilerplate code that developers are required to write for basic program functionality. You are also relying on tested code and eliminating extra boilerplate code that is no longer required. These factors help to improve your code's reliability and reduce the potential areas where bugs can be introduced. Creating a std::vector Now that we know a little bit about std::vector, let's see how we work with one. std::vector is a templated container class. When you are declaring a std::vector, you need to template the class on the type of data that needs to be stored: //Declare an empty `std::vector` that will hold `int`s std::vector<int> v2; You can use an initializer list when creating your std::vector to assign initial values: //v1 will be sized to the length of the initializer list std::vector<int> v1 = {-1, 3, 5, -8, 0}; Copying vectors is quite simple! You don't need to worry about remembering the correct arguments for memcpy, simply use the equals operator (=): v2 = v1; //copy Or declare a new std::vector and use the vector you want copied as a constructor argument: auto v3(v1); Accessing Data std::vector provides many interfaces for accessing the underlying data: front() back() - operator [] - no bounds checks are performed at() - bounds checks are performed - generates an exception data() The first element of the vector can be accessed using front(). Likewise, the last element can be accessed using v1.back(). There are two methods used to access specific elements: the at() function and the [] operator. The primary difference between the two methods is that the at() member function provides bounds checking. If the element you are trying to access is out-of-bounds, at() throws an exception. Using the familiar [] operator will not provide any bounds checking, providing for faster access while increasing the risk of a segmentation fault. std::cout << "v1.front() is the first element: " << v1.front() << std::endl; std::cout << "v1.back() is the last element: " << v1.back() << std::endl; std::cout << "v1[0]: " << v1[0] << std::endl; std::cout << "v1.at(4): " << v1.at(4) << std::endl; // Bounds checking will generate exceptions. Try: //auto b = v2.at(10); //However, operator [] is not bounds checked! //This may or may not seg fault //std::cout << "v2[6]: " << v2[6] << std::endl; Before assigning to elements using the [] operator, make sure your std::vector memory allocation is sufficient, else you risk causing a segmentation fault by accessing memory that you don't own. data() You can get the pointer to the underlying std::vector buffer using the data() member function. This is useful for interoperating with code that utilizes raw pointers to buffers. For example, let's assume you are using an API with a C-style buffer interface: void carr_func(int * vec, size_t size) { std::cout << "carr_func - vec: " << vec << std::endl; } If you tried to pass a std::vector for the first argument, you would generate a compiler error: std::vector, unlike builtin arrays, will not decay into a pointer. Instead you need to use the data() member: //Error: //carr_func(v1, v1.size()); //OK: carr_func(v1.data(), v1.size()); Adding and Removing Elements There are a variety of functions used to add or remove elements from a std::vector: push_back() emplace_back() pop_back() clear() resize() emplace() - requires an iterator insert() - requires an iterator erase() - requires an iterator To add new elements to the end of a std::vector, use push_back() and emplace_back(). These member functions are quite similar, with one critical distinction: emplace_back() allows for constructor arguments to be forwarded so that the new element can be constructed in place. If you have an existing object (or don't mind a temporary object being created), then you would use push_back(). For an integer the distinction is trivial: int x = 0; v2.push_back(x); //adds element to end v2.emplace_back(10); //constructs an element in place at the end However, for more complex types with complicated constructors, emplace_back() becomes very useful: //Constructor: circular_buffer(size_t size) std::vector<circular_buffer<int>> vb; vb.emplace_back(10); //forwards the arg to the circular_buffer constructor to make a buffer of size 10 For another emplace_back() example, check out the cppreference entry. You are not limited to inserting elements at the end of a std::vector: insert() and emplace() play similar roles and allow you to insert elements anywhere in the list. Simply supply an iterator to the location where the new element should be inserted: v2.insert(v2.begin(), -1); //insert an element - you need an iterator v2.emplace(v2.end(), 1000); //construct and place an element at the iterator To erase a specific element, call the erase() function and supply an iterator to the element you want removed. v2.erase(v2.begin()); //erase our new element - also needs an iterator You can also remove the last element from the std::vector using pop_back(): v2.pop_back(); //removes last element However, note that this function does not return a value! The element is simply removed from the vector. Read the element using end() prior to removal. You can clear all elements in a std::vector by using the clear() function. This will set the size() to 0, but note that capacity() will remain at the previous allocation level. Managing space You need to be at least somewhat aware of the underlying memory implications of using a std::vector. Your vector often ends up occupying more space than a builtin array, as memory can be allocated to handle future growth. Extra memory can be allocated to prevent a std::vector from needing to reallocate memory every time a new element is inserted. Additionally, if you remove elements from the std::vector, that memory remains allocated to the std::vector. Luckily the std::vector class provides us a few functions for managing memory: reserve() shrink_to_fit() resize() Reallocations are costly in terms of performance. Often, you are aware of how many elements you need to store in your std::vector (at least as a maximum). You can use the reserve() function to make a single large allocation, reducing the runtime hit that may occur from frequent reallocations: v2.reserve(10) //increase vector capacity to 10 elements reserve() can only increase the vector size. If the requested capacity is smaller than the current capacity, nothing happens. If you want to shrink your vector's current capacity to match the current size, you can use the shrink_to_fit() function: //If you have reserved space greater than your current needs, you can shrink the buffer v2.shrink_to_fit(); You can also use the resize() function to manually increase or decrease the size of your std::vector at will. If you are increasing the size using resize(), the new elements will be 0-initialized by default. You can also supply an initialization value for the new elements. If you are resizing to a smaller size, the elements at the end of the list will be removed. v2.resize(7); //resize to 7. The new elements will be 0-initialized v2.resize(10, -1); //resize to 10. New elements initialized with -1 v2.resize(4); //shrink and strip off extra elements If you want to add space but don't want to add elements to the vector, use the reserve() function instead of resize(). Other Useful Interfaces std::vector keeps track of useful details for you and provides the following functions: size() capacity() empty() max_size() size() represents the current number of elements in the array. If there are no elements in the std::vector, empty() will return true. If there are elements, it will return false. if(v1.empty()) { std::cout << "v1 is empty!" << std::endl; } else { std::cout << "v1 is not empty. # elements: " << v1.size() << std::endl; } The current number of elements stored in the std::vector is not necessarily the same as the amount of memory currently allocated. To find the number of elements that can be stored based on the current memory allocation, you can use the capacity() function: if(v1.capacity() == v1.size()) { std::cout << "v1 has used all of the current memory allocation. Adding new elements will trigger a realloc." << std::endl; } In order to know the maximum size of a std::vector on your system, you can use the max_size() member function. On my host machine this number is huge. You can tune this capacity on an embedded system to limit the max vector size on your system. if(v1.size() == v1.max_size()) { std::cout << "v1 is at maximum size. no more memory for you!" << std::endl; } Container Operations I just want to briefly note: since std::vector is a container class, any operations meant to be used on containers will work: //Container operations work std::sort(v2.begin(), v2.end()); I will be discussing container operations further in a future article. For Loop One benefit of using modern C++ is that you can use more convenient for loops! Instead of specifying the format we are all used to: for(int j = 0; j < v2.size(); j++) We can simply use this format to iterate through all elements of a vector: for(auto &t: v2) Here's a trivial example which prints every element using the new for loop construct: std::cout << std::endl << "v2: " << std::endl; for (const auto & t : v2) { std::cout << t << " "; } std::cout << std::endl; std::vector Overhead Using std::vector does come with minor overhead costs (which, to me, are worth the benefits). These costs are similar to manually managing memory allocations on your own. - The std::vectorcontainer requires 24 bytes of overhead - Construction and destruction take time - Allocation time is required - tor the initial storage allocation, as well as any resizing that is required. cppreference lists the complexity of vector operations as follows: - Random access - constant O(1) - Insertion or removal of elements at the end - amortized constant O(1) - Insertion or removal of elements - linear in distance to the end of the vector O(n) Putting it All Together Example code for std::vector can be found in the embedded-resources Github repository. Further Reading I have just provided a cursory overview of the std::vector container. Many great resources on the web exist to help explain this feature, so don't be afraid to search around or ask questions!
https://embeddedartistry.com/blog/2017/6/20/an-introduction-to-stdvector
CC-MAIN-2017-43
refinedweb
2,104
51.89
Opened 4 years ago Last modified 2 years ago This patch solves a problem of multiple errors while hosting Django app with FastCGI and MySQL. I was running it for 24 hours without any problems. Some highlights: 1) Django uses the same connection for all threads. It breaks MySQL leading to numerous random CR_SERVER_GONE_ERROR and CR_SERVER_LOST errors. Every independently talking entity should have its own connection. I've implemented mysql.DatabaseWrapper using dictionary to keep one connection per thread. 2) During request for new connection, old connections are examined. If thread is gone, its connection is closed and garbage-collected. 3) MySQL connection can time-out depending on MySQL server settings. The standard practice is to ping() stored connections before use. My implementation does it for every retrieved connection. Some potential problems: 1) I rename threads, which request connections, to make them unique. If some other code relies on thread names, it may be broken. 2) 24 hour testing is not enough for production quality code. Changes are very small and compact. They can be verified manually. But still it is not a full blown system test under different hosting scenarios. 3) Please take a look at the code and verify that it is optimal --- my python experience is limited, I could implement something in sub-optimal way. The patch: Index: mysql.py =================================================================== --- mysql.py (revision 629) +++ mysql.py (working copy) @@ -11,6 +11,9 @@ from MySQLdb.constants import FIELD_TYPE import types +import thread, threading +from sets import Set + DatabaseError = Database.DatabaseError django_conversions = conversions.copy() @@ -23,32 +26,78 @@ class DatabaseWrapper: def __init__(self): - self.connection = None + self.connections = {} + self.threads = Set() + self.lock = thread.allocate_lock() self.queries = [] + + def _get_connection(self): + self.lock.acquire() + try: + # find existing connection + id = threading.currentThread().getName() + if id in self.connections: + connection = self.connections[id] + connection.ping() + return connection + # normalize thread name + if id != 'MainThread': + id = str(thread.get_ident()) + threading.currentThread().setName(id) + # remove deadwood + dead = self.threads - Set([x.getName() for x in threading.enumerate()]) + for name in dead: + self.connections[name].close() + del self.connections[name] + self.threads -= dead + # create new connection + from django.conf.settings import DATABASE_USER, DATABASE_NAME, DATABASE_HOST, DATABASE_PASSWORD + connection = Database.connect(user=DATABASE_USER, db=DATABASE_NAME, + passwd=DATABASE_PASSWORD, host=DATABASE_HOST, conv=django_conversions) + self.connections[id] = connection + self.threads.add(id) + return connection + finally: + self.lock.release() def cursor(self): - from django.conf.settings import DATABASE_USER, DATABASE_NAME, DATABASE_HOST, DATABASE_PASSWORD, DEBUG - if self.connection is None: - self.connection = Database.connect(user=DATABASE_USER, db=DATABASE_NAME, - passwd=DATABASE_PASSWORD, host=DATABASE_HOST, conv=django_conversions) + connection = self._get_connection() + from django.conf.settings import DEBUG if DEBUG: - return base.CursorDebugWrapper(self.connection.cursor(), self) - return self.connection.cursor() + return base.CursorDebugWrapper(connection.cursor(), self) + return connection.cursor() def commit(self): - self.connection.commit() + self.lock.acquire() + try: + id = threading.currentThread().getName() + if id in self.connections: + self.connections[id].commit() + finally: + self.lock.release() def rollback(self): - if self.connection: - try: - self.connection.rollback() - except Database.NotSupportedError: - pass + self.lock.acquire() + try: + id = threading.currentThread().getName() + if id in self.connections: + try: + self.connections[id].rollback() + except Database.NotSupportedError: + pass + finally: + self.lock.release() def close(self): - if self.connection is not None: - self.connection.close() - self.connection = None + self.lock.acquire() + try: + id = threading.currentThread().getName() + if id in self.connections: + connection = self.connections[id] + connection.close() + del self.connections[id] + finally: + self.lock.release() def get_last_insert_id(cursor, table_name, pk_name): cursor.execute("SELECT LAST_INSERT_ID()") patch as a file I lack the time to check out the solution, but I'm all for fixing the problems described. I ran this patch for 3 weeks now. No problems so far. For anyone interesting. There is a similar issue with Postgres for which I filed ticket 900. I'm rewrote your patch for latest [1432] revision. There are some differences: new patch Checking connections while pinging -- good catch. Using RWLock --- good refresh, I forgot to update the patch with RWLock, when it became available. Regarding currentThread() -- I had some reservations about it. I tried to experiment with currentThread() and: Basically I was concerned about potential clash because uniqueness is not guaranteed. Maybe it is unfounded. E.g., I cannot remember why (2) was important. It doesn't look dangerous now. (1) depends on implementation. If we assume that all Python implementations return unique dummy thread objects, it's probably OK. Theoretically it can return the same static object. OTOH "thread renaming" may suffer from similar problems depending on implementation. I want a sage advice on that. :-) Is this per thread stuff is really required? As far as I can see real problem is in not checking for connection validity and not threading stuff - in my test setup (FastCGI & mod_fcgi) all problems are gone when I only check with connection.ping and reconnect if connection is invalid. Maybe someone more knowledgeable in django database internals may shed some light on this. Second solution is using some type of database connection pool instead thread local connections. Actually currentThread() will return unique objects - they can't be reused, the thread names might be reused, but the object ID won't. And that's what is used when you use objects as dictionary keys. The dummy object created might have limited functionality - but we don't use the functionality, we only use the identity, so that's enough. Originally I had problems with multithreaded FCGI under load. It was not related to pings --- MySQL got confused when SQL statements from different threads came using one connection. I implemented "per thread stuff" after advices of MySQL experts. I added pings to prevent time-outs, which happened after some idle time. INHO, database connection pool is a good idea. It's good to know that currentThread() is unique. In my original implementation I used get_ident() as a source of certifiably unique names with silly renaming of threads. It is the only thread-related method, which is explicitly documented as returning unique values, which are suitable as dictionary index (). No other thread/treading-related method is documented as such. Unfortunately I didn't find TLS in python. That's what is used usually for stuff like that. I'm about to port this new patch for Postgres and have a question. Why is "remove deadwood" part needed? As far as I understand connections are always deleted on every request calling db.close(). "Remove deadwood" is just an extra security measure against potential sloppiness in closing connections and handling exceptions. After watching MySQL connections die randomly I decided to take no chances. patch against release 0.91 Please include this patch ASAP. Otherwise Django is no use in production as fcgi using MySQL database. patch against trunk (rev. 2307) Bumping priority and severity down one rank each; while this may be significant for this particular use-case (MySQL and FastCGI), that use-case has at least two obvious workarounds — switching to another database (preferably PostgreSQL), or switching to mod_python. I'm not saying it shouldn't get fixed; I'm just saying that marking it "highest" and "critical" seems nonsensical. Tom, Postgres' backend in Django suffers from same problems :-). And it also have the patch waiting in #900 After reading through #900, the alternatives I'd suggest become "switch to mod_python" or "use the preforking mpm". Again — I never said it shouldn't be fixed, or that it wasn't a PITA if your situation falls under these cases . . . just that there are production-ready workarounds at the moment. :) I doidn't mean to disagree with your resolution. I think it would be critical for 1.0 but not now. But to clarify: all these bad things happen with mod_python too. Switching to preforking worker would help but many people don't control their server (shared hosting, evil admin, other projects requiring threads etc.) Patch for trunk 2360, using thread-local storage Patch for magic-removal 2360, using thread-local storage Patch for trunk 2360, using thread-local storage and python 2.3 support Patch for magic-removal 2360, using thread-local storage and python 2.3 support If I may, let me suggest a slightly different implementation that I've had good luck with. Instead of using the current thread (or a value derived from the current thread) as a key into a dict, you can use Python's built-in thread-local storage mechanism. That way you don't need to worry about synchronization or removing "deadwood" (I like the name :-). There's one minor hitch, which is that threading.local() isn't available before Python 2.4. So I included an implementation of it for earlier interpreters (e.g. 2.3) that don't have this functionality built-in. I've attached patches for the trunk and the magic-removal branch, as of revision 2360. The ones to look at are mysql_trunk_rev2360-1.diff and mysql_magic-removal_rev2360-1.diff. Please ignore mysql_trunk_rev2360.diff and mysql_magic-removal_rev2360.diff; Trac reported an internal error when I tried to replace the first versions that I uploaded with newer versions that support Python 2.3, so I had to change the names to get it to work. Sorry for the extra cruft. Oops, I didn't mean to leave the last comment anonymous... that was me. Oh... or, we could include a copy of _thread_local.py from python 2.4 like #1268 does (in current_user.diff). Eugene's code in #1442 is more elegant... IMHO we should use his fix, and close this as a duplicate. Superseded by #1442 Milestone Version 1.0 deleted By Edgewall Software.
http://code.djangoproject.com/ticket/463
crawl-002
refinedweb
1,588
53.98
27 December 2011 10:09 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> SP Chemical also operates a 600,000 dmt/year chlor-alkali line at Taixing. “The total 750,000 dmt/year chlor-alkali project will be operated at full rate after the 150,000 dmt/year chlor-alkali line restart,” the source said. The restart of the 150,000 dmt/year line has bolstered local buyers’ sentiment, with the domestic price of 32% ion-exchange membrane caustic soda falling by 10% week on week to yuan (CNY) 2,656/dmt ($420/dmt) on 26 December, according to Chemease, an ICIS service in China. Major chlor-alkali producers in east China include Shanghai Chlor-Alkali Chemical (Shanghai CA), Zhejiang Juhua, Jiangsu Meilan, Suzhou Fine, Jiangsu Leeman, and Yangnong Chemical. ($1 = CNY 6.32)
http://www.icis.com/Articles/2011/12/27/9519263/chinas-sp-chemical-restarts-chlor-alkali-plant-at-taixing.html
CC-MAIN-2014-52
refinedweb
133
52.49
Forum:Removing templates and categories from inactive userpages From Uncyclopedia, the content-free encyclopedia As a temporary admin, I'm temporarily giving Mn-z and anyone else who wants to do it the green light to remove categories, and any templates containing categories, from the userspaces of inactive users. We have a lot of categories that are clogged with six years of users who added themselves to that category, made a few random edits, and never came back. So: If someone hasn't made an edit in a year, feel free to remove categories and category-containing templates from their userspace. Because, really, they don't care, and if they do care, they've lost the right to care by abandoning the wiki. Also, feel free to remove any DEFAULTSORT templates from userpages that have them. A category should cluster all users together under "U" - not scatter them throughout the page. If a user has made an edit within the last year, please don't touch their userspace in any way, or if you see something that really needs to be addressed, talk to an admin. Cheers! pillow talk 17:57, March 11, 2011 (UTC) - Userpages themselves are one thing, but what about userspace pages that are actual articles or versions of articles? Those have categories for a reason, generally. What does it really help to, say, take category:Politicians off a userspace article about a politician, though? Seems like you guys might be taking this a little far. ~ 01:58, 12 March 2011 - I can see the reasoning behind your concern. However, the decategorization is only effecting users who have not been here in over a year. That means we can safely assume the pages are abandoned, and since they are userpages, they are probably works in progress, otherwise, they would have been moved to mainspace. The problem is that in some categories, such as Category:Axis of Evil-Doers, there was more userpage clutter than actual articles. - Right now that category is still 45% userpages, but that was after clean-up of both inactive userpages and removal of pages not on the template. If I hadn't removed the userpages, but still did a mainspace cleanup, the category would be 70% userpages. If I had kept the inactive userspace pages that were "on topic", it would be 56% userpages. (My math might be off due to counting errors, but you get the idea.) And the number of inactive userpages is only going to increase with time. - Also, I believe the policy is that users are allowed to revert the de-catting. The current policy really only deals with the "worst offenders", i.e. pages that have virtually no chance of ever being meaningfully edited again. --Mn-z 02:48, March 12, 2011 (UTC) - If it's only affecting inactive users, why did you remove the DEFAULTSORT from User:Maniac1075/Hulk Hogan? :52, 12 March 2011 - I think that is what Hyperbole said to do, so that userpages cluster at "U". From what he said here, I believe he meant it should be removed from all userpages, not just inactive ones. Of course, I could be interpreting him wrong. --Mn-z 02:58, March 12, 2011 (UTC) - We shouldn't remove DEFAULTSORTs from active userspaces without discussing it further with everyone, first. So, let's not go doing that, at least not yet. pillow talk 10:51, March 12, 2011 (UTC) - So why not just remove the troublesome categories, like Ao:08, 12 March 2011 - Clutter is an issue regardless of where it is at. It does effect some categories more than others, but I think its best to trim down before most categories get swamped to the point AoED is. There are some exceptions to de-catting, such as categories that are about userpages/userspaced content (i.e. Category:BUTT POOP and Category:IT'S A SECRET TO EVERYBODY.), categories that are too narrow to separate into a mainspace cat and a user cat (for example Category:Preggosexuals), or catting templates that are actually relevant on userpages, i.e. {{NSFWArticle}} - The point is that userpages are generally clutter (with exceptions) in categories. They generally suck even by random article standards, hence they aren't in mainspace, and they can't be improved, since they are in an inactive user's userspace. Since this is a humor wiki, some clutter is unavoidable, but we don't want it overwhelming the actual content. And like I said before, this problem is just going to get worse with time. --Mn-z 03:28, March 12, 2011 (UTC) - Not the userpages, the user subpages that actually are things... oh, nevermind. Why don't you guys just delete the:41, 12 March 2011 - There is a slight chance the users might come back and want to work on the pages, and deleting them really doesn't accomplish anything. I think the "out-of-the-way storage" system Hype just created makes the most sense. We don't need this articles on 30 categories, but they aren't hurting anything by sitting around in userspace. --Mn-z 03:53, March 12, 2011 (UTC) - Everything is hurting something. Just by existing. Everything. ~, 12 March 2011 - Hype, how about we generally leave other peoples stuff alone? I really find the level of hysteria being perpetuated by less than a handful of 'temp admins' and their sycophants unhelpful. This place is the worst - you want to be useful people get a wikipedia account, I hear they love this kind of crap. --Sycamore (Talk) 08:53, March 12, 2011 (UTC) - What's with this up-tight attitude, man? Cheel, Weens-tahn. The most important part of being an admin is to never, ever use it for anything. It's like that condom in your wallet. You, like it, should expire without having ever been used. Sir Modusoperandi Boinc! 09:36, March 12, 2011 (UTC) - Being uptight is my USP, my contract stipulates that I can only not be uptight on Christmas day. It's a tough job....--Sycamore (Talk) 09:58, March 12, 2011 (UTC) - The "admin" bit of his comment, along with the indentation, suggests he was replying to Hyperbole. But 'tis the beauty of a Modusoperandi post; the possible interpretations are bountiful. Let us sit back and marvel at his linguistic gymnastics and wonder if we have ever truly known art before this moment. – Sir Skullthumper, MD (criticize • writings • SU&W) 10:02 Mar 12, 2011 - Userspace has always been sort of a sacred spot, a "safe haven" for users on a wiki where everything else they create can be edited mercilessly and possibly destroyed. It's where they keep their writing, deleted and draft copies alike. That includes categories and templates. The whole idea of modifying it makes me a bit squeamish. At the very least the templates ought to be subst'd instead of removed entirely, to keep the pages intact. – Sir Skullthumper, MD (criticize • writings • SU&W) 09:42 Mar 12, 2011 - Okay, I'm inclined to charge in with a horde of rabid lemurs and say something serious as well, regardless of how I feel about this (do I feel? Is uncomfortableness I cannot rationalise a feeling?). But should this not have been the sort of thing to start in a forum first, where the lemurs growl? Sanity charges around gnawing on the furniture, but and before anyone starts bursting into purple, being angry, dramaising like... well, what's done is done. I'd wonder, but so many shiny things, all done. Now. Stuff. Look, the happening. Look! They're glowing! —The preceding unsigned comment was added by Lyrithya (talk • contribs) - Removing a template from someone's userpage has less impact to the universe than using Mr Muscle to polish a single grain of sand. What is the point...seriously. mAttlobster. (hello) 09:50, March 12, 2011 (UTC) - The ensuing drama, of course! Can you see it, building, building, building? The lemurs are dancing; meanwhile everyone prepares to be embraced in a nice clasp of —The preceding unsigned comment was added by Lyrithya (talk • contribs) - You're still messing with userspace (someone deliberately put that template there). It's pretty much unprecedented on this wiki. – Sir Skullthumper, MD (criticize • writings • SU&W) 09:53 Mar 12, 2011 - I completely agree. My point is that even if it didn't have the potential to piss off users by having their userpage messed about with, the actual taskl its removal is trying to accomplish has an unprecidented ferocious unimportance. So why bother doing it. mAttlobster. (hello) 09:58, March 12, 2011 (UTC) - If started adding featured templates to the various BUTT POOP!!!! articles in my userspace, (and made that fact known), I'm fairly certain some admin would revert me, and probably hand out some manner of ban. Its been established policy for years that certain maintenance-categorizing templates can't be in userspace. --Mn-z 14:25, March 12, 2011 (UTC) I agree with some of you. Mandatory derailing of thread begins here. I disagree with the others, though. MegaPleb • Dexter111344 • Complain here 10:02, March 12, 2011 (UTC) - No, you're wrong. mAttlobster. (hello) 10:03, March 12, 2011 (UTC) You smell like... like daisies. Don't you know what the trees sound like? Seriously, stop it all. Just stop; listen to them. It's beautiful. —The preceding unsigned comment was added by Lyrithya (talk • contribs) Guys this is really important: the template removal template actually exists in userspace. If Hyperbole ever takes a year off we're going to see some real serious paradoxes going down. – Sir Skullthumper, MD (criticize • writings • SU&W) 10:09 Mar 12, 2011 Yeah, it's a tough issue I generally agree with our age-old principle that userspace should be sacred and untouchable. The problem is - category space is public. And categories really should be something that casual users can use to find funny articles that are actually grouped together for a reason. But as it is, categories are basically a way for private space to encroach on public space, and Mn-z is absolutely right that we have a bunch of categories that have just turned into... utter garbage. They were meant as a collection of articles on a subject (like: things that could be humorously referred to as evil), but years of users adding themselves, sometimes with DEFAULTSORTs, have left them in terrible shape. Click on articles at random, and you're just going to see piles of garbage that no one's cleaned up because it's been on protected land. I, for one, would like this wiki to have nice, usable, maintained categories. There's merit to Modus's philosophy that admins should leave things alone to the greatest extent possible - but we can see after six years that if we don't police our categories, they eventually turn to shit, becoming nothing but annoying to the reader and embarrassing to the project. If we're not going to maintain our categories, we shouldn't even have categories. Being unable to remove garbage from categories is roughly the same thing as telling users they aren't allowed to revert vandalism to articles because that vandalism represents someone else's work. Lyri's concern about decategorizing userpages versus decategorizing actual articles in userspace just has to be a case-by-case call; there's no other way to do it. A reasonably complete userspace article might have a place in, say, Category:My sojourn. Whereas a userspace article that just says "FUCK MODUSOPERANDI HE SMELLS LIKE POO" shouldn't have a right to be in our one featured category, forever, just because the categorization technically exists in userspace where no one is ever allowed to touch it. For active users, I think this case-by-case call needs to be made by admins, meaning that admins should do nothing at all unless the page is just clearly abusive (like the example above). For inactive users, I think they've forfeited some of their rights by leaving the project. We're not actually deleting any of their work - just de-categorizing it (and removing templates that might categorize it). What I'm quickly finding out is that we've had a few dozen users over the years who have done nothing at all with Uncyclopedia except create an account, spend six days adding every single template and category to their userspace that they can find, and then vanish for all eternity. These people aren't useful to us. They shouldn't be protected. And if they want to basically create a sandbox page outside of the sandbox - they need to be doing it in a way that doesn't encroach on public spaces. Sort of reminds me of the public librarian's quandary. Everybody is legally allowed to look at pornography at a public library. No one is allowed to put their hand down their pants and moan while they do it. Figuring out when it's okay to tell a member of the public to leave a public building that taxpayer money pays for it can be really difficult. It's got to be handled on a case-by-case basis. pillow talk 10:39, March 12, 2011 (UTC) - Frankly this is why loathe characters like yourself being put into positions like this admin thing - you cannot stop yourself peddling personal agendas whilst circumventing the arguments you made in the first place about admin rights being abused... I resent such agenda's being peddled to other contributors. Mo's comments are applicable here and I think it will be a benefit for the entire community when the four of you will be de-opped. I can only hope you will have the decency to as for their removal beforehand, and end this farce and hypocrisy. The simple fact is Hype, is that you are at your best writing articles here, you have completely gone over the edge of acceptable conduct on here with this, its not the odd removal, but on mass policy of editing another users space which is not acceptable or indeed applicable for this community--Sycamore (Talk) 12:27, March 12, 2011 (UTC) - Um, can we please not cause drama while we're at it? I know it's one of the central pillars of Uncyclopedia and all, but it tends to upset some people, ya:55, 12 March 2011 - Actually, I've just been upfront and honest - there is a difference. Incidentally I would like Hyperbole to respond to my comment.--Sycamore (Talk) 14:29, March 12, 2011 (UTC) - Sure, I'll give you a response. I'm putting it on your talk page, though. Because there's no need for this drama on the dump. pillow talk 18:41, March 12, 2011 (UTC) - Sure, there's a difference. That doesn't mean you can't cause drama by being honest and upfront, though. If I tell a couple people they suck and don't don't deserve the privileges they've been given, being honest and upfront about it isn't gonna decrease the odds that it'll cause:01, 12 March 2011 - I personally think Hype made a good decision which will result in cleaning up some categories by getting rid of very inactive users template links and other links. Mn-z helped by pushing the idea. The abuse comes when things like the Maniac1075 thing by Mn-z occurred. The privilage of removing older users category spam should be given to people who do not have an agenda, or know how to edit without giving in to their agenda, maybe a bot, but be limited to category spam for now. Make sense? I will also chime in and say the four new temp admins are doing a great job, and are working like dogs to help the wiki. Some concerns about the pic destruction, and the glee that is going into it, but aside from that they seem to be growing in their jobs which is what the month test-period is meant to do, impart a learning curve. Aleister 13:52 12-3-'11 - Now, now, Al, let's not get OCD mixed up with "good decision." MegaPleb • Dexter111344 • Complain here 14:29, March 12, 2011 (UTC) - Re: that edit I made to Maniac1075's page There was a slight communication breakdown on that issue. Someone pointed out that I probably shouldn't have done it, I brought the issue back to Hype and undid the edit, and Hype clarified what he said on the issue. I don't think a minor edit like that rises to the level of an "abuse of power". However, I will admit in hindsight, that edit shouldn't have been done (hence the revert). --Mn-z 14:18, March 12, 2011 (UTC) - Also, to reply to Aleister, removing the content would be difficult with a bot, for the simple reason that a bot might make 30 passes on a given userpage to remove all the categorizes categorizing-templates one by one. Also, I'd need some way to generate a list of users who haven't edited in a year to do the removal automatically. If a bot isn't working automatically, its actually the same as a user doing it. By doing it manually, a page can normally be fixed in one edit. Granted, some users might be tempted to be a bit overzealous in trimming down userpages at times, (i.e. accidentally removing a mainspace template that looks like it probably categorizes, but actually doesn't) but its also important to remember that these are pages of inactive users who probably don't care, and if they come back, they have the right to undo the edits. Also, if someone really really cares about this, they could go through Category:Decategorized Inactive Userpages and partially revert any overzealous pruning. --Mn-z 15:17, March 12, 2011 (UTC) - There is a precedent for removing content from userpages. Mordillo forced us to remove transcluded articles from User:CheddarBBQ/Pile_Of_Shit because it resulted in the transclusion of Category:Featured on May 13, 2009. Soon afterwards, Spang removed transcluded pages from User:MrCleveland on May 19, 2009 to get that page out of the featured category. I believe Mordillo forbade the categorization of User:PuppyOnTheRadio/YOUR_MOTHER_SUCKS_COCKS_IN_HELL_MOTHERFUCKER! back in 2010. Mordillo & Zombiebaron also removed templates User:Happytimes/Temples/Rewrite to keep it out of incorrect maintenance category. - I could go on, but it has been standard for years that a user's right to foul up his userspace is limited by everyone's else's right to have a clean mainspace. This is just taking an existing policy and applying it consistency to all serious categories. If having random userpages clutter Category:Featured and Category:Rewrite is bad, isn't it also bad to have Category:Axis of Evil-Doers be 70% userspace clutter? (As opposed to merely 45% userspace clutter). Granted there are exceptions to userpage category, like I said earlier. --Mn-z 14:18, March 12, 2011 (UTC) - Good overview! Aleister 14:29 12-3-'11 - I support the Hype/Spike/Mnbvcxz tidy up template reform campaign. --, March 12, 2011 (UTC) This is why you're all idiots Reading this forum it's become apparent that certain persons don't like the idea of userspace pages being in public categories. And you know what? I can totally get behind that. Certain categories shouldn't be littered with that junk. So yeah, get rid of the categories. But don't get rid of the templates! That's just dumb. Categories aren't really part of a page anyway, and removing/modifying them has no impact on the page itself whatsoever. Which is good! We don't want the pages modified because, you know, userspace. Getting rid of templates, on the other hand, is actually irrelevant. "But Skull!" you say. "Templates sometimes include categories!" And this is why you are stupid. The problem with removing templates is twofold and negatively impacts both sides of the issue here: - For the users who are totally against userspace modification, period, removing templates does change the content of the page in a way that removing categories does not. - For the users that want this userspace crap out of the categories, removing templates is such a big, disruptive deal that they can't do it to active users (for a pretty liberal definition of active, I might add). So this sucks for everyone. But fear not, there is a solution. Actually, two solutions. The most straightforward one is to subst all templates on a page (lots of bots can do this for you) and manually remove the categories. This would both leave the page intact, templates and all, and remove the categories. The less straightforward solution, but the more efficient one, is to take templates that are often included in userspace and edit the templates instead of userspace. A simple parserfunction to check the namespace before including the category would solve lots of problems. Heck, if you want me to, I'll even write the line myself and put it in a template somewhere that you can include on offending templates. tl;dr Our enemies are categories, not templates. And this would be a buttload less controversial if we'd remember that. – Sir Skullthumper, MD (criticize • writings • SU&W) 19:04 Mar 12, 2011 - I hadn't even thought of the idea of putting a parserfuction on certain templates to prevent them from categorizing userpages. That's a really, really good idea. Can you make it happen? pillow talk 19:45, March 12, 2011 (UTC) - What me?! I'm only good at pointing out the flaws in others' plans, not actually doing work... okay I'll give it a go – Sir Skullthumper, MD (criticize • writings • SU&W) 19:47 Mar 12, 2011 - In other words, Skully is a useless fucker and this guy is way cooler than he is. MegaPleb • Dexter111344 • Complain here 19:51, March 12, 2011 (UTC) - Done and done. Meet {{nouserspace}}. To use it, just put the category you don't want included in userspace as the first parameter, e.g. {{nouserspace|[[Category:Featured]]}}. – Sir Skullthumper, MD (criticize • writings • SU&W) 19:56 Mar 12, 2011 - How does one go about using that, does it go in templates or userspaces?--Mn-z 20:05, March 12, 2011 (UTC) - The templates themselves. Find the offending categories in the templates and put them in the first parameter of {{nouserspace}} per my example above. Suddenly every user page that includes that template is no longer in the category. It's a lot less work for you, and the userpages remain untouched. Everybody wins. – Sir Skullthumper, MD (criticize • writings • SU&W) 20:06 Mar 12, 2011 - Thanks. --Mn-z 20:13, March 12, 2011 (UTC) - Totally welcome. By the way, is it too late to go through the userpages you've already modified and revert the damage done? With the exception of really extreme examples, of course, like the one guy who had seven pages transcluded onto his userpage, I mean holy crap. – Sir Skullthumper, MD (criticize • writings • SU&W) 20:16 Mar 12, 2011 - We could, all the pages at are Category:Decategorized Inactive Userpages. However, I don't have time to do that at the moment. --Mn-z 21:37, March 12, 2011 (UTC) Oh, you can also use the nocat parameter. It works like <includeonly>{{#if:{{{nocat|}}}||[[Category:<here goes the name of a category itself>]]}}</includeonly> replacing the original category line. Add it in a template and then just place "|nocat=1" into that double curly brackets thing on a userpage or wherever else you want. — Praetorian the Glorious Strategist. 21:52, March 12, 2011 (UTC) - Thanks,:43, 12 March 2011 - Holy balls don't do that. That would require editing every single userpage, again, to set every single instance of that template to nocat=1. {{nouserspace}} works out of the box and does not require the editing of userpages to work, which is key. – Sir Skullthumper, MD (criticize • writings • SU&W) 22:56 Mar 12, 2011 - Back in the day, people used to vandalize my user page on a regular basis and get credit for it. You mean everywhere else it's a crime? Holy Bee Stings! WHY???PuppyOnTheRadio 21:48, March 15, 2011 (UTC) I should point out that conversations such as those on this page... ...would never happen if you all went out and got some pussy. Sir Modusoperandi Boinc! 00:47, March 13, 2011 (UTC) - I could take offence:14, 13 March 2011 - Mine's in Mexico :( – Sir Skullthumper, MD (criticize • writings • SU&W) 04:06 Mar 13, 2011 - Mine is literally a cat. mAttlobster. (hello) 10:13, March 13, 2011 (UTC) - Sounds like a bundle of annoyance, claws and matted fur. Sir Modusoperandi Boinc! 12:57, March 13, 2011 (UTC) I got laid this one time. Is this the right forum to discuss:08, 13 Mar
http://uncyclopedia.wikia.com/wiki/Forum:Removing_templates_and_categories_from_inactive_userpages
CC-MAIN-2016-40
refinedweb
4,178
61.97
let average a b = (a +. b) /. 2.0;;instead of def average(a, b) = {(a + b) / 2}(as you would say in another Functional/ObjectOriented language, NemerleLanguage)? Sure, the former is more consise for this one, but what about a monster like (and excuse me if I get the syntax wrong) let quadratic a b c = sqrt(square(b) (* Does OCaml have a ** operator? *) -. (4.0 *. a *. c)) -. b;; (* not returning a pair for sake of space *)Versus def quadratic(a, b, c) = {sqrt(square(b) /* does NemerleLanguage have a ** operator? */ - 4*a*c) - b} /* Not returning a pair for the sake of spaceNow, imagine stuff like that multiplied over an entire program, and Nemerle wins easily. --AnonymousDonor (propose: AnonymousKidHidingRealNameFromPerverts? :)
http://c2.com/cgi-bin/wiki?ObjectiveCaml
CC-MAIN-2014-10
refinedweb
120
53.41
Archives Why are music companies shooting themselves in the foot? As soon as it has been released, I went to my local CD reseller to buy Sleeping with ghosts, Placebo's latest album. I was ready to enjoy this great music, but... What people say about Bamboo.Prevalence lately Justin from News from the Forest tries BP. He writes how he really feels about object prevalence. About which I say something. The discussion about object prevalence continues... Justin has a well thought reflection about object prevalence (and Bamboo.Prevalence). DotNet This is just a trick to have the term DotNet appear on my pages! ;-) New version of XC# ResolveCorp has just released a new version of eXtensible C#. Here's what's new: NUnit I just started to use NUnit and NUnitAddin. They are great! Simplicity is the key word as far as they are concerned. Tools tools tools for .NET I've started to put together a categorized list of various .NET tools. This list is not complete of course, but can be useful if you search a tool for a specific need. .NET Tools The list is moving! Check out. Fun for a change Good one spotted on stronglytyped and coming from The Code Project: Weighting optimizations' worth Scott and Victor had a little discussion about getting the value inside the loop vs outside the loop. Now common sense would dictate to me that outside the loop would of course be faster. So, for my own amusement I threw together a little test. I simply ran their code and tried to figure out which one was faster. Going through a 12 item array, declaring with the loop (i < array.Length) actually was 2 seconds faster than getting it outside the loop. Of course, to get a 2 second difference I had to run each chunk of code 500,000,000 times. The difference may have simply have been the overhead of declaring a variable to store the length. I'm not too sure, I didn't dig into the IL. Oddpost I'm considering moving away from outlook for a new internet based email client (that look pretty close to outlook) called oddPost. I have patented patenting Don Box refers to the AOP patent. What's next? Patents on zeros and ones? Who's holding the patent on OOP ? What a stupid money driven world we live in. Threats Good point from Tim Bray on threats other than Saddam Hussein. Interception methods After my simple AOP sample, here is the source code of a sample presenting three interception methods: Feedster [was "Roogle - RSS search engine"] [The FuzzyBlog] What 10 odd Hours of Hacking Can Produce: An RSS Search Engine OfficeForms After WinForms, WebForms, MobileForms, we may soon be using OfficeForms. VS 2003 AutoImplement public class MyClass : MyInterface
http://weblogs.asp.net/fmarguerie/archive/2003/3
CC-MAIN-2015-48
refinedweb
469
75.2
I am attempting to use an LSTM to classify what the type of weather on a particular day at a specific location on features e.g. humidity, temperature, etc . Therefore my data takes the following format: [locations, days_for_that_location, number_of_dimensions] which turns out to be [100, x, 8]. Where x is anywhere between 900 and 6000 - I have more data for some locations than I do for others. I have to classify each day into Sunny, Rainy, Snowy, etc as a result the label dataset has size [locations, days_per_location, 1] which turns out to be [100, x, 1]. Where the third dimension is a number from 0 to 6, each number representing a type of weather - so that I can use cross entropy loss. Below follows what I built - which I do not think is correct. In some sense I am trying to use PoS tagging techniques for this. from torch.utils.data import Dataset class WeatherData(Dataset): def __init__ (self, location): self.samples = [] for day in location: self.samples.append((day['features'], day['lables'])) def __len__(self): return len(self.samples) def __getitem__(self, idx): return(self.samples[idx]) BATCH_SIZE = 1 DatasetW= WeatherData(Data) train_iterator = torch.utils.data.DataLoader(DatasetW, batch_size = BATCH_SIZE) import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.lstm1 = nn.LSTM(input_size = 8, hidden_size = 32, num_layers = 2, dropout = 0.5) self.fc1 = nn.Linear(32, 120) self.fc2 = nn.Linear(120,9) self.softmax = nn.Softmax(dim=1) def forward(self, x): x = x.unsqueeze(0) x = x.unsqueeze(0) x, _ = self.lstm1(x) x = F.dropout(x, p = 0.9) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.softmax(x) return x net = Net() net = net.double() import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) Where the issue is most likely - I should not be passing in each data point one at a time. for epoch in range(100): # loop over the dataset multiple times running_loss = 0.0 o_lst = [] l_lst = [] for i, data in enumerate(train_iterator, 0): inputs, labels = data optimizer.zero_grad() for data_point, lab in zip(inputs[0], labels[0][0]): outputs = net(data_point) lab = lab.unsqueeze(0) outputs = outputs.squeeze(0) loss = criterion(outputs, lab) loss.backward() optimizer.step() running_loss += loss.item() o_lst.append((outputs.argmax().item())) l_lst.append(lab.item()) print("Epoch: ", epoch, "Loss: ",running_loss) print('Finished Training') I am certain this is wrong because I am only passing in one data-point a time and I believe I should be using the LSTM differently. Confused about how to create a dataset and a subsequent model that has variable input length and take advantage of batching if possible. The current model works in the sense that it runs but it does not learn - the loss is stuck - which to be honest is not surprising.
https://discuss.pytorch.org/t/confused-about-lstms-rnn-data-iterator/82952
CC-MAIN-2022-21
refinedweb
493
53.37
JPA Hibernate Alternatives: When JPA & Hibernate Aren't Right for Your Project JPA Hibernate Alternatives: When JPA & Hibernate Aren't Right for Your Project Join the DZone community and get the full member experience.Join For Free Hello, how are you? Today we will talk about situations in which - sql2o - Take a look at: jOOQ and Avaje - Is a raw JDBC approach worth it? - How can I choose the right framework? - Final thoughts I have created 4 CRUDs in my github using the frameworks mentioned in this post, you will find the URL at the beginning of each page. I am not a radical that thinks that JPA is worthless, but I do believe that we need to choose the right framework for the each situation. If you do not know I wrote a JPA book (in Portuguese only) and I do not think that JPA is the silver bullet that will solve all the problems. I hope you like the post. [= JPA/Hibernate Problems There are times that JPA can do more harm than good. Below you will see the JPA/Hibernate problems and in the next page you will see some solutions to these problems: - Composite Key: This, in my opinion, is the biggest headache of the JPA developers. When we map a composite key we are adding a huge complexity to the project when we need to persist or find a object in the database. When you use composite key several problems will might happen, and some of these problems could be implementation bugs. - Legacy Database: A project that has a lot of business rules in the database can be a problem when wee need to invoke StoredProcedures or Functions. - Artifact size: The artifact size will increase a lot if you are using the Hibernate implementation. The Hibernate uses a lot of dependencies that will increase the size of the generated jar/war/ear. The artifact size can be a problem if the developer needs to do a deploy in several remote servers with a low Internet band (or a slow upload). Imagine a project that in each new release it is necessary to update 10 customers servers across the country. Problems with slow upload, corrupted file and loss of Internet can happen making the dev/ops team to lose more time. - Generated SQL: One of the JPA advantages is the database portability, but to use this portability advantage you need to use the JPQL/HQL language. This advantage can became a disadvantage when the generated query has a poor performance and it does not use the table index that was created to optimize the queries. - Complex Query: That are projects that has several queries with a high level of complexity using database resources like: SUM, MAX, MIN, COUNT, HAVING, etc. If you combine those resources the JPA performance might drop and not use the table indexes, or you will not be able to use a specific database resource that could solve this problem. - Framework complexity: To create a CRUD with JPA is very simples, but problems will appear when we start to use entities relationships, inheritance, cache, PersistenceUnit manipulation, PersistenceContext with several entities, etc. A development team without a developer with a good JPA experience will lose a lot of time with JPA ‘rules‘. - Slow processing and a lot of RAM memory occupied: There are moments that JPA will lose performance at report processing, inserting a lot of entities or problems with a transaction that is opened for a long time. After reading all the problems above you might be thinking: “Is JPA good in doing anything?”. JPA has a lot of advantages that will not be detailed here because this is not the post theme, JPA is a tool that is indicated for a lot of situations. Some of the JPA advantages are: database portability, save a lot of the development time, make easier to create queries, cache optimization, a huge community support, etc. In the next page we will see some solutions for the problems detailed above, the solutions could help you to avoid a huge persistence framework refactoring. We will see some tips to fix or to workaround the problems described here. Solutions to Some of the JPA/Hibernate problems We need to be careful if we are thinking about removing the JPA of our projects. I am not of the developer type that thinks that we should remove a entire framework before trying to find a solution to the problems. Some times it is better to choose a less intrusive approach. Composite Key Unfortunately there is not a good solution to this problem. If possible, avoid the creation of tables with composite key if it is not required by the business rules. I have seen developers using composite keys when a simple key could be applied, the composite key complexity was added to the project unnecessarily. Legacy Databases The newest JPA version (2.1) has support to StoredProcedures and Functions, with this new resource will be easier to communicate with the database. If a JPA version upgrade is not possible I think that JPA is not the best solution to you. You could use some of the vendor resources, e.g. Hibernate, but you will lose database and implementations portability. Artifact Size An easy solution to this problem would be to change the JPA implementation. Instead of using the Hibernate implementation you could use the Eclipsellink, OpenJPA or the Batoo. A problem might appear if the project is using Hibernate annotation/resources; the implementation change will require some code refactoring. Generated SQL and Complexes Query The solution to these problems would be a resource named NativeQuery. With this resource you could have a simplified query or optimized SQL, but you will sacrifice the database portability. You could put your queries in a file, something like SEARCH_STUDENTS_ORACLE or SEARCH_STUDENTS_MYSQL, and in production environment the correct file would be accessed. The problem of this approach is that the same query must be written for every database. If we need to edit the SEARCH_STUDENTS query, it would be required to edit the oracle and mysql files. If your project is has only one database vendor the NativeQuery resource will not be a problem. The advantage of this hybrid approach (JPQL and NativeQuery in the same project) is the possibility of using the others JPA advantages. Slow Processing and Huge Memory Size This problem can be solved with optimized queries (with NativeQuery), query pagination and small transactions. Avoid using EJB with PersistenceContext Extended, this kind of context will consume more memory and processing of the server. There is also the possibility of getting an entity from database as a “read only” entity, e.g.: entity that will only be used in a report. To recover an entity in a “read only” state is not needed to open a transaction, take a look at the code below: String query = "select uai from Student uai"; EntityManager entityManager = entityManagerFactory.createEntityManager(); TypedQuery<Student> typedQuery = entityManager.createQuery(query, Student.class); List<Student> resultList = typedQuery.getResultList(); Notice that in the code above there is no opened transaction, all the returned entities will be detached (non monitored by the JPA). If you are using EJB mark your transaction as NOT_SUPPORTED or you could use @Transactional(readOnly=true). Complexity I would say that there is only one solution to this problem: to study. It will be necessary to read books, blogs, magazines or any other trustful source of JPA material. More study is equals to less doubts in JPA. I am not a developer that believes that JPA it is the only and the best solution to every problem, but there are moments that JPA is not the best to tool to use. You must be careful when deciding about a persistence framework change, usually a lot of classes are affected and a huge refactoring is needed. Several bugs may be caused by this refactoring. It is needed to talk with the project mangers about this refactoring and list all the positive and negative effects. In the next four pages we will see 4 persistence frameworks that can be used in our projects, but before we see the frameworks I will show how that I choose each framework. Criteria for Choosing the frameworks Described Here Maybe you will think: “why the framework X is not here?”. Below I will list the criteria applied for choosing the framework displayed here: - Found in more than one source of research: we can find in forums people talking about a framework, but it is harder to find the same framework appearing in more than one forum. The most quoted frameworks were chosen. - Quoted by different sources: Some frameworks that we found in the forums are indicated only by its committers. Some forums does not allow “self merchandise”, but some frameworks owners still doing it. - Last update 01/05/2013: I have searched for frameworks that have been updated in this past year. - Quick Hello World: Some frameworks I could not do a Hello World with less than 15~20min, and with some errors. To the tutorials found in this post I have worked 7 minutes in each framework: starting counting in its download until the first database insert. The frameworks that will be displayed in here has good methods and are easy to use. To make a real CRUD scenario we have a persistence model like below: - A attribute with a name different of the column name: socialSecurityNumber —-> social_security_number - A date attribute - a ENUM attribute With this characteristics in a class we will see some problems and how the framework solve it. Spring JDBC Template One of the most famous frameworks that we can find to access the database data is the Spring JDBC Template. The code of this project can be found in here: The Sprint JDBC Template uses natives queries like below: As it is possible to see in the image above the query has a database syntax (I will be using MySQL). When we use a native SQL query it is possible to use all the database resources in an easy way. We need an instance of the object JDBC Template (used to execute the queries), and to create the JDBC Template object we need to set up a datasource: We can get the datasource now (thanks to the Spring injection) and create our JDBCTemplate: PS.: All the XML code above and the JDBCTemplate instantiation could be replace by Spring injection and with a code bootstrap, just do a little research about the Spring features. One thing that I did not liked is the INSERT statement with ID recover, it is very verbose: With the KeyHolder class we can recover the generated ID in the database, unfortunately we need a huge code to do it. The other CRUD functions are easier to use, like below: Notice that to execute a SQL query it is very simple and results in a populated object, thanks to the RowMapper. The RowMapper is the engine that the JDBC Template uses to make easier to populate a class with data from the database. Take a look at the RowMapper code below: The best news about the RowMapper is that it can be used in any query of the project. The developer that is responsible to write the logic that will populate the class data. To finish this page, take a look below in the database DELETE and the database UPDATE statement: About the Spring JDBC Template we can say: - Has a good support: Any search in the Internet will result in several pages with tips and bug fixes. - A lot of companies use it: several projects across the world use it - Be careful with different databases for the same project: The native SQL can became a problem with your project run with different databases. Several queries will need to be rewritten to adapt all the project databases. - Framework Knowledge: It is good to know the Spring basics, how it can be configured and used. To those that does not know the Spring has several modules and in your project it is possible to use only the JDBC Template module. You could keep all the other modules/frameworks of your project and add only the necessary to run the JDBC Template. MyBatis MyBatis (created with the name iBatis) is a very good framework that is used by a lot of developers. Has a lot of functionalities, but we will only see a few in this post. The code of this page can be found in here: To run your project with MyBatis you will need to instantiate a Session Factory. It is very easy and the documentation says that this factory can be static: When you run a project with MyBatis you just need to instantiate the Factory one time, that is why it is in a static code. The configuration XML (mybatis.xml) it is very simple and its code can be found below: The Mapper (an attribute inside the XML above) will hold information about the project queries and how to translate the database result into Java objects. It is possible to create a Mapper in XML or Interface. Let us see below the Mapper found in the file crud_query.xml: Notice that the file is easy to understand. The first configuration found is a ResultMap that indicates the query result type, and a result class was configured “uai.model.Customer”. In the class we have a attribute with a different name of the database table column, so we need to add a configuration to the ResultMap. All queries need a ID that will be used by MyBatis session. In the beginning of the file it is possible to see a namespace declared that works as a Java package, this package will wrap all the queries and the ResultMaps found in the XML file. We could also use a Interface+Annotation instead of the XML. The Mapper found in the crud_query.xml file could be translated in to a Interface like: Only the Read methods were written in the Interface to make the code smaller, but all the CRUD methods could be written in the Interface. Let us see first how to execute a query found in the XML file: The parsing of the object is automatically and the method is easy to read. To run the query all that is needed is to use the combination “namespace + query id” that we saw in the crud_query.xml code above. If the developer wants to use the Interface approach he could do like below: With the interface query mode we have a clean code and the developer will not need to instantiate the Interface, the session class of the MyBatis will do the work. If you want to update, delete or insert a record in the database the code is very easy: About MyBatis we could say: - Excellent Documentation: Every time that I had a doubt I could answer it just by reading its site documentation - Flexibility: Allowing XML or Interfaces+Annotations the framework gives a huge flexibility to the developer. Notice that if you choose the Interface approach the database portability will be harder, it is easier to choose which XML to send with the deploy artifact rather than an interface - Integration: Has integration with Guice and Spring - Dynamic Query: Allows to create queries in Runtime, like the JPA criteria. It is possible to add “IFs” to a query to decide which attribute will be used in the query - Transaction: If your project is not using Guice of Spring you will need to manually control the transaction Sormula Sormula is a ORM OpenSource framework, very similar to the JPA/Hibernate. The code of the project in this page can be found in here: Sormula has a class named Database that works like the JPA EntityManagerFactory, the Database class will be like a bridge between the database and your model classes. To execute the SQL actions we will use the Table class that works like the JPA EntityManager, but the Table class is typed. To run Sormula in a code you will need to create a Database instance: To create a Database instance all that we need is a Java Connection. To read data from the database is very easy, like below: You only need to create a Database instance and a Table instance to execute all kind of SQL actions. How can we map a class attribute name different from the database table column name? Take a look below: We can use annotations to do the database mapping in our classes, very close to the JPA style. To update, delete or create data in the database you can do like below: About Sormula we can say that: - Has a good documentation - Easy to set up - It is not found in the maven repository, it will make harder to attach the source code if needed - Has a lot of checked exceptions, you will need to do a try/catch for the invoked actions sql2o This framework works with native SQL and makes easier to transform database data into Java objects. The code of the project in this page can be found in here: sql2o has a Connection class that is very easy to create: Notice that we have a static Sql2o object that will work like a Connection factory. To read the database data we would do something like: Notice that we have a Native SQL written, but we have named parameters. We are not using positional parameters like ‘?1′ but we gave a name to the parameter like ‘:id’. We can say that named parameters has the advantage that we will not get lost in a query with several parameters; when we forget to pass some parameter the error message will tell us the parameter name that is missing. We can inform in the query the name of the column with a different name, there is no need to create a Mapper/RowMapper. With the return type defined in the query we will not need to instantiate manually the object, sql2o will do it for us. If you want to update, delete or insert data in the database you can do like below: It is a “very easy to use” framework. About the sql2o we can say that: - Easy to handle scalar query: the returned values of SUM, COUNT functions are easy to handle - Named parameters in query: Will make easy to handle SQL with a lot of parameters - Binding functions: bind is a function that will automatically populate the database query parameters through a given object, unfortunately it did not work in this project for a problem with the enum. I did not investigate the problem, but I think that it is something easy to handle Take a look at: jOOQ and Avaje jOOQ jOOQ it is a framework indicated by a lot of people, the users of this frameworks praise it in a lot of sites/forums. Unfortunately the jOOQ did not work in my PC because my database was too old, and I could not download other database when writing this post (I was in an airplane). I noticed that to use the jOOQ you will need to generated several jOOQ classes based in your model. jOOQ has a good documentation in the site and it details how to generate those classes. jOOQ is free to those that uses a free database like: MySQL, Postgre, etc. The paid jOOQ version is needed to those that uses paid databases like: Oracle, SQL Server, etc. Avaje Is a framework quoted in several blogs/forums. It works with the ORM concept and it is easy to execute database CRUD actions. Problems that I found: - Not well detailed documentation: its Hello World is not very detailed - Configurations: it has a required properties configuration file with a lot of configurations, really boring to those that just want to do a Hello World - A Enhancer is needed: enhancement is a method do optimize the class bytecode, but is hard to setup in the beginning and is mandatory to do before the Hello World Is a Raw JDBC Approach Worth It? The advantages of JDBC are: - Best performance: We will not have any framework between the persistence layer and the database. We can get the best performance with a raw JDBC - Control over the SQL: The written SQL is the SQL that will be executed in the database, no framework will edit/update/generate the query SQL - Native Resource: We could access all natives database resources without a problem, e.g.: functions, stored procedures, hints, etc The disadvantages are: - Verbose Code: After receiving the database query result we need to instantiate and populate the object manually, invoking all the required “set” methods. This code will get worse if we have classes relationships like one-to-many. It will be very easy to find a while inside another while. - Fragile Code: If a database table column changes its name it will be necessary to edit all the project queries that uses this column. Some project uses constants with the column name to help with this task, e.g. Customer.NAME_COLUMN, with this approach the table column name update would be easier. If a column is removed from the database all the project queries would be updated, even if you have a column constants. - Complex Portability: If your project uses more than one database it would be necessary to have almost all queries written for each vendor. For any update in any query it would be necessary to update every vendor query, this could take a lot the time from the developers. I can see only one factor that would make me choose a raw JDBC approach almost instantly: - Performance: If your project need to process thousands of transactions per minutes, need to be scalable and with a low memory usage this is the best choice. Usually median/huge projects has all this high performance requirements. It is also possible to have a hybrid solution to the projects; most of the project repository (DAO) will use a framework, and just a small part of it will use JDBC I do like JDBC a lot, I have worked and I still working with it. I just ask you to not think that JDBC is the silver bullet for every problem. If you know any other advantage/disadvantage that is not listed here, just tell me and I will add here with the credits going to you. [= How Can I Choose the Right Framework? We must be careful if you want to change JPA for other project or if you are just looking for other persistence framework. If the solutions in page 3 are not solving your problems the best solution is to change the persistence framework. What should you considerate before changing the persistence framework? - Documentation: is the framework well documented? Is easy to understand how it works and can it answer most of your doubts? - Community: has the framework an active community of users? Has a forum? - Maintenance/Fix Bugs: Is the framework receiving commits to fix bugs or receiving new features? There are fix releases being created? With which frequency? - How hard is to find a developer that knows about this framework? I believe that this is the most important issue to be considered. You could add to your project the best framework in the world but without developers that know how to operate it the framework will be useless. If you need to hire a senior developer how hard would be to find one? If you urgently need to hire someone that knows that unknown framework maybe this could be very difficult. Final Thoughts I will say it again: I do not think that JPA could/should be applied to every situation in every project in the world; I do no think that that JPA is useless just because it has disadvantages just like any other framework. I do not want you to be offended if your framework was not listed here, maybe the research words that I used to find persistence frameworks did not lead me to your framework. I hope that this post might help you. If your have any double/question just post it. [= See you soon! \o_ Published at DZone with permission of Hebert Coelho De Oliveira , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/jpa-hibernate-alternatives
CC-MAIN-2019-26
refinedweb
4,095
57.4
TechEd 2001 has arrived in Atlanta, and with it the hordes of sandaled programmers and besieged support staff. The attendees are spread over numerous hotels around the downtown area, with the conference itself being held at the Georgia World Conference Center Arriving at the airport I was told that it was very easy to find your way around - which it is. I just wasn't told how big the place is. I got off the plane and sneered at the train that went between the concourses, then sneered even more ambitiously at the moving walkways that stretch the length of the long corridors. I figured that I'm still reasonably young and fit and that a casual stroll would be good for me. From the arrival gate to the luggage claim area was a 20 minute walk. I figured this is about 2km (1.25 mile)! It was an amazing sight to watch the attendees slowly but inexorably take over the hotel foyers, bars and amenities. On Saturday afternoon the pool area was full of intensely bronzed and fit (and intensely bronzed and not-so-fit) holiday makers in tiny bathing suits, and one very pale, very skinny developer in long shorts and a T-shirt. By Sunday afternoon the view was altogether different, and a lot more disturbing. The attendees are spread over (I think) about 23 hotels and there is a steady stream of coaches provided to shuttle us to and from the conference centre. The strangest site is seeing the armed policeman standing guard at our pickup point each morning. I guess with most attendees carrying several thousand dollars worth of gadgets there could be some easy pickings. As developers we are not renowned for our commanding physical prowess in the face of danger. Commando teddy-bear test drop from the 17th floor of the Hilton. The choices for dining in downtown Atlanta around the hotels - as far as I can determine - boil down to Steak, Fajitas and Sushi. After an episode last year involving Sushi and Tequila that best remain forgotten, the choice is pretty much Sushi or Steak - but the later can be subclassified into Steak and Lager, Steak and Ale, and Steak. The adventurous can also try the burgers. Dinner time saw the area around the hotels hotels a-swarm with badged, bag carrying developers flowing between the eating establishments like ants, streaming between the establishments in visible lines, bumping into one another, forming clots at intersections, with each individual working toward the common goal of getting fed. It was a sight to behold. I was standing next to a guy who obviously was not an attendee and he looked at the hordes then shook his head and said "you know something is terribly wrong when downtown Atlanta is packed with pale skinned guys carrying laptops". Microsoft certainly knows how to put on a decent meal. Breakfast and lunch were all you can eat affairs, and in between sessions there was a constant supply of potato chips, muffins, Krispy Kream donuts, diet coke and the second best chocolate chip cookies I have ever tasted. For the health conscious (or merely guilty at heart) there was fruit, muesli bars, juice and TechEd brand water. David Cunningham flew in yesterday morning and promptly found the world longest escalator. He promised to show me tomorrow. I'm tingling with anticipation. Tuesday morning saw some very subdued developers quaffing serious amounts of water and coffee. No doubt the exertions of the night before (late night coding sessions? hearty debates about the new features in .NET? The Tabernacle?) took their toll. We all get Beta 2 CD's on Wednesday, but in the meantime is available for download from Microsoft. System requirements are far more modest than the original requirements for the PDC bits: a 450MHz CPU, W2K, 192Mb RAM, a 800 x 600, 256 color screen and 3Gb HDD space in total. A CD would also be handy if you plan on installing from the disks. Beta 2 is significantly different from Beta 1. Many of the namespaces have changed, and even some basic naming conventions (For example, WinForms are now Windows Forms). Everything from the System.Data namespace, delegates, keywords, and the IDE itself have all changed in degrees ranging from wide ranging API changes to more innocuous changes such as the addition or removal of underscores in names. The beta 2 IDE is much improved, both in terms of usability and stability, and companies can now create shipping applications (with a few limitations) using the ASP.NET Go Live license. Also announced at the keynote presentation was the availability of the UDDI developer tools, the Mobile Internet toolkit, and a peer-to-peer code snippet sharing service. Integrated within the IDE is a new peer-to-peer code snippet sharing service that allows a developer to enter a set of keywords in a dialog box and locate code snippets from other developer's machines. These code snippets can then be accessed across the 'net and pasted into the developers source code directly. It's essentially a Napster-style code sharing initiative. Once the keynote was over it was back to hands-on sessions and seminars. Today's talks built on yesterday's introductory talks. Breakfast and lunch were again a nice affair (mmm - cheesecake!) and after the break-out sessions there we had an 'ask the experts' open peer forum where we had the chance to speak directly to the MS guys and ask them anything from the smallest niggling question on CE SQL to questions on design and implementation of full e-commerce applications. I saw what David and I consider to be the worlds longest (and I think steepest) escalator. We rode it up and down with stupid grins on our faces. We also spent an entertaining few minutes throwing parachuted teddy bears off the top balcony at the hotel. Action photos will be posted soon. I've finally worked out the difference between "y'all" and "all y'all". Wednesday started with the usual breakfast of back bacon, eggs, fruit and something brown and unidentifiable. After that was more hands-on labs, more break-out sessions and more of the exhibitors trying everything they could to get their hands on your swipe card. The announcement of the Mobile Information Server (and related toolkit) means that developers can now write mobile applications in a device independent fashion. Extending the idea that ASP.NET applications no longer need to worry about handling the idiosyncrasies of various browsers, the Mobile Information Server releases developers from worrying about the capabilities of individual devices. If you are brave and have lots of spare time to wade through lots of marketing fluff you can read more here. Two things have really been evident in these last 2 days: Firstly, there are some seriously overweight developers, and secondly, the mood is really subdued. People aren't depressed, just, well, quiet. Maybe it's because .NET has been out for a year, so most of the attendees at least have an idea of what it's all about. This time last year we were all learning that C# was Cool and finding about about the amazing advances in ASP.NET. This years it's more about the fine tuning that has been going on, and a continuation of the evangelical message. Maybe it was also due to some of the higher profile guys not being in attendance. Chris Sells, Jeff Prosise and Jeff Richter weren't around, Tony Goodhew and Chris Anderson weren't there, and most disappointingly: no Kent Sharkey. The weather has been perfect, which is a huge disappointment. I was hoping for a tornado or two, or at least the remnants of a tropical cyclone. Obviously this statement is spoken with the brash bravado of someone who has never actually been near either of these two pieces of excitement. I was talking to a guy about Tornadoes and he told me a story about waiting for a flight in an airport in the south east of the States. He was waiting at the departure gate when two tornadoes were spotted. Everyone in the airport was moved into the center of the airport while the storms moved by, and when they were allowed back the plane that he had been about to board had been turned around 30 degrees. whoa. It was a Visual Basic thing - you're not really interested are you? It was pretty big, and took up the entire stadium at the conference center. I was a little worried when we entered the doors to find a whole bunch of mimes, but these were soon replaced by Blues Brothers look-alikes, jugglers, monocyclists and other performance artists. There was a ton of food and drink and a live bands, but unfortunately the accoustics were terrible, so you couldn't really hear them. The funniest thing about the whole night was that the helium balloons were all removed and popped after some guys tied beer bottles to clumps of balloons and released them. Little beer gondolas were floating around 100 feet above our heads. It was so cool. Apart from that it was a pretty quiet affair. It was a pity it was held indoors, since the weather was perfect. After being inside air conditioned conference halls all day it would have been nice to enjoy a southern summer evening outdoors. Thursday was the final day of the conference, and one that many people (me included) missed, which sucked because many of more interesting talks such as Nick Hodapp's and Ronald Laeremans' were scheduled for that day. The conference center entrance turned into a baggage warehouse. Once at the airport you could tell the attendees (I was about to say 'fellow geeks' but I figured that was harsh) by spotting the TechEd 2001 paraphernalia and VB.NET T-shirts. Overall it was a quiet affair. I talked to lots of people to gauge the general feeling and describe the tone of the conference and invariably the word was 'subdued'. Visual C++ developers in particular felt left out (again) because the 10 year anniversary of Visual Basic overshadowed everything. I think every VC++ developer in the house was gritting their teeth when speaker after speaker waxed lyrical about how wonderful and productive and powerful and scalable VB was. Hopefully PDC will bring VC++ back into the limelight. It doesn't seem 'sexy' to MS at the moment, which is nuts because VC++ is the most powerful of the .NET languages, and the only one that can be used to write native code. Server side .NET has a brilliant future and solves obvious problems, both in terms of code writing and management, and in application performance and deployment. Client side .NET apps may face the same uphill battle that client side Java apps are facing, so wouldn't it be prudent to push the excellent advances made in VC++ (language, compiler and IDE) to keep current developers happy and convince them that upgrading to VS.NET is a Good Thing? Maybe MS is worried this will send mixed messages about their future goals, but as a C++ developer I just want to use the best tools now, and when better tools come out I'll upgrade to them too. I wrote this while hanging out in LA after 20 hours of traveling with no sleep. I had a 10 hr layover followed by a 5hr red-eye to arrive in Toronto at 6am the following day. Needless to say I was not the most cheerful traveller at the airport. One of the first things that long haul air travel in cattle class gives you is a realisation of what it will be like when you are old and frail and living in a retirement village. The mind numbing expanse of time that soon ceases to be a moving quantity, but rather a static position with no beginning and no ending. The dimmed lights, the unchanging surroundings, and the close proximity to your neighbours giving immediate knowledge of their habits, their hopes and their lives all fuel the feeling that you have been here forever, and that you will continue to be here until your mind slowly fades away. Beyond the metaphysic there is also the stark reality of the hopelessness that sets in when you unwisely choose a window seat on a sold out flight. You sit. You watch. You wait. Your meals are bought to you in turn, and you watch with hopeless salivation as the surly stewards bring a meal that you know full well you will not enjoy, but which you nevertheless look forward to as a means of marking out the various segments of the journey. There have been meals before, and there will no doubt be meals ahead. You live in a world where there are only three states - either waiting for a meal, eating a meal, or suffering the after effects of a meal. You eat what you are given, and what is given is that same that is given to everyone else cramped in with you. You even engage in mind games with the person next to you in order to secure their blueberry crumble, only to realise that you, in turn, have lost your Caesar salad. The worst part - the very worst part - the part that makes you promise to yourself that you will never send your aging parents into a retirement home, is that once you wolf down your unpalatable meal you are forced to sit there with the decaying remains of the pasta-or-the-chicken congealing on the plate in front of you, and that the two people blocking your escape to the freedom of the isle also have the remnants of their meal similarly congealing, their tray tables down, and that you are not going anywhere until the stewards do the rounds to collect the empties. You have half an hour of concentrated bladder control to keep you occupied, and there is not a thing you can do about it. To be completely at the mercy of strangers who think of you as a seat number and not be able to do a thing about it is a sobering and mind expanding experience. If you ever feel that the time has come to put Ma and Pop in a home then you should first fly to Australia coach class - preferably using one of the cheaper, more conservative carriers - and have a good, long think about what you are about to do. Even when the meals are finally cleared, the tray tables returned to their upright position, and the complicated ballet of legs, arms, overhead lockers and headphone wires is negotiated, you still have nowhere to go. You cannot ring up friends and say 'I'm so bored I've started making sculptures out of my fingernail clippings'. No. Sit and stay you will. Enjoy the reruns of 'Friends' you will. Sleep comfortably you will not..
http://www.codeproject.com/Articles/1204/TechEd-Atlanta
CC-MAIN-2015-18
refinedweb
2,509
67.28
It’s time for yet another code trivia and it’s business as usual. What will the following program output to the console? using System; using System.Drawing; using System.Threading; class Program { [ThreadStatic] static Point Mark = new Point(1, 1); static void Main() { Thread.CurrentThread.Name = "A"; MoveMarkUp(); var helperThread = new Thread(MoveMarkUp) { Name = "B" }; helperThread.Start(); helperThread.Join(); } static void MoveMarkUp() { Mark.Y++; Console.WriteLine("{0}:{1}", Thread.CurrentThread.Name, Mark); } } The output will be the following: // A:{X=1,Y=2} // B:{X=0,Y=1} This is due to the fact that, as documented in ThreadStatic MSDN entry, initialization occurs in the class static constructor and therefore affects only one thread. The second thread will see the default value if it is a value type or a null reference if it is a reference type. In this specific case Point is a value type with the default value of {0, 0}. Advertisements 2 thoughts on “Code Trivia #6” A:1,2 B:1,2 Assuming the Point.ToString() returns “X,Y” You are correct in the order in which the coordinates are printed, however, the values printed will differ between threads. I’ll update the post with the explanation.
https://exceptionalcode.wordpress.com/2011/02/07/code-trivia-6/
CC-MAIN-2017-17
refinedweb
203
58.08
Here is a simple example: To get this image, all you have to do is reference this url: You can simply couple this with python using the webbrowser library. To open the image in a web browser via python try this: import webbrowser url = webbrowser.open(url) 4 comments: That's cool but only URL parameters is somewhat limiting. Does this accept post parameters or other type of REST/SOAP style access? Here is some documents on how to do a POST request: Here is some JSON documents: If you want something more local, I would look at the matplotlib for python. Cool blog by the way. Nice feature. Could this also be used for visualizing the csv file generate in the model described in this post? Yes you can use it for my older post, one day I'll finish part two.. I actually was going to use the MS Charting Control to create the elevation profile.
http://anothergisblog.blogspot.com/2010/04/google-charts.html
CC-MAIN-2017-22
refinedweb
158
71.14
Table Of Contents Input recorder¶ New in version 1.1.0. Warning This part of Kivy is still experimental and this API is subject to change in a future version. This is a class that can record and replay some input events. This can be used for test cases, screen savers etc. Once activated, the recorder will listen for any input event and save its properties in a file with the delta time. Later, you can play the input file: it will generate fake touch events with the saved properties and dispatch it to the event loop. By default, only the position is saved (‘pos’ profile and ‘sx’, ‘sy’, attributes). Change it only if you understand how input handling works. Recording events¶ The best way is to use the “recorder” module. Check the Modules documentation to see how to activate a module. Once activated, you can press F8 to start the recording. By default, events will be written to <currentpath>/recorder.kvi. When you want to stop recording, press F8 again. You can replay the file by pressing F7. Check the Recorder module module for more information. Manual play¶ You can manually open a recorder file, and play it by doing: from kivy.input.recorder import Recorder rec = Recorder(filename='myrecorder.kvi') rec.play = True If you want to loop over that file, you can do: from kivy.input.recorder import Recorder def recorder_loop(instance, value): if value is False: instance.play = True rec = Recorder(filename='myrecorder.kvi') rec.bind(play=recorder_loop) rec.play = True Recording more attributes¶ You can extend the attributes to save on one condition: attributes values must be simple values, not instances of complex classes. Let’s say you want to save the angle and pressure of the touch, if available: from kivy.input.recorder import Recorder rec = Recorder(filename='myrecorder.kvi', record_attrs=['is_touch', 'sx', 'sy', 'angle', 'pressure'], record_profile_mask=['pos', 'angle', 'pressure']) rec.record = True Or with modules variables: $ python main.py -m recorder,attrs=is_touch:sx:sy:angle:pressure, profile_mask=pos:angle:pressure Known limitations¶ - Unable to save attributes with instances of complex classes. - Values that represent time will not be adjusted. - Can replay only complete records. If a begin/update/end event is missing, this could lead to ghost touches. - Stopping the replay before the end can lead to ghost touches. - class kivy.input.recorder. Recorder(**kwargs)[source]¶ Bases: kivy.event.EventDispatcher Recorder class. Please check module documentation for more information. - Events - on_stop: Fired when the playing stops. Changed in version 1.10.0: Event on_stop added. counter¶ Number of events recorded in the last session. counteris a NumericPropertyand defaults to 0, read-only. filename¶ Filename to save the output of the recorder. filenameis a StringPropertyand defaults to ‘recorder.kvi’. play¶ Boolean to start/stop the replay of the current file (if it exists). playis a BooleanPropertyand defaults to False. record¶ Boolean to start/stop the recording of input events. recordis a BooleanPropertyand defaults to False. record_attrs¶ Attributes to record from the motion event. record_attrsis a ListPropertyand defaults to [‘is_touch’, ‘sx’, ‘sy’]. record_profile_mask¶ Profile to save in the fake motion event when replayed. record_profile_maskis a ListPropertyand defaults to [‘pos’]. window¶ Window instance to attach the recorder. If None, it will use the default instance. windowis a ObjectPropertyand defaults to None.
https://kivy.org/doc/master/api-kivy.input.recorder.html
CC-MAIN-2020-34
refinedweb
546
70.19
How to build a RESTful API in Django An API (Application Program Interface) is a regulated method of providing data to other applications and services. It's a set of protocols for building and integrating application software. APIs lets your service or app communicate with other apps without having to know how they are built and implemented. What is a RESTful API? A RESTful API or simply REST (Representational State Transfer), is a type of API that uses HTTP requests to access data. Most REST APIs use JSON to send the payload. This is because JSON is lightweight and easy to interpret. Some of the main types of requests in a REST API are: - GET -- retrieves resources from the API based on the endpoint and parameters passed - POST -- creates a new resource or record - PUT -- updates an existing resource or a record - DELETE -- deletes a resource or record at the given URI Prerequisites To follow along with this tutorial, it's good to have at least some basic understanding of: By the end of this tutorial, you will be able to: - Install Django REST framework - Serialize Django models - Setup common HTTP requests - Install and configure Cross-Origin Resource Sharing (CORS) Let's dive in! Step 1 -- Installing Django It's recommended to build Python projects in virtual environments. Let's create a virtual environment for our project. Run the command below in a terminal. $ python3 -m venv virtual The command creates a new virtual environment virtual in the current directory. Activate the environment by running: $ source virtual/bin/activate Then install the latest version of Django via PyPi using the command below. $ pip install django You can test Django installation in the python shell by running the command below: $ python -m django --version The command prints the version of your Django installation. If not installed, you will get a no module named django error. Step 3 -- Creating a Django project Now that there's an activated virtual environment with Django installed let's create a project. A Django project is a directory that contains all settings associated with a Django instance. This includes: - Database configuration - Language and time zone configurations - Middleware configurations - Debug configurations - Template configurations Run the command below to create a new Django project. $ django-admin startproject rest The command creates a new folder rest in the current directory. Navigate to this folder in your terminal. $ cd rest The project directory tree should look like this: . ├── rest │ ├── manage.py │ └── rest │ ├── asgi.py │ ├── __init__.py │ ├── settings.py │ ├── urls.py │ └── wsgi.py To learn more about the generated files and folders, head over to the documentation. Step 4 -- Creating the API app. Django comes with a utility that generates the basic folder structure of an app so that you can focus more on writing code rather than creating directories. Run the command below to create a Django app. $ python manage.py startapp api The command generates a new api folder with the following folder structure: . ├── api │ ├── admin.py │ ├── apps.py │ ├── __init__.py │ ├── migrations │ │ ├── __init__.py │ ├── models.py │ ├── tests.py │ ├── urls.py │ └── views.py Step 5 -- Registering the API app to the Project Open the project directory in your code editor. Open settings.py and add your api app to the installed apps. rest/settings.py INSTALLED_APPS = [ #..... 'api', ] Step 6 -- Configuring URL routing Since we'll be creating only one app inside our Django project, we can point the base route of our project to our API app. Create a file urls.py in the api folder and put the following code. api/urls.py from . import views from django.urls import path urlpatterns = [ ] We import the views.py file in the API directory and create an empty list of URL patterns. Let's point the project URL to these URL patterns. Edit the urls.py file in the rest folder to look like this: rest/urls.py from django.contrib import admin from django.urls import path, include urlpatterns = [ path('admin/', admin.site.urls), path('', include('api.urls')) ] Step 7 -- Running the app Time to run our app. Create the default database schema by running migrations using the following command. $ python manage.py migrate Then run the app in the local server using the following command. $ python manage.py runserver The app is served at localhost port 8000 by default. ie.. You can change the localhost port by appending the port number to the command above. For example to serve the app at port 7000, run the command below. $ python manage.py runserver 7000 Step 8 -- A simple API View Let's see what a simple API in Django looks like. Open views.py and add the following code. api/views.py from django.http import JsonResponse from rest_framework.decorators import api_view @api_view(["GET"]) def index(request): content = {"Message": "Hello World!"} return JsonResponse(content) We import the Json Response class which allows us to return JSON in functions. We then create a GET function index that returns some JSON. Let's create a URL configuration for this function. Add the following code to urls.py. api/urls.py #..... urlpatterns = [ path('', views.index) ] The function index is directed to the base app route. Reload your browser to see the API. GETAPI view Now that we have the simplest API running, let's build a larger API. We'll build a notes API. The API will be able to: - Get existing notes - Add notes to the Database - Edit existing notes - Delete notes Step 9 -- Installing Django REST framework To build APIs more easily, we'll be using Django REST framework. It helps us a lot, as we'll not be hard-coding everything. Let's install it using the command below. $ pip install djangorestframework Then, we'll need to register the framework under installed apps in settings.py. Edit the installed apps section in settings.py to look like this: rest/settings.py #..... INSTALLED_APPS = [ #..... 'api', 'rest_framework', # register rest framework ] #..... Step 10 -- Creating a database model Open models.py and add the following code. api/models.py #...... class Note(models.Model): title=models.CharField(max_length=200) body=models.TextField() date=models.DateField() def __str__(self): return self.title The code creates a database Model Note with the title, body and date properties. Let's create this database structure by running these 2 commands. $ python manage.py makemigrations $ python manage.py migrate The makemigrations command detects the changes in your models and makes migrations accordingly. Then the migrate command applies the changes to the database. Step 11 -- Serializing the model A serializer is a component that will convert Django models to JSON objects and vice-versa. Create a file serializer.py in your app directory and put the following code. api/serializer.py from rest_framework import serializers from .models import Note class NoteSerializer(serializers.ModelSerializer): class Meta: model = Note fields = ('id','title','body','date') We import serializers module from the rest_framework package. We then create NoteSerializer class that inherits from the ModelSerializer class. We then specify the model and fields we want to return. Now we need to create an API view to handle the requests. Step 12 -- Adding some Notes Before we write the API view, let's add some test notes. To do this, we need to: Notemodel to admin dashboard - Create a superuser - add notes Open admin.py and put the following code. rest/admin.py from django.contrib import admin from . import models admin.site.register(models.Note) The code registers the Note model to the admin dashboard. Let's create a superuser to login to the admin dashboard. Run the following command and enter the credentials as prompted. $ python manage.py createsuperuser Now that you have an admin created, go to /admin route .ie. and log in using the credentials you created above. Add some notes by clicking Notes and then Add Note Step 13 -- Getting notes list view Add the following code inside views.py. api/views.py from rest_framework.response import Response from rest_framework.views import APIView from .models import Note from .serializer import NoteSerializer #..... class NotesList(APIView): def get(self, request, format=None): all_notes = Note.objects.all() serializers = NoteSerializer(all_notes,many=True) return Response(serializers.data) We import the Response class from rest framework to handle API requests. We also import the APIView as a base class for our API view function. We then create NoteList class that inherits from the APIView class. We then define a get method that: - queries the database to get all Noteobjects - serializes the model objects - returns the serialized data as response Let's create a new URL configuration (URLConf) to handle requests to this model. Step 14 -- Creating API View URLConf Edit the urlpatterns under urls.py to look like this: api/urls.py urlpatterns = [ #..... path('notes', views.NotesList.as_view()), ] And now your new view is ready. Go to the /notes endpoint to view the results. /notesendpoint on a browser /notesendpoint in Postman Testing the API Testing an API is a critical thing in API building. To test this API, you can use any API testing tool or choose one from below. - Postman. You can download Postman here. - Postman chrome extension. Although this extension is deprecated, it's still working and can be useful if you want to save your RAM. You can download the extension here. - Rested Firefox extension. This is a Firefox extension that can be used to test APIs. You can download it here. Step 15 -- Adding notes to the database It's boring and illogical to be adding notes via the admin dashboard. Why don't we write some code to enable us to add notes using the API? Under views.py add the following code. api/views.py from rest_framework import status #..... class Notes(APIView): #..... def post(self, request, format=None): serializers = NoteSerializer(data=request.data) if serializers.is_valid(): serializers.save() return Response(serializers.data, status=status.HTTP_201_CREATED) return Response(serializers.data, status=status.HTTP_400_BAD_REQUEST) We import status from rest framework to return status codes in each request. We also add a post method that: - serializes input data - saves it to the database if the data is valid - returns a 400 status code if the data is invalid Making a post request to the route /notes and passing the note properties will add a new note to the database. /notes Step 16 -- Getting a single note Let's say, you want to view a single note. You don't have to scroll through a long list of notes. We can write some code to get that specific note. api/views.py class singleNote(APIView): def get(self, request,pk, format=None): try: note = Note.objects.get(pk=pk) serializers = NoteSerializer(note) return Response(serializers.data) except Note.DoesNotExist: return Response(status=status.HTTP_404_NOT_FOUND) We create another API View class singleNote. Inside it, we create a get method that: - gets a note from the database by id - serializes it if the note exists - returns the data - returns a 404 status code if the note does not exist Let's create a URLConf for the singleNote class. Open urls.py and add the code. api/urls.py urlpatterns = [ #..... path('notes/<pk>', views.singleNote.as_view()) ] At this point, if you make a get request at /notes/1 .ie you'll get the details of the note id 1. /notes/<id> Step 17 -- Editing existing notes Let's say we add a note and there's a typo in between. Do you always have to go to the admin dashboard? NO. We can set up another method to do that for us. To edit existing notes, we'll need to identify the note by an id. Let's add another method inside the singleNote class. api/views.py class singleNote(APIView): #..... def put(self, request, pk, format=None): note = Note.objects.get(pk=pk) serializers = NoteSerializer(note, request.data) if serializers.is_valid(): serializers.save() return Response(serializers.data) else: return Response(serializers.errors,status=status.HTTP_400_BAD_REQUEST) The put method above: - gets a note id - serializes input data - saves the serialized data to the identified note if the data is valid. - returns a 400 status code if the input data is invalid If you make a put request at /notes/1, the note id 1 will be updated. PUTrequest at /notes/<id> Step 17 -- Deleting a note To delete a single note, delete() method. A note is identified by id and then deleted. Let's add another function inside the singleNote class. api/views.py class singleNote(APIView): #..... def delete(self,request, pk, format=None): try: note = Note.objects.get(pk=pk) note.delete() return Response(status=status.HTTP_200_OK) except: return Response(status=status.HTTP_404_NOT_FOUND) The delete method above: - gets a note by id - deletes the note - returns a 200 status code on deletion - returns a 404 status code if the note does not exist Making a delete request at /notes/1 will delete note id 1. DELETErequest at /notes/<id> Cross-Origin Resource Sharing (CORS) If your API is to be accessed by some front-end or a remote machine, you'll need to set up CORS. Django-cors-headers is a package that sets up CORS in Django. Install it via PyPI using the command below. $ pip install django-cors-headers Then register the package under installed apps in settings.py. rest/settings.py INSTALLED_APPS = [ #..... 'rest_framework', 'corsheaders' ] You'll also be required to add CORS middleware to listen in on responses. rest/settings.py MIDDLEWARE = [ 'corsheaders.middleware.CorsMiddleware', #..... ] Finally, configure allowed origins. You can either: - set a list of allowed origins under CORS_ALLOWED ORIGINS - allow all origins by setting CORS_ALLOW_ALL_ORIGINSto True # allowing selected origins CORS_ALLOWED_ORIGINS = [ "", #etc... ] # allowing all origins CORS_ALLOW_ALL_ORIGINS = True Returning clean JSON. By default, Django REST framework returns an inbuilt page along with your JSON at every route. You can remove it by returning pure JSON from your view functions. To do this, you'll need to import JsonResponse from the Django http package. api/views.py from django.http import JsonResponse You'll then return output using JsonResponse instead of Response from your methods. You'll also be required to set the safe parameter to False since Django doesn't allow serializing non-dictionary objects by default. api/views.py return JsonResponse(serializers.data, safe=False) JsonResponse The code in this tutorial can be found in GitHub. Summary We've looked at how you can set up a RESTful API in Django. REST APIs can be made with the help of the Django REST framework. It makes it easier to set up the API as you don't have to hard-code everything. We have created a simple notes API and looked at how we set up the common types of HTTP requests in Django. If CORS headers are not set up, integration with front-end services is impossible.
https://developers.decoded.africa/introduction-to-restful-apis-in-django/
CC-MAIN-2020-50
refinedweb
2,464
59.6
Have you ever wished to create a component that supports v-model directive, but works without it as well? First things first. If you've tried Vue.js you've probably learned that you can bind variables on inputs. This creates a two-way data binding which syncs the variable and the input's state. All you need to do is to use the v-model directive. You may also have learned that you can use this directive with any custom component since v-model is just a syntax sugar to cover both ways of the data binding. You can learn more about this here. Hence <input v- turns into <input v-bind: As you can see, in order to implement the support, you have to declare a prop variable called value and emit an event labeled input. And that's it. However, you will quickly find out that at this point the component indeed supports the v-model directive, but it doesn't work at all without it. That's often undesirable. For instance, imagine you'd like to create a custom search component that includes a text input. Since it's a mere extension of a text input, it's reasonable that it should support v-model. But it is also reasonable that you'd like to be able to use it without it since the input inside would normally work straight away had it been a plain HTML element. Let's tackle this. Optional v-model support Let's start by creating a simple search component that will accept value as prop. If the user doesn't provide it, it's initiated to an empty value. props: { value: { type: String, default: "", }, }, However, we can't use this prop directly in the input since that would mutate it which is not recommended. To circumvent this problem we'll create a clever computed value that will use the value prop if passed from the parent, or a custom local value otherwise. We'll make use of the extended computed property syntax where one can declare different functions for setter and getter of the computed function. data() { return { localValue: this.value, }; }, computed: { searchValue: { get() { return this.isValuePropSet() ? this.value : this.localValue; }, set(value) { this.$emit("input", value); this.localValue = value; }, }, }, methods: { isValuePropSet() { return ( !!this.$options.propsData && this.$options.propsData.value !== undefined ); }, }, Let's first take a look at the getter. When retrieving the value, the isValuePropSet() method is invoked. This method returns true when the value prop was set by the parent, not initialized to empty string by the default property. So when it was set from the outside, we'll just return the value property and the component works as if it was implemented as a regular component with v-model support. However, when the value was not set, then the getter returns localValue instead. In the setter the current value is both emitted as an input event and stored in the localValue. With this pattern, we can bind the clever searchValue computed property to the input as usual <input v- And that's it. The search component works with v-model attached as well as without it. Check out the example sandbox to see it wholly in action. Discussion (0)
https://dev.to/localazy/v-model-support-without-requiring-value-prop-2jcg
CC-MAIN-2021-10
refinedweb
542
65.52
Try putting this in your <head> <meta http- Also, try changing the de-DE in the <html> tag to <html lang="de" prefix="og:"> let me know if that works You can use tag to show HTML entities You need to encode all Your HTML entities like < => < like way. Also you can show a text area in which all those HTML code need to echo, it will not execute your code simply it will print it.: HTML <ul class="palette"> <li swatch="#4362ff"> <li swatch="#ee3d5f"> <li swatch="#FFFFFF"> <li swatch="#FFFFFF"> <li swatch="#FFFFFF"> </ul><span><a href="palette/URL">Title</a></span> jQuery //document ready $(document).ready(function () { $('.palette li').each(function () { $(this).html('<input value='+$(this).attr("swatch")+' type="text" style="background: ' + $(this).attr('swatch') + '" />'); }); }); fiddle There were lots of mistakes in the code. You didn't select the correct class for the UL. Also UL elements can not contain span elements. Also using inspect element would have showed the code does what it told it to and put ## on front of the color. You are not really applying the XSLT when you simply tell ASP.NET to display the XML in the literal's text. You have to apply the XSLT separately. From this question you can learn how to transform an XML: XPathDocument myXPathDoc = new XPathDocument(myXmlFile) ; XslCompiledTransform myXslTrans = new XslCompiledTransform(); myXslTrans.Load(myStyleSheet); XmlTextWriter myWriter = new XmlTextWriter("result.html",null); myXslTrans.Transform(myXPathDoc,null,myWriter); Note the example is writing to a file but you would write to a string. You would then have to do that inside the ListView.ItemDataBound event for each bound row and generate the HTML you want. I'm no PHP expert, but there are a few ways you could do this: Use something like shell_exec or make a system call expose your java logic through a webservice Don't do it all! If your posted code is all that's going on, just rewrite the formatting stuff in php or write the whole thing in java. probably some other weird way So once you choose a path for your server side stuff, if you want to display the results without a page reload, you will want to use some javascript and most likely some jQuery.ajax or maybe some jQuery('.target-area').load. Normally this comes down to the user preventing platform access and locking down their privacy settings. The best way to check would be to inspect the users missing SELECT uid FROM page_fan WHERE page_id='20531316728' AND uid ='friend_id' If you have PHP 5.3.6 or higher you can do the following: $url = "remotesite.com/page1.html"; $html = file_get_contents($url); $doc = new DOMDocument(); // create DOMDocument libxml_use_internal_errors(true); $doc->loadHTML($html); // load HTML you can add $html $testElement = $doc->getElementById('divIDName'); echo $doc->saveHTML($testElement); If you have a lower version I believe you would need to copy the Dom node once you found it with getElementById into a new DomDocument object. $elementDoc = new DOMDocument(); $cloned = $testElement->cloneNode(TRUE); $elementDoc->appendChild($elementDoc->importNode($cloned,TRUE)); echo $elementDoc->saveHTML(); Don't use mod_rewrite, use DirectoryIndex and give priority to index.html DirectoryIndex index.html index.php Note: While this should work, I don't recommend it as it creates ambiguity. One of those configuration changes that will haunt you later. Create a QTimer with 1 sec interval (or e.g. 100 msec for more accuracy), connect its timeout signal to your slot. In the slot get the current time using QTime::currentTime() static function, convert it to string using toString and assign it to a GUI element (e.g. a label). You need to use $scope.$apply, otherwise any changes to $scope made in non-Angular event handlers won't be processed properly: img.onload = function () { $scope.$apply(function() { $scope.image.width = img.width; $scope.image.height = img.height; $scope.image.path = $scope.imageurl; }); } You can simply call this data by getting the quote: <?php $cart = Mage::getModel('checkout/cart')->getQuote(); $cartData = $cart->getData(); ?> Then display e.g. the grand total: <?php echo $cartData['grand_total']; ?> the problem was not simple section navigation plugin widget but the sidebar from my theme that encapsulates the plugin widget. For information for other users , the simple section navigation plugin does work with wordpress 3.5.2. simply replace _get_ancestors function by get_ancestors Sifting through CSS for a few minutes and I found a solution, I have made a list of corrections. Here is the CSS code on pastebin, it does all the fixes I have mentioned below. Manual Fix Find #middlewrapper #content .contentbox, disable float:left. #middlewrapper #content .contentbox { width: 650px; padding: 15px; /* float: left; */ background-color: white; } Find #middlewrapper #content .contentbox_shadow, disable float:left. #middlewrapper #content .contentbox_shadow { width: 690px; height: 20px; /* float: left; */ background-image: url(''); background-repeat: no-repeat; } Find #middlewrapper #content, disable float:left. #middlewrapper #content { width: 690px and the scroll and text area are showing as transparent (non-opaque), however, I can't seem to affect the actual scroll bars. They still appear as the default grey color where I'd like to change their color and/or make them transparent to match the rest. there are two ways you would need to override BasicScrollBarUI(), without any commnets milion dollars baby by @aterai, there is VerticalScrollBar only, you need to override and add HorizontalScrollBar, to ScrollPaneLayout() that returns coordinated for Horizontal JScrollBar, Use request.GET['next']: return HttpResponseRedirect(request.GET.get('next', '/')) Also see: Django: Redirect to previous page after login The hard way is to interpret the code within the database, the easier way is to store c# code and runtime compile it with CodeDOM Or take a look at Iron Python. Use .html() instead of .text(). When you use .text(), the value is the literal text that you want users to see, so it replaces special HTML characters with entities so they'll show up literally. It is really difficult to debug in PHP. Only few tricks and tools can be used in debugging. To solve this type of problem, when you get blank window without any error indication, it is good to use a error reporting tool of PHP. To enable the error reporting add the following line of code in your webpage error_reporting(E_ALL); ini_set('display_errors',true); Since this is a JavaScript modal, when the page finishes loading the JavaScript code could still be running. The solution is to wait until the button to close the modal be displayed, close it and then follow with your test. Like this: WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(30)); wait.Until(ExpectedConditions.ElementIsVisible(By.Id("csclose"))); driver.FindElement(By.Id("csclose")).Click(); Tested myself and works fine. Hope it helps. try this: s = "4.4 out of 5 stars" p s[/([+-]?d*.d+)(?![-+0-9.])/] # >> "4.4" You can find this wayRegexp.new: Regexp.new('([+-]?\d*\.\d+)(?![-+0-9\.])') # => /([+-]?d*.d+)(?![-+0-9.])/ Isn't is just as simple as checking if the requested path terminates in .png, .gif, .jpg etc... ? i.e. /* $reqPath is for example /the/stuff/i/want/image.png */ if (in_array(strtolower(substr($reqPath, -4)), ['.png', '.jpg', '.gif', '.svg']) { /* It's an image! */ } else { /* Must be something else */ } I added the strtolower just in case you have a request for IMAGE.PNG, Image.Png or something silly. Your production server is Server:Microsoft-IIS/6.0 The instructions for installing PHP on IIS 6 are at Here are the headers from the Server response: HTTP/1.0 304 Not Modified Content-Location: Last-Modified: Mon, 15 Jul 2013 17:15:42 GMT Accept-Ranges: bytes Server: Microsoft-IIS/6.0 MicrosoftOfficeWebServer: 5.0_Pub X-Powered-By: ASP.NET Date: Mon, 15 Jul 2013 17:57:35 GMT If you do not already have MySQL installed on the server, you can find the instructions at You have just encountered the classic DEL vs. BKSP dilemma. There is no perfect solution, you have to acknowledge that some terminals are configured to output a BKSP (0x08) ASCII character when the user presses backspace, while others output DEL (0x7f). Most terminal emulators have an option for this sort of thing, I have not seen a lot of application software that actually works around this issue. It is usually left up to the terminal program, and the user decides whether to press DEL or BKSP or re-configure their terminal in order to ensure proper program operation. You need a second generic parameter: public interface MyInterface2<U estends MyAbstractClass, T extends MyInterface1<U>> { U returnInstanceOfMyClass(); } You can maybe use getBouningClientRect on your element. This will give you the position and size of that element: var clientRect = element.getBoundingClientRect(); And now you can get the size and position: var leftPos = clientRect.left; var topPos = clientRect.top; var width = clientRect.width; var height = clientRect.height; Hope this helps! Here is a link to more information: Either check which <div> tag has the most content (really unreliable) or make a list of all class names/ ids that are used by major sites to mark their main content-markup and save them in a database. you should be able to do with a couple thousand rows and then parse the pages using DOM to check with class name is available. This might not be the fastest solution, but you could speed it up, if you map certain sites, you know which class names they use. EDIT: You will still have to refine your algorithm. For example: how do you handle multiple of those stored class names being present what do you do, if none is present (show the whole page?, Show only the biggest div? Ok, I think I've worked it out. main(); function main(){ app.scriptPreferences.userInteractionLevel = UserInteractionLevels.interactWithAll; app.findGrepPreferences.findWhat="~N"; var FindGrep=app.activeDocument.findGrep(); for(i=0; i<FindGrep.length; i++) { var item = FindGrep[i]; var page = item.parentTextFrames[0].parentPage; item.contents = page.name; } alert("done"); } Struggled to find any valuable documentation from Adobe. This really helped: As well as this SO question: Get current page number in InDesign CS5 from Javascript Edit: If your page numbering is in a master, you will need to "override all page master items" (check the pages palette) Edit 2: This worked on inDesign 5.5 (not. use this public function loadfile($fl) { $mime = 'application/force-download'; header('Pragma: public'); // required header('Expires: 0'); // no cache header('Cache-Control: must-revalidate, post-check=0, pre-check=0'); header('Cache-Control: private',false); header('Content-Type: '.$mime); header('Content-Disposition: attachment; filename="'.basename($fl).'"'); header('Content-Transfer-Encoding: binary'); header('Connection: close'); readfile($fl); // push it out exit(); } Have you marked Radio button for product image as Base Image in Administration area on live site? This radio button causes to display the product image on Product detail Page. please check. Don't give buttons or inputs that are buttons display: table-cell. It doesn't really work. If you read it states: Buttons in input groups are a bit different and require one extra level of nesting. Instead of .input-group-addon, you'll need to use .input-group-btn to wrap the buttons. This is required due to default browser styles that cannot be overridden. Here's a demo showing how to do this without Bootstrap, if you need it: This uses the same basic method as Bootstrap. And here's a demo where I copied the HTML from the Bootstrap demo and imported Bootstrap: You're introducing a PDF syntax error here: PdfContentByte canvas = writer.DirectContent; canvas.SetTextMatrix(50, 50); With DirectContent, you grab an object that allows you to write PDF syntax directly to the content stream. Immediately after that, you introduce a text state operator. This is illegal PDF syntax: A text state operator is only valid inside a text object; that is: between BT (BeginText()) and ET (EndText) operators). Why do you need to set the text matrix? That line doesn't make sense, because you're defining the position of the table in the writeSelectedRows() method. Long story short: remove canvas.SetTextMatrix(50, 50); and you'll have at least one syntax error less in your PDF. I just checked the iText codebase, and normally your code should throw an "unbalanced Try this. It will help you. Using Jquery $(document).ready(function () { $("#campaign-alert").show(); }); Using JavaScript window.onload = function () { document.getElementById("campaign-alert").style.display = "block"; };. That's partly right. Using an IDE (Eclipse, IntelliJ) you can create a test. In that test invoke a method (that does not exist) and using a refactoring tool create a method with the proper signature. That's a trick that makes working with TDD easier and more fun. According to Now i will write junit test case where i will check the size as 0. Is this Right? you should write a test that fails, and the provide proper implementation. You can create a custom middleware to log this. Here is how I create a middleware to achieve this purpose base on (I modified the code a bit). Firstly, assuming your project has a name: test_project, create a file name middlewares.py, I place it in the same folder as settings.py: from django.db import connection from time import time from operator import add import re class StatsMiddleware(object): def process_view(self, request, view_func, view_args, view_kwargs): ''' In your base template, put this: <div id="stats"> <!-- STATS: Total: %(total_time).2fs Python: %(python_time).2fs DB: %(db_time).2fs Queries: %(db_queries)d ENDSTATS --> </div> ''' # Uncomment the follo If you dont want to use images for the preview, you have to render every .pdf file. Simplest way would be an IFrame, but you can find more solutions here. But keep in mind that every .pdf you put in an IFrame will be loaded completely und if you have a list of several .pdf's it could make your page quite slow. Do you mean to use two pages or UserControls? If you're using UserControls, it's a classic Master - Details scenario, you can find an example here:. If you want to use pages, you'll have to pass the parameter as the second page's DataContext. If you create the second page in the first it's easy enough. If not you'll need some common object to pass the information, you can try a MVVM framework like MVVM Light which have a messenger for exactly these situations. Regards, Yoni. Try this code: it will exclude the current page by taking its ID. <?php $args = array( 'post_type' => 'page', 'numberposts' => -1, 'sort_order' => 'ASC', 'sort_column' => 'post_title', 'post_status' => 'publish', 'exclude' => get_the_ID() ); //$allpages = get_pages($args ); ?> <?php wp_list_pages( $args ); ?>
http://www.w3hello.com/questions/How-to-Display-Actual-Code-in-ASP-Page
CC-MAIN-2018-17
refinedweb
2,437
57.47
Red Hat Bugzilla – Bug 599227 mingw <pthread.h> is broken Last modified: 2011-05-12 13:18:33 EDT Description of problem: The cross-compilation header /usr/i686-pc-mingw32/sys-root/mingw/include/pthread.h, installed as part of the mingw32-pthreads package, has several coding bugs. Version-Release number of selected component (if applicable): mingw32-pthreads-2.8.0-10.fc13.noarch How reproducible: Always Steps to Reproduce: 1. Try cross-compiling any code that uses localtime_r with a second argument with side effects, or try calling (localtime_r)(arg1,arg2). 2. Try cross-compiling any project that uses gnulib's <time.h> replacement header (libvirt is an example project; it includes a ./autobuild.sh script that will automatically try a mingw cross-compilation, if you have installed a mingw portablexdr library, although that library is not yet part of fedora). Actual results: The definition of localtime_r is broken, because it evaluates the second argument twice. And, since POSIX allows one to #undef localtime_r, but there is no localtime_r function in the library, you get a link failure if you bypass the function-like macro. Finally, the pthreads-win32 library made the mistake of installing <config.h>, which is asking for namespace collision with most other autotooled packages. Expected results: <pthread.h> should not define any *_r functions, nor should it interfere with a proper <time.h>. Also, the library should not install <config.h>, but should instead modify its installed headers to be self-contained. Additional info: See this thread on bug-gnulib for more details: I recently came across this bug when trying to compile libcheck under mingw32, which choked on the localtime_r problem described in comment #1. Do you happen to know if these issues have been resolved upstream already? There has been activity in upstreams CVS repo to add support for mingw-w64 and possibly other fixes. See for details. On that link there also are patches to fix the config.h issue which you mentioned. For the new cross compiler framework which I mentioned in your other bugreport (bug 599567) I have those patches already applied. If you want to test this new version of mingw32-pthreads you can use the testing repository for the new cross compiler framework. Details for that can be found at (In reply to comment #0) [snip] > Finally, the pthreads-win32 library made the mistake > of installing <config.h>, which is asking for namespace collision with most > other autotooled packages. > > Expected results: [snip] > Also, the library should not install <config.h>, but should > instead modify its installed headers to be self-contained. I gave a stab at fixing the config.h issue. In rawhide pthreads.h no longer includes config.h and both config.h and its other private headers are no longer installed. The fix is in mingw32-pthreads-2.8.0-14.fc16;a=blobdiff;f=mingw32-pthreads.spec;h=4b7aaff11c12bb7df614ea7f4d2a0af35aa66d76;hp=e6dbf82ceefe2a5b5ea4ca9164a1be94ae046c56;hb=7b7cbed7d5e5a0ec8c68a42e280e8bc74d01a734;hpb=dc57f4600486bd31e906493e4dcec1396d33e5a0 It would also appear that in CVS HEAD localtime_r and other *_r definitions are gone from pthreads.h; perhaps we should consider packaging up a snapshot. Anyone interested in talking to upstream and asking when they are going to make a new release? After some discussion with Kalev on IRC I think it's a good idea to update the mingw32-pthreads package in Fedora to the latest CVS version. That way we should also be ready with a compatible version for when mingw-w64 will be introduced in Fedora 16 I just built mingw32-pthreads-2.8.0-15.20110511cvs.fc16 for rawhide, which should fix both the config.h issue by removing the header, and also remove the possibly conflicting localtime_r definition. I did some basic testing, but it would be very awesome if someone else could try it with a real world test case. Closing the ticket.
https://bugzilla.redhat.com/show_bug.cgi?id=599227
CC-MAIN-2018-22
refinedweb
640
58.58
The stack is an abstract data structure which keeps track of function calls recursively and grows from the higher addressed memory to the lower addressed memory. The fact that the stack grows allows the subject of buffer overflows to exist. A buffer is an array and refers to a storage place to receive and hold until it can be used by a process. Since each process may have its own set of buffers means that it is critical to keep them in place. Like a stack of dishes, each dish that you put (or push) onto the stack is taken off from the top of the stack (popped off). Both of these basic operations performed on a stack (push and pop) always work from the top, the memory algorithm is said to have last in first out (LIFO) semantics. In terms of code execution on a Windows operating system, the stack is a block of memory assigned by the operating system to a running thread. As stated, the function of the stack is to track the function call chain. For instance, when you see a box that requests user input for verification and validation, a function call was made to output that data box, wherein another function is made to grab the user input, wherein the another function is made to compare that user input with what the system expects, and so on. Tracking a function call therefore involves the allocation of local variables, parameters passing, and so on. Think of the algebraic function y = f(x). The values inserted into x determine the value of y, which determines the output of an operation. But any time a function call is made; another frame is created and is pushed on the stack. As the thread (that executes within a process) makes more function calls, the stack grows larger and larger. To give an example, examine this code that shows the starting point of a new thread that makes a series of nested function calls, as well as declaring local variables in each of the functions: y = f(x) x y #include <windows.h> #include <stdio.h> #include <conio.h> DWORD WINAPI ThreadProcedure(LPVOID lpParameter); VOID ProcA(); VOID Sum(int* numArray, int iCount, int* sum); void __cdecl wmain () { HANDLE hThread = NULL ; wprintf(L"Starting new thread..."); hThread = CreateThread(NULL, 0, ThreadProcedure, NULL, 0, NULL); if(hThread!=NULL) { wprintf(L"Successfully created thread\n"); WaitForSingleObject(hThread, INFINITE); CloseHandle(hThread); } } DWORD WINAPI ThreadProcedure(LPVOID lpParameter) { ProcA(); wprintf(L"Press any key to exit thread\n"); _getch(); return 0; } VOID ProcA() { int iCount = 3; int iNums[] = {1,2,3}; int iSum = 0; Sum(iNums, iCount, &iSum); wprintf(L"Sum is: %d\n", iSum); } VOID Sum(int* numArray, int iCount, int* sum) { for(int i=0; i<icount;i++) *sum+=""numArray[i];"" To gain a better understanding of how the stack works and how it can become corrupted, we will use the cl.exe compiler that ships with Microsoft Visual Studio. When we compile this code, we use the /Zi switch to obtain extra debugging information and we will disable the /GS switch flag to avoid stack protection information: c:\...\VC\bin> cl.exe /Zi /GS- stackdesc.cpp. We now have an executable with debugging information, and object file, and an incremental linking file. We copy and paste these files to the “Debugging Tools for Windows” directory in order to use the console debugger, cdb.exe. When I wrote that stack operation gives rise to possible buffer overflows, consider this basic C program: #include <stdio.h> #include <string.h> int main (int argc, char *argv[]) { char buffer[500]; strcpy (buffer, argv[1]); return 0; } This code allocates a buffer of a certain size, and then uses the string copy function to copy a string much into the allocated buffer (note that argv[0] is the name of the program file). Running this code results in a segmentation fault and it is suggested that you do not compile and run it. argv[0] Consider this image: The referenced code above shows the main function creating a thread using the CreateThread API and the starting function of the thread to a function called ThreadProcedure. The ThreadProcedure() function is therefore the starting point of investigating this code with the cdb.exe debugger. When using the debugger, note that we have copied and pasted the debugging information into that directory. But good practice means setting the symbol path. CreateThread ThreadProcedure ThreadProcedure() So we will start debugging, noting that the ‘x’ command requires the accurate debugging information in the symbols generated by the /Zi switch of the compiler. To begin, we start out with the cdb.exe debugger: C:\Program Files\Debugging Tools for Windows> cdb.exe StackDesc.exe: c:\Program Files\Debugging Tools for Windows>cdb.exe StackDesc.exe Microsoft (R) Windows Debugger Version 6.8.0004.0 X86 CommandLine: StackDesc.exe Symbol search path is: c:\symbols Executable search path is: ModLoad: 00400000 0042c000 StackDesc.exe ModLoad: 77860000 77987000 ntdll.dll ModLoad: 77320000 773fb000 C:\Windows\system32\kernel32.dll (1720.120c): Break instruction exception - code 80000003 (first chance) eax=00000000 ebx=00000000 ecx=0012fb08 edx=778b9a94 esi=fffffffe edi=778bb6f8 eip=778a7dfe esp=0012fb20 ebp=0012fb50 iopl=0 nv up ei pl zr na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246 ntdll!DbgBreakPoint: 778a7dfe cc int 3 We notice that the Windows debugger automatically breaks in after initializing the process before execution begins. (You can disable this breakpoint by passing -g to cdb on the command line.) This is handy because at this initial breakpoint, your program has loaded, and you can set any breakpoints you’d like on your program before execution begins. 0:000> x stackdesc!*threadprocedure* *** WARNING: Unable to verify checksum for StackDesc.exe 00401090 StackDesc!ThreadProcedure (void *) 0:000> bp stackdesc!threadprocedure 0:000> g Starting new thread...Successfully created thread Breakpoint 0 hit eax=773648ff ebx=00000000 ecx=00000000 edx=00401005 esi=00000000 edi=00000000 eip=00401090 esp=008aff8c ebp=008aff94 iopl=0 nv up ei pl zr na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246 StackDesc!ThreadProcedure: 00401090 55 push ebp 0:001> kb ChildEBP RetAddr Args to Child 008aff88 77364911 00000000 008affd4 7789e4b6 StackDesc ThreadProcedure! 008aff94 7789e4b6 00000000 77e175a8 00000000 kernel32!BaseThreadInitThunk+0xe 008affd4 7789e489 00401005 00000000 00000000 ntdll!__RtlUserThreadStart+0x23 008affec 00000000 00401005 00000000 00000000 ntdll!_RtlUserThreadStart+0x1b The thread procedure is not the first function to execute. Rather, it is a function defined in kernel32.dll named BaseThreadInitThunk followed by a call to our first function. Now here is the tricky part. We have reached the starting point of our thread. We want to take a closer look at the stack to see how it is set up. The next instruction is to un-assemble the code, but we will see the typical program entry point 55 and the push ebp instruction. So what’s wrong with that? Well we are supposed to do a mov edi, edi instruction. This will be made clearer as we go on. Now recall that stack operations work recursively – that pop and push operations work from the top, while the stack is read from the bottom when we need to analyze a thread call stack. BaseThreadInitThunk push ebp mov edi, edi pop push 0:001> u stackdesc!threadprocedure StackDesc!ThreadProcedure: 00401090 55 push ebp 00401091 8bec mov ebp,esp 00401093 e87cffffff call StackDesc!ProcA(00401014) 00401098 68c82c4200 push offset StackDesc!’string’ (00422cc8) 0040109d e8ea000000 call StackDesc!wprintf (0040118c) 004010a2 83c404 add esp,4 004010a5 e886050000 call StackDesc!_getch (00401630) 004010aa 33c0 xor eax,eax There is no mov edi, edi instruction above the push ebp function. Right about now, any reader must be wondering what that instruction has to with anything anyway. Deem it sufficient for now to know that the register ebp stores the base pointer for any given frame. So if it is saved onto stack, it is the frame pointer to the stack frame that existed prior to the creation of the new stack frame (that is, the call instruction). Since the base pointer (ebp) needs to be retained for each frame, it gets pushed onto the stack. The next instruction moves the stack pointer (not the base pointer) to the ebp register to establish the beginning of a new stack frame (because a function was called). Now we are at the point at which we are ready to call the ProcA procedure via the call function. The ‘uf’ instruction is to disassemble the entire ProcA function, which was called after the stack pointer started with 008aff8c to then be decremented by 4 (32 bits, 4 bytes, a DWORD). The resulting value for esp is 008AFF88. Note the value in the previous kernel stack dump: push ebp ebp ebp ProcA uf 008aff8c DWORD esp 008AFF88 008aff88 77364911 00000000 008affd4 7789e4b6 StackDesc ThreadProcedure! This is why we are now in a position to disassemble the function ProcA: ProcA 0:001> uf stackdesc!ProcA StackDesc!ProcA: 004010b0 55 push ebp 004010b1 8bec mov ebp,esp 004010b3 83ec14 sub esp,14h) 004010ea 83c40c add esp,0Ch 004010ed 8b45f0 mov eax,dword ptr [ebp-10h] 004010f0 50 push eax 004010f1 68042d4200 push offset StackDesc!__xt_z+0x1c8 (00422d04) 004010f6 e891000000 call StackDesc!wprintf (0040118c) 004010fb 83c408 add esp,8 004010fe 8be5 mov esp,ebp 00401100 5d pop ebp 00401101 c3 ret The instruction sub esp, 0x14 (or decimal 20) means that there is a subtraction of 0x14 bytes from the stack pointer. Why? It is making room for local variables. Recall the source code for ProcA. It allocates the following local variables on the stack: sub esp int iCount = 3; int iNums[] = {1,2,3}; int iSum = 0; There are three ‘int’ variable declarations and assignments: one value for 4 bytes, 3 values for 12 bytes, and one value for 4 bytes, for a total of 20 bytes. So when we subtract 20 bytes from the stack pointer, the gap in the stack becomes reserved for the local variables declared in the function. After the stack pointer has been adjusted to make room for the local variables, the next set of instructions executed initializes the stack-based local variables to the values specified in the source code: int After the local variable initialization (where the data types are assigned values), we have a series of instructions that gets the application to make another function call:) Whenever a call instruction results in calling a function with parameters, the calling function is responsible for pushing the parameters onto the stack from right to left. This is how the above list (parameters) is passed from the ThreadProc function to the Sum function. Now notice the call to the printf function and the string. ThreadProc Sum printf string 0:001> du 00422d04 00422d04 "Sum is: %d." The ‘du’ instruction is meant to dump Unicode text, as evidenced by the “L” that precedes the statements in the source code. Many, if not every function prologues, begin with a mov edi, edi instruction. While it is simply a NOP instruction, the Advanced Debugging for Windows teaches that it could be used to enable hot patching. Hot patching refers to the capability to patch running code without the frustration of first stopping the component being patched. It is important because it avoids downtime in system availability. The concept behind this mechanism is that the 2 byte mov edi, edi instruction can be replaced by a “jmp” instruction that can execute whatever new code is desired. Opcode contains 7 jump instructions, 6 of which are conditional and one of which takes control by jumping straight to a landing address. Examine the following instruction and recall that the first function was contained in kernel32.dll: du L jmp 0:001> u kernel32!FindFirstFileExW kernel32!FindFirstFileExW: 77360a33 8bff mov edi,edi 77360a35 55 push ebp 77360a36 8bec mov ebp,esp 77360a38 81eccc020000 sub esp,2CCh 77360a3e a1acd43e77 mov eax,dword ptr [kernel32!__security_cookie (773e d4ac)] 77360a43 33c5 xor eax,ebp 77360a45 8945fc mov dword ptr [ebp-4],eax 77360a48 837d0c01 cmp dword ptr [ebp+0Ch],1 This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
http://www.codeproject.com/Articles/34017/Examining-Function-Calls-using-the-Console-Debugge
crawl-003
refinedweb
2,048
61.77
In this video, I walk through the process of packaging a toolbar and its script commands into an addon. I also touch on the subject of the command maps used by Softimage for non-self-installing script commands. Monthly Archives: April 2012 Screenshots of the week Soft Softimage 2013 Python shortcuts gotcha Softimage 2013 includes some changes to the Python shortcuts: from siutils import si si = si() # win32com.client.Dispatch('XSI.Application') from siutils import log # LogMessage from siutils import disp # win32com.client.Dispatch from siutils import C # win32com.client.constants The big difference is that si shortcut is now a function that returns the Application object. We had to make this change to fix some weird memory leak (the si object was out of scope in the module where it was created, and the Python parser could not properly delete it upon exit). Here’s a suggested workaround from the Dev team to maintain backward-compatibility: # ---python code begin --- from siutils import si if Application.Version().split('.')[0]>= "11": si = si() # win32com.client.Dispatch('XSI.Application') from siutils import log # LogMessage from siutils import disp # win32com.client.Dispatch from siutils import C # win32com.client.constants # --- python code end --- The if block executes only if the current Softimage version is later than 11, which corresponds to 2013 or later. hat tip: SS Friday Flashback #67 Pull. ObjectIDs and objects that don’t exist As noted by iamVFX on si-community, the new Python method Application.GetObjectFromID2 doesn’t deal well with IDs that don’t match an object that exists in the scene (in fact, it crashes Softimage). The JScript version, Application.GetObjectFromID simply returns null in that situation. So, it is probably best to use DataRepository.HasData to check if the ID represents a real object. Note that Dictionary.GetObject can also be used to get objects by ID. from siutils import si si = si() # win32com.client.Dispatch('XSI.Application') from siutils import log # LogMessage from siutils import disp # win32com.client.Dispatch from siutils import C # win32com.client.constants MAX_IDS = 2000 for i in range(0,MAX_IDS): if XSIUtils.DataRepository.HasData( i ): try: o = si.GetObjectFromID2( i ) log( "%s : %s" % (str(i), o.FullName) ) except: o = Application.Dictionary.GetObject( "object<%s>" % str(i) ) s = "%s : GetObjectFromID2 failed. Dictionary.GetObject found %s (%s)" % (str(i), o.Name, si.ClassName(o)) log( s ) else: log( "%s : No object with this ID exists" % str(i) ) A new, blank Softimage 2013 scene has 757 objects. For your reading pleasure, the full list is below the fold. The case of the missing crowd Our. The case of the missing Previous Version licenses: Here’s a video walkthrough of a license file, where I point out what to look for when you’re trying to understand a license file. In this video, I show you to tell what you’ve got in a license file. # ERROR : 2356 – This plug-in is not installed If”. Screenshots of the week whats… Checking the Softimage 2013 startup with Process Monitor The:
https://xsisupport.com/2012/04/
CC-MAIN-2021-49
refinedweb
503
59.3
Table of Contents Unit Tests is installed as part of VuFind's Composer development dependencies. Once installed, you will have a vendor/bin/phpunit command line tool that you can use to run tests (as well as some convenient Phing tasks to make running the tasks more convenient). 1.) Create a fresh copy of VuFind (i.e. git clone the repository to a fresh directory, then run “composer install”) 2.) Create a phing.sh script to automatically pass important parameters to Phing (see build.xml for other parameters that may be set here with the -D parameter): #!/bin/sh $VUFIND_HOME/vendor/bin/phing -Dmysqlrootpass=mypasswd $* If you are managing multiple VuFind test environments, it may make sense to have a different phing.sh in each VuFind directory (e.g. $VUFIND_HOME/phing.sh). In the more common scenario where you have just one test environment, it is usually more convenient to put this in your own home directory (i.e. ~/phing.sh). The examples below will assume that you are using a script in your home directory. Running tests after setup Follow these steps to run tests. Keep in mind that testing will create a test Solr index listening on port 8983. or ~/phing.sh phpunitfaster (the phpunit command will run tests and generate report data for use by continuous integration; phpunitfast will run tests more quickly by skipping reports; phpunitfaster is like phpunitfast but stops upon the first test failure instead of going through the full suite) 3.) ~/phing.sh shutdown This command will turn off the VuFind test instance created by step 1. The Faster Version If you don't want to run integration tests using a running VuFind, you can simply bypass the startup/shutdown steps and only=$VUFIND_HOME to make it work when running tests as a different user - dbtype (optional) - defaults to mysql, but can be set to “pgsql” to test with PostgreSQL instead - mysqlrootpass (optional) - the MySQL root password (needed for building VuFind's database, when dbtype = mysql) - vufindurl (optional) - the URL where VuFind will be accessed (defaults to if omitted) - mink_driver (optional) - the name of the Mink driver to use (e.g. “selenium” or “chrome”). Here's an example script from an Ubuntu environment using MySQL: #!= -Dmink_driver=chrome $* You will also need to install and run a few things, depending on how you want to run your tests. Selenium Testing Selenium testing can be useful for testing a variety of browsers, but headless Chrome browsing is generally easier to set up; see below for details on that. Before running your tests, you should download the Selenium server .jar file from here. It may also be called Grid. The Selenium Server needs to be started before you run phpunit with “java -jar [downloaded file]”. If you want to use a browser other than Firefox with Selenium, use the -Dselenium_browser=[name] setting to switch. For example, you can use -Dselenium_browser=chrome, though this requires you to have the ChromeDriver installed on your system. Download the version that matches your installed Chrome version and place the unpacked file along your bin path, such as in /usr/bin.. When running tests in a windowed environment, it's a good idea to open a couple of Terminal windows – that way you can run the Selenium server in one window (to watch it receive commands) while running your tests themselves in another (to see failures, etc.). Headless Chrome Testing Headless Chrome testing was introduced in VuFind 7. Versions of Chrome 80+ have a built-in remote debugging feature. You can run Chrome instead of the Selenium Server like so: google-chrome --disable-gpu --headless --remote-debugging-address=0.0.0.0 --remote-debugging-port=9222 --window-size="1366,768" --disable-extensions Then, when you run the phpunit Phing task, add the parameter -Dmink_driver="chrome" (or make sure this is set as a default in your phing.sh script). Troubleshooting This section contains some notes that may help you avoid problems when trying to run VuFind's test suite. - Some of VuFind's tests use a record ID containing a slash. In order for these tests to work, your Apache installation needs to be configured with “AllowEncodedSlashes on” set in the VirtualHost used for running the VuFind instance being tested. - When testing with PostgreSQL, it may be necessary to edit the pg_hba.conf file and change the line “local all all peer” to “local all all md5” to allow password-based database logins required my the test environment. Mink Tests with Chrome on macOS Running Chrome Headless A command to start Chrome before running the tests: /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --disable-gpu --headless --remote-debugging-address=0.0.0.0 --remote-debugging-port=9222 --window-size=1600,900 Required GNU Utilities Note: This applies only to VuFind versions before 8.0. VuFind's Mink tests up to version 7.x rely on options available only in GNU versions of find, sed and basename, so you will need to install those versions and make sure they are used when running Mink tests on macOS. One option is to use Homebrew to install them: brew install findutils gnu-sed gnu-basename Then prefix the phing command in phing.sh with a path that contains these utils. Here is an example that uses a static Apache configuration and mysql with passwordless authentication (both installed with Homebrew): PATH="/usr/local/opt/gnu-sed/libexec/gnubin:/usr/local/opt/findutils/libexec/gnubin:$PATH" $VUFIND_HOME/vendor/bin/phing -Dmysqlrootuser=$LOGNAME -Dmysqlrootpass="" -Dvufindurl= -Dmink_driver=chrome $* src directory (module/VuFind/src/VuFind). Tests should be placed in a directory that corresponds with the component being tested. For example, the unit tests for the \VuFind\Date\Converter class are found in “module/VuFind/tests/unit-tests/src/VuFindTest/Date/ConverterTest.php”. Support Classes/Traits The VuFindTest namespace (code found in module/VuFind/src/VuFindTest) contains several classes and traits which may be useful in writing new tests – a fake record driver in VuFindTest\RecordDriver, some helpful traits in VuFindTest\Feature, and some base test classes containing reusable patterns in VuFindTest\Unit. When working on integration tests, pay special attention to the VuFindTest\Feature\Live* traits (introduced during VuFind 8 development), which provide helpful methods for retrieving services that provide access to the real database and Solr instances for tests where actual data needs to be manipulated. Such tests should be written with caution, and should only be run in a safe test environment, as noted elsewhere. Mink Browser Automation If you want to write browser automation tests, you should extend \VuFindTest\Integration\MinkTestCase. This class will automatically set up the Mink. Related Video See the Testing, Part 1 and Testing, Part 2 videos for more on this topic.
https://vufind.org/wiki/development:testing:unit_tests
CC-MAIN-2022-05
refinedweb
1,122
52.29
in reply to Connection to the Monastery Well, my mom died this April, and I have had no time to do much on Perl Monks except read and vote and consider nodes. It was quite a shock when liz pointed out to me that my node Where and how to start learning Perl has ended up in the best nodes of the year. I'm quite ashamed to say that I'm still not in a position to answer questions from monks, while I am a pontiff saint (shriek) by now, completely undeservedly. There are monks that are "just" a monk or scribe that have much more Perl knowledge and experience than me. There's a real Perl connection here. Perl Monks works. It's a pity some people don't return (come on Abi, you are sorely missed!). I feel really attached to this community. Imagine how proud I am of this Best Node
http://www.perlmonks.org/?node_id=380415
CC-MAIN-2015-18
refinedweb
155
88.36
A JavaFX 2.0 Custom Control In previous blogs over the past few months, I've loosely described JavaFX 2.0 and its big changes, soon to be available to the general public in beta form. Given I'm involved in the early access program, I've had a chance to work extensively with the new release and its all-new pure Java API, and I have to say I truly love it. It's not that JavaFX Script wasn't good to me, but working with Java is much more familiar and comfortable, especially for large JavaFX projects. So far, I've built a few simple "crash and burn" experiment applications, as I like to call them. You know, the types of projects that you start to gain an understanding of something new, but quickly abandon because you've hacked the heck out of it. But I became proficient enough to port an existing custom JavaFX 1.3.1 control to JavaFX 2.0, with the help of the FXTranslator tool Oracle is working on to help translate existing JavaFX Script code to pure Java code that uses the new JavaFX 2.0 API. The results can seen in the comparison screen shot from my previous blog here. However, to best illustrate the new JavaFX 2.0 API, I decided to build a simple custom JavaFX control. I've chosen to build a type of button that I've seen used on the iPhone and iPad; one that I call an "arrow button." This type of control works like a button, which contains text and is clickable, but indicates that you'll be moved to the next screen or step, or back to the previous screen, because the button is in the shape of an arrow. To illustrate, the screen shot below shows a mock bank account application, where the button labeled "Accounts" is an arrow button. Intuitively, the arrow indicates that you'll move to a new screen view when clicked. Now, let's get to the code! The JavaFX 2.0 Application To begin, let's start with a simple JavaFX 2.0 application that will contain the components for the application shown in the picture above. In NetBeans, I've created a new project with a reference to the JavaFX 2.0 runtime JAR file. Next, I've create a Java class called Main, which has the usual public static void main() method to make it a Java application, as you would expect. To turn down the road to JavaFX 2.0 development land, simply extend the new Application class, add the required public void start method and a call to the JavaFX Launcher, as shown below: package arrowbutton; import javafx.application.Application; … public class Main extends Application { @Override public void start(Stage stage) { … } public static void main(String[] args) { Launcher.launch(Main.class, args); } } We'll get to the code inside the start method in a bit. For now, let's define the custom control. The JavaFX 2.0 Custom Arrow Button I tend to use interfaces where applicable, as it fosters good coding practice, and so it is with my custom control development. I begin to define the arrow button by creating its API interface, shown below: public interface ArrowButtonAPI { public static final int RIGHT = 1; public static final int LEFT = 2; public void setText(String text); public void setOnMouseClicked(EventHandler eh); public void setDirection(int direction); } The arrow has a direction it points (left or right), text that it displays, and behavior (it's clickable). You call setDirection to display an arrow button that points in the specified direction; setText to provide the button's text, and setOnMouseClicked to provide a callback in the form of a JavaFX 2.0 EventHandler, to be called when the user clicks on the button. Next, the control framework is created with a class that extends the javafx.scene.control.Control base class, as shown here: package arrowbutton; import javafx.event.EventHandler; import javafx.scene.control.Control; import javafx.scene.control.Skin; public class ArrowButton extends Control implements ArrowButtonAPI { private String title = ""; public ArrowButton() { this.storeSkin( new ArrowButtonSkin(this) ); } public ArrowButton(String title) { this(); this.title = title; ArrowButtonSkin skin = (ArrowButtonSkin)this.getSkin(); skin.setText(title); } public void setText(String text) { getSkin(getSkin()).setText(text); } public void setOnMouseClicked(EventHandler eh) { getSkin(getSkin()).setOnMouseClicked(eh); } public void setDirection(int direction) { getSkin(getSkin()).setDirection(direction); } private ArrowButtonSkin getSkin(Skin skin) { return (ArrowButtonSkin)skin; } … } There are two constructors: one that takes a String as a shortcut to set the button's text, and one without. In either case, the arrow button's skin is created and stored within the control. The skin defines the actual graphical elements of the button, as well as most of its behavior. The remainder of the ArrowButton control class implements the ArrowButtonAPI interface, and other required methods such as those that handle layout (not shown), and so on. Let's move on to the real fun; the skin class. The ArrowButtonSkin Class The arrow button control contains a skin class that implements the javafx.scene.control.Skin interface, which requires you to define a root node for your control (which contains the graph of nodes that make up your control's look and feel), and methods to gain access to the root node, and so on: public class ArrowButtonSkin implements Skin<ArrowButton>, ArrowButtonAPI { static final double ARROW_TIP_WIDTH = 5; ArrowButton control; String text = ""; Group rootNode = new Group(); Label lbl = null; int direction = ArrowButtonAPI.RIGHT; EventHandler clientEH = null; public ArrowButtonSkin(ArrowButton control) { this.control = control; draw(); } public ArrowButton getControl() { return control; } public Node getNode() { return rootNode; } public void dispose() { } ////////////////////////////////////////////////////////////// public void draw() { … } public void setText(String text) { this.text = text; lbl.setText(text); // update button draw(); } public void setOnMouseClicked(EventHandler eh) { clientEH = eh; } public void setDirection(int direction) { this.direction = direction; // update button draw(); } } I've listed the required methods first, followed by the ArrowButtonAPI methods (which are called from the ArrowButton control class), as well as an additional method, draw. You don't need to implement it this way — in fact, the intent is to actually create the node graph for your custom control in the getNode method — but I prefer to do all of the layout this way so I can call it when certain things change. Alternatively, JavaFX binding can be used to eliminate that, but since it's still in a state of flux in JavaFX 2.0, I've avoided it so far. As binding progresses (and I get my head around how it actually works in the new API), I'll include a comprehensive example at a later date. For now, let's look in more detail at the draw method, since this is the where the real action takes place. First, the code creates a label, stores its width and height (which varies by the text length and font size), and positions it within the control using offsets: if ( lbl == null ) lbl = new Label(text); double labelWidth =lbl.getBoundsInLocal().getWidth(); double labelHeight = lbl.getHeight(); lbl.setTranslateX(2); lbl.setTranslateY(2); Next comes the rather lengthy bit of code to create the lines and curves that make up the arrow button itself. This code uses the JavaFX 2.0 Path class to define a custom shape that makes up the control's body. The shape is comprised of a starting point, a top flat line, a curved line for the top part of the arrow, another curved line for the bottom part of the arrow, a bottom flat line, and a vertical line to close the shape: // Create arrow button line path elements Path path = new Path(); MoveTo startPoint = new MoveTo(); double x = 0.0f; double y = 0.0f; double controlX; double controlY; double height = labelHeight; startPoint.setX(x); startPoint.setY(y); HLineTo topLine = new HLineTo(); x += labelWidth; topLine.setX(x); // Top curve controlX = x + ARROW_TIP_WIDTH; controlY = y; x += 10; y = height / 2; QuadCurveTo quadCurveTop = new QuadCurveTo(); quadCurveTop.setX(x); quadCurveTop.setY(y); quadCurveTop.setControlX(controlX); quadCurveTop.setControlY(controlY); // Bottom curve controlX = x - ARROW_TIP_WIDTH; x -= 10; y = height; controlY = y; QuadCurveTo quadCurveBott = new QuadCurveTo(); quadCurveBott.setX(x); quadCurveBott.setY(y); quadCurveBott.setControlX(controlX); quadCurveBott.setControlY(controlY); HLineTo bottomLine = new HLineTo(); x -= labelWidth; bottomLine.setX(x); VLineTo endLine = new VLineTo(); endLine.setY(0); path.getElements().add(startPoint); path.getElements().add(topLine); path.getElements().add(quadCurveTop); path.getElements().add(quadCurveBott); path.getElements().add(bottomLine); path.getElements().add(endLine); Next, to add a little UI flare (this is JavaFX, after all), the arrow shape is filled with a linear gradient, consisting of two grey colors. This is done by defining the stop points of the gradient within an array, and the type of gradient you want (linear or radial): // Create and set a gradient for the inside of the button Stop[] stops = new Stop[] { new Stop(0.0, Color.LIGHTGREY), new Stop(1.0, Color.SLATEGREY) }; LinearGradient lg = new LinearGradient( 0, 0, 0, 1, true, CycleMethod.NO_CYCLE, stops); path.setFill(lg); You can define as many stop points and colors as you'd like in a gradient. Finally, the text label and the custom shape path are added as child nodes to the root node with a single statement: rootNode.getChildren().setAll(path, lbl); To pass along mouse clicks to the client callback EventHandler, we need to provide our own EventHandler to handle clicks on our control's root node. The simple way to handle this is with an anonymous inner class, shown here: rootNode.setOnMouseClicked(new EventHandler<MouseEvent>() { public void handle(MouseEvent me) { // Pass along to client if an event handler was provided if ( clientEH != null ) clientEH.handle(me); } }); The EventHandler we create handles MouseEvents (indicated with Java Generics), and passes along the event notifications if the client has provided its own EventHandler. All that's left is to write code that actually uses the arrow button. Let's look at that now. Using the Arrow Button Remember the start() method at the beginning of this blog? Let's fill it in now to create the JavaFX 2.0 application shown in the screen shot above. Notice that that application's Stage object is passed as a parameter; the JavaFX scene graph exists almost exactly as it did in earlier versions. To set the application title, you set the Stage's title: stage.setTitle("The JavaFX Bank"); Next, I've chosen a Group layout for this application. I could have otherwise chosen HBox, VBox, GridPane, and so on, but I prefer to handle the layout manually for this example. Next, I create a normal button (the Close button), an arrow button, and a label with some description text: // Create the node structure for display Group rootNode = new Group(); Button normalBtn = new Button("Close"); normalBtn.setTranslateX(140); normalBtn.setTranslateY(170); final Stage s = stage; normalBtn.setOnMouseClicked(new EventHandler<MouseEvent>() { public void handle(MouseEvent me) { // Close the stage (accessible through the scene graph) Node node = (Node)me.getSource(); node.getScene().getStage().close(); } }); // Create a directional arrow button to display account information ArrowButton accountBtn = new ArrowButton("Accounts"); accountBtn.setDirection(ArrowButton.RIGHT); accountBtn.setTranslateX(125); accountBtn.setTranslateY(10); accountBtn.setOnMouseClicked(new EventHandler<MouseEvent>() { public void handle(MouseEvent me) { // ... } }); // Handle arrow button press accountBtn.setOnMouseClicked(new EventHandler<MouseEvent>() { public void handle(MouseEvent me) { System.out.println("Arrow button pressed"); } }); // Some description text Label description = new Label( "Thanks for logging into the\n" + "JavaFX Bank. Click the button\n" + "above to move to the next \n" + "screen, and view your active \n" + "bank accounts."); description.setTranslateX(10); description.setTranslateY(50); Notice I created EventHandlers to know when the Close and arrow buttons are clicked. Next, the three controls are added to the scene graph as follows: rootNode.getChildren().add(accountBtn); rootNode.getChildren().add(description); rootNode.getChildren().add(normalBtn); Finally, the Stage's Scene is created, the scene graph is completed, and the stage is made visible: Scene scene = new Scene(rootNode, 200, 200); stage.setScene(scene); stage.setVisible(true); Voila! There you have a shiny new JavaFX 2.0 application with a custom JavaFX control that looks and works like one of those fancy arrow buttons on the iPhone. In a future blog, I plan to cover binding in JavaFX 2.0, as well as property change listeners, which can be used as an alternative to binding. Happy coding! — EJB
http://www.drdobbs.com/tools/a-javafx-20-custom-control/229400781
CC-MAIN-2014-15
refinedweb
2,050
54.83
What is Out of Memory Killer ? Major distribution kernels set the default value of /proc/sys/vm/overcommit_memory to zero, which means that processes can request more memory than is currently free in the system. This is done based on the heuristics that allocated memory is not used immediately, and that processes, over their lifetime, also do not use all of the memory they allocate. Without overcommit, a system will not fully utilize its memory, thus wasting some of it. Overcommiting memory allows the system to use the memory in a more efficient way, but at the risk of OOM situations. Memory-hogging programs can deplete the system’s memory, bringing the whole system to a grinding halt. This can lead to a situation, when memory is so low, that even a single page cannot be allocated to a user process, to allow the administrator to kill an appropriate task, or to the kernel to carry out important operations such as freeing memory. In such a situation, the OOM-killer kicks in and identifies the process to be the sacrificial lamb for the benefit of the rest of the system. So, the OOM Killer or Out of Memory killer is a linux kernel functionality ( refer to kernel source code mm/oom_kill.c ) which is executed only when the system starts going out of memory. How to Control which process to avoid getting Killed ? Users and system administrators have often asked for ways to control the behavior of the OOM killer. To facilitate control, the /proc/ Lets try to create a simple process as, vim main.c #include <stdio.h> int main(int argc, char **argv) { while(1); } $ gcc simple_process main.c $ ./simple_process & This will create a simple process in background on this terminal, Now lets check the process ID of this simple process as, $ pgrep simple_process 16350 $ cd /proc/16350 $ cat oom_adj 0 $ sudo echo -17 > oom_adj $ cat oom_adj -17 How linux decides which process should get killed first ? The process to be killed in an out-of-memory situation is selected based on its badness score. The badness score is reflected in /proc/ We have chrome browser running which consumes more memory compared to our process simple_process so lets check the PID of chromium-browse as, $ top | grep chromium-browse 17720 myuser 20 0 480400 154296 91384 R 11.8 3.8 0:54.33 chromium-browse Lets check oom_score of chromium browser as, $ cat /proc/17720/oom_score 318 and lets check the oom_score of the simple_process we created above as, $ cat /proc/16350/oom_score 0 Which shows that chromium-browsers oom_score is more than our process simple_process so browser has more chances of getting killed when OOM Killer gets executed. How to invoke OOM Killer manually for understanding which process gets killed first For this, please refer to our post at How to invoke OOM Killer manually for understanding which process gets killed first Reference –
https://www.lynxbee.com/understanding-linux-oom-killer-and-avoiding-perticular-process-from-being-killed-in-case-of-out-of-memory/
CC-MAIN-2018-22
refinedweb
487
56.59
1 –: <p><u>Normal ASP.NET Binding</u></p> <asp:ListView <LayoutTemplate> <table id="Table1" runat="server"> <tr id="Tr1" runat="server"> <td id="Td1" runat="server"> Item ID </td> <td id="Td2" runat="server"> Item Name </td> </tr> <tr id="ItemPlaceholder" runat="server"> </tr> </table> </LayoutTemplate> <ItemTemplate> <tr> <td> <asp:Label </asp:Label> </td> <td> <asp:Label </asp:Label> </td> </tr> </ItemTemplate> </asp:ListView> Code behind populating the data: Create a class file called "Product" in a separate file and it will be added in "App_code" folder in the project. This type will be used in the code running at the back end for populating the data of the type "Product". /// <summary> /// Summary description for Product /// </summary> public class Product { public int ID { get; set; } public string ItemName { get; set; } } /// <summary> /// Page Load Method /// </summary> /// <param name="sender"></param> /// <param name="e"></param> protected void Page_Load(object sender, EventArgs e) { //bind the data to the list view by calling the method PopulateProducts. displayData.DataSource = PopulateProducts(); displayData.DataBind(); } /// <summary> /// Method to create new product /// </summary> /// <returns></returns> public IList < Product > PopulateProducts() { return new List < Product > { new Product { ID = 1, ItemName = "Item1" }, new Product { ID = 2, ItemName = "Item2" }, new Product { ID = 3, ItemName = "Item3" }, new Product { ID = 4, ItemName = "Item4" } }; } Code Execution: Your output will look as follows: For more details, click on the link: 3 –Performance Tuning in ASP.Net Answer The following are the points which one needs to check before collecting data as a .NET developer. Set debug=false in web.config.. <trace enabled="false" requestLimit=”10” pageoutput=”false” traceMode=”SortByTime” localOnly=”true”> Choose Session State management carefully One extremely powerful feature of ASP.NET is its ability to store session state for the users for any Web Applications. Since ASP.NET manages session state by default, we pay the cost in memory, even if you don’t use it. In other words, whether you store your data in process or on a state Server or in a SQL Database, session state requires memory and it’s also time consuming when you store or retrieve data from it. You may not require session state, when your pages are static or when you do not need to store information captured in the page. In such cases, where there is no need to use session state, disable it on your Web form using the directive: <@%Page EnableSessionState="false"%> For more details, click on the link:: - App_Start folder This folder contains the Application configuration details such as routing, authentication, filtering of URL and so on. Controller This folder contains the Controller and their methods. The Controller is responsible for processing the user request and returns output as a view. Models This folder contains the entities or properties used to store the input values. Step 2:Create Model Class. EmpModel.cs public class EmpModel { public string Name { get; set; } public string City { get; set; } } Step 3:Add Web API Controller. HomeController.cs using System; using System.Web.Http; namespace CreatingWebAPIService.Controllers { public class HomeController: ApiController { [HttpPost] public bool AddEmpDetails() { return true; //write insert logic } [HttpGet] public string GetEmpDetails() { return "Vithal Wadje"; } [HttpDelete] public string DeleteEmpDetails(string id) { return "Employee details deleted having Id " + id; } [HttpPut] public string UpdateEmpDetails(string Name, String Id) { return "Employee details Updated with Name " + Name + " and Id " + Id; } } } The following is the default URL structure of ASP.NET Web API defined into the WebApiConfig.cs file. Step 5:Testing Web API REST service using REST client. Now let us test the application using REST client as: In the image shown above Web API REST Service-HTTP POST method returns the 200 Status code which means REST service is successfully executed. GET Method In the image depicted above, Web API REST Service- HTTP GET method returns the 200 Status code which means REST service is successfully executed and returns the XML format output. Note - The Web API REST service by default returns the output as per the browser default message header such as XML or JSON. For more details, click on the link: 5- Support for WebSockets protocol Answer Web Sockets - Create a single bi-directional connection between a Client & Server. It means that the client can send a request to the Server and forget, when the data is ready at the Server; where the Server will send it to the Client. - Native to the Browser, which makes them light weight and easy to implement. - Uses its own protocol and can tunnel thru Firewalls and Proxy with no extra effort. i.e Client does not use HTTP request but when a new WebSocket connection establishes, then the Browser establishes the HTTP connection with the Server and subsequently upgrades that connection to a dedicated WebSocket connection by setting a tunnel passing thru Firewall and Proxies. - Once the connection is established (as depicted above), it is a two way channel on which the Client and Server can communicate. Creating a WebSocket is as easy as the code shown below: var ws = new WebSocket("ws:<URL>"); Once the connection is established, there are the methods to control i terms off sending the data and monitoring the connection. ws.onopen = function(){ } // Connection Open ws.onmessage = function(){ } // received update ws.onopen = function(){ } // Connection Close //Funtions of WebSocket Object ws.send(<text>); ws.close(); Advantages: - Since the Firewall is bypassed, the streaming is very easy and can be done through any connection. - Does not need a separate connection for up-stream and down-stream. - Can be used with any client like AIR & Flex which comes with JavaScript support. Limitations: - All Browsers do not support WebSockets as of now.? - Synchronous represents a set of activities that starts happening together at the same time. - A synchronous call waits for the method to complete before continuing with the program flow. In general, asynchronous programming makes sense in two cases as: - If you are creating a UI intensive application in which the user experience is the prime concern. In this case, an asynchronous call allows the user interface to remain responsive. Unlike as shown in Figure 1-1. - If you have other complex or expensive computational work to do, you can continue; interact with the application UI while waiting for the response back from the long-running task. Asynchronous Programming Model Pattern - Relies on two corresponding methods to represent an asynchronous operation: BeginMethodName and EndMethodName. - Most often you must have seen this; while using delegates or method invocation from a Web Service.. <configuration> <runtime> <gcServer enabled="true"/> </runtime> </configuration> For more details, click on the link:: - Color - Date - Datetime - Month - Number - Range - Search - Tel - Time - URL - Week In order to define, all these new input types the "<input>" Tag is used. Input type Color Description This input type allows the collection of a color of the form. If a Browser supports this input type then the intention is that clicking in the textfield will result in a Color Chooser that pops up. The input element with a type attribute, whose value is "color" represents a color-well control to set the element’s value to a string representing a simple color. Note: Color keywords (for example, strings such as "red" or "green") are not allowed. Syntax <input type="color" name="some-name"/> Example of color input type <!DOCTYPE html> <html lang="en" xmlns=""> <head> <meta charset="utf-8" /> <title>Color</title> </head> <body> <h2>Implementation Of color as New Input Type</h2> <form action="form.asp"> Choose your favorite color: <input type="color" name="favcolor"><br> <br> <input type="submit"> </form> </body> </html> Input type Date Description The date type allows the user to select a date. The input element with a type attribute whose value is "date" represents a control for setting the element’s value to a string, representing a date. In simple words, we can say that this input type allows collection of a date. Syntax <input type="date" name="some-name"/> Input type DateTime Description The datetime type allows the user to choose a date and time (with time zone).The input element with a type attribute, whose value is "datetime" represents a control to set the element’s value to a string representing a global date and time (with timezone information). Syntax <input type="datetime" name="some-name"/> - Input type Email Description The Email type is used for input fields that should contain an e-mail address. This gives liberty to have a field to fill E-mail address(es). This input type allows collection of an Email address. If the "list" attribute is not specified, then the intention is that the Browser supplies some help in entering a legal email address (e.g., the iPhone Browser uses an Email-optimized keyboard) and/or validation on submission. Syntax <input type="email" name="some-name"/> <input type="email" list="email-choices" name="some-name"/> <datalist id="email-choices"> <option label="First Person" value="abc@example.com"> <option label="Second Person" value="xyz@example.com"> <option label="Third Person" value="pqr@example.com"> … </datalist> - Input type number Description The number type is for the input fields that should contain a numeric value. This input type allows a collection of a number (either integer or floating point). In other words, input type number means picking a number. The input element with a type attribute, whose value is "number" represents a precise control for setting the element’s value to a string representing a number. Syntax <input type="number" min="0" max="20" step="2" value="10" name="some-name"/> Input type month Description The month type allows the user to choose a full month and an year. The input element with a type attribute, whose value is "month" represents a control to set the element’s value to a string representing a month. Syntax <input type="month" name="some-name"/> Input type range Description This input type allows a collection of a number (either integer or floating point). All known Browsers that support this use a Slider. The exact value is not displayed to the user unless you use JavaScript. Hence, use the number (Spinner) input type if you want to let the user choose an exact value. Browsers are supposed to use a Horizontal Slider unless you attach CSS, that specifies a smaller width than height, in which case, they are supposed to use a Vertical Slider that sets to a certain value/position. Syntax <input type="range" name="some-name"/> Input type tel Description It is used to enter a telephone number. This input type is intended to help you collect a telephone number. Since the format of telephone numbers is not specified, it is not clear how a normal Browser would help you with this. However, a cell phone might use an on-screen keyboard that is optimized for phone number input. Syntax <input type="tel" name="some-name"/> Input type time Description It allows the user to select a time. The input element with a type attribute, whose value is "time" represents a control to set the element’s value to a string representing a time (with no timezone information). Syntax <input type="time" name="some-name"/> Input type week Description The week type allows the user to select a week and an year. In other words, it means picking up a specific week. Syntax <input type="week" name="some-name"/> - Input type Search Description This input type is intended to help you collect a string for a search. Since search queries are free-form text, there is never any help in inputting characters and you never have any validation on submission. However, on some platforms, search fields should look slightly different than regular textfields (e.g., with rounded corners instead of with square corners). Define a search field (like a site search, or Google search). Syntax <input type="search" name="some-name"/> Input type URL Description This input type allows a collection of an absolute URL. If the "list" attribute is not specified, then the intention is that the Browser supplies some help in entering a legal URL (e.g., the iPhone browser uses a URL-optimized keyboard) and/or validation on the submission. If the "list" attribute is specified, then the intention is that the Browser allows the user to choose among a set of URLs defined separately with the "datalist" element. Syntax <input type="url" name="some-name"/> <input type="url" list="url-choices" name="some-name"/> <datalist id="url-choices"> <option label="HTML5 Spec" value=""> <option label="Some other URL" value=""> <option label="Yet Another URL" value=""> … </datalist> For more details, click on the link: 10 – Bundling and Minification in ASP.NET Answer: function sum(a,b) { return (a + b); //will return the sum of numbers } Generally, when we write this function in a file, the entire JavaScript file is downloaded as it is. With a minified JavaScript file, the code shown above will look like: function sum(a,b){return (a+b);} Clearly, it reduces the size of the file. When to apply bundling - Having many common JavaScript/CSS files used throughout the Application - Too many JavaScript/CSS files in use in a page For more details, click on the link: 评论 抢沙发
http://www.shellsec.com/news/24627.html
CC-MAIN-2017-13
refinedweb
2,195
51.28
Last Updated on August 28, 2020 The. Kick-start your project with my new book Deep Learning for Time Series Forecasting, including step-by-step tutorials and the Python source code files for all examples. Let’s get started. - Updated Apr/2019: Updated the link to dataset. step -. Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.ful. Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.. Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.. Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.. Hi Jason! I have some second thoughts about the stateless lstm. The main purpose of the LSTM is to utilize its memory property. Based on that what is the point of a stateless LSTM to exist? Don’t we “convert” it into a simple NN by doing that? In other words.. Does the stateless use of LSTM aim to model the sequences (window) in the input data – if we apply shuffle=False in the fit layer in keras – (eg. for a window of 10 time steps capture any pattern between 10-character words)? If yes why don’t we convert the initial input data to match the form of the sequencers under inspection and then use a plain NN (by adding extra columns to the original dataset that are shifted)? If we choose to have shuffle = True then we are losing any information that could be found in our data (e.g. time series data – sequences), don’t we? In that case I would expect in to behave similarly to a plain NN and get the same results between the two by setting the same random seed. Am I missing something in my thinking? The “stateless” LSTM just means that internal state is reset at the end of each batch, which works well in practice on many problems. Really, maintaining state is part of the trade-off in backprop through time and input sequence length. Shuffle applies to samples within a batch. BPTT really looks at time steps within a sample, then averages the gradient across the batch. Does that help? Regarding the shuffling according to the documentation “shuffle: boolean or str (for ‘batch’). Whether to shuffle the samples at each epoch”. So it first resamples the data (i.e. changes the original order) and then on the new order creates the batches. Do I get it right? Let me restate my previous question because I might have confused you. Suppose the dataset has only one variable X and one label Y. I actually want to know whether a stateless LSTM of batch size say 5 and timestep 1 is equivalent to a NN that will get as input X and X.shift(1) (so in total 2 inputs (2 columns) although they point to the same original X column of my dataset and batch size also 5. Thanks in advance and congrats on your helpful website! Even if you frame the sequence problem the same way for LSTMs and MLPs, the units inside each network are different (e.g. LSTMs have memory and gates). In turn, results will probably differ. I would encourage you to test both types of networks on your problem, and most importantly, brainstorm many different ways to frame your sequence prediction problem to see which works best. My question doesn’t have much to do with LSTM. “Transform the observations to have a specific scale. Specifically, to rescale the data to values between -1 and 1 to meet the default hyperbolic tangent activation function of the LSTM model.” I have often ran into this comment where data should be prepared to fit the Y range of the activation function. Consequently, I’ve been trying to find the intuition or theoretical reason behind such transformation, but could not find any besides discussions on SO. I would much appreciate if you could provide your wisdom regarding following questions. a) why such transformation is necessary b) is the transformation applicable to all other activation functions (e.g. [0,1] for sigmoid and on)? Sincerely, YJ Generally, normalizing or standardizing data doe help with neural nets. I would recommend testing with and without it and see how it impacts model skill. Hi Jason, what do you think of using a callback to reset states so that it is possible to use model.fit() on the entire set? Is there any reason to not do this? Best regards If it’s a good fit for your model/setup, go for it. Hello, Thanks for the wonderful post . I have a couple of questions on the way these long sequences have to be handled when traininig a LSTM network : Suppose I have a sequence classification task for which I have a set of 100 sequences with each sequence of varying length in the range 1000 – 2000 (samples). I would need the sequence classification task to identify sequences at regular intervals say every 10 or 20 samples within a sequence Input Sequences : Sequence 1 : s1_1,s1_2,s1_3………………….s1_1000 Sequence 2 : s2_1,s2_2,s2_3………………….s2_1500 Sequence 3 : s3_1,s3_2,s3_3………………….s3_2000 . . . Sequence 100 : s100_1,s100_2,s100_3………………….s100_1100 a. How do I preprocess the data i.e break down the data in to subsequences for training ? Is the sub-sequence length based on the dependency of output over the number of input Samples ? b. If my output of Sequence classification depends on say last 20 samples within a sequence, how do I Split the input data sequence for training ? i.Should it be this way : Overlapping sub-sequences and Stateless – LSTM s1_1,s1_2,…….s1_20 s1_2,s1_3……..s1_21 s1_3,s1_4……..s1_22 ii.Or Should it be this way : Non-overlapping sub-sequences and Stateful – LSTM s1_1,s1_2,…….s1_20 s1_21,s1_22……..s1_40 s1_41,s1_42……..s1_60 Which among the above mentioned options(i and ii)is right? and Why ? Does the approach ii. learn dependencies longer than 20 samples … as the state is carry forwarded after each sub-sequence? If yes till what extent (number of samples … say 60 or 100)? c. If my output of Sequence classification depends on say last 900 samples within a sequence, can the LSTM solve/address this problem ? If yes what would the split of a single training sequence be in such a situation and what would be the LSTM implementation be(Stateful or Stateless)? This post will give you ideas on how to prepare your data: And this post: I think you mean 20 time steps, not 20 samples. I would further encourage you to explore many different framings of your sequence classification problem and see what works best for your specific data. Hi Dr. Jason, Thanks alot for your tutorials they are very helpful. I am trying to implement the stateless LSTM without shuffle. I basically used your code as it is with only the changes that you suggested, but unfortunately i am getting the following error: GPU sync failed By any chance do you have any idea why am I getting this error? Thank you Nat Looks like an issue with your Python environment. Perhaps try posting the error to stackoverflow? Hi Jason, I am working on an industry problem in which we are trying to predict scrap rates in a manufacturing line based on large datasets (machine data, sensor data, …). One approach is to model the problem as a time series (sequence) regression problem in RNNs. I am frequently using your blog as incredily helpful resource (thanks!). The prototyping is done in Keras and therefore, I have the following question: Two parameters suggest to influence sequence learning problems: – batch_size in model.fit(batch_size) – time_steps in layers.LSTM(input_shape(samples,time_steps,obs) -> If batch_size < time_steps , doesn't the internal state get reset too frequently and causes a problem with the BPTT? As an example, suppose we have a sequence of length 50 (time_steps=50) and a training batch size of 25 e.g. for stochastic gradient descent (batch_size=25). Even though TBPTT(50,50) is set up to learn sequence patterns from 50 time steps, can the internal state keep the information? Thanks much and regards from Germany Max Time steps and batch size are not related. Batch size covers the number of samples, where time steps refers to one sample. Does that help? Hi Jason, What about variable batch size for LSTM stateful Perhaps you can summarize the content of the link for me? Oh, I will try to summarize here so if something not clear please tell me. First, i have solved my problem in the link thanks to your tutorial on saving weights for prediction in different batch size LSTM. yet i have issue in accuracy. second, i have 4 features, that i am using the last one in training and also as a target. thanks to your tutorials i was able to reshape the data and make the below stateful architecture, note its window size of 10 n_batch = X_train[0].shape[0] n_epoch = 25 n_neurons = 256 model = Sequential() model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X_train[0].shape[1], X_train[0].shape[2]), stateful=True)) model.add(Dense(1)) model.compile(loss=’mae’, optimizer=’adam’, metrics=[‘accuracy’]) # fit network for i in range(len(X_train)): if X_train[i].shape[0] == 250: model.fit(X_train[i], y_train[i], epochs=n_epoch, batch_size=n_batch, verbose=1, shuffle=False) #model.reset_states() note that commenting the reset_states or not doesn’t affect the accuracy Third, i am not sure if i am calculating the accuracy and loss right, also if i am using the proper optimizer, also what activation functions to use, should i stack more LSTM. why stateful doesn’t affect the learning. Also the last feature has large positive and negative values, not more than 1000. so what is the proper way to normalize that. i tried to normalize using the below line, but nothing changed, same 0% accuracy!! m15 = m15.assign(NormDirHeight=(m15[‘DirHeight’]-m15[‘DirHeight’].mean())/m15[‘DirHeight’].std()) If your problem is regression (predicting a quantity instead of a label), you cannot calculate accuracy, more here: Perhaps normalize the data using the MinMaxScaler from the scikit-learn library? First, Thank you so much and your tutorials are super awesome and super fun, this is something i wanted to say. Sorry for dumping the log here Okay, so using mae for metrics, i got the below, without MinMaxScaler Epoch 1/25 250/250 [==============================] – 1s 3ms/step – loss: 140.9831 – mean_absolute_error: 140.9831 Epoch 2/25 250/250 [==============================] – 0s 440us/step – loss: 140.9362 – mean_absolute_error: 140.9362 Epoch 3/25 250/250 [==============================] – 0s 464us/step – loss: 140.8762 – mean_absolute_error: 140.8762 Epoch 4/25 250/250 [==============================] – 0s 456us/step – loss: 140.8182 – mean_absolute_error: 140.8182 Epoch 5/25 250/250 [==============================] – 0s 464us/step – loss: 140.7660 – mean_absolute_error: 140.7660 Epoch 6/25 250/250 [==============================] – 0s 440us/step – loss: 140.7117 – mean_absolute_error: 140.7117 Epoch 7/25 250/250 [==============================] – 0s 456us/step – loss: 140.6606 – mean_absolute_error: 140.6606 Epoch 8/25 250/250 [==============================] – 0s 440us/step – loss: 140.6121 – mean_absolute_error: 140.6121 Epoch 9/25 250/250 [==============================] – 0s 512us/step – loss: 140.5622 – mean_absolute_error: 140.5622 Epoch 10/25 Okay, so using mae for accuracy metrics, i got the below, with MinMaxScaler (-1,1) Epoch 1/25 250/250 [==============================] – 1s 3ms/step – loss: 0.0601 – mean_absolute_error: 0.0601 Epoch 2/25 250/250 [==============================] – 0s 444us/step – loss: 0.1010 – mean_absolute_error: 0.1010 Epoch 3/25 250/250 [==============================] – 0s 456us/step – loss: 0.0610 – mean_absolute_error: 0.0610 Epoch 4/25 250/250 [==============================] – 0s 456us/step – loss: 0.0732 – mean_absolute_error: 0.0732 Epoch 5/25 250/250 [==============================] – 0s 480us/step – loss: 0.0759 – mean_absolute_error: 0.0759 Epoch 6/25 250/250 [==============================] – 0s 524us/step – loss: 0.0619 – mean_absolute_error: 0.0619 Epoch 7/25 250/250 [==============================] – 0s 480us/step – loss: 0.0597 – mean_absolute_error: 0.0597 Epoch 8/25 250/250 [==============================] – 0s 484us/step – loss: 0.0613 – mean_absolute_error: 0.0613 Epoch 9/25 But if i am not calculating accuracy how could i know that this is good or bad ? Also why is it always the same either reset_states() or without it ? The Stateful is very confusing, nevertheless going from here to predicting multiple steps using TimeDistrubed is a whole new fun journey xD Good question, I answer it here: I was able to show the actual and predicted after applying inverse transform, clearly its a mess. Is it the data or the architecture, how could i know ? >Expected=182.0, Predicted=-3.9 >Expected=-73.0, Predicted=-31.3 >Expected=-49.0, Predicted=-10.6 >Expected=48.0, Predicted=-7.8 >Expected=46.0, Predicted=-12.8 >Expected=-41.0, Predicted=-19.9 >Expected=-66.0, Predicted=-13.6 >Expected=22.0, Predicted=-1.8 >Expected=87.0, Predicted=-14.1 >Expected=47.0, Predicted=-33.3 >Expected=31.0, Predicted=-30.5 Okay, i have one last question, How could i know that the problem is not in my data ? i mean that i always get the same bad accuracy, is it the data ? is there something that check the consistency of the data that it would be valid to make a regression function for it with LSTM ? Start with a naive baseline, then evaluate models against that baseline to see if they are skillful. I explain this process here: Thank you so much, i have read the article carefully, analyzed how much of it i did and which steps did i skip, and put an action plan. really great blog and thanks for your nice and fast replies) Thanks. Thanks for great post! I have a question. I try to train a lstm autoencoder on signals. I wonder what’s difference between 1) I use a stateless network and send whole signal as input and set batch size to 1 2) I use a stateful network and in for loop reset state after train_on_batch en each input signal ?? Not much. Hi Jason, In Stateful model, callbacks of Keras(EarlyStopping, ReduceLROnPlateau, and ModelCheckpoint) not working. As we are resetting the states after each iteration and in between each iteration there is only one epoch, hence Keras is not able to find logs of previous epochs, hence not able to apply above mention callbacks. So, How can I implement EarlyStopping, ReduceLROnPlateau, and ModelCheckpoint? If you are driving the epochs manually, then the perhaps callbacks are not needed (just an idea?), you can run their operations manually as well. E.g. evaluate the model and see if you the next iteration is required or not. Hello Brownlee, Have you ever read any paper discuss the stateful and stateless implementation? Thanks What do you mean exactly? sir, stateful and stateless lstm possible for multivariate time series dataset Yes. Thanks..useful blog ..useful post…great Thanks. I’m happy it helped. Hey Jason, Thanks for all the resources you post, they’re amazing. I’m trying to apply a custom loss function to the LSTM network, nothing fancy, just RMSE with inverted min max scaler, using the following function: def RMSE_inverse(y_true, y_pred): y_true = (y_true – K.constant(scaler.min_)) / K.constant(scaler.scale_) y_pred = (y_pred – K.constant(scaler.min_)) / K.constant(scaler.scale_) return K.sqrt(K.mean(K.square(y_pred – y_true))) For some reason I’m getting a different loss value to the RMSE calculation in your example. Epoch 1/1 22/22 [==============================] – 0s 4ms/step – loss: 75.2212 – val_loss: 113.2487 1) Test RMSE: 138.084 Pretty much all of the rest of the code is identical to your example. Any help would be great! Cheers. Thanks! The epoch loss is an average across the batches I believe. Ah okay, that makes sense. I’m trying to compare the LSTM to a naive forecast. Do you think the epoch loss or the Test RMSE is better for comparison? Thanks! Tune the learning using loss, compare using RMSE. Hey Jason, Great tutorial. I just have one question that in step 3 of data preparation you said that: “Transform the observations to have a specific scale. Specifically, to rescale the data to values between -1 and 1 to meet the default hyperbolic tangent activation function of the LSTM model” Why are you scaling your input between -1 and 1? tanh has domain of all real line and range from -1 to 1 so why to scale between -1 and 1 and not [0,1] or any other? I know that the active region for tanh is [-2, 2] but I don’t understand your logic. It is good practice to scale data to the range of the output of the LSTM layers. I no longer do this as a practice, I found scaling to 0-1 or even no scaling is sufficient. See the tutorials here: Hi Jason, A great post yet once again. I have a doubt w.r.t. the sample size while prediction. Let’s say during training my batch_size=32 and being a stateful=True network, I need to specify batch_size in the input layer itself. Now during prediction, it expects the batch_size to be the same as used while training. But in case if I want to predict for records less than 32, do I need to pad the whole sequence until size 32? Thanks in advance. BR, Tanmay Thanks. Great question!!! Yes, I have a solution here: Dear Dr Jason, Thank you very much for your tutorial. I jumped into an error, and could not figure it out what went wrong, When I tried to load these files: “experiment_stateful.csv” and “experiment_stateful2.csv” I load it as your script but it throw the error “ValueError: Cannot set a frame with no defined index and a value that cannot be converted to a Series” at these lines [for name in filenames: results[name[11:-4]] = read_csv(name, header=0)] I am using panda 1.0.1 and Python 3.7, sorry but I am relatively new to this area, I hope that you can help with the explanation as well as suggestion for other supplement tutorials. Thank you very much! Best regards, Hazard I’m sorry to hear that, is it possible you skipped some code or a step? This might help: Thank you for your reply, the 1st two parts of the code ran smoothly and gave me 2 expected output files. However, at this line: results[name[11:-4]] = read_csv(name, header=0) I suppose that the index[11:-4] includes a no defined index as the shown error. Would you mind suggest me what may goes wrong or where to look further here. Thanks in advance. You must run the experiment code in the first part of the tutorial to create the CSV files required for the second part of the tutorial. Thank you for your reply. I did run the first part, and got the csv files since my 1st comment, however in the loading files for comparing part, I jumped into the mentioned error. Again, sorry for taking ur time due to my lack of exp in this field. I managed to modify the code and display it with pandas concat method as follow: for name in filenames: df = read_csv(name, header=0) df[‘name’] = name results = pd.concat([results, df]) # describe all results print(results.describe()) # box and whisker plot results.boxplot(by=’name’) pyplot.show() Could you help me to understand what may go wrong with “results[name[11:-4]]” in your scripts ? It retrieves the strings “stateful” and “stateful2” from the respective filenames and uses them as keys in the results dictionary. Great! Thank you very much for your time and help. I managed to make it work by adding [] to the code: results[[name[11:-4]]] = read_csv(name, header=0) I think the updated pandas version may have some changes. Awesome tutorial Dr Jason Fair enough. It is not pandas, it is Python 3 array slicing of strings. Something I’m missing here. I get that this is model is trying to predict a “known” sale amount. What I don’t understand is how you would use this to forecast beyond the “known” sale history? How do you use the model to continue making predictions beyond the last “known” sale date? Fit the model on all data and call predict() to go out of sample. Perhaps this will help: Hey jason, great article. I think there is typo in the article after the Expectation:2 line. “The code changes to the stateful LSTM example above to make it stateless involve setting stateless=False in the LSTM layer” you have written “stateless=False” where it should be “stateful=False” Thanks! Fixed. Hi, I did not see anyone evaluating loss function on validation data during training using stateful LSTM yet. Is it even possible? I am not sure how state is handled, because when using stateful network you will probably need different states for training and validation data, is that right? You would have to do it manually for each epoch – that’s what I would do. So after each epoch resetting model and evaluate on set of validation data? Makes sense, but I am afraid that it will make training painfully slow in Keras. Thank you Sure will! Hello Mr. Brownlee, Thank you for an amazing article. I have learnt a lot from it. I have a question when the problem is a multivariate one. My output (say O1) looks very similar to the one shown here, with its own trend and seasonality but my input signals are different (say I1, I2). In this scenario, does it make sense to make all the input and output signals stationary (irrespective of whether I1 and I2 have seasonality) and try through stateful LSTM or would you recommend another approach? Looking forward to your response. Regards, Gopal. You’re welcome. Yes, try making the data stationary before modeling and compare results to a model fit on the raw data. Use whichever works better on your dataset. Hello Mr.Brownlee, Thank you for your response. I will go ahead and try the approach. One small follow up questuin though. I have multiple excel files with such time series data. Could you kindly help me in understanding how to use these multiple data files for training the LSTM network? Stitching them together causes a sudden drop because the end of one file has a much lower value than the beginning of the next file. So, Im not sure if that is the right way to do it. I would really appreciate your help on this. Thank you, Gopal Perhaps each file is a separate sample? E.g.you can learn across samples. This may help: Hey Jason! For text classification which is best? stateless or stateful?? I recommend a CNN for text classification: For your initial stateful vs stateless attempt, you seem to get better results for stateless, but then you proceed to show that they give the same results when batch size is the same. But for the initial attempt, you don’t show the batch size for the two. What did you specify for batch size for the stateful and stateless in the first attempt? Thanks. I can see you did batch_size=1 for the stateful, but not sure about the stateless. Batch size of 1 I believe. Thanks a lot for knowledge sharing!!! I have a question regarding one of the experiments where it was concluded that for a stateful model, resetting the state after every epoch turns better than stateless. In this case, how different is this than a stateless model as we are explicitly resetting the state? You’re welcome. The difference is the state being reset or not. Perhaps I don’t understand your question? If we are explicitly resetting the state of a stateful model, then we are essentially doing what a stateless would have done. So, why we are using stateful here and not stateless? The above tutorial contracts the two methods, stateless vs stateful. Stateless will reset state after each batch, stateful resets state after each epoch in the above example
https://machinelearningmastery.com/stateful-stateless-lstm-time-series-forecasting-python/
CC-MAIN-2021-31
refinedweb
4,091
66.23
Data Structures for Drivers no-involuntary-power-cycles(9P) usb_completion_reason(9S) usb_other_speed_cfg_descr(9S) usb_request_attributes(9S) - options structure for M_SETOPTS message #include <sys/stream.h> #include <sys/stropts.h> #include <sys/ddi.h> #include <sys/sunddi.h> Architecture independent level 1 (DDI/DKI) The M_SETOPTS message contains a stroptions structure and is used to control options in the stream head. uint_t so_flags; /* options to set */ short so_readopt; /* read option */ ushort_t so_wroff; /* write offset */ ssize_t so_minpsz; /* minimum read packet size */ ssize_t so_maxpsz; /* maximum read packet size */ size_t so_hiwat; /* read queue high water mark */ size_t so_lowat; /* read queue low water mark */ unsigned char so_band; /* band for water marks */ ushort_t so_erropt; /* error option */ The following are the flags that can be set in the so_flags bit mask in the stroptions structure. Note that multiple flags can be set. Set read option. Set write offset. Set minimum packet size Set maximum packet size. Set high water mark. Set low water mark. Set read notification ON. Set read notification OFF. Old TTY semantics for NDELAY reads and writes. Semantics for NDELAY reads and writes. The stream is acting as a terminal. The stream is not acting as a terminal. Stop on background writes to this stream. Do not stop on background writes to this stream. Water marks affect band. Set error option. When SO_READOPT is set, the so_readopt field of the stroptions structure can take one of the following values. See read(2). Read message normal. Read message discard. Read message, no discard. When SO_BAND is set, so_band determines to which band so_hiwat and so_lowat apply. When SO_ERROPT is set, the so_erropt field of the stroptions structure can take a value that is either none or one of: Persistent read errors; default. Non-persistent read errors. OR'ed with either none or one of: Persistent write errors; default. Non-persistent write errors. STREAMS Programming Guide
https://docs.oracle.com/cd/E26505_01/html/816-5181/stroptions-9s.html
CC-MAIN-2019-30
refinedweb
309
70.8
import "github.com/gobuffalo/envy" package envy makes working with ENV variables in Go trivial. * Get ENV variables with default values. * Set ENV variables safely without affecting the underlying system. * Temporarily change ENV vars; useful for testing. * Map all of the key/values in the ENV. * Loads .env files (by using [godotenv]()) * More! GO111MODULE is ENV for turning mods on/off CurrentModule will attempt to return the module name from `go.mod` if modules are enabled. If modules are not enabled it will fallback to using CurrentPackage instead. CurrentPackage attempts to figure out the current package name from the PWD Use CurrentModule for a more accurate package name. Get a value from the ENV. If it doesn't exist the default value will be returned. GoPaths returns all possible GOPATHS that are set. Load .env files. Files will be loaded in the same order that are received. Redefined vars will override previously existing values. IE: envy.Load(".env", "test_env/.env") will result in DIR=test_env If no arg passed, it will try to load a .env file. Map all of the keys/values set in envy. Get a value from the ENV. If it doesn't exist an error will be returned MustSet the value into the underlying ENV, as well as envy. This may return an error if there is a problem setting the underlying ENV value. Reload the ENV variables. Useful if an external ENV manager has been used Set a value into the ENV. This is NOT permanent. It will only affect values accessed through envy. Temp makes a copy of the values and allows operation on those values temporarily during the run of the function. At the end of the function run the copy is discarded and the original values are replaced. This is useful for testing. Warning: This function is NOT safe to use from a goroutine or from code which may access any Get or Set function from a goroutine Package envy imports 12 packages (graph) and is imported by 167 packages. Updated 2018-12-08. Refresh now. Tools for package owners.
https://godoc.org/github.com/gobuffalo/envy
CC-MAIN-2018-51
refinedweb
348
68.26
In this article. I will demonstrate how to implement Google column charts dynamically using entity framework in MVC5. Introduction In this article, I will demonstrate how to implement Google Column Chart dynamically using Entity Framework in MVC5. I will use jQuery AJAX to retrieve the data from the database and display it in Google Column Chart. The chart will animate columns with their respective data display in it. Step 1 Open SQL Server 2014 and create a database table to insert and retrieve the data. Screenshot for a database table Step 2 Open Visual Studio 2015, click on New Project and create an empty web application project. Screenshot 1 for creating a new project After clicking on New Project, one window will appear. Select Web from the left panel, choose ASP.NET Web Application to give a meaningful name to your project. Then, click on OK as shown in the below screenshot. Screenshot 2 for creating a new project After clicking on OK, one more window will appear. Choose Empty, check on MVC checkbox, and click on OK. Screenshot for creating a new project-3 After clicking on OK, the project will be created with the name of MvcGoogleColumnChart_Demo. Step 3 Add Entity Framework, right click on Models folder, select Add. Then, select New Item, then click on it. Screenshot 1 for adding Entity Framework After clicking on the new item, you will get a window from there. Select Data from the left panel and choose ADO.NET Entity Data Model, give it a name as DBModels (This name is not mandatory; you can give any name), and click on Add. Screenshot 2 for adding Entity Framework After you click on Add, a wizard will open. Choose EF Designer from the database and click Next. Screenshot 3 for adding Entity Framework On the next window, choose New Connection. Screenshot 4 for adding Entity Framework Another window will appear. Add your server name. If it is local, then enter dot (.). Choose your database and click on OK. Screenshot 5 for adding Entity Framework The connection will get added. If you wish to save it, connect as you want. You can change the name of your connection below. It will save the connection in web config, then click on Next. Screenshot 6 for adding Entity Framework After clicking on NEXT, another window will appear. Choose database table name as shown in the below screenshot. Then, click on Finish. Entity Framework will be added and respective class gets generated under Models folder. Screenshot 7 for adding Entity Framework Screenshot 8 for adding Entity Framework The following class will be added. Step 4 Right click on Controllers folder, select Add, then choose Controller. After clicking on the Controller, a window will appear. Choose MVC5 Controller-Empty and click on Add. After clicking on Add, another window will appear with DefaultController. Change the name to HomeController, then click on Add. HomeController will be added to Controllers folder. Remember, don’t change Controller. Instead of Default just change Home. Add the following namespace in Controller. using MvcGoogleColumnChart_Demo. Models; Complete Controller code Step 5 Right-click on Column action method in Controller. An "Add View" window will appear with default Column name unchecked (use a Layout page), click on Add. The View will be added in Views folder under Home folder with name Column. Screenshot for adding View Step 6 Click on Tools, select NuGet Package Manager, then choose Manage NuGet Packages for Solution click on it. Screenshot for NuGet Package After that, a window will appear. Choose Browse type as bootstrap and install the package in the project. Similarly, type jQuery and install the latest version of jQuery package in the project and jQuery validation file from NuGet, then close NuGet Solution. Keep the required bootstrap and jQuery file and delete the remaining file if not using. Or you can download and add in the project. Step 7 Add required script and style in head section of the View. Step 8 Right-click on Scripts folder, select Add, and choose JavaScript file. Give it a name as ColumnCart.js. Write the script to get data from the database. Step 9 Design the View with HTML, cshtml, and bootstrap 4 classes. Complete View code Step 10 Run Project ctrl+F5. Screenshot 1 Screenshot 2 Conclusion In this article.I have explained how we can dynamically implement Google Column Chart using Entity Framework and MVC5. We can add year of hiring,number of employee hired in account,hr and IT department clicking on Add New Report button.I hope it will be useful in your upcoming projects. View All
https://www.c-sharpcorner.com/article/how-to-implement-google-column-chart-dynamically-using-entity-framework-and-mvc/
CC-MAIN-2019-35
refinedweb
771
76.22
J2EE Tutorial - RMI Example Corba is just a specification. The required Interace is defined in a C-like language known as OMG-IDL. The IDL is a contract. It is like a function declaration in C++. The name of the method, the type of parameters required and the type of return value are specified by the IDL file. Typically, an IDL compiler 'compiles' this IDL file into the language chosen ( it may be C++ or Java ). The idl compiler ,in reality translates the IDL interface into a real language like C++ or Java. Much more important, it automatically generates what are known as 'Stub' & 'Skeleton' files .Companies like IONA ORBIXWEB, VISIGENIC, MICO, JAC-ORB specilaised in creating such IDL-COMPILERS. That was the period between 1991 and 1996. Java had not yet arrived and C++ was the language of choice for Enterprise work, besides COBOL, ADA ( for real time systems) & SMALLTALK. The earliest Corba language bindings were for these languages.The characteristic feature of Distributed Object Technologies is that the client invokes the method of an object which resides in a remote server in its own workspace, as if the object resided locally. Why is this so important? Why not just pass parameters to the remote object and invoke the function in remote server itself? Please read on. That is where the relevance of 'Distributed Objects ' technology becomes evident. If the author of the remote object ( wriiten in C++) wants his object to be accessible to other clients, he can start with the IDL file at his end and publish the idl to others. Using the IDL file, others can create the Stub &skeleton .and they can write their own clients to use the remote object. Thus, the remote object's location is not disturbed.Its implementation details are not revealed. It can be written in any language , suitable for the specific application realm.( As they say, Java is no 'Silver Bullt' (ie) a solution suitable for all situations. Legacy objects can still be used. It may be running on any platform. In the Corba approach, the stub and skeleton source files are automatically created by the IDL compiler. The component designer, then creates an implementation for the interface. During compilation, the stub and skeleton files are referenced.The client file requires information about the stub. After these steps, the class files are installed in both ends, as appropriate. Now the client is able to invoke the remote object's methods from its own workspace.Actually, the remote object after being created is registered in Corba Naming Service.The client simply enquires the Naming service for the remote object , by 'name'.This is known as 'Location Transparency'. Why is 'Location Transparency' important? For Security? No. A server is supposed to work 24X7 ( a fancy way of saying 'continuously'). If for some reason , there is some problem in the server, the remote server's administrator can transfer the classes to another machine without disturbing the service.In Enterprise world, Location transparency is essential. Imagine a continuously touring businessman who informs his current contact number to the enquiry desk and if we want to contact him, we have to get his current whereabouts from the enquiry desk.( read 'Naming service'). The other important characteristic of Interface approach is what is usually referred to as ' Immutability of Service Contract'. , in COM circles. All that it means is that the corba provider can improve the implementation if desired without disturbing the contract with the client. This is an important merit of 'Interface-based'approach over 'Class-based ' approach. SUN MICROSYSTEMS being one of the founder members of the OMG (Object Management Group)which created the CORBA specification , aimed at providing simple API for creating CORBA-compliant RemoteObjectServers , using just the Java language. (ie) the aim is to develop RemoteObjectServers, which can be invoked and used by client programs written in other languages and working in different platforms. As Java language is already platform independent, the only problem was that of the language of the client. With an eventual aim of creating a simple version of Corba-compliant server , using nothing but Java, Sun began the journey with RMI. Remote Method Invocation is a corba-like solution in its approach. It begins with interface . It then creates an implementation class. Using this, it creates the Skeleton class file (Skeleton is the proxy for the RemoteObject in the server side). and also 'Stub ' class file.(Stub is the proxy for the Remote Object in client side). The remote object is registered in the Remote Registry.& waits for the client's call. All this was very much similar to Corba.( However, there is no location transparency in RMI). RMI requires Java at both ends. Many programmers have asked in web forums why we cannot simply use the socket method instead of RMI. It is true that the merit of Interface approach is not all that evident if we have Java at both ends. But, even within that constraint, experienced programmers advise that if our program involves a lot of complex objects created by us and passed as parameters to the server, it is advisable to use the RMI method as it will take care of the Serialization details. Apart from this, we gain a better insight if we consider RMI as just a first step towards RMI-IIOP, which enables us to write Corba-compliant servers without the hassle of learning OMG-IDL. When we are experienced in writing RMI programs, we can easily switch over to RMI-IIOP. The merit of RMI-IIOP is that we write just a java file and the RMI compiler which comes with JDK1.3 can create the IDL from this file! This is known as 'Quick Corba'. ( Previously, Visigenic company had provided a software known as Caffiene , along the same lines.) Many earlier books like ' Java Distributed Objects' & 'Developing Servlets' by James Goodwill refer to that. That role is being done by RMI-IIOP now. The advantage of RMI-IIOP is that the programmer does not need to know anything about OMG-IDL. He simply writes a program with very slight changes from RMI style. He can get the IDL file automatically generated by the IDL compiler. This IDL file can be distributed for CORBA clients.They can use their own IDL compiler suitable for their platform and language and still use our java server. (RMI--IIOP was developed jointly by SUN & IBM). We will now see how the same interface can be used to create an RMI program and then an RMI-IIOP program. Let us now consider the RMI version of the 'greeter' program. RMI PROGRAM...HOW TO CREATE & RUN? ================================= From a DOS window, create a folder as c:\greeter cd to c:\greeter ---- >set path=c:\windows\command;c:\jdk1.3\bin >set classpath=c:\greeter ----- Edit greeterinter.java ( This is the Interface file). import java.rmi.*; public interface greeterinter extends Remote { String greetme(String s) throws RemoteException; } --------------------------------------------------------------------- compile this file. >javac greeterinter.java We get greeterinter.class file after compilation. ----------------------------------------------------------- Next, edit the implementation file as follows: //greeterimpl.java import greeterinter.*; import java.rmi.server.*; // essential. do not omit. public class greeterimpl extends UnicastRemoteObject implements greeterinter { public static void main(String args[]) { // System.setSecurityManager(new RMISecurityManager() ); try greeterimpl app = new greeterimpl(); Naming.rebind("//" + "localhost" + "/greeterinter", app); System.out.println("greeter is registered"); } catch(Exception ex) { System.out.println("point1 "+ex); } //============================================== public greeterimpl() throws RemoteException //================================================ public String greetme(String s) throws RemoteException return("How are you?......"+s); //over. ------------------------------------------------------------------- Now, compile greeterimpl.java >javac greeterimpl.java We get greeterimpl.class ------------------------------------ The next step is to create the class files for the Stub & Skeleton using the rmic.exe provided in jdk. >rmic greeterimpl You will find that this command automatically creates the class files for Stub & Skeleton. -------------------------------------------------------- We can now create the servlet file as follows, which will be the client for the RMI server. -------------------------- greeterclientservlet.java import greeterimpl.*; import greeterimpl_Stub.*; import greeterimpl_Skel.*; import java.net.*; import javax.servlet.*; import javax.servlet.http.*; import java.io.*; import java.util.*; public class greeterclientservlet extends HttpServlet greeterinter server; public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { response.setContentType("text/html"); PrintWriter out = response.getWriter(); String s = request.getParameter("text1"); try server =(greeterinter) Naming.lookup("//" + "localhost" + "/greeterinter"); // Protocol is left unspecied.Normally it is JavaRemoteMethod Protocal // If there are firewall restrictions , it automatically // switches over to http . } catch (Exception e1) {out.println(""+e1);} String s1 = server.greetme(s); System.out.println(s1); out.println("<html>"); out.println("<body>"); out.println(s1); out.println("</body>"); out.println("</html>"); } ----------------------------------------------------------------------- To compile the servlet, change the classpath as follows: >set classpath=%classpath%;c:\jsdk2.0\src -- Now we will be able to compile the servlet. ------------------------------------------------------------ Create the html file to invoke the above servlet. greeterclientservlet.htm <html> <body> <form method=post <input type=text <input type=submit> </form> </body> </html> ================================ Now copy greeterclientservlet.htm to c:\tomcat\webapps\root Copy all the class files in c:\greeter to c:\tomcat\webapps\root\web-inf\classes --------------- Now, we are ready to execute our RMI-servlet program. ==== c:\greeter>start rmiregistry This command creates a blank window. Minimize it. In the same DOS window, >java greetserverimpl This starts the remote server . The remote object is created and registered in the RMI registry in the remote server. We get a message informing us about it. The implementation program creates an instance of the class and registers the instance in the RMI REGISTRY in the remote server, by the identity of 'greeterinter'. // Naming.rebind("//" + "localhost" + "/greeterinter", app); Remember to start the 'Tomcat' server as before. Now , we start the InternetExplorer and type the URL as '' ------------ We get a form. Fill up your name in text1 and submit. --- The data is sent to the servlet. The servlet program looks up at the Registry to locate the object by name 'greeterinter'. After getting an instance.by 'Naming.lookup' , it invokes the 'greetme' function on the object and passes the string from the form as paramter to the method called. The method call with parameter is passed to the stub.The stub sends it to the Skeleton.The skeleton sends it to the remoteobject in remote server. The remote object executes the function there and sends the result to skeleton. The skeleton sends it to stub. The stub sends it to the servlet. The servlet sends the result to the browser. ********************************************************************************** ( For reasons of space , we are constarined to limit ourselves to this much. If you need a more elaborate treatment, the books by Jaworsky and the one by Bill McCarty are ideal). ****************************************************************************** Let us now see how, an RMI-IIOP program is developed. This part is very unique and important. You may not find this in any textbook, because , RMI-IIOP was introduced only in JDK1.3 and most of the Java books in circulation deal only with JDK1.2.Sun's tutorial is not much helpful. So, read carefully. ----------------------------------------------------------------------------- Here ara a few lines from Sun's documentation on RMI--IIOP. 'RMI lacks interoperability with other languages, and because it uses a non-standard communication protocol, cannot communicate with CORBA objects. IIOP is CORBA'S communication protocol.It defines the way in which the bits are sent over a wire between CORBA clients and servers. Previously Java programmers had to choose between RMI and JavaIDL.for Distributed programming solution. Now, by adhering to a few restrictions, RMI objects can use the IIOP protocol.( Internet InterOperable Protocol) and communicate with CORBA objects. Thus, RMI-IIOP combines RMI-style ease of use with CORBA cross-language interoperability. To resume, We will need 4 files (ie) a) greeter.java ( the Interface file) b) greeterimpl.java ( the implementation file) c) greeterclientservlet.java ( the servlet which invokes the remote server) d) greeterclientservlet.htm ( the html file to invoke the servlet) These files have been given below.( Carefully compare these files with their RMI counterpart to appreciate the similarity and minute dfferences). If you enjoyed this post then why not add us on Google+? Add us to your Circles Liked it! Share this Tutorial Ask Questions? Discuss: J2EE Tutorial - RMI Example View All Comments Post your Comment
http://roseindia.net/ejb/introduction/rmiexample.shtml
CC-MAIN-2014-35
refinedweb
2,023
51.24
// Project: XorLib - Xor Encryption Libary. // Copyright: Copyright © 2009-2010 Shadowscape Studios. All Rights Reserved. // Developer: Shadowscape Studios // Website: // Support: support@shadowscape.co.uk // Version: 1.0.0.0 // Release: 151220102240 #define export __declspec (dllexport) #include <stdio.h> #include <stdlib.h> #include <string.h> #include <windows.h> long filesize(FILE *f) { long fs; fseek(f,0L,SEEK_END); fs = ftell(f); fseek(f,0L,SEEK_SET); return fs; } export double xorcrypt(char *rn, char *wn, char *sb, double ss) { FILE *rf, *wf; unsigned char fb[BUFSIZ]; unsigned int bp, sp = 0; size_t bs; if ((rf = fopen(rn,"rb")) == NULL) { return 0; } if ((wf = fopen(wn,"wb")) == NULL) { return 0; } while ((bs = fread(fb, 1, BUFSIZ, rf)) != 0) { for ( bp = 0; bp < bs; ++bp ) { fb[bp] ^= sb[sp++]; if ( sp == ss ) { sp = 0; } } fwrite(fb, 1, bs, wf); } fclose(wf); fclose(rf); return 1; } export double xorcryptf(char *fn, char *tn,(fn,tn,sb,ss); free(sb); return r; } export double xorcrypto; } export double xorcryptof(char *fn,o(fn,sb,ss); free(sb); return r; } hi i have made this dll and was wondering how would i write the functions in devc++ for the ".h" file because i have been informed i need to do that as it just seems to crash the application at the moment and im new to this language so im very unsure how they should look and all i have for refrence is the following which doesnt really seem to help so much as my functions seem diffrent to this. it would be amazing if someone could help me its taken me months to get this far and i am so determined to get this out to developers soon as i think it could prove useful to have. #ifndef _DLL_H_ #define _DLL_H_ #include <windows.h> #ifndef DLL_EXPORT #define DLL_EXPORT extern "C" __declspec(dllexport) #endif DLL_EXPORT void SayHello(); DLL_EXPORT int Addition(int arg1,int arg2); DLL_EXPORT LPTSTR CombineString(LPTSTR arg1,LPTSTR arg2); #endif
https://www.daniweb.com/programming/software-development/threads/332012/creating-a-dll-in-devc
CC-MAIN-2018-30
refinedweb
323
61.36
Operating system name with version (Read Only). Returns detailed information about the operating system of the device, including the version. For a simple platform detection, properties like Application.platform or deviceType might be more appropriate. Note: On Windows Store Apps, it's impossible to tell if the user is running the 32-bit or 64-bit version of Windows. In this situation, the CPU architecture is queried instead. If the CPU is 64-bit, SystemInfo.operatingSystem will return 'Windows <version> 64 bit', and if the CPU is 32-bit - 'Windows <version>'. using UnityEngine; public class ExampleClass : MonoBehaviour { void Start() { // Prints "Windows 7 (6.1.7601) 64bit" on 64 bit Windows 7 // Prints "Mac OS X 10.10.4" on Mac OS X Yosemite // Prints "iPhone OS 8.4" on iOS 8.4 // Prints "Android OS API-22" on Android 5.1 Debug.Log(SystemInfo.operatingSystem); } } See Also: Application.platform, deviceType.
https://docs.unity3d.com/ru/2017.1/ScriptReference/SystemInfo-operatingSystem.html
CC-MAIN-2020-34
refinedweb
150
53.47
rise of the machines - part 224 Jun 2014 Here I show how to use C++ to communicate via Bluetooth with the LEGO Mindstorms EV3 brick (see previous post). If you are on a Mac everything should work right away. If you are using Ubuntu or other Linux distro I think you’ll only need to change the Bluetooth part a bit (my Ubuntu laptop doesn’t have Bluetooth, so I can’t be sure). If somehow you are forced to use Windows I think you’ll need to change the Bluetooth part a lot. All the rest should be the same though. So, you start by cloning the source code of the EV3 firmware: open up your terminal and do git clone Name the folder ev3sources, to make the examples below easier to run. Also, open the ev3sources/lms2012/c_com/source/c_com.h file and change the line #include "lms2012.h" in order to provide the full path to the lms2012.h file. Say: #include "/Users/YourUsername/MyLegoProject/ev3sources/lms2012/lms2012/source/lms2012.h" That’s all the setup you need - you are now ready to write and send commands to the EV3. Turn on your EV3, enable Bluetooth, make it discoverable (see the EV3 user guide if necessary), plug some motor to port A, fire up Xcode or whatever IDE you use, and try running the following code snippet: #include <unistd.h> #include <fcntl.h> #include "ev3sources/lms2012/c_com/source/c_com.h" int main() { // write command to start motor on port A with power 20 unsigned const char start_motor[] {13, 0, 0, 0, DIRECT_COMMAND_NO_REPLY, 0, 0, opOUTPUT_POWER, LC0(0), LC0(1), LC1(20), opOUTPUT_START, LC0(0), LC0(1)}; // send command to EV3 via Bluetooth int bt = open("/dev/tty.EV3-SerialPort", O_RDWR); write(bt, start_motor, 15); // end connection with EV3 close(bt); } If everything went well you should see the motor starting. If instead you get an authentication-related error message, download and install the official LEGO app (if you haven’t already), launch it, use it to connect to the EV3 via Bluetooth, check that it really connected, then close it. Somehow that fixes the issue for good. (I know, it’s an ugly hack, but life is short). Now let’s deconstruct our little script. There are two steps: writing the command and sending the command. Writing the command is the hard part. As you see, it’s not as simple as, say, EV3.start_motor(port = "A", power = 20). Instead of human-readable code what we have here is something called bytecodes. In this particular example every comma-separated piece of the expression inside the inner curly braces is a bytecode - except for the LC1(20) part, which is two bytecodes (more on this in a moment). The first and second bytecodes - 13 and 0 - tell the EV3 the message size (not counting the 13 and the 0 themselves). The third and fourth bytecodes - 0 and 0 - are the message counter. The fifth bytecode - DIRECT_COMMAND_NO_REPLY - tells the EV3 two things. First, that the instruction is a direct command, as opposed to a system command. Direct commands let you interact with the EV3 and the motors and sensors. System commands let you do things like write to files, create directories, and update the firmware. Second, DIRECT_COMMAND_NO_REPLY tells the EV3 that this is a one-way communication: just start the motor, no need to send any data back. So, the three alternatives to DIRECT_COMMAND_NO_REPLY are SYSTEM_COMMAND_NO_REPLY, DIRECT_COMMAND_REPLY, and SYSTEM_COMMAND_REPLY. The sixth and seventh bytecodes - 0 and 0 - are, respectively, the number of global and local variables you will need when receiving data from the EV3. Here we’re using a DIRECT_COMMAND_NO_REPLY type of command, so there is no response from the EV3 and hence both bytecodes are zero. Now we get to the command lui-même. We actually have two commands here, one after the other. The first one, opOUTPUT_POWER, sets how much power to send to the motor. The second one, opOUTPUT_START, starts the motor. Each command is followed by a bunch of local constants (that’s what LC stands for), which contain the necessary arguments. For both commands the first LC0() is zero unless you have multiple EV3 bricks (you can join up to four EV3 bricks together; that’s called a “daisy chain”). Also for both commands, the second LC0() determines the EV3 port. Here we’re using port A - hence LC0(1). Use LC0(2) for port B, LC0(4) for port C, and LC0(8) for port D. Finally, opOUTPUT_POWER takes one additional argument: the desired power. The unit here is percentages: 20 means that we want the motor to run at 20% of its maximum capacity. Unlike the other local constants, this one is of type LC1, not LC0, so it takes up two bytes (see the bytecodes.h file for more on local constants); that is why the message size is 13 even though we only have 12 comma-separated elements. (Don’t be a sloppy coder like me: instead of having these magic numbers, declare proper variables or constants and use these instead - LC0(port), LC1(power), etc.) Now let’s send the command we just wrote. On a Mac the way we communicate with other devices via Bluetooth is by writing to (and reading from) tty files that live in the \dev folder (these are not actual files, but file-like objects). If you inspect that folder you will see one tty file for every Bluetooth device you have paired with your computer: your cell phone, your printer, etc. The EV3 file is called tty.EV3-SerialPort. (If you’re curious, here’s all the specs and intricacies of how Bluetooth is implemented on a Mac.) So, to send the command we wrote before to the EV3 via Bluetooth we open the tty.EV3-SerialPort file (line 16), write the command to it (line 17), and close it (line 20). That’s it, you can now use C++ to control the EV3 motors. Just so you know, your command is automatically converted to hexadecimal format before being sent to the EV3 (those LC()s are macros that make the conversion). In other words, your EV3 will not receive {13, 0, 0, 0, DIRECT_COMMAND_NO_REPLY, 0, 0, opOUTPUT_POWER, LC0(0), LC0(1), LC1(20), opOUTPUT_START, LC0(0), LC0(1)}. It will receive \x0D\x00\x00\x00\x80\x00\x00\xA4\x00\x01\x81\x14\xA6\x00\x01 instead. The mapping is provided in the bytecodes.h file. For instance, DIRECT_COMMAND_NO_REPLY is 0x80, opOUTPUT_POWER is 0xA4, and so on. If you prefer you can hardcode the hexadecimals. This produces the exact same outcome: #include <unistd.h> #include <fcntl.h> #include "ev3sources/lms2012/c_com/source/c_com.h" int main() { // write command to start motor on port A with power 20 char start_motor[] = "\x0D\x00\x00\x00\x80\x00\x00\xA4\x00\x01\x81\x14\xA6\x00\x01"; // send command to EV3 via Bluetooth int bt = open("/dev/tty.EV3-SerialPort", O_RDWR); write(bt, start_motor, 15); // end connection with EV3 close(bt); } If you master the hexadecimals you can use any language to communicate with the EV3. For instance, in Python you can do this: # write command to start motor on port A with power 20 start_motor = '\x0D\x00\x00\x00\x80\x00\x00\xA4\x00\x01\x81\x14\xA6\x00\x01' + '\n' # send command to EV3 via Bluetooth with open('/dev/tty.EV3-SerialPort, mode = 'w+', buffering = 0) as bt: bt.write(start_motor) All right then. Now, how do we get data back from the EV3? Well, it’s the reverse process: instead of writing to tty.EV3-SerialPort we read from it. The trick here is to find the sensor data amidst all the other stuff that the EV3 sends back to your computer, but we’ll get there (btw, I’m grateful to the good samaritan who showed me how to do this). To make matters more clear, plug some sensor on port 1 and try running this code: #include <unistd.h> #include <fcntl.h> #include <iostream> #include "ev3sources/lms2012/c_com/source/c_com.h" int main() { // read sensor on port 1 unsigned const char read_sensor[] {11, 0, 0, 0, DIRECT_COMMAND_REPLY, 1, 0, opINPUT_READ, LC0(0), LC0(0), LC0(0), LC0(0), GV0(0)}; // send command to EV3 via Bluetooth int bt = open("/dev/tty.EV3-SerialPort", O_RDWR); write(bt, read_sensor, 13); // receive data back from EV3 via Bluetooth unsigned char sensor_data[255]; read(bt, sensor_data, 255); for(int i=0; i<255; i++) { printf("%x", sensor_data[i]); } // end connection with EV3 close(bt); } The structure of the code is pretty similar to what we had before. The first change is that now our command type is no longer DIRECT_COMMAND_NO_REPLY but DIRECT_COMMAND_REPLY, as we now want to receive data from the EV3. The second change is the sixth bytecode, which is now 1. That means we are now requesting one global variable - we’ll need it to store the sensor data. The third change is of course the command itself, which is now opINPUT_READ. Its arguments are, in order: the EV3 brick (usually 0, unless you have multiple bricks), the port number (minus 1), the sensor type (0 = don’t change the type), and the sensor mode (0 = don’t change the mode). GV0 is not an argument, but the global variable where the sensor data will be stored. Like the motor power, the data we will get back will be in percentage (alternatively, there is an opINPUT_READSI command that returns the data in SI units). The fourth change is that we now have a new code block. Its first line - unsigned char sensor_data[255] - creates an array of size 255, with all values initialized to zero. The size is 255 because at this point we don’t know exactly what the actual size of the received data will be, so we want to be safe: the data will be in hexadecimal format, so 255 is about as large as it gets (just as with the data we send, the first two bytes of the data we receive tell us how large the message is - but we can only count up to 255 with two bytes in hexadecimal format, so 255 is the limit here). The second line receives the data and the for loop prints each byte to the screen. If everything went well you should see as output something like 400021F00000… Try it a couple more times, moving the sensor around in-between. You will notice that the first five digits or so don’t change, and neither do all the others after the sixth or seventh digit. For instance, your results will look like 400023D00000… or 400025B00000… Only two digits or so will change. That is your sensor data! In these three examples, for instance, your data points are 1F, 3D, and 5B. That’s hexadecimal format; in decimal format that means 31, 61, and 91 (here’s a conversion table). Now, once you’ve figured out what the relevant digits are you can get rid of that loop and print only them (say, printf("%x", sensor_data[5]);). That’s it! Now you can control the motors and read the sensors - that should help you get started. If you inspect the c_com.h files you will see lots of other commands, some of them with usage examples. The way forward is by exploring the firmware code and by trial and error. Happy building!
http://thiagomarzagao.com/2014/06/24/rise-of-the-machines-part-2-2/
CC-MAIN-2017-30
refinedweb
1,913
71.34
. But, since gold is decoupled from USD, which currency is most stable measured in gold? To answer this question, we take a look into the annual return (performance) of gold in different currencies. A stable currency should have a low average return and a low standard deviation (Std) of the returns. Let’s take a look at the corresponding table: Now, we interpret the table – green means: “currency and gold are similar”, red means: “currency and gold are different”. Using this interpretations, we can see that CAD is more green than USD in 2003-2011 and CHF is more green than USD in 2007-2011. In both periods, USD is among the most stable currencies. But, this growths is pretty large – is this a sign of a stable currency? An average return of more than 10% p.a. in 2003-2011 seems too large. Close look at USD Within the financial crisis, the monetary base of USD increase significantly. That means, there are many more USD circulating than before. Does this growth in monetary base explain the growth of gold prices? Using this data from US FED, the amount of USD rose in 2003-2011 from 714 to 2722 billions or 281%. This is impressive. But gold rose from 350 USD to 1650 USD or 375%. Conclusion The price of gold is rising. That is a fact in any currency. Even a close look at the monetary base in USD shows that the price of gold rises faster than the amount of money in the economy. So, will this trend stop? Yes, it will. But, only the future will show when. And to answer the question of the title: Measure in gold, money loses its value.
https://computeraidedfinance.com/2012/03/19/the-price-of-gold-is-constantly-rising-or-is-money-just-losing-its-value/
CC-MAIN-2019-22
refinedweb
286
75.5
Quickstart¶ Let’s see how easy it is to use Eliot. Installing Eliot¶ To install Eliot and the other tools we’ll use in this example, run the following in your shell: $ pip install eliot eliot-tree requests This will install: - Eliot itself. - eliot-tree, a tool that lets you visualize Eliot logs easily. requests, a HTTP client library we’ll use in the example code below. You don’t need it for real Eliot usage, though. Our example program¶ We’re going to add logging code to the following script, which checks if a list of links are valid URLs: import requests def check_links(urls): for url in urls: try: response = requests.get(url) response.raise_for_status() except Exception as e: raise ValueError(str(e)) try: check_links(["", ""]) except ValueError: print("Not all links were valid.") Adding Eliot logging¶ To add logging to this program, we do two things: - Tell Eliot to log messages to file called “linkcheck.log” by using eliot.to_file(). - Create two actions using eliot.start_action(). Actions succeed when the eliot.start_action()context manager finishes successfully, and fail when an exception is raised. import requests from eliot import start_action, to_file to_file(open("linkcheck.log", "w")) def check_links(urls): with start_action(action_type="check_links", urls=urls): for url in urls: try: with start_action(action_type="download", url=url): response = requests.get(url) response.raise_for_status() except Exception as e: raise ValueError(str(e)) try: check_links(["", ""]) except ValueError: print("Not all links were valid.") Running the code¶ Let’s run the code: $ python linkcheck.py Not all the links were valid. We can see the resulting log file is composed of JSON messages, one per line: $ cat linkcheck.log {"action_status": "started", "task_uuid": "b1cb58cf-2c2f-45c0-92b2-838ac00b20cc", "task_level": [1], "timestamp": 1509136967.2066844, "action_type": "check_links", "urls": ["", ""]} ... So far these logs seem similar to the output of regular logging systems: individual isolated messages. But unlike those logging systems, Eliot produces logs that can be reconstructed into a tree, for example using the eliot-tree utility: $ eliot-tree linkcheck.log b1cb58cf-2c2f-45c0-92b2-838ac00b20cc └── check_links/1 ⇒ started ├── timestamp: 2017-10-27 20:42:47.206684 ├── urls: │ ├── 0: │ └── 1: ├── download/2/1 ⇒ started │ ├── timestamp: 2017-10-27 20:42:47.206933 │ ├── url: │ └── download/2/2 ⇒ succeeded │ └── timestamp: 2017-10-27 20:42:47.439203 ├── download/3/1 ⇒ started │ ├── timestamp: 2017-10-27 20:42:47.439412 │ ├── url: │ └── download/3/2 ⇒ failed │ ├── errno: None │ ├── exception: requests.exceptions.ConnectionError │ ├── reason: HTTPConnectionPool(host='nosuchurl', port=80): Max retries exceeded with url: / (Caused by NewConnec… │ └── timestamp: 2017-10-27 20:42:47.457133 └── check_links/4 ⇒ failed ├── exception: builtins.ValueError ├── reason: HTTPConnectionPool(host='nosuchurl', port=80): Max retries exceeded with url: / (Caused by NewConnec… └── timestamp: 2017-10-27 20:42:47.457332 Notice how: - Eliot tells you which actions succeeded and which failed. - Failed actions record their exceptions. - You can see just from the logs that the check_linksaction caused the downloadaction. Next steps¶ You can learn more by reading the rest of the documentation, including: - The motivation behind Eliot. - How to generate actions, standalone messages, and handle errors. - How to integrate with asyncio coroutines, threads and processes, or Twisted. - How to output logs to a file or elsewhere.
https://eliot.readthedocs.io/en/1.4.0/quickstart.html
CC-MAIN-2022-40
refinedweb
527
59.7
Provided by: manpages-posix-dev iso646.h — alternative spellings SYNOPSIS #include <iso646.h> DESCRIPTION The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1‐2008 defers to the ISO C standard. The <iso646.h> header shall define the following eleven macros (on the left) that expand to the corresponding tokens (on the right): and && and_eq &= bitand & bitor | compl ~ not ! not_eq != or || or_eq |= xor ^ xor_eq ^= The following sections are informative. APPLICATION USAGE None. RATIONALE None. FUTURE DIRECTIONS None. SEE ALSO .
http://manpages.ubuntu.com/manpages/eoan/man7/iso646.h.7posix.html
CC-MAIN-2020-29
refinedweb
102
61.83
Determining the length of a fixed-length array Suppose in C++ you want to perform variations of some simple task several times. One way to do this is to loop over the variations in an array to perform each task: /* Defines the necessary envvars to 1. */ int setVariables() { static const char* names[] = { "FOO", "BAR", "BAZ", "QUUX", "EIT", "GOATS" }; for (int i = 0; i < 6; i++) if (0 > setenv(names[i], "1")) return -1; return 0; } Manually looping by index is prone to error. One particular issue with loop-by-index is that you must correctly compute the extent of iteration. Hard-coding a constant works, but what if the array changes? The constant must also change, which isn’t obvious to someone not looking carefully at all uses of the array. The traditional way to get an array’s length is with a macro using sizeof: #define NS_ARRAY_LENGTH(arr) (sizeof(arr) / sizeof((arr)[0])) This works but has problems. First, it’s a macro, which means it has the usual macro issues. For example, macros are untyped, so you can pass in “wrong” arguments to them and may not get type errors. This is the second problem: NS_ARRAY_LENGTH cheerfully accepts non-array pointers and returns a completely bogus length. const char* str = "long enough string"; char* copy = (char*) malloc(NS_ARRAY_LENGTH(str)); // usually 4 or 8 strcpy(copy, str); // buffer overflow! Introducing mozilla::ArrayLength and mozilla::ArrayEnd Seeing an opportunity to both kill a moderately obscure macro and to further improve the capabilities of the Mozilla Framework Based on Templates, I took it. Now, rather than use these macros, you can #include "mozilla/Util.h" and use mozilla::ArrayLength to compute the length of a compile-time array. You can also use mozilla::ArrayEnd to compute arr + ArrayLength(arr). Both these methods (not macros!) use C++ template magic to only accept arrays with compile-time-fixed length, failing to compile if something else is provided. Limitations Unfortunately, ISO C++ limitations make it impossible to write a method completely replacing the macro. So the macros still exist, and in rare cases they remain the correct answer. The array can’t depend on an unnamed type ( class, struct, union) or a local type According to C++ §14.3.1 paragraph 2, “A local type [or an] unnamed type…shall not be used as a template-argument for a template type-parameter.” C++ makes this a compile error: size_t numsLength() { // unnamed struct, also locally defined static const struct { int i; } nums[] = { { 1 }, { 2 }, { 3 } }; return mozilla::ArrayLength(nums); } It’s easy to avoid both limitations: move local types to global code, and name them. // now defined globally, and with a name struct Number { int i; }; size_t numsLength() { static const Number nums[] = { 1, 2, 3 }; return mozilla::ArrayLength(nums); } mozilla::ArrayLength(arr) isn’t a constant, NS_ARRAY_LENGTH(arr) is Some contexts in C++ require a compile-time constant expression: template parameters, (in C++ but not C99) for local array lengths, for array lengths in typedefs, for the value of enum initializers, for static/compile-time assertions (which are usually bootstrapped off these other locations), and perhaps others. A function call, even one evaluating to a compile-time constant, is not a constant expression. One other context doesn’t require a constant but strongly wants one: the values of static variables, inside classes and methods and out. If the value is a function call, even if it computes a constant, the compiler might make it a static initialization, delaying startup. The long and short of it is that everything in the code below is a bad idea: int arr[] = { 1, 2, 3, 5 }; static size_t len = ArrayLength(arr); // not an error, but don't do it void t(JSContext* cx) { js::Vector<int, ArrayLength(arr)> v(cx); // non-constant template parameter int local[ArrayLength(arr)]; // variadic arrays not okay in C++ typedef int Mirror[ArrayLength(arr)]; // non-constant array length enum { L = ArrayLength(arr); }; // non-constant initializer PR_STATIC_ASSERT(4 == ArrayLength(arr)); // special case of one of the others } In these situations you should continue to use NS_ARRAY_LENGTH (or in SpiderMonkey, JS_ARRAY_LENGTH). mozilla/Util.h is fragile with respect to IPDL headers, for include order mozilla/Util.h includes mozilla/Types.h, which includes jstypes.h, which includes jsotypes.h, which defines certain fixed-width integer types: int8, int16, uint8, uint16, and so on. It happens that ipc/chromium/src/base/basictypes.h also defines these integer types — but incompatibly on Windows only. This header is, alas, included through every IPDL-generated header. In order to safely include any mfbt header in a file which also includes an IPDL-generated header, you must include the IPDL-generated header first. So when landing patches using mozilla/Util.h, watch out for Windows-specific bustage. Removing the limitations The limitations on the type of elements in arrays passed to ArrayLength are unavoidable limitations of C++. But C++11 removes these limitations, and compilers will likely implement support fairly quickly. When that happens we’ll be able to stop caring about the local-or-unnamed problem, not even needing to work around it. The compile-time-constant limitation is likewise a limitation of C++. It too will go away in C++11 with the constexpr keyword. This modifier specifies that a function provided constant arguments computes a constant. The compiler must allow calls to the function that have constant arguments to be used as compile-time constants. Thus when compilers support constexpr, we can add it to the declaration of ArrayLength and begin using ArrayLength in compile-time-constant contexts. This is more low-hanging C++11 fruit that compilers will pick up soon. (Indeed, GCC 4.6 already implements it.) Last, we have the Windows-specific #include ordering requirement. We have some ideas for getting around this problem, and we hope to have a solution soon. A gotcha Both these methods have a small gotcha: their behavior may not be intuitive when applied to C strings. What does sizeof("foo") evaluate to? If you think of "foo" as a string, you might say 3. But in reality "foo" is much better thought of as an array — and strings are '\0'-terminated. So actually, sizeof("foo") == 4. This was the case with NS_ARRAY_LENGTH, too, so it’s not new behavior. But if you use these methods without considering this, you might end up misusing them. Conclusion Avoid NS_ARRAY_LENGTH when possible, and use mozilla::ArrayLength or mozilla::ArrayEnd instead. And watch out when using them on strings, because their behavior might not be what you wanted. (Curious how these methods are defined, and what C++ magic is used? See my next post.) […] my last post I announced the addition of mozilla::ArrayLength and mozilla::ArrayEnd to the Mozilla Framework […] Pingback by Where's Walden? » Implementing mozilla::ArrayLength and mozilla::ArrayEnd, and some followup work — 20.10.11 @ 16:05 So, according to this page, MSVC 9 has support for the unnamed types as parameters; G++ reports it since 4.5 and clang since 2.9. constexpr support is so far only in g++ 4.6 (neither in clang nor msvc). Comment by Joshua Cranmer — 20.10.11 @ 16:44 Is there a bug? Comment by Ms2ger — 24.10.11 @ 05:00 Not yet. Comment by Jeff — 24.10.11 @ 11:15
http://whereswalden.com/2011/10/20/computing-the-length-or-end-of-a-compile-time-constant-length-array-jsns_array_endlength-are-out-mozillaarraylengthend-are-in/
CC-MAIN-2021-43
refinedweb
1,220
56.25
Gnome Sort Get FREE domain for 1st year and build your brand new site Reading time: 20 minutes | Coding time: 5 minutes Gnome Sort is a simple sorting algorithm with time complexity O(N^2) where the key idea is to swap adjacent elements (if not in order) to sort the entire list. It is similar to Insertion sort except the fact that in this case, we swap adjacent elements. It is inspired by the standard Dutch Garden Gnome sorting his flower pots. A garden gnome sorts the flower pots by the following method: - If the flower pot just before and after him are in correct order, then he moves one step forward. - If it is not in correct order, he swaps the pots and moves back one step. - At the starting when there is no pot before him, he steps forward and on reaching the end of the pot line, the list is sorted. The simplest sort algorithm is not Bubble Sort..., it is not Insertion Sort..., it's Gnome Sort! Pseudocode Here is pseudocode for the gnome sort using a zero-based array: procedure gnomeSort(a[]): pos := 0 while pos < length(a): if (pos == 0 or a[pos] >= a[pos-1]): pos := pos + 1 else: swap a[pos] and a[pos-1] pos := pos - 1 Example. Given an unsorted array, a = [5, 3, 2, 4], the gnome sort takes the following steps during the while loop. The current position is highlighted in bold and indicated as a value of the variable pos. Complexity This is not a very efficient algorithm. It’s time and space complexity are exactly that of the Bubble Sort. That is, the time complexity is O(n2), and space complexity for in-place sorting is O(1) - Worst case time complexity: Θ(N^2) - Average case time complexity: Θ(N^2) - Best case time complexity: Θ(N) - Space complexity: Θ(1)auxiliary Implementations Gnome sort is a sorting algorithm which is similar to Insertion sort, except that moving an element to its proper place is accomplished by a series of swaps, as in Bubble Sort. Following is the implementation in C: #include <stdio.h> void main() { int i, temp, ar[10], n; printf("\nEnter the elements number:"); scanf("%d", &n); printf("\nEnter elements:\n"); for (i = 0; i < n; i++) scanf("%d", &ar[i]); i = 0; while (i < n) { if (i == 0 || ar[i - 1] <= ar[i]) i++; else { temp = ar[i-1]; ar[i - 1] = ar[i]; ar[i] = temp; i = i - 1; } } for (i = 0;i < n;i++) printf("%d\t", ar[i]); } Following is the implementation in C++: #include <iostream> using namespace std; //gnome sort void gnomeSort(int arr[], int n) { int index = 0; while (index < n) { if (index == 0) index++; if (arr[index] >= arr[index - 1]) index++; else { swap(arr[index], arr[index - 1]); index--; } } return; } //print an array void printArray(int arr[], int n) { cout << "Sorted sequence after Gnome sort: "; for (int i = 0; i < n; i++) cout << arr[i] << " "; cout << "\n"; } int main() { int arr[] = { 5, 3, 2, 4 }; int n = sizeof(arr) / sizeof(arr[0]); gnomeSort(arr, n); printArray(arr, n); return (0); } Following is the implementation in Java: public class GnomeSort { private static void gnomeSort(int[] ar) { int i = 1; int n = ar.length; while (i < n) { if (i == 0 || ar[i - 1] <= ar[i]) { i++; } else { int tmp = ar[i]; ar[i] = ar[i - 1]; ar[--i] = tmp; } } } public static void main(String[] args) { int[] ar= {5, 3, 2, 4}; gnomeSort(ar); for (int i = 0; i < ar.length; i++) { System.out.println(ar[i]); } } } Applications - Gnome sort is used everywhere where a stable sort is not needed. - Gnome Sort is an in-place sort that is does not require any extra storage. - The gnome sort is a sorting algorithm which is similar to insertion sort in that it works with one item at a time but gets the item to the proper place by a series of swaps, similar to a bubble sort. - It is conceptually simple, requiring no nested loops. The average running time is O(n^2) but tends towards O(n) if the list is initially almost sorted
https://iq.opengenus.org/gnome-sort/
CC-MAIN-2021-43
refinedweb
697
62.11
20 July 2012 12:00 [Source: ICIS news] By Mahua Chakravarty ?xml:namespace> From The volume in July is likely to be roughly double the 40,000-tonne average monthly benzene exports of Asia to the Margins for shipping benzene to the West have been robust in the past few weeks, as Benzene prices in the US are currently higher than in the Asian market by $260-310/tonne (€211-251/tonne), while freight costs are estimated at just about $60-70/tonne, market sources said. US benzene spot prices were at $4.80-5.00/gal or $1,435-1,495/tonne FOB Barges late on Thursday, while Asian values were hovering at $1,175-1,185/tonne FOB “But no one cares about the freight now, when the margins are so high,” a Korean producer said. Asia is a net exporter of benzene to the South Korea is the biggest exporter of benzene in Asia, with monthly exports to the US estimated at around 50,000-60,000 tonnes in combined spot and contract volumes, based on official figures in the first half of the year. Korea International Trade Association (KITA) data showed that Benzene exports from South Korea to the US?xml:namespace> Source: K
http://www.icis.com/Articles/2012/07/20/9579757/asia-eyes-more-benzene-exports-to-us-on-high-margins.html
CC-MAIN-2014-35
refinedweb
208
62.11
A State effect can be seen as the combination of both a Reader and a Writer with these operations: get get the current state put set a new state Let’s see an example showing that we can also use tags to track different states at the same time: import cats.data._ import org.atnos.eff._, all._, syntax.all._ type S1[A] = State[Int, A] type S2[A] = State[String, A] type S = Fx.fx2[S1, S2] val swapVariables: Eff[S, String] = for { v1 <- get[S, Int] v2 <- get[S, String] _ <- put[S, Int](v2.size) _ <- put[S, String](v1.toString) w1 <- get[S, Int] w2 <- get[S, String] } yield "initial: "+(v1, v2).toString+", final: "+(w1, w2).toString swapVariables.evalState(10).evalState("hello").run > initial: (10,hello), final: (5,10) In the example above we have used an eval method to get the A in Eff[R, A] but it is also possible to get both the value and the state with run or only the state with exec. Instead of tagging state effects it is also possible to transform a State effect acting on a “small” state into a State effect acting on a “bigger” state: import org.atnos.eff._, all._, syntax.all._ import cats.data.State type Count[A] = State[Int, A] type Sum[A] = State[Int, A] type Mean[A] = State[(Int, Int), A] type S1 = Fx.fx1[Count] type S2 = Fx.fx1[Sum] type S = Fx.fx1[Mean] def count(list: List[Int]): Eff[S1, String] = for { _ <- put(list.size) } yield s"there are ${list.size} values" def sum(list: List[Int]): Eff[S2, String] = { val s = if (list.isEmpty) 0 else list.sum for { _ <- put(s) } yield s"the sum is $s" } def mean(list: List[Int]): Eff[S, String] = for { m1 <- count(list).lensState((_:(Int, Int))._1, (s: (Int,Int), i: Int) => (i, s._2)) m2 <- sum(list).lensState((_:(Int, Int))._2, (s: (Int, Int), i: Int) => (s._1, i)) } yield m1+"\n"+m2 mean(List(1, 2, 3)).runState((0, 0)).run > (there are 3 values the sum is 6,(3,6))
http://atnos-org.github.io/eff/org.atnos.site.lib.StateEffectPage.html
CC-MAIN-2018-26
refinedweb
362
76.62